Surviving in the Graveyard of Unicorns: Thriving in the rubble of the AI bubble.

[Illustration: a 404 page and a balloon about to pop, with a mouse sitting nearby; a stylized midcentury-modern depiction of the coming AI bubble and Crafted Logic Lab's plan to stay a small, survivable artificial intelligence boutique.]

It is coming. The regulators are circling: the EU’s AI Act began showing its teeth in 2025, requiring disclosure of training data and energy consumption (just what Big AI doesn’t want to reveal). Public trust is collapsing: 68% of Americans now say they’re “concerned” about AI development, up from 38% in 2023. Lawsuits are piling up, with potential judgments at a scale that could erase OpenAI’s entire revenue model overnight.

And speaking of that revenue model: Anthropic hemorrhaged $5.3 billion in a year. OpenAI is losing $700k daily on ChatGPT. The sector’s consuming $100 billion in venture funding yearly with 70% burn rates. These are numbers that make Pets.com’s $82 million dot-com flame-out look quaint.

• • •
The industry is headed for a mass-extinction event of its own making, fueled by techno-utopian oligarchs and venture capitalists.

• • •

But in the shadow of AI behemoths blind to the asteroid, small and nimble companies have an opportunity. Here’s our plan at Crafted Logic Lab to be one of them:

Be Small. Stay Small: Perplexity has one of the better models in its Llama-based Sonar and 15 million daily users, and it is still considered a failure. That is because its venture capital funding pushed it toward rapid scaling and a high burn rate in hopes of creating a unicorn. Our approach is to avoid that trap. A Harvard Business School review estimated that VC-backed ventures suffer a 75% failure rate, while business in general hovers around 50%: half again the failure rate. Part of what drives this is, in my opinion, one of the most toxic beliefs in modern economics: the myth of infinite growth. It fuels not only burn rates but perverse incentives that destroy founder vision and betray customer trust. Our boutique may never be a unicorn, but the goal is stable growth, sustainable economics, independence, humanist principles… and never being in a situation where we couldn’t survive on 15 million users.

• • •

Avoid AI Hyperscaling: Another area where I plan to think small is the language models we use with our cognitive architecture. Crafted Logic technology is built on a neurosymbolic, overlay-based operating system. Our deployment testing (which you can access on our site, and which we are always adding to: ~here~) demonstrates that, with proper structuring, an approximately 70-billion-parameter model can run circles around frontier models. That has a few consequences: lower infrastructure costs, less reliance on Big AI, and less power-hungry systems. In a post-bubble landscape, these are all survival advantages.
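
To make “neurosymbolic overlay” concrete: Crafted Logic Lab has not published its architecture, so the sketch below is purely illustrative and every name in it is hypothetical. It shows only the general pattern such systems share: a small, auditable symbolic layer sits over a language model and vets its output against explicit rules rather than trusting it blindly.

```python
# Purely illustrative: Crafted Logic Lab has not published its overlay design,
# so every name here is hypothetical. This shows the generic neurosymbolic
# pattern: a small, auditable symbolic layer sits over a language model and
# checks its output against explicit rules instead of trusting it blindly.

import re

def language_model(prompt: str) -> str:
    """Stand-in for any ~70B model endpoint; swap in a real API call."""
    return "Paris is the capital of France."

# Symbolic layer: explicit rules an answer must satisfy before release.
RULES = [
    (re.compile(r"\b(guarantee|definitely|always)\b", re.IGNORECASE),
     "overconfident phrasing"),
    (re.compile(r"as an AI", re.IGNORECASE), "boilerplate filler"),
]

def overlay_query(prompt: str) -> str:
    """Ask the model, then vet the answer against the symbolic rules."""
    answer = language_model(prompt)
    violations = [reason for pattern, reason in RULES if pattern.search(answer)]
    if violations:
        # A real overlay might re-prompt, refuse, or route to a human here.
        return f"[answer withheld: {', '.join(violations)}]"
    return answer

if __name__ == "__main__":
    print(overlay_query("What is the capital of France?"))
```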

• • •

Earned Trust: It’s not just economics. Part of the coming bubble is the public “confidence gap”: according to Pew Research, 81% of U.S. adults have “little or no confidence” that big tech will use AI responsibly, and 75% want AI regulated “at least somewhat strictly”. Frankly, that skepticism is earned. The technology has emerged in a peak neoliberal gilded age dominated by tech broligarchs. This gives companies focused on user-sovereign, humanist AI an opening as trust becomes an asset, and one that Big AI will have difficulty earning.

• • •

Ethics as the Killer Feature: The recent debacle with Grok (creating sexualized images of non-consenting adults and children from across cultures), and xAI’s response of essentially monetizing it by shutting it off for free users while keeping it as a paid feature, is a textbook case of how the frathouse-esque tech broligarchy is actively poisoning the industry. And this extreme example isn’t the only one: from Palantir’s surveillance-state aspirations and stated “we kill people” positioning to OpenAI’s contradictory AGI fear-mongering-as-marketing strategy, the industry seems determined to torch any form of consumer trust. But ethics can’t be performative; it must come from genuine commitment. This ties back to “stay small”… the reality is that anything deeper than superficial ethics marketing can’t survive the venture-capital-driven clawing for unicorn status.

• • •

Human-led AI: Another key to earned trust is understanding the proper role of AI. Previous tech revolutions, from the PC and the Internet to desktop publishing, were presented as enhancement; Silicon Valley’s current goal and overt sales pitch is explicitly human replacement. The MIT NANDA report The GenAI Divide: State of AI in Business 2025 [1] cites a 95% failure rate for enterprise AI solutions. That sky-high failure rate has led to concessions toward “human-in-the-loop”, which is still backwards because it treats the human as an add-on to an autonomous agent. Companies that realize this will have an edge, because human-led AI closes the liability gap (humans remain the primary responsible actor), preserves the benefits of human creativity and adaptability, and is more readily adopted because workers don’t resent it as a stepping stone to their elimination.

• • •

Find the Benefit. Not the Hype: That same 95% enterprise failure rate [1] is not sustainable, and it is a big part of the bubble. It is so high because the industry targets human replacement, not enhancement, unlike prior tech revolutions from the PC and Internet to desktop publishing. So we need to flip the model: AI as a support tool for the user, not the user as a backup system for the AI. Human-as-the-driver, not human-in-the-loop, with the benefits laid out above: a closed liability gap, preserved human creativity and adaptability, and adoption by workers who don’t see it as a stepping stone to their elimination. Enterprise AI has been overpromised and underdelivered because, ultimately, the vision of an empty office run by autonomous agents is fundamentally flawed.
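
As a minimal sketch of what “human-as-the-driver” means in practice (illustrative only; the names and flow below are hypothetical, not an actual Crafted Logic Lab API): the assistant only ever proposes, and nothing executes until the human explicitly approves.

```python
# Illustrative only: these names are hypothetical, not a Crafted Logic Lab API.
# The pattern: the assistant drafts a proposal; the human is the primary actor,
# and the default when the human does not approve is to do nothing at all.

from dataclasses import dataclass

@dataclass
class Proposal:
    action: str     # what the assistant suggests doing
    rationale: str  # why, so the human can judge the suggestion

def assistant_propose(task: str) -> Proposal:
    """Stand-in for a model call that only drafts; it never acts on its own."""
    return Proposal(action=f"Draft a reply for: {task}",
                    rationale="Routine correspondence; low risk.")

def human_drive(task: str) -> None:
    """Human-as-the-driver loop: propose, show, wait for explicit approval."""
    proposal = assistant_propose(task)
    print(f"Assistant suggests: {proposal.action} ({proposal.rationale})")
    if input("Approve? [y/N] ").strip().lower() == "y":
        print("Executing under human authority...")
    else:
        print("Discarded. No autonomous fallback.")

if __name__ == "__main__":
    human_drive("customer email about a late order")
```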

• • •

Creating AI for Humans: The same MIT NANDA study revealed an interesting flipside to the statistic: “…research uncovered a thriving ‘shadow economy’ where employees personally use ChatGPT/Claude far more than companies officially deploy enterprise AI”. This reveals an opportunity that companies obsessed with enterprise scale may miss: when it’s useful for individual people, people use it. Consumer, professional, and even small-business Assistant Intelligence tends to be written off by would-be unicorns. But for smaller firms, the cost of acquiring and servicing those customers often beats chasing the El Dorado of huge deployment contracts. And if even tools with marginal reliability, like GPT/Claude wrappers, create enough personal ROI for users, then truly useful assistants have real potential.

• • •

Release a Product That Works: This seems stupidly obvious. But a key critique, and a key source of lost trust in adoption (especially at the enterprise level), is products that are half-baked demos or don’t work as advertised on the tin. Claiming it works is easy. Delivering is harder. As a boutique studio, the reality has always been that if it’s not stupidly-obviously better, why bother? The market landscape is heavily tilted against us. In reality, many of the flaws in current AI (hallucination, sycophancy, overconfidence, cost overruns, vendor lock-in) are not unsolvable; it’s that Big AI’s incentive structures and mental models don’t align with finding those solutions. Judge our early builds for yourself:


– Ian Tepoot (Founder, Crafted Logic Lab)
