Six possible things before breakfast
By David Toms, Head of Research, Hg
“Why, sometimes I like to believe as many as six impossible things before breakfast.”
Lewis Carroll’s famous line is often invoked in periods of uncertainty. But can we invert the statement in a search for greater certainty around what IS possible?
The long-term potential of AI is increasingly obvious to anyone who experiences its capabilities. We witness it directly across Hg and our portfolio. The technology is real. The productivity gains are tangible. The pace of innovation is extraordinary – and still accelerating.
However, AI does not merely substitute one vendor for another; it dramatically expands the remit of software. Entire new categories of work become automatable. This is no zero-sum, fixed-pie, tightly bounded opportunity; it is a massive expansion of the value pool, from the $1tn software market into the $50tn human labour market.
Read Matthew Brockman's essay “Everything, everywhere, but not all at once” for a fuller understanding of how Hg views the impact of AI on software.
Recent market behaviour implies a different, more binary interpretation – put simply, AI wins, incumbent SaaS loses. The IGV software ETF is down 22% year-to-date(1). Salesforce, ServiceNow, and Intuit have each lost a quarter to a third of their value in a matter of days. At the same time, the AI companies supposedly causing this disruption are chasing the same attributes that investors are rapidly devaluing in “traditional” SaaS businesses.
Possible thing #1: AI startups will penetrate new markets, and so will incumbents.
OpenAI at $500 billion or more. Anthropic at $350 billion. Cursor from $3 billion to $30 billion in a year. These valuations imply that AI is driving a massive expansion of the total addressable market. But if AI genuinely opens up new markets, then at least some incumbents – with their customer relationships, domain expertise, proprietary data, and distribution – will also be able to capture these opportunities. Current valuations seem to assert that AI creates vast new opportunity and that incumbents will win none of it. Hg’s Matthew Brockman often quotes Alex Rampell: “The battle between every startup and incumbent comes down to whether the startup gets distribution before the incumbent gets innovation.” We see plenty of instances of incumbents innovating fast, whether in commerce(2), accounting(3) or ERP(4).
Possible thing #2: AI becomes ever more dependent on deterministic software.
As tasks get more complex, LLMs are evolving beyond answering questions from their own “knowledge” to invoking other software with domain-specific capabilities. And as models get smarter, we see more of these “tool calls”, not fewer – the smarter the LLMs get, the more reliant they become on third-party deterministic workflow software (see the excellent OpenRouter study(5) for more detail on tool calls). The growing roster of LLM partnerships highlighted above isn’t a marketing story; it’s a recognition of the relative strengths of different approaches. And it is about more than just accuracy – it’s also about power efficiency. For all their brilliance, LLMs are staggeringly inefficient at numerical tasks: by its own calculations, an LLM like Claude Opus consumes roughly ten billion times more power to perform a basic multiplication than the operation itself requires(6). There’s a valid debate to be had about where the customer relationships will sit in this emerging world, but much less debate about who can perfectly execute repeatable, auditable, efficient workflows, millions of times a second.
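To make the “tool call” pattern concrete, here is a minimal sketch in Python. The routing loop and the multiply tool are hypothetical stand-ins, not taken from any real LLM SDK; the point is simply that the model emits a structured request and a deterministic engine does the actual work.

```python
# A minimal sketch of the "tool call" pattern described above: the LLM does
# not compute the answer itself; it emits a structured request that is routed
# to a deterministic function. The model output here is stubbed for illustration.
import json

# Deterministic tools the model can invoke. In production these would be
# domain-specific workflow systems (tax engines, ledgers, ERP postings, etc.).
TOOLS = {
    "multiply": lambda args: args["a"] * args["b"],
}

def run_tool_call(model_output: str):
    """Parse a (hypothetical) structured tool call emitted by an LLM
    and execute it deterministically."""
    call = json.loads(model_output)
    tool = TOOLS[call["name"]]       # look up the deterministic engine
    return tool(call["arguments"])   # exact, auditable, cheap to run

# Pretend the model responded with this instead of free-text arithmetic:
stubbed_llm_output = '{"name": "multiply", "arguments": {"a": 7, "b": 8}}'
print(run_tool_call(stubbed_llm_output))  # -> 56, computed exactly, every time
```

In a real deployment the dispatch table points at domain-specific systems of record – which is exactly where incumbent software sits in this architecture.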
Possible thing #3: Convergent evolution.
Incumbents have revenue, profits, customers, data, domain expertise, and distribution, but are punished because they “don’t have AI.” AI companies have frontier models but many are immature in other areas – see the sub-50% product-level retention rates in the aforementioned OpenRouter study. The market is applying completely different evaluation frameworks to two sets of companies that it seems to think are ultimately competing for some of the same customers and workflows.
Possible thing #4: AI companies are racing to build exactly the moats that incumbents already enjoy, suggesting such moats have enduring value.
The bear case says switching costs, proprietary data, and workflow integration no longer matter. Yet Harvey is investing hugely to embed itself deeply in legal workflows. Cursor wants to be indispensable to developers. Anthropic and OpenAI are building enterprise sales teams and specific integrations in regulated industries like law and medicine(7). Their new agent architectures aim to build the same deeply embedded, workflow-aware platform that incumbents spent decades constructing for their customers. These are all valid, well-proven ways to build long-term value – approaches tested and refined by incumbents. AI systems are themselves software – sold via subscriptions, integrated through APIs, deployed in the cloud. Non-AI software won’t disappear because it has gone extinct; it will change because it is evolving.
Possible thing #5: Investors fear that LLMs will commoditise software; commoditisation cuts both ways.
Open-source models are closing the gap on proprietary ones(8). Inference costs are falling fast(9). New foundation models emerge every few months, each eroding whatever advantage the last release enjoyed. GPU pricing data shows enterprise prices converging rapidly towards AI-lab discount levels. The AI layer is arguably commoditising more quickly than the application layer because it lacks the attributes that create enduring customer relationships. The durable value sits in domain-specific data, workflow integration, and customer relationships – exactly where incumbents have always built their businesses.
Possible thing #6: The companies building AI will themselves remain users of deterministic software.
Job postings at the largest AI providers feature many roles that require experience with “traditional” applications like Workiva, Salesforce, Workday and NetSuite – see anthropic.com/careers for examples. These AI-first firms have world-class engineering talent, unlimited access to frontier models, and every incentive to build alternatives. They have chosen not to. If the companies building frontier AI see no need to replace their enterprise software, it is hard to see why the rest of the world, with fewer capabilities, would – and it suggests there will remain key workflows best suited to deterministic, not probabilistic, engines.
It feels like investors are fitting narratives to share-price movements rather than interrogating whether those narratives are internally consistent. The current consensus pitches these arguments against each other, but the data suggests that what is good for one can be good for all – that far from being a zero-sum game, this is a market-expansion opportunity that can benefit many types of software provider.
For investors in vertical enterprise software – typically companies with deep domain expertise, regulatory advantages, proprietary data, and loyal customer relationships – we recognise that investor behaviour is telling us something, but we don’t think it’s telling us everything. The potential of AI is huge: it will be applied to many white-collar workflows, and in doing so it will enhance the value of software in executing tasks. That may mean fewer employees are needed to execute those same tasks, but the tasks will still require complex data, industry expertise, 100% accuracy, regulatory compliance, customer trust and much more. The opportunity is evolving, not ending.
The Queen told Alice she’d sometimes believed six impossible things before breakfast. But perhaps all six things above are actually possible – for new entrants and incumbents alike.
(1) As at 13/2/2026
(2) https://www.shopify.com/news/shopify-open-ai-commerce
(3) https://www.intuit.com/chatgpt/
(4) https://news.sap.com/2025/11/sap-mistral-ai-new-alliance-european-sovereign-ai/
(5) https://openrouter.ai/state-of-ai
(6) From Opus 4.6, with the prompt “How much more power does an LLM like Claude Opus 4.6 consume to perform a basic multiplication operation, vs a pocket calculator or intel i9 processor?”
Orders of magnitude: LLM vs calculator: ~10¹²× (a trillion times more); LLM vs i9 instruction: ~10¹⁰–10¹¹× (tens of billions of times more).
Why this matters (and doesn’t): the comparison is a bit like asking how much more fuel a 747 uses to deliver a letter vs walking it next door – true but slightly misleading. The LLM’s energy cost buys you generality: the same infrastructure that wastefully “multiplies” 7 × 8 can also summarise a legal document or write code. The calculator can only multiply. That said, it’s a useful frame for thinking about when LLMs are and aren’t the right tool, and why hybrid architectures (LLMs that call calculators or code interpreters for arithmetic) make so much sense.
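As a rough check on those ratios, here is a back-of-envelope sketch in Python. The absolute energy figures are illustrative assumptions of ours (roughly 1 kJ per LLM query, ~10 nJ per CPU multiply, ~1 nJ on a calculator chip), not measurements from this essay; only the resulting orders of magnitude are meant to match the footnote.

```python
# Back-of-envelope energy comparison. All absolute figures below are
# illustrative assumptions chosen to land in plausible ranges; the
# footnote's claim concerns the ratios, not the exact values.
import math

llm_query_joules = 1e3    # assume ~0.3 Wh (~1 kJ) for one LLM inference
cpu_multiply_j   = 1e-8   # assume ~10 nJ per multiply instruction on an i9
calc_multiply_j  = 1e-9   # assume ~1 nJ per operation on a calculator chip

for label, cost in [("i9 instruction", cpu_multiply_j),
                    ("calculator", calc_multiply_j)]:
    ratio = llm_query_joules / cost
    print(f"LLM vs {label}: ~10^{round(math.log10(ratio))} times more energy")
# -> LLM vs i9 instruction: ~10^11 times more energy
# -> LLM vs calculator:     ~10^12 times more energy
```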
(7) https://www.anthropic.com/news/healthcare-life-sciences
(8) https://cmr.berkeley.edu/2026/01/the-coming-disruption-how-open-source-ai-will-challenge-closed-model-giants/
(9) https://futuretech.mit.edu/publication/the-price-of-progress-algorithmic-efficiency-and-the-falling-cost-of-ai-inference