Orbit 57

Everything, everywhere, but not all at once: Matthew Brockman on what's really happening in software right now

In this rare internal conversation, Matthew Brockman, Hg's Chief Investment Officer, offers an insider's view on what's actually working versus what's still hype. Far beyond speculation, this is private equity deploying real capital into real companies with real customers, watching what happens when agentic software meets regulatory compliance and established workflows.

Brockman reveals why vibe coding is being used in sales processes but not in production, why venture money is subsidising inference costs that must eventually turn into labour market economics, and his "virtual Matthew" Turing test for when we've truly reached AGI. From Hg's Catalyst program - parachuting developers into portfolio companies at Hg's expense to accelerate AI product builds - to standing up in Silicon Valley urging CEOs to invest and move quickly, this conversation cuts through the noise with data, deployment examples, and the hard economics of building enterprise AI that actually ships. Essential listening for anyone trying to separate signal from hype in February 2026.

Watch video

Listen on:

Episode Transcript

David Toms

Welcome to Orbit, the Hg podcast series where we talk to leaders and hear how they've built some of the most successful technology companies in the world. I'm David Toms, Head of Research at Hg, and in this special episode, I'll be interviewing Hg's Chief Investment Officer Matthew Brockman to talk about the future of software in an AI world.

Matthew, we're sitting here in February 2026, mid-February 2026 because days seem to matter now in the public markets. What's going on in software?

Matthew Brockman

Lead in with the big question. I mean, we can talk about what's happening in technology, and we could talk about what's happening in the markets that obviously value software. Maybe if I start with the latter, because that seems to be the most sort of immediate topic.

I think you're at an interesting confluence between the sustained momentum of the model providers, the sense of imminent IPOs among those providers, some of the applications that have been built on the basis of that technology, and a real sense of excitement about what the potential will be. I would characterise it as that. I mean, it's evident in some areas like coding and so on, but there are a lot of application areas where it's still coming, frankly, versus what's been achieved, versus the counterfactual for existing SaaS, where people are looking to see what the reaction is going to be like, what AI products are going to get built.

And that at the moment is almost a bit of an unproven negative, you know: how do I know something which I haven't yet evidenced? So you've got this sharp sell-off, frankly, in the public markets for existing software, complemented by continued excitement, perhaps peak excitement, dare I say, about what the model capability is and where that's going to influence, you know, enterprise software in particular over the long term.

David Toms

And do you think there's an element of catch-up going on, that there's been a lot of progress for three years and people have only now realised? Have we passed a tipping point?

Matthew Brockman

I think there was some degree of acceleration in model capability during the latter part of last year. Some of it was around computer use and data, obviously, and a lot of it was around the learning process for the models. And so, I think with the Claude 4.5 release, and the same with OpenAI, and then the Claude 4.6 release, people have seen the performance step up meaningfully in application.

So, I think there was always an expectation. Certainly, at Hg we had an expectation this was coming, and that you would see this kind of level of performance arriving. The question was kind of when. And so when I talk to people in our team and people in our portfolio companies, the capability, particularly in software development, though obviously that will then start to cascade into other applications, has meaningfully stepped up, even in just those short periods.

And so I think the catalyst, if you like, for recent events has been that on one hand, and then on the other, announcements, if you want to call them that, from Anthropic and others on co-work and these sorts of collaboration spaces, where essentially the aim is to make it easier for the average executive in a company to build agentic applications.

And I would say that's still early to know exactly how that's going to play through. But that's what the market is trying to interpret. And then comparing that with the news and information it's receiving from its incumbents.

David Toms

I guess one of the things I've noticed is that as the models have been getting better and better, one of the things they've been getting better at doing is using other software themselves. So, does that give us a hint as to how software might evolve over the next few years?

Matthew Brockman

Yes. I mean, obviously you can talk about the analysis of data calls and how much is being taken from existing systems, and that's going up exponentially as these agentic tools are gradually being rolled out.
It's very difficult to build an application in an SME or in an enterprise that doesn't access existing information, existing data. This is obviously where a lot of the moat comes from for the incumbents. And so, it's difficult to think that you're going to build agentic tools, which have some degree of probabilistic thinking, a lot of intelligence, but much of it probabilistic, that don't interact with the actual deterministic mechanisms of software and the underlying data of that software. And frankly, with the workflow that's been built into software over many decades in many cases. This is a familiar set of patterns of behaviour in many industries, which reflects the characteristics of the job that's being done.

David Toms

And if we look at the pace of change, and then you take all those established systems, one of the things I've observed as well is that we're three years on from the original ChatGPT launch. Actually, the number of AI systems that are capable of writing to any existing system seems to be vanishingly small. They can read data out in some cases. In most cases, they're working off a knowledge base. So, do you think we get to a point where AI can actually do anything? Are people actually going to let it access a payroll system and make a change to a payslip?

Matthew Brockman

I mean, this is the key question, right? Which is, are you going to allow new systems that somebody's built? You know, the classic tenor at the moment, particularly in the venture community and the podcasts we all listen to, is: I could whip up a payroll system in an afternoon using, you know, Claude Code or Replit, and that thing will work. I can use an open-source model that will give me the taxation rules for a European country. And off I go.

That is pretty hard, right? We know that's pretty hard for many reasons, right? There's a whole bunch of judgements baked into some of those assessments. There's a whole bunch of habit in terms of the way people do things. There's a trust factor, which is obviously huge in terms of how people use the product, whether they're really going to pay people, how the tax office gets notified, all those things.

And it sort of assumes also, I think, that the existing providers of that capability, the existing software companies, which at their crudest have been portrayed as electronic filing cabinets, don't do anything and just sit and wait and see how this plays through. And then we all wake up in 5 or 10 years' time, and it turns out that actually that Replit-fed agent or the Anthropic-built enterprise tool is doing all of the work, and I'm now just a kind of dumb database of, you know, some old payslips.

David Toms

So, I guess if we look forward in software, five or ten years' time, what do you think a winning enterprise software application looks like?

Matthew Brockman

I think it's critical that you do have existing software building into the agentic layer. So, I think the prospect that the capability of AI ends up at least augmenting and, frankly, substituting labour in many applications is real, right? And you hear that spoken about by a lot of very, very smart people. It's going to take time. This isn't something that happens in six months or 12 months, as we're evidencing, but over the course of five or ten years. So, if you fast forward to 2031 or 2036, are a bunch of those jobs still being fulfilled by accounts clerks or payroll clerks or, you know, anybody in a kind of white-collar role, frankly, services roles? I suspect a bunch of that will look very, very different.

The opportunity for an incumbent software player is to say, how do I use my existing, you know, data advantage, distribution advantage, domain knowledge and its deterministic nature, the fact it actually calculates to essentially start to build into that agentic layer. So how do I build in the workflow that can be substituted for the human?
I was with one of our portfolio companies this morning, which is just launching agentic software into their customer base. And one of the things that customer is very quickly looking to do is say, look, I have, you know, several hundred people dealing with processing invoices. I can pretty much automate that with your product, but I need you to deliver it for me because I need reliability, I need quality, I need the existing processes and existing data. It's not something I want to build myself. It's not something I really want one of my techies building with some Replit code.

But I do want somebody to start thinking about how I could take, you know, a couple of hundred heads out of that piece of functionality because, actually, invoice matching, reconciliations, posting, emailing is pretty automatable. So, I can see how that can be done now. And that probably wasn't doable in quite the same way even a year ago.
So the pace I think of that and what we've obviously done a huge amount of work on is say, how do we both encourage our companies to grab that, but also encourage them to spend money, adopt product, build capability so that they're moving on that very quickly. And I would say that's been the urgency we've had to really fire up within the portfolio.

David Toms

So, what do you think has changed about what you're telling the portfolio you want them to do versus a year or two ago?

Matthew Brockman

Yeah. Great question. So, a year ago - we have an event every year at about this time where we go to Menlo Park and try to expose ourselves to the very latest. It's not the only event, but it's a way of trying to really get crisp. Last year, a lot was around experimentation: understanding potential, trialling, testing, seeing what the art of the possible was, with a sense of what was likely to happen.

This year it's much more: you can now see what's happening. You can see adoption in some of the most advanced use cases, obviously coding being one, customer support another. And for the other areas, you can see how that sort of application is going to play through, how people are going to build tools that work, how you do see some prospect for labour substitution now, which is going to grow over time.

And if you're not essentially building product and re-engineering your business to think AI-first - product build, product development and customer engagement - so both the amount of money you're spending, capability, organisation, and I would say a very product-first mentality: how are you thinking about what the customer might do with this product, what they need to see from it, how you build it with them, how you put that back into the core product so it can be deployed amongst other customers with the same set of products, the same set of problems.

I would say that is the theme we pushed on very hard, and it is taking adjustment. As I say, I was literally with a company this morning who were saying, well, actually, we are going to spend a few extra million euros this year on AI build, in a way that 3 or 6 months ago they'd probably have been very reluctant to do, because we're encouraging them to think about what that's going to look like, and they're doing it with their customers.

David Toms

There are two quite interesting comments within that. When you say product-first and you say you're investing money - product-first kind of means EBITDA second. So, to what extent are you giving management teams the licence to say, yes, you can go and spend some money, we're prepared to see investment go through, and maybe you miss the EBITDA numbers?

Matthew Brockman

We stood up on stage in Silicon Valley two weeks ago and laid it out very clearly. We've been laying it out for a while, but we wanted to repeat the message, which is that this is a point in time to think about how your product development, your product augmentation can change, so you can set yourself up for that 5 or 10 year view.
If you are not investing money now - and real money, not a few thousand for some licences and a bit of cloud and this, that and the other, but actually: I need developer teams, I need to think about go-to-market strategies, I need to think about pricing architecture, there's a bunch of work I need to be doing now.
If I'm not spending that now, then I do face the prospect that I could be behind in 2 or 3 years' time, because this will move, and this will move quickly. And it requires a forward-looking CEO, but also a forward-looking, you know, product department, tech department. And that's where we have to have that sense of urgency and that willingness to invest.

David Toms

Do you think it's quite important that we're also prepared to be wrong at stages in that process? I'm just thinking through our own evolution within Hg and we were all in on one large language model 2 or 3 years ago. Then six months later we switched to another one, and then six months later we switched back. And now we have both of them. Do you think that experimentation has changed people's attitude or the preparedness to experiment?

Matthew Brockman

I hope so. I really think we tried to send a message that experimentation and trialling are going to matter. Across a number of the portfolio companies I've interacted with, there is a huge amount of experimentation going on. They are talking very closely to their customers. They've got CPOs and CTOs understanding what problems customers might want to solve, what an agentic workflow would look like for those customers, and which particular pieces of their workflow they would like to replicate via software. And they want that in front of them kind of now.

And it's a matter of months for us to deliver that. I mean, in many cases it's already there. But even if you started today, most of those companies could deliver something within a month or two that is a proxy or a trial for what could be done. And then you're into roll-out.

And so some of that stuff will work great, and some of it will come with teething problems, so it won't be as effective in substituting activity or increasing the value of the activity as it might have been. But it's hard to see that over the medium term, you know, one, two, three, four years, it doesn't have a very meaningful impact on the value of the software and how much of the value of the software spreads into the overall workplace.

David Toms

Yes. I guess you talk about it being quite easy to get early prototypes to market, and the reality is that's kind of been the story of software for 20 or 30 years. You can get a prototype into a customer's hands quite quickly; getting a working, production-grade, regulatory-compliant, maintained piece of software out there is another matter. I mean, there's that famous quote from 20 years ago, “free software is like a free puppy”. I prefer to make it teenagers. As a parent of two teenagers, it's a very similar question: anybody who wants a free teenager is welcome to one.

But yeah. How do you deal with the process of actually keeping the software going afterwards?

Matthew Brockman

I mean, so much of what I hear is: we can build a prototype or a product version which customers are interested in. They can see how it improves the value of what they're doing in an enterprise. But they are worried about, you know, the governance process, the security of it, how it deploys against, you know, other requirements.
Which is why I think a lot of the value does come from knowledge of the workflow you already exist in and support. Like, you already sit there, you know, dealing with a filing or a payment or something which is regulated, which has to be correct, where there's a representation, probably from a CFO or from an auditor or from a professional, standing behind that sort of activity.

That's why it matters so much, I think, that this sort of capability is built on top of that experience and that knowledge base, and not as a substitute. It's such a difficult thing to substitute quickly.

David Toms

Yes. I mean, that's something we talked about in the podcast with Professor Lawrence, and I know we talked about it at the AGM with him as well: the whole concept of accountability. We are selling to people who are accountable for the outcome of what they do, and that holds whether they're a lawyer or a medical professional or an accountant. In the end, they own the output.

Matthew Brockman

Yeah. I mean, so much of the workflow that you can see happening here is the volume activity, right? The invoice cross-checking, the payroll processing, the data entry, all that stuff is - there are many, many, many thousands of people employed in those roles, as we know. And most of them are working on software that a lot of our portfolio ship.

So, there is huge potential to make that much simpler and much easier, in the same way that it has been in coding, right? You can now run, you know, an instance of Claude for a day, and it produces very high-quality code. Even six months ago that was an hour, and before that it was five minutes. And so, the productivity effect, I think, is very pronounced.

And we've got many examples of how that then flows fully into the workflow and fully into the final process that is then signed off by the CFO, by the auditor, by your tax advisor, by your lawyer, by your senior lawyer. That's the deployment effect of what we're trying to see with the portfolio companies.

David Toms

And I guess we've covered quite a lot of things that go well, the positive attributes and so on around software. What about the flip side? Where are the challenges? Where can this go wrong? Where can software lose to AI?

Matthew Brockman

I think if you're shipping a product which is really very much a workflow. So essentially, if you're not sitting on much incumbent data, much incumbent knowledge, you've got a more verticalised go-to-market where the actual product you're shipping is relatively broad and generic in its application, and it's in something which is less deterministic, you are exposed, right.

And you can sort of see it in the markets. I mean, markets have been pretty extreme the last few weeks, but before that you could start to see some categories where there was a sense that this doesn't feel that hard to substitute, because of the characteristics built into what the product does. So, I think there's a set of categories there.

I would say any software company in any workflow that is not taking this seriously, thinking about how it develops its processes, how it thinks about product, how it thinks about its organisational structure - it won't be immediate, but in 5 or 10 years' time it will lose value, right?

The one thing we have seen, I guess in particular in the last months, is that the cost of building software is now zero, or very, very low. And so there's the prospect that I can experiment and build stuff as a new entrant, the classic person in a garage, or as an adjacent player in a market space just adjacent to where I naturally operate. Or I'm a large enterprise saying, well, actually, I can spend some of my own money and build some of that functionality, and I may choose to do that in certain parts of my activity, and I may not choose to do it in some of my back office.

That competitive dynamic, I think, is moving. So, on one hand, there's this great opportunity to increase value, to think about how it improves productivity and labour and so on. On the other hand, I think the competitive environment will evolve; there will be more activity, more action within these ecosystems.

David Toms

If I look at the three particular areas of pushback I get from our investors, I'll run through all three and then maybe we can pick them off one by one, as to where they see potential challenges or where they ask us a lot. The first is vibe coding. So, what you've just talked about: can anybody in a garage now vibe code the latest ERP system?
The second challenge we get is around spend displacement, which is: if money is being spent on AI, does that mean it's not being spent with you?

And then the final challenge we get is: what happens in the new world even if deterministic software, the payroll, tax and accounting, remains, but the user interface has switched to being a large language model? So, you are simply an API call from everyone else's piece of software.

So, we kind of run through those: vibe coding you've just touched upon, but are we going to see an impact from vibe coding? Are we seeing an impact from vibe coding?

Matthew Brockman

We haven't seen anything so far. Well, sorry - what I would say is, I know that there are, you know, CPOs and product leads and CTOs in our organisations using vibe coding to wireframe something for customers, to evidence what a product could be. And that's in the sales process. That is not in production, right? That is leading to a decision around a product to be properly developed. Now, it can be developed very quickly because the coding agents are there.

But it's being used in a way which is advancing the customer engagement, the customer's understanding of what's possible, and also, frankly, some of the economic decision around how much you're going to pay me if I go and build this thing. So that's where I see it happening. It's the apocryphal “I built this in an afternoon”, and you can do that stuff. And there are plenty of smart people out there playing with this stuff, and it is very powerful. I don't want to take that away.

But you've still ultimately got to go and deal with complex customers who want to be regulatory compliant, who have established business processes, who employ thousands of people, who need organisational change, who have existing data migrations. There's a bunch of stuff in there which is just a natural moat, if you want to call it that, for how quickly something could be whipped up in a garage and rolled out.

David Toms

Yes. And I know when I talk about vibe coding with our investors, I tend to pick a pain point that they're all aware of, which is something like writing quarterly reports, which frankly, is a bit painful for everybody who has to do it. And I say, if I give you an AI tool that does that, would you use it? And the answer, of course, is yes. And I say, okay, well, if I need to give that AI tool access to all of your portfolio data, all of your customer data, and all of your internal specifications and all your views on the market and so on. I want to give all of that to this new startup AI tool. And immediately they say, well, there's no way I would allow that. It needs to be coming from those tools to do the quarterly report. We don't want to put all that data into something else.

But clearly, if it isn't delivered by the incumbents in the end, customers will start to trust those new tools; they will build up a head of steam. But right now, as soon as you start talking about interfacing to existing data, people immediately get worried, at least at an enterprise level.

Matthew Brockman

Also, you need to manage the instance of the model that you're using. So maybe you're using an older version of the model, or an open-source version, and it's doing a very specific application. It's not using a very broad definition of what an Anthropic model might be. You know, it's a contained use case, because the data integrity and the processes need to be controlled, and there needs to be a high degree of predictability about how that workflow is going to actually deliver.

David Toms

Yes. And I guess it still leaves you with all the same problems that a traditional software vendor has, which is you still need to integrate, you still need to get permission from IT and so on.

Matthew Brockman

Yes. Yeah.

David Toms

So, I guess on the second one then - displacement of spend. Are we seeing that? Do you expect to see it? Where is all the money that Anthropic and OpenAI are, or at least all the revenue that they're reporting, coming from?

Matthew Brockman

I think we've seen some effect of this. I don't think it's huge. I think it's a couple of points of growth, and I think it's probably flattened out. I think there was a pattern over the last year or two where there was a great sense of urgency in many enterprises, in SMEs, who essentially said, we've got to be on AI, we need to experiment.

I think a lot of these budgets were incremental, actually, so I don't think it was a straight substitution effect. You know, they were coming out of R&D, they were coming out of other product sets, they were coming out of different ways in which you might find the funding. But, you know, when the decision came to buy a bunch of Anthropic licences and put them through your organisation, and that was coming down from the CEO, who insisted it was going to happen, somewhere in the budget that money was going to get found.

I don't think it was a particularly huge effect. You've also got, frankly, just the effects of, you know, the labour force coming back off the Covid bump, right. And so, there are other things in that trading which are noise, as well as this particular effect.

So, I don't think it's been very pronounced historically. I mean, the key call really is, as you go forward: if Anthropic is going to go from, you know, whatever the numbers are now, 20-odd billion of revenue, and keep growing and IPO and get to a trillion, then that spend has to come from somewhere in theory. The question is how we extrapolate that and really think about where that matures to.

David Toms

Yes, I guess I mean, the ultimate question for most businesses will be whether this comes from the labour budget or the IT budget.

Matthew Brockman

Yeah, my view is the numbers don't work - for investing in Anthropic, in OpenAI, frankly even in some of the more direct application businesses that are AI-first - the return on the valuations doesn't work if you don't access the labour market. You can't make enough economics out of the software provision in some of the ERP categories if you're Anthropic, or in some of the legal categories if you're Harvey, unless you actually do access labour.

So, at some point you need to persuade lawyers to employ fewer associates and to charge for their performance in another way. That feels to me like an absolute must if those businesses are going to justify the kind of valuations that we're seeing. And that's clearly the fundamental bet that people are taking, right? Most of these application markets are, you know, relatively bounded, right? They're not huge software markets. You've got to break out of that, I think.

And I think this year in particular is the year we're about to see whether this really works, right? We've got this enormous sense of the power of the technology and the hype and the cycle of what's being developed. And now, I think a lot of that was early adoption: people trialling, people saying, well, I need the licences, I need to mobilise. I can sort of see it working in coding; I think it could maybe work in other areas, but I need my teams to be trialling and experimenting.

And when we look at some of those companies, they've done very well. They've grown incredibly quickly because pretty much their entire market has adopted them in one go: they've gone from zero adoption to 100% adoption in 24 months. So, guess what, they've grown super quickly, but a lot of it is still trialling and testing and experimenting. And now you're starting to see workflow being built in some of those applications, but only in the most mature categories, and not many right now, right? It's mainly legal and a couple of others.

I think the question is what products are really being built, whether customers are really adopting them, and what that is really going to do to take you from spending, you know, $1,000 a month on something, or $2,000 or $3,000 a year, to $10,000. Those economics need to play through to get the return on some of the investment that's happened.

David Toms

There's quite an important point in there, because we've both been doing this for quite a long time. And if I go back to sort of one of the previous massive technology adoption waves in the early 2000s, it was quite hard in the early days to distinguish the true winners from the apparent winners, because everybody's revenue was growing nicely. Some people were shipping product and some people were just shipping quite a lot of services while they tried to build the product underneath to justify it. Do you think we're seeing some of that going on in the AI world at the moment?

Matthew Brockman

I think in some of the application areas I sense there's been enormous growth, but a lot of it is this super rapid adoption of a basic capability that people are now trialling and testing. And the bet from those players, and from some of the customers who are supporting them, is: we can build workflow out that is ultimately going to substitute labour, and that will justify the economics. And that's the cycle we're now in.

There's an interesting point on, sort of, revenue: how much revenue, how many dollars per lawyer, do I get to achieve if I really roll out? There's also the input costs and some of the gross margins, and what does it really take? Because you've got a situation at the moment, I think, where a lot of that venture money is almost subsidising the inference costs of users, right? And that's working because it's getting the growth. But at some point, it has to turn into economics back to the investor.

And again, I think that calculation is quite bound up, right? You need ultimately this product to be very valuable, both to sustain the growth and deliver the maturity of where these businesses are projected to end up, but also, frankly, to pay for the capability which is delivering the service or the products in the first place.

David Toms

Well, you could probably do another whole podcast on the circularity of revenue in the industry, but we'll skip over that one for now. When you think about our own portfolio, how do we get our share of that? If there's an opportunity to get into the labour market and there's a big pool to go for, as all the AI-native companies or their investors are implying, how do we get our share of that?

Matthew Brockman

I mean, the core of what I've seen so far is: how do you set your organisation up to be AI-first, and understand what an AI-built capability really looks like? There's a serious amount of work with your end customer, saying: here's what we now think we can do, here's what we think our product can do - which is beyond software, beyond processing or data collection and verification and calculation and delivery. Here's how we can automate the process that the user is doing.

And that's where we're pushing very hard, right. Certainly most of the companies I've interacted with are spending a lot of time with their customers, understanding exactly what workflow those customers are using. And obviously they have a huge head start, because that's what they've been shipping for a long period of time.

And they're saying: okay, using that capability, what could I build for you that delivers some of that? And how might you think about access to data, governance, security, supporting me in developing that? Because there's an economic interest in you, as a customer, doing this with me. And then, if I build it, how do we think about the compensation and the economics of that?

So, I think there's a huge amount of focus on that. And I see a lot of companies in that space right now, either with products in market, building products, or testing things. And obviously we've invested very heavily in what we call Catalyst - a bunch of developers and product capability to accelerate this in our companies. We have people we can parachute in, which isn't on the company's P&L, frankly, it's on ours, but which accelerates that build. So, I think there's an enormous amount of focus there.

And then I don't want to take away from the operational piece, which is just understanding that an AI first software company in 2026 has a very different profile in terms of how it manages its products, how it manages a developer organisation, what its developer organisation looks like, what its sales organisation is probably going to look like, what its commercial proposition to its customers is looking like in terms of how they get paid for their product. All of that is moving as well. And that comes from the CEO.

If you have a CEO who fundamentally understands - who is building, trialling, experimenting, developing the organisation - all of that capability translates as well. Because one of the things we have seen, just to be frank, is that you can put developers in and start building product very quickly, as we just talked about, right? You can get something that customers are interested in, and you can probably get to market and start earning some revenue from it.

How it then deploys at scale - so you're now shipping it to 1,000 of your customers, not one of your customers. How are they paying you for it? Does that economic balance feel correct? How is it going to evolve over time? How does your sales organisation move from basically selling a SaaS product - sold on an annualised basis, where they earn the commission and then have to sell another one - to what feels, at the moment, more like a solution sell? How am I going to solve a problem for the customer? How do we think about prioritising that? And then how do I think about getting paid for that?

That is quite a big operational shift for companies, which I see a number of them have done, but a number of them are really working through right now.

David Toms

And all that talk of customers probably brings us neatly onto the third point: where does the customer relationship sit in the new world? If the customer is interacting with an agentic layer, does that layer sit with the current software systems? Is it a completely separate agentic layer? Is it OpenAI? How do we think about that?

Matthew Brockman

I think in the vast majority of the companies that we invest in - which are largely, you know, SME and small enterprise, dealing with back-office, compliance-style applications - it will sit with the software layer, right. There will be a very clear use of agentic capability, and they will be accessing models in various ways to provide it. But ultimately, this is a lot of software that doesn't make your beer taste better, right? It doesn't make your business fundamentally better at what it does.

And there's still domain knowledge in it, and there's a lot of data in it, a lot of knowledge base built into it, and a lot of externalities in terms of how you interact with the physical world, or with other providers or other systems. And so it's possible, if you're very slow as a software company, that you get displaced - that somebody comes in and displaces you in some way, from below or from the side.

But I think if you've got that incumbency - if you act quickly and with deliberate intent - I would be very surprised if most of our companies don't become the natural agent provider. Why would the customer not be working with your agent in that way, even if it's interacting with another agent in the future to instruct that process?

David Toms

Yes, I was talking about a similar angle on this earlier today with one of our investors, about something as simple as invoice ingestion, which you can do very easily with ChatGPT or almost any LLM. You can take a photo of the invoice and say: extract the key fields from this, and so on. You can do it pretty much for free. Yet most of our companies are getting quite a lot of traction offering document and invoice ingestion into the core application as a paid-for module. So customers are typically paying a premium to have it integrated into their core system, even though they could go and get it done by a third-party tool.

Matthew Brockman

I was with a portfolio company this morning, and they do a lot of ingestion of this kind of stuff. From the customer's perspective: would they rather we did it, at some economic payback - it's cost us very, very little to provide the service, but we have it within our process, within our governance structures, within a deterministic process that we would naturally follow, looking for exceptions, errors, tax reporting, whatever it might be? It's a natural decision for someone to say: well, I'm opting into that, as long as you can provide it to me.

If you sit on your hands for three years and I feel like I'm losing traction, well, I will go somewhere else. But if you're providing it for me and you're telling me how this is going to work and I can see this works, and frankly, it's a part of my business, it's a part of my back office, but it's not my absolute day job. This doesn't make my entire business work better. It just makes it work a bit more efficiently. I think it's an obvious proposition to go to the incumbent as long as the incumbent is building.

David Toms

And are we seeing proof points of that coming through the portfolio in terms of product launches and so on?

Matthew Brockman

Yeah, yeah, I would say so. We have well over 100 projects that we've supported as Hg across the portfolio, and there are real revenues coming from many of those projects. And then we have a whole load in build that I'm aware of. And it is that theme: there is something that is now doable with the technology that probably wasn't doable two or three years ago. The customer is keen to evaluate it, develop it and build it, but they're looking to their existing technology or software provider to think about how it gets built and deployed in a way that's consistent with how the business operates.

David Toms

And if I think about what all this means for us specifically as Hg, rather than the industry broadly - from a capital deployment and exit perspective, as CIO, how does it affect your thinking about what we invest in and what we don't?

Matthew Brockman

I mean, my main focus at this stage - we recently completed a fundraising cycle - is how we reflect what's happened. And I think the key thing we're looking for is a bit of patience, right? We're not someone who's going to try and call cycles. We are going to try and patiently build technology companies. I think there's a huge opportunity in the next 5 or 10 years for us to build some very successful technology companies.

In many ways, our competitive set, I think, will actually start to thin. For the last ten years, since I've been doing this, we've had more competitors every year, because more people go: oh, I like being in software; I should definitely have a team that does SaaS; they should definitely do some deals. And actually that might start reversing a little bit, which would be quite interesting in itself.

But I think what we need to do is then think, over the next 3 to 5 years of that investment cycle: how do we see these patterns emerging? What capability do we need as a business, not only to identify businesses that will succeed in this world, but to work with them over time to build this out? Because a lot of what we will see at this point in the cycle - and I'm saying the next 3 to 5 years - is that much of our skill will be around finding companies in these market spaces that have inherently strong products and the opportunity to build and gain economics from deployment - more than financial engineering: buy at X, sell at Y.

This is the moment where this kind of deep understanding of markets and products and company organisational structures really counts.

David Toms

If I think back to the last big transition, the SaaS transition, one of the things that really mattered there was getting some early proof points to build internal confidence about how these things worked - so you could then deploy more and more capital with conviction, before other people could build the same conviction in the space.

Matthew Brockman

Yeah, I would say our confidence over the long term - and I do think this is 5 to 10 years, not one or two years - in the power of this technology, and in what it's going to do in terms of automation, is huge. And we already have some proof cases where we have essentially built and deployed products in market, and they're being paid for by customers. And that is only going one way at the moment.

So, I guess we're investing real money in two ways. We're asking our portfolio to actively put, you know, millions of dollars into this product work, and to think about how it impacts the P&L as a result - but not to worry about that, or not to treat it as cautiously as they would have previously. And we've spent a lot of money on our own P&L, saying: how do we have this capability in-house? How do we deploy it? How do we get it into action quickly?

And across both of those, we're probably talking hundreds of millions of dollars - basically saying, look, there's a commitment here in terms of just how much we're going to focus on this across all of our activity, across the whole of Hg - you know, 190 billion of enterprise value, whatever it is. I mean, that's what you'd expect us to spend, I suspect, if you're really thinking about what could be done at this stage.

David Toms

And I guess we may have a slight intrinsic bias in the private versus public debate. But what you have highlighted there is an advantage - not just in owning private companies and being able to pivot them quite fast, without recourse to analysts and investors to explain why EBITDA is not going to hit the target this year, but also in being a private business ourselves, able to take the call to invest tens or hundreds of millions of dollars of, frankly, your and the partners' money into activities like this.

Matthew Brockman

Yeah. I distinctly remember going to our board, when I was CEO 18 months ago, asking for some money to hire some software developers to work with Chris Kindt and his team in our organisation - being given a very quizzical look by my fellow board members, but basically being given the licence to go hire.

And that turned out to be a very, very good idea. And it wasn't my idea. It was Chris Kindt's idea, but it was a very smart thing to do at that stage, given that we were now going to accelerate this. And so, I could afford to do that because I didn't have a public stock, and I didn't have a public earnings target that I had to hit. And I had flexibility. And I think that actually catalysed our program, right. We call it Catalyst. But that was the kind of origin of like, shall we go and try this and see how well this works? And now we're, you know, doubling, tripling up on that as quickly as we can.

David Toms

Yeah. So rapid decision making to address an expanding TAM and a big shift in the market.

Matthew Brockman

Yes. Yeah. And the flexibility to do it.

David Toms

Yes. Yeah. So, in terms of bringing some of this to a close: I was thinking about AGI, one of the themes that gets talked about a lot - the rush for AGI. Clearly, in the last few years, we've seen software pass the Turing test in a very impressive way - a test which I remember, from my younger days in coding, was viewed as quite a difficult target. So I was wondering what a Turing test would look like from your perspective. Looking at this world, how would you determine that we've actually got there - that this is now an AI-first world in everything we see?

Matthew Brockman

Well, I'm entertained a little by some of the commentary in the podcasts around, you know, could you build a $1 billion company with one person? And obviously that's a great idea - that at some point it can be done, right, with the power of AI. I would think about it in our context: could I raise a $1 billion fund with no people in it? So it had an AI - and I went to investors and said: don't worry, there's a virtual Matthew, and a virtual set of activity, and a virtual team that's going to deploy all of your capital for you. They're going to monitor it at the board, and then they're going to work out the exit.

I think that would be the ultimate PE-Hg-Brockman Turing test: whether investors felt comfortable to say, we're not looking you in the eye, Matthew - we're looking at you saying that you believe the model is good enough to do this. I don't know if that's a good answer or not.

David Toms

That's quite impressive. I mean, of course, you could argue that maybe we've already got to that point when people watching this aren't sure whether we are avatars or real. But I can assure those who are watching that this is very much the 3D Matthew.

My own version of it, when I think it through, is that we've got to proper AGI at the point at which Sam Altman is prepared to get in a plane that has been designed entirely by AI software, fly to 10,000ft, and jump out wearing a parachute that has also been designed entirely by AI software and built by one of Elon's robots. At that point, we'll know we can really trust AGI with everything. Until then, I think people probably want their tax return and their payroll to have some degree of human oversight, even if there's a lot more automation of the process - and the same for the pension money they would like us to look after for them carefully.

Matthew Brockman

Yes, yes, the accountable Matthew is the one that really matters, not the virtual one.

David Toms

Yeah. Great. Well, thank you very much for your time, Matthew. It's been a very enjoyable discussion and we look forward to seeing more of you on future podcasts.

Matthew Brockman

Same time next month, right?

David Toms

Yes. Thank you. Thank you.

The views and opinions expressed in this podcast and transcript are those of the contributor and should not be taken to represent the views or positions of Hg or its affiliates.

Statements contained in this podcast and transcript are based on current expectations or estimates and are subject to a number of risks and uncertainties. Actual results, performance, prospects or opportunities could differ materially from those expressed in or implied by these statements and you should not place any undue reliance on these statements.
