Orbit 27
What does AI mean to you?
A catch-all term sweeping the tech world, ‘Artificial Intelligence’ couldn’t be more exciting, daunting and, inevitably… vague. Thankfully, Hg’s Head of Data & Analytics, Amr Ellabban, learned his trade alongside Nebula.io Co-Founder and host of the SuperDataScience podcast, Jon Krohn.
Episode Transcript
Amr Ellabban
Welcome to Orbit, the Hg podcast series where we speak to leaders and innovators from across the software and tech ecosystem to discuss the key trends that change how we all do business.
My name is Amr Ellabban. I’m the Head of Data & Analytics at Hg, and I’m thrilled to be joined today by Jon Krohn, the Co-Founder and Chief Data Scientist at Nebula.io, author of ‘Deep Learning Illustrated’, a bestselling introduction to AI, and host of the SuperDataScience podcast, the most listened-to data science podcast in the world.
We’re speaking today, tucked away in a little side room at Hg’s Digital Forum here in London, where Jon will be giving the opening keynote tomorrow morning.
Jon, thank you so much for joining us.
Jon Krohn
Yeah, my great pleasure, Amr. Thank you for inviting me. We’ve known each other for an absurdly long time.
Amr Ellabban
Sixteen years, we were saying.
Jon Krohn
Years, yeah, and I’d like to think we don’t look that old, but thankfully people are mostly just listening to this, so they don’t have to decide for themselves.
So we were doing PhDs at the same time and both of us basically doing AI research or data science research before those were widely used terms to describe what we were doing. Although maybe your stuff – Machine Vision stuff – I guess people had been calling that ‘AI’.
Amr Ellabban
They weren’t even being used at the time, right? I didn’t hear the term ‘Data Scientist’ until I’d been working in the field for a few years post-PhD. So we were doing it before it was cool.
Jon Krohn
Yeah, if doing PhDs in the Sciences is cool. And yeah, so absolutely have always loved analysing data, looking at data, building models with data, trying to make predictions with them… During my PhD, I was doing it in genomics and brain imaging, and then afterwards worked a bit in finance myself. So I was doing high frequency trading at a hedge fund using algorithm-driven strategies – sub-second kind of trading strategies – and then I worked briefly in marketing, automating digital advertising: a very common data science application – the most common data science application?
Amr Ellabban
We’ll be hearing about it tomorrow at The Forum.
Jon Krohn
Yeah, I bet. Then I found my love for Tech startups. So, for the last nine years I have been working in Tech startups: two of them.
So one, called untapt, was acquired in 2020, and now I have the joy of being the Co-Founder of the startup Nebula. I’m working with the Founder of that previous company, and our third Co-Founder is the CEO of the holding company that bought it.
So, a really close group of Founders: we get intellectual property continuity and have been able to get to market relatively quickly, into private beta with our first product just in the last few months.
Amr Ellabban
That’s very cool. What does the product do?
Jon Krohn
So it is for finding people with natural language. There are lots of tools out there for finding talent that you’d like to hire, or sales leads, but all of them are keyword-based. We use natural language processing algorithms, which are AI, I guess? Everything seems to be AI.
Whatever AI is, NLP seems to be squarely in the middle of it, especially with recent applications that we see – and I know that we’re going to be talking about that later on the show – but basically, with our platform, you can use these really high-powered, really nuanced natural language processing algorithms that understand the meaning of what you’re looking for, as opposed to using keywords.
So you can find twelve times more of the most relevant candidates or sales prospects or other kinds of people that you’re looking for relative to a keyword-based approach. Twelve times is a lot.
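To make the distinction concrete, here is a minimal, illustrative sketch of meaning-based matching versus keyword matching. It is not Nebula’s actual implementation; the model name, library and sample profiles are all assumptions chosen for the example.

```python
# Illustrative only: keyword matching vs. semantic (embedding) matching.
from sentence_transformers import SentenceTransformer, util

profiles = [
    "Built fraud-detection models in Python at a retail bank",
    "Led a team shipping recommendation systems for e-commerce",
    "Experienced barista and cafe shift supervisor",
]
query = "machine learning engineer with financial services background"

# Keyword approach: misses the first profile entirely, because it never
# uses the literal words "machine learning" or "financial".
print([p for p in profiles if "machine learning" in p.lower()])  # []

# Semantic approach: embed the query and the profiles, then rank by cosine
# similarity, so "fraud-detection models ... bank" scores highly.
model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed example model
scores = util.cos_sim(model.encode(query), model.encode(profiles))[0]
for score, profile in sorted(zip(scores.tolist(), profiles), reverse=True):
    print(f"{score:.2f}  {profile}")
```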
Amr Ellabban
That’s super cool and it’s definitely a problem that I know lots of companies struggle with. But, actually, I want to pick up on something that you said around NLP and where it sits in the world of AI. One of my personal bugbears is the term ‘AI’, right, and just how generic and vague it is. What does AI mean to you?
Jon Krohn
So AI is a buzzword. The popular press seems to use it to describe whatever innovation is coming next. So in the 90s, it might have been optical character recognition or being great at chess. But today those kinds of things are humdrum for machines. And so AI is used to describe maybe like self-driving cars. I think generally when you talk to laypeople about AI, they’re thinking about something humanoid, robotic. And that’s almost never, you know, what we’re doing with AI.
So AI is a buzzword with moving goal posts. But within the field of AI there are specific disciplines that are definitely real and easy to define. So in the 90s, the chess computer that beat the world’s best chess player, Garry Kasparov, was IBM’s Deep Blue. It was what they call an expert system. And with those kinds of expert systems that were popular then, how the machine responded was determined by lots of IF statements and FOR loops that programmers specifically wrote, based on consulting with, in this case, chess experts.
Today the approach that is by far the most popular within artificial intelligence is machine learning. With machine learning, unlike the expert systems, we don’t prescribe how the machine should produce outputs based on the inputs. Instead, we provide it with training data to learn from, and it figures out what the optimal outputs should be given some input.
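A toy sketch of the contrast Jon describes, with invented ‘features’ (material advantage, king exposure) standing in for what a real chess system would use:

```python
# Expert system vs. machine learning, in miniature. Features are hypothetical.
from sklearn.tree import DecisionTreeClassifier

# Expert system: a programmer hand-writes the decision rules after
# consulting domain experts.
def expert_evaluate(material_advantage, king_exposed):
    if material_advantage > 3:
        return "winning"
    if king_exposed:
        return "losing"
    return "even"

# Machine learning: we supply labelled examples instead of rules, and the
# model infers the input-to-output mapping itself from the training data.
X = [[5, 0], [4, 1], [0, 1], [-2, 1], [1, 0], [-4, 0]]  # [material, exposed]
y = ["winning", "winning", "losing", "losing", "even", "losing"]
model = DecisionTreeClassifier().fit(X, y)

print(expert_evaluate(3, True))  # rule-driven output: "losing"
print(model.predict([[3, 1]]))   # data-driven output, no hand-coded rules
```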
Amr Ellabban
I mean, that makes total sense. Where have you seen applications of machine learning in the last decade, and how have businesses been using it to drive impact?
Jon Krohn
Yeah. So in 2012, there was a watershed moment in the history of AI when a particular algorithm, called AlexNet, was made public. So AlexNet was put into a machine vision competition and it absolutely crushed all of its competition. There was no comparison.
The remarkable thing about this was, all of the other teams that the AlexNet architecture crushed were huge teams – international collaborations. I think you know a lot more about this – having written a lot more about it – than I do, Amr, so you can correct me on some of the context if I get things wrong. But most of the submissions into this machine vision competition were done by huge collaborations where – although it was machine learning, so they were training on data – the researchers were coming up with features that they were trying to extract from the raw data.
So some of the images in this machine vision competition were of dogs. So you would have people that specialise in coming up with features that would allow you to identify what a dog is. This AlexNet architecture did away with all of that. Instead of having these huge teams of people coming up with all these features, the AlexNet team used a specific kind of machine learning called ‘Deep Learning’, wherein you allow the algorithm not only to learn from data, but to extract all of these features automatically.
So Deep Learning architectures have this inbuilt hierarchy. That’s why it’s called Deep Learning: it has these layers of processing, and the more layers that you add, the more complex, the more abstract the representations that the machine learning model can learn.
And so this AlexNet architecture was created by three people at the University of Toronto who had no expertise in the kind of data that they were working with – unlike all the teams they were competing against – but they just allowed the Deep Learning model to learn the features automatically, and it absolutely crushed all the competition.
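As a minimal sketch of that layered hierarchy, here is a toy convolutional network in Keras. It is not AlexNet itself – the real architecture had five convolutional layers, three fully connected layers and roughly 60 million parameters – but the principle of stacking layers so that later ones learn more abstract features is the same.

```python
# A toy deep network illustrating the layer hierarchy; not AlexNet itself.
import tensorflow as tf

model = tf.keras.Sequential([
    # Early layers learn simple, local features (edges, colour blobs)...
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(224, 224, 3)),
    tf.keras.layers.MaxPooling2D(),
    # ...middle layers combine those into textures and parts (an ear, a snout)...
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    # ...and later layers represent whole, abstract concepts ("dog").
    tf.keras.layers.Conv2D(128, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),  # 10 hypothetical classes
])
model.summary()  # note: no hand-engineered features anywhere in the stack
```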
So that was 2012 and, since then, Deep Learning has turned out to be enormously useful across a number of different application areas. So Machine Vision is one. Another is a field called Deep Reinforcement Learning, where you have machines make a series of actions in a kind of complex space. So a board game, a video game or a humanoid robot navigating the real world are examples where Deep Reinforcement Learning works really well. But in the last couple of years, the most exciting applications of all have come from Deep Learning being applied to solving natural language problems: like Nebula does.
So that’s things like the ChatGPT algorithm that is taking the world by storm. But even in the last few years – like five or six years ago – speaking to your Amazon Alexa or Siri on your iPhone had a much higher error rate than after they started incorporating deep learning. So deep learning has started to become ubiquitous within your devices and it has allowed them to take on lots of magical-seeming properties just in the last couple of years, ChatGPT probably being the most exciting recent example.
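Stepping back to the reinforcement learning idea for a moment: the ‘series of actions’ loop can be sketched with classic tabular Q-learning. Deep Reinforcement Learning replaces the lookup table below with a deep network so it can handle vast state spaces, but the learning loop is the same shape. The five-cell corridor environment is invented purely for illustration.

```python
# Tabular Q-learning on a tiny invented corridor; deep RL swaps the
# table for a neural network but keeps this trial-and-error loop.
import random

N_STATES, ACTIONS = 5, [-1, +1]        # walk left or right along a corridor
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.1, 0.9, 0.2  # learning rate, discount, exploration

for episode in range(500):
    state = 0
    while state != N_STATES - 1:       # the reward sits at the right-hand end
        # Explore occasionally; otherwise act greedily on current estimates.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        nxt = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if nxt == N_STATES - 1 else 0.0
        # Nudge the estimate toward reward plus discounted future value.
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = nxt

# The learned policy is to move right (+1) from every non-terminal state.
print({s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)})
```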
Amr Ellabban
I do actually remember when AlexNet came out and smashed every computer vision algorithm that existed. I was pretty much wrapping up at that point. So I felt…
Jon Krohn
Well, almost everything I just did is useless!
Amr Ellabban
Pretty much! It was a little bit crushing for me personally as well but, hey, life moves on. But I mean, you touched on GPT, and you have to be living under a rock to have missed the huge amount of hype that it’s been getting lately. We’ve seen all kinds of sensationalist headlines about how it’s gonna replace people’s jobs: you don’t need software engineers anymore, you just need ChatGPT to write your code for you. I mean, how realistic is that? Are we going to replace software engineers in the next ten years?
Jon Krohn
No, but we are going to replace some of the tasks that software engineers do – and that lots of other kinds of roles do. It isn’t just highly technical roles like software engineering. People who are generating copy for social media posts or digital ads, you know, those are examples.
Now, ChatGPT can draft a script for you quickly, but ChatGPT makes mistakes. It will lie very confidently. So you need to have a human at least work with the results and confirm that they’re accurate before you publish them publicly or make big business decisions based on ChatGPT’s output. So I don’t recommend that you take the human out of the loop, as it were. However, what ChatGPT is great for – just like all preceding automation technologies, going back centuries to mechanised farming or, you know, robots appearing in factories – is taking away, piece by piece, the most repetitive, boring parts of work that you don’t really want to be doing anyway as a human.
Like all of the preceding enormous innovations in automation and AI, ChatGPT will take away some of the drudgery of work that people do, freeing up more of your time to do stuff that people generally like more: landing deals, negotiating them, and being able to think at a high level about strategy and creativity.
That also is why the software engineer isn’t going to be replaced, because even while ChatGPT is able to suggest some lines of code to you, it isn’t able to help you architect the entire system. Well, I mean, it could help you to generate ideas as to how to do it, but it’s not going to be able to weave together all the different parts of the system that you have. I would be shocked if it could be architecting enormous software applications in the next ten years.
Amr Ellabban
Yeah. I mean, I completely agree with everything you’ve just said there. ChatGPT is essentially a good aid to people as they try to get started on a problem. I may have even used it to generate some of the questions I’m asking you today.
Jon Krohn
He’s actually just typing into a ChatGPT terminal in real time here. As we’re recording, everything that he says is based on…
Amr Ellabban
You’re not really speaking to a person at all. It’s just ChatGPT being fed into one of those text-to-speech software tools.
Jon Krohn
Yeah. We shouldn’t tell the listeners it’s actually just two ChatGPTs talking.
Amr Ellabban
I guess, in all seriousness, it’s picking up a lot of traction and getting a lot of excitement but – you sort of touched on this a little bit – given the rapid growth of AI usage, what are the potential risks? I mean, can we really trust AI to be used in high-stakes environments? I’m thinking of things like healthcare and security.
Jon Krohn
Yeah. So let’s get into those high-stakes environments in a moment, but let’s tie this back into the preceding question first. So while I said that automation and AI won’t displace software engineers, there are certain careers that will be displaced, because so many of the tasks they perform will increasingly be done by automation, and so it is important to think about those people.
I think a big risk as a society is that, with how quickly technology changes, we can’t rely on somebody learning some trade, or getting some university education or diploma or whatever – some kind of formal education – and that lasting them until they retire. There’s no way. So governments around the world need to be thoughtful about having affordable, well-marketed retraining programs available for all kinds of people. And I think that, along with globalisation, automation has played a major role in the backlash – the right-wing backlash, even the rise of nationalism that’s been happening in politics; automation plays a role in that. So there’s a big risk there to society at large.
So yeah, I think that’s something that we need to be wary of. Even though study after study suggests that, historically – as well as probably with the current AI innovations – more jobs are created than are lost, some specific people’s jobs are lost, and they’re not always going to be happy about it.
I think all of us get into, you know, habits, and then you’re not always super-inclined to see change. But I think that those kinds of motivations can be put in place, and you can say: “Look, you can come out of the mine and you can do this other kind of role where it doesn’t have to be physical labour all day. It’ll be easier. It can be more creative, more social, and you can make more money. You’re just gonna have to learn this new thing.”
Anyway, so your original question was about risks. But yeah, you were asking about healthcare and security specifically.
Amr Ellabban
Or, I guess, having AI make decisions in high-stakes environments.
Jon Krohn
In high-stakes environments, yeah. So there are situations where you should not have a black-box algorithm. Take this AlexNet architecture that we were talking about, this deep learning architecture. While it is extremely powerful and extremely nuanced, it has so many of what we call ‘model parameters’ that it is impossible, as a person, to interpret how some particular input directly relates to an output. There are just so many permutations of what could happen across all those millions of parameters; in the biggest models today, we have hundreds of billions or, in some cases, trillions of model parameters. So we call them black-box models, because – even though there are some ‘explainable AI’ tools that give us some sense of what’s going on in there – we can’t fully account for their decisions. And for decisions that have a really direct impact on a human’s life – like how long a prison sentence should be, whether somebody should be approved for a home loan, or whether somebody should receive some very expensive kind of medical treatment that the government will be paying for – these are enormously life-changing or potentially life-threatening situations. So in those circumstances, it might make more sense to use a lower-powered model, where we understand exactly how every change to an input impacts the output, and where an expert committee of people has approved and audited that specific approach.
So I don’t know if that kind of answers your question.
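A minimal sketch of that ‘lower-powered but auditable’ option: a logistic regression whose per-feature weights can be read, challenged and signed off by a committee. The loan features and data below are entirely invented for illustration.

```python
# An interpretable model: every input's influence is one readable number,
# unlike a black box with billions of parameters. Data is invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical applicants: [income in £k, debt-to-income ratio]
X = np.array([[60, 0.2], [35, 0.5], [90, 0.1], [28, 0.7], [50, 0.3], [22, 0.9]])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = loan approved in past decisions

model = LogisticRegression().fit(X, y)

# An auditor can inspect, and an expert committee can approve, exactly
# how each input moves the decision.
for name, coef in zip(["income", "debt_ratio"], model.coef_[0]):
    print(f"{name}: {coef:+.3f}")
```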
Amr Ellabban
I think it does, and it echoes what we see a lot in the companies that we have in the portfolio. We could talk about model explainability, and its links to causality, for hours. But when it boils down to it, when we need to make decisions, sometimes you need to have that simpler model so that people can believe in it more.
Jon Krohn
Absolutely, and that ‘believing in it’ is critical. There’s an effect, and I’ve been trying to dig up its name. If someone listening to this knows what it is, please add me on LinkedIn and, in your connection request, mention the name of this effect…
But in the last couple of years, there was an effect that was named for this phenomenon that we observe in humans: this bias that humans have against machines. They did studies where a human would make a mistake under experimental conditions, or a machine would make the same mistake under the same conditions. And once a machine made a mistake, the subjects in the study didn’t trust it anymore, but when a human made the same mistake, they gave them more chances. So there’s this bias against machines making decisions that we have, kind of innately. So yeah, when situations are critical, we want to make sure that people can believe in the outputs, that they make sense.
Amr Ellabban
It’s a super interesting effect that you mentioned and yes, please do let us know if you know the name of that effect.
Jon Krohn
I actually did a podcast episode on it on my own show – a Five-Minute Friday episode – but I couldn’t immediately find it based on the titles looking back.
Amr Ellabban
I remember, back when we were doing our PhDs, speaking to a professor who was working on self-driving cars which – given that that was, let’s not say just how many years ago – had really impressive tech that was working really well. But we still don’t see those cars on the road today. He was saying exactly what you’re describing: if a human causes a car accident, they might get in trouble, but they’ll probably be back on the road in a few months, a couple of years; they can still drive a car. But if a robot makes one mistake, that’s it for all self-driving cars. So the bar is very different.
Jon Krohn
I think with that one in particular – you know, we don’t need to spend a whole bunch of time on this – it’ll be the insurance companies, because they just have the raw, hard data. If, per mile driven, a self-driving car is ten times safer than a human driving it, then once we have fully self-driving cars, the insurance premiums could be ten times as much if you’re gonna be a human driver. But if you go to the car dealership and select one of the cars without a steering wheel, your insurance premiums are a tenth as much. So it’s not public opinion that will be shifting things; it’s the actuarial sciences.
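As a back-of-the-envelope sketch of that actuarial point, with entirely made-up numbers: if the expected payout scales with the accident rate per mile, the premium ratio follows the safety ratio directly.

```python
# Toy actuarial arithmetic with invented figures: a risk-based premium
# scales linearly with the accident rate, so "ten times safer" implies
# premiums a tenth as much.
HUMAN_RATE = 2.0    # accidents per million miles (hypothetical)
ROBOT_RATE = 0.2    # "ten times safer" assumption
AVG_CLAIM = 20_000  # hypothetical cost per accident, in pounds

def fair_premium(accident_rate, miles_per_year=10_000, loading=1.3):
    """Expected annual payout plus an insurer's overhead loading."""
    expected_claims = accident_rate * miles_per_year / 1_000_000
    return expected_claims * AVG_CLAIM * loading

print(fair_premium(HUMAN_RATE))  # 520.0  (human driver)
print(fair_premium(ROBOT_RATE))  # 52.0   (self-driving: exactly a tenth)
```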
Amr Ellabban
That’s an interesting perspective on it. I want to get your opinion on a difficult question, given that ChatGPT as a technology is something that seems to have come out of nowhere for folks. What could the future hold for AI? What could we expect to become commonplace in the next five to twenty years?
Jon Krohn
So my keynote talk at this Hg conference is all about how rapidly technology changes. And how over the course of a human lifespan, for example, the changes are so dramatic these days that you have no chance of predicting what the world’s gonna be like a century from now.
Even a decade is really tricky. There are areas where I hope we have enormous success in the next decade, even outside of AI, like fusion energy. That’s something that’s really exciting for me; it might be the only thing that I’m more excited about than AI. I don’t know anything about it, really – you know, I have no expert knowledge on it – but this idea of abundant energy being safely created, I think it would solve an enormous number of problems that we have on the planet: geopolitical problems.
Anyway, other than that, I think AI is the most exciting technology today. In terms of how things are going to change over the next ten years: it’s crazy how rapidly things are changing already. I did not think, when ChatGPT came out at the end of 2022, that it was possible for a machine to be able to carry on conversations of that length with that level of human nuance. And I’m supposed to be an expert in this space!
So it just completely blows my mind; some of these innovations that are coming out in recent years are the biggest, most exciting things. Like Dall-E 2, the image generation algorithm that takes in natural language and then outputs a high-resolution image corresponding to that natural language. Or Google’s Imagen Video algorithm, which does something analogous for video: you pass in natural language – you say “I would like, you know, this particular video” – and, while it isn’t as high quality as the still images created by Dall-E, it does create a few seconds of contiguous, sensible video based on that. And these kinds of approaches – and also GPT-3 and ChatGPT, these GPT-series architectures, all the examples that I just gave – rely on what we call Foundation Models. And these Foundation Models are absolutely enormous. So I was talking earlier about model size: we’re talking about hundreds of billions or trillions of parameters. And the authors of these papers are themselves blown away, when these new approaches are released, by the breadth of capability of these models.
To make a model ten times bigger, a hundred times bigger than anyone else is doing, requires a lot of ingenuity in the way that you engineer it; in the way that you coordinate all of the graphics processing units – the GPUs – that you need for training. How do you have them all work together? How do you manage the data flows?
There are innovations around that, but these huge, huge models actually don’t have much in terms of data science innovation – in terms of the way that the modelling is done. So these enormously powerful models that we’ve seen emerge over the last few years: we’re going to see that continue to happen over the next few years, based just on having a hundred times more, a thousand times more, a million times more… There are already models with a trillion parameters… and we’re going to see really powerful emergent effects from that.
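For a rough sense of where those parameter counts come from, a GPT-style transformer has on the order of 12 × layers × d_model² weights in its attention and feed-forward blocks, ignoring embeddings. This rule of thumb and the small-model configuration below are approximations; GPT-3’s published configuration (96 layers, model dimension 12,288, about 175 billion parameters) is from its paper.

```python
# A rough rule of thumb for transformer size: ~12 * n_layers * d_model^2
# weights (attention ~4*d^2 plus a 4x-expanded feed-forward ~8*d^2 per
# layer), ignoring embedding tables.
def approx_transformer_params(n_layers: int, d_model: int) -> int:
    return 12 * n_layers * d_model ** 2

configs = [
    ("small (GPT-2-ish, assumed)", 12, 768),
    ("GPT-3 (published config)", 96, 12288),
]
for name, layers, d in configs:
    print(f"{name}: ~{approx_transformer_params(layers, d):,} parameters")
# small: ~85 million; GPT-3: ~174 billion, close to the quoted 175B.
```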
But there’s going to be something, other than deep learning maybe or other than deep reinforcement learning, that is going to be a game changer, that we can’t see coming, that will bring forth probably even more accurate capabilities, but at a fraction of the model size. Because it’s not even environmentally sustainable to have our solution be, “Okay, let’s just make these models bigger and bigger and bigger and bigger and bigger.”
So I’m going on a bit here, but the idea is that, over the next few years, models will continue to take on even more powerful, more nuanced, human-like capabilities – or better-than-human capabilities – on a broad range of general tasks, thanks to adding more and more model parameters.
But at some point in the next few years, there will be something that we can’t see today. Somebody is working in a lab somewhere in the world on a completely different kind of approach to modelling data that will displace deep learning. We just don’t know what that is yet and, yeah, hopefully they do it with smaller model sizes.
Amr Ellabban
Hopefully we’ll be able to retrain everyone to adapt to that.
Jon Krohn
Yeah. Yeah, it’ll be interesting, because a big gap that needs to be closed next is some understanding of cause and effect, which none of these deep learning approaches have, and that’ll be game changing.
Amr Ellabban
That’s a topic for another day. I think we’re all out of time. So thank you very much, Jon, for joining us and, for folks who want to get in touch and discuss this more, how can they reach you?
Jon Krohn
So if you add me on LinkedIn and you mention specifically in the connection request that you heard me on this show, I’d be delighted to accept your connection request. There’s a limit to how many requests I can take so, if it doesn’t include that specific message relating to the Orbit podcast, the Hg podcast, then I might not accept it, but if it does, I will.
Otherwise you can just follow me on LinkedIn or Twitter. It’s impossible to keep up with all of the direct messages that I get on these platforms. But if you have something that you’d like to ask me that you think other people might also be interested in the answer to, or even if you don’t think they’ll be interested but you’re willing to put it in a public forum, I will answer that for sure, a hundred percent. I always answer all public posts that tag me so that’s a way to ask me anything you want.
On LinkedIn, I’m very easy to find: just Jon Krohn. Twitter is @jonkrohnlearns, and I have a YouTube channel with lots of videos introducing what deep learning is, as well as the underlying mathematics that you need to be an expert at machine learning – things like linear algebra and calculus. You can get those all on my channel, which is also Jon Krohn Learns.
Amr Ellabban
Classic. Well, thank you so much, Jon, for joining us today and thank you, everyone, for listening.