Artificial intelligence had a breakout year, with major new developments in disparate fields, from medical biology to defense. AI investors Nathan Benaich and Ian Hogarth, who co-author the annual “State of AI” report, join Azeem Azhar to explore why AI is thriving in those sectors. In addition, they offer their take on the flood of new investments in AI and how we can best keep this technology safe for humanity.
They also discuss:
- Why AI is fundamentally changing how we research medical science.
- How an AI system developed to learn the English language can be adapted to understand gene sequences.
- How the bottlenecks in AI deployment are being exploited by states for competitive advantage.
@azeem
@exponentialview
@nathanbenaich
@soundboy
Further resources:
State of AI Report 2021 (Nathan Benaich and Ian Hogarth)
AI’s Competitive Advantage (Exponential View podcast, 2021)
How To Practice Responsible AI (Exponential View podcast, 2021)
AZEEM AZHAR: Hi there, I’m Azeem Azhar, and you’re listening to the Exponential View Podcast. Now, every week, I have a deep conversation with a brilliant mind, exploring how exponential technologies are shaping our near future. That subject forms the basis of my new book, The Exponential Age, or Exponential, if you’re outside of the US and Canada. The book has been, thankfully, very well-received. And if you are in the UK, I would strongly recommend heading over to Amazon UK straight after this podcast. The Kindle version of my book is currently on sale for just 99p, instead of 20 pounds, which is a fantastic bargain for UK listeners. Now, back onto today’s podcast. In the past 10 or so years, artificial intelligence has really changed the way whole industries work. And that technology still has so much more room to run. Capital is streaming into AI research, into startups, and into established firms as well. And we could be at, perhaps we’ve even reached, the cusp of an inflection point, a point of no return, after which this technology is deeply embedded in the way we live and the way we work. Now, it’s not all sunshine and flowers, though. A race for the best AI and machine learning talent means there’s a growing shortage of capable engineers, and of great scientists and academics to do the research, and that has led to a constant fight between industry and academia. More seriously, AI is starting to be deployed for military purposes, raising complicated ethical questions, and setting up a kind of competition between different countries across the globe. Now, my guests today are two of the most clued-up people on these issues, and the authors of one of my favorite reports, the annual State of AI Report, which looks at the most pressing trends in this domain, from the research through to the policy, through to what companies are doing with it. In the first part of this conversation, I speak to Nathan Benaich. Nathan is the founder and general partner of Air Street Capital, a VC firm that invests in AI-first technology and life sciences companies. He also runs the Research and Applied AI Summit, and is behind Spinout.fyi, a program designed to improve how universities spin out some of their most interesting technologies. Ian Hogarth is an entrepreneur, angel investor, and a visiting professor at University College London, where he works on the intersection between the state and technology. He founded the concert discovery service Songkick, which scaled to 15 million monthly users, and has since invested in more than 150 companies. Now, before we get started, we use a few terms of art during this conversation that I just want to spell out for you. MLOps, or Machine Learning Operations, is an emerging discipline which deals with how we run systems that use machine learning at industrial scale. NLP is Natural Language Processing, a set of approaches to helping computers deal with human written and spoken communications. GPT-3 is a particular AI model which is very good at dealing with human language, and FHI is the Future of Humanity Institute, a research group at Oxford University. Finally, we talk about Isomorphic, a brand new business unit at Alphabet, the parent company of Google and DeepMind, which is being headed up by Demis Hassabis, the founder of DeepMind. Now, despite having brilliant technologists on this podcast, we did have some technology issues with the recording, so please forgive some of the lapses in sound quality.
The conversation is well worth listening to. Nathan, Ian, welcome to Exponential View.
NATHAN BENAICH: Hey Azeem, it’s good to be here, thanks.
IAN HOGARTH: Thanks Azeem, great to be here.
AZEEM AZHAR: Nathan’s got the slight American accent, and Ian has a more British accent, so we can use that to distinguish between the responses that we get. It is a really fantastic, stellar piece of work. It’s 188 pages. Nathan, how does it feel, when you finally hit publish, on that day?
NATHAN BENAICH: It’s incredibly stressful. Feels like the closest a VC gets to a product launch. It’s incredibly exciting. It’s the fruits of many months of work over the summer, and many more months of work over the year, just tracking progress in the field, that Ian and I do together.
AZEEM AZHAR: Nathan, you are a VC, but you haven’t always been a VC. Just help us understand your background.
NATHAN BENAICH: I started in biology. I was actually doing cancer research, on metastasis, which is the spread of cancer around the body, and pursued that from undergrad up until I finished a PhD. At that point, I got much more excited about moving into the technology industry. My particular focus is on businesses that make use of machine learning, in any industry. For the moment, with my venture fund, Air Street Capital, I invest in vertical software companies and life science companies that build machine learning based products.
AZEEM AZHAR: A few days before we recorded this podcast, Demis Hassabis, who’s the founder of DeepMind and one of the leading lights in the current wave of AI development, announced he was creating a new company within Alphabet, the parent of DeepMind, that was going to tackle exactly this intersection between artificial intelligence and biology. And that theme happens to be one of the major themes, I think, that comes out of your report this year. How excited should we be about an announcement like that?
NATHAN BENAICH: I think we should be super excited about an announcement like this, because it really shows that one of the premier research organizations in machine learning, one that’s responsible for a lot of the major breakthroughs of the last couple of years, feels that the next major breakthrough will not only be occurring in biology, but will also be transforming the industry to which it’s relevant. In the State of AI Report, what we profiled last year was that in many ways, biology was experiencing its AI moment, and we forecasted this year that we’d see potentially one, maybe more, acquisitions or IPOs of a major AI-first drug discovery company, and we saw not just one but two happen in the last few months. Overall, we should be super excited.
AZEEM AZHAR: You said that biology was having its AI moment. What is this moment that we should be excited about?
NATHAN BENAICH: Around 2015 or so, there was a major breakthrough showing that machines could understand images better than humans could. And this was around the publication of deep learning models applied to ImageNet, the major computer vision competition. And in many ways, back then, it felt like computer vision was going through its AI moment, being completely transformed by deep learning methods. And so, hearkening back to how ImageNet plus deep learning was sort of the major breakthrough that yielded lots of companies and research papers thereafter, we see much of the same dynamics in biology now, because we have access to huge amounts of data, whether it’s imaging or sequencing or patient data, and then extremely performant models and compute, which together look and smell very similar to computer vision and ImageNet of a few years ago.
AZEEM AZHAR: So the AI moment for vision was deep learning. There was a kind of technological driver. When we come to biology, what is happening now, in the last year or so, that is actually allowing us, either from our understanding of biology, or from our understanding of AI, or the capabilities we have, to make this something of a reality?
NATHAN BENAICH: I would say there are two domains. One is in industry, largely drug discovery, and the second is more in R&D, how research actually gets done, and how we learn new things about biological systems. It’s clear that for many years we’ve had problems such as: drugs cost billions to get to market, running experiments is incredibly expensive and time-consuming, and there are many systematic problems with data collection and data quality in the drug discovery and development funnel. But the promise of bringing software, repeatability, robotics, and machine learning into what is essentially a needle-in-a-haystack search is that you can eliminate some of the errors that humans introduce, and create a much more repeatable discovery process, which ultimately will reduce the amount of money it costs to run a particular experiment, reduce the amount of time it takes to take a drug into a preclinical experiment, and then into the clinic, and ideally get better drugs to patients faster. On the R&D side, a lot of the innovation has actually been taking state-of-the-art models for language or vision, trained on essentially completely different data sets that are not biological in nature, such as human written language, and then essentially seeing if those models can translate into another language, in this case, the language of proteins or the language of mRNA. I think it’s just surprising how many baseline similarities there are between human language and biological language, such that a model that was trained to do machine translation can actually translate the sequence of a protein into its function. That’s opening a ton of possibilities around simulating biology, generating proteins with new kinds of functionality that we want but that don’t exist in nature.
AZEEM AZHAR: For a listener who perhaps doesn’t quite realize where biology has got to today, I mean, I think for many of us, biology was throwing a quadrat into a field, and counting the number of bugs in a square-yard area. Of course, it’s moved on a great deal from that. We also think of AI systems as needing a lot of structured data in some form. What’s happening in the theory and the practice of biology that is taking us from the quadrat in the field, to the ones and zeros that your machine learning systems need?
NATHAN BENAICH: Yeah. One of the most elegant framings of this is hypothesis-driven experiments, which is: I might have an idea that this particular gene causes this disease, so I do something to that gene, and see if that hypothesis remains true. And then you have the power of computation, [inaudible 00:10:32] experiments, which basically means testing many samples at the same time. Essentially, you do hypothesis-free experimentation, otherwise known as hypothesis-generating experiments: generate tons of data, then see what the data tells you, and use that to navigate around. I think that’s one of the major differences in how we do science, driven by the availability of compute, robotics, machine learning, et cetera. Perhaps another, more basic comparison of the difference this makes is: 10 years ago we would take pictures of cells, and those pictures would describe, for example, what a cell looks like in response to getting treated with a drug. Then we would stain the sample to know what we should count to describe that change, and we would do that manually, by pointing and clicking, many, many, many times. Instead of doing that, nowadays we’d use a software system that programmatically calculates those changes. And that very simple idea is what Recursion Pharmaceuticals was originally built on, back in 2013, and it has since scaled to an organization that’s processing billions or probably even more data points a day, systematically, in a way that humans could never accomplish.
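To make that shift concrete, here is a minimal sketch of the kind of programmatic counting Nathan describes, using the scikit-image library on a synthetic image. The data, threshold choice, and blob layout are illustrative assumptions, not Recursion’s actual pipeline.

```python
# Counting stained cells programmatically instead of by pointing and clicking.
# The "microscopy" frame here is synthetic; a real pipeline would load data.
import numpy as np
from skimage import filters, measure

rng = np.random.default_rng(0)

# Dim noisy background with a few bright Gaussian blobs standing in for cells.
image = rng.normal(0.1, 0.02, size=(256, 256))
for y, x in [(60, 60), (60, 180), (180, 60), (180, 180), (128, 128)]:
    yy, xx = np.ogrid[:256, :256]
    image += 0.8 * np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * 8.0 ** 2))

# Otsu's method picks a threshold separating cells from background, then
# connected-component labeling counts and measures each cell automatically.
threshold = filters.threshold_otsu(image)
labels = measure.label(image > threshold)
cells = measure.regionprops(labels)

print(f"Detected {len(cells)} cells")
for cell in cells:
    centroid = tuple(round(c) for c in cell.centroid)
    print(f"  area={cell.area}px  centroid={centroid}")
```

The point is not the thresholding trick itself, but that this loop runs identically on billions of images, which is what makes hypothesis-free experimentation tractable.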
AZEEM AZHAR: We’ve got the theoretical underpinnings; we understand some of the mechanisms within biology better. We also understand the limits of current approaches. We’ve got these new AI systems that can automate parts of the process. We have robots that can automate experiments with a greater degree of precision than human experimenters, and do many more of them. So what does that actually give us, what is the result of all of that?
NATHAN BENAICH: I think it gives us the tools with which to build a better map of biology and understand how systems work and how they fail. More importantly, we can stop being in an era where we need to constantly run physical experiments to understand what we might want to do next, if we want to try and figure out how to cure a disease. We can start actually using this data-driven map of biology to infer what we might want to do next. I think this combination brings us to a much more efficient way of doing science, one that’s more repeatable and higher quality.
AZEEM AZHAR: So was there a hallmark piece of research, a bit of a breakthrough in the last 12 months, that felt a little bit like a Sputnik moment, or a moon landing, in this intersection?
NATHAN BENAICH: I think the overall challenge of predicting the structure and function of a protein or an mRNA molecule purely from its sequence, that overall topic, under which AlphaFold2 falls, along with work from other groups like the Baker Lab, a group at Stanford, and Salesforce Research, is I think one of the most exciting things. So to dig into it a little bit: there’s some work from Salesforce Research around using language models, the same ones that look, stylistically, like GPT-3, essentially taking a string of words and trying to predict the next word in that sequence, using a method called self-attention, which is essentially getting a model to figure out which prior words are the most relevant ones to focus on, in order to figure out what the next word in the sequence will be. So they took models like that, trained them on protein language, which is basically a string of amino acids, and used that model to generate entirely new protein sequences, or versions of sequences that we have in the world today, but where we want to optimize or introduce a new functionality. They were able to show, not just computationally, but also by synthesizing those proteins in cells and running experiments that interrogate their functions, that these language models can generate artificial proteins with new function.
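For readers who want to see the mechanism, here is a minimal sketch of the causal self-attention step Nathan describes, the “score the prior words, then build a weighted summary to predict the next one” operation. The dimensions and random weights are toy assumptions; real models stack many such layers with learned parameters.

```python
# One causal self-attention step over a toy sequence, in NumPy.
import numpy as np

rng = np.random.default_rng(0)
seq_len, d_model = 6, 8                       # 6 tokens, 8-dim embeddings
x = rng.normal(size=(seq_len, d_model))       # token embeddings

# Learned projections (random here) map each token to query, key, value.
W_q, W_k, W_v = (rng.normal(size=(d_model, d_model)) for _ in range(3))
Q, K, V = x @ W_q, x @ W_k, x @ W_v

# Each token's query scores every token's key; the causal mask hides the
# future, so position i can only attend to positions <= i.
scores = Q @ K.T / np.sqrt(d_model)
causal_mask = np.triu(np.ones((seq_len, seq_len), dtype=bool), k=1)
scores[causal_mask] = -np.inf

# Softmax turns scores into weights: "which prior words matter most".
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)

output = weights @ V  # weighted summary of the relevant prior tokens
print(weights.round(2))  # row i: how much position i attends to each prior position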
AZEEM AZHAR: So essentially, there are about 20 amino acids. There’s an alphabet of 20 letters, and as we put them together, we create these proteins, bigger and bigger proteins, which then might be molecular machines that actually do things within an organism. And we can perhaps create proteins to make nice materials, like bioplastics, or we can create proteins that might be therapeutic. That’s the rough idea, is that right?
NATHAN BENAICH: Exactly.
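As a toy illustration of that 20-letter alphabet: a protein really is just a string over the 20 standard amino acids, which is what lets language-model tooling treat it like text. The sequence below is hypothetical, made up purely for illustration.

```python
# Representing a protein as a string and tokenizing it for a language model.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard one-letter codes
stoi = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

protein = "MKTAYIAKQR"  # hypothetical short sequence, for illustration only
tokens = [stoi[aa] for aa in protein]
print(tokens)  # the integer tokens a protein language model would consume
```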
AZEEM AZHAR: And there was something that you said that I found particularly fascinating. You said there’s the Baker Lab, which is a university research lab. That seems to make sense. There’s DeepMind, which is this quite powerful research group in AI. But the thing that suddenly stuck out to me that you mentioned, Salesforce Research. Now, Salesforce, for those who don’t know, is a company that makes software for salespeople, known as CRM software. I found it quite remarkable that that research lab is involved in what is quite fundamental biological research. Was it surprising to you, that Salesforce Research is doing this kind of fundamental science research as well?
IAN HOGARTH: It is pretty surprising, to be honest. They do have a pretty large base of pharma clients, so at some level this kind of research probably acts as a warm introduction to a large category of buyer for Salesforce’s core products. I think that’s in some ways the interesting thing about these corporate research labs. They’re quite hard to direct. You often end up with research directions quite orthogonal to the company’s interests.
NATHAN BENAICH: But I think it also reflects what we discussed earlier, about how surprisingly easy it is to translate billions of dollars of R&D in NLP for human language into a new domain, without having to entirely retool your company, or entirely relearn a new field.
AZEEM AZHAR: So in the case of this [inaudible 00:16:21] proteins, what does that actually mean for us? What does that mean in terms of an industrial application at the other end?
NATHAN BENAICH: I think the most exciting implication of this is to hopefully accelerate our transition away from a petrochemicals-driven world, where most of the materials that we create and consume around us are based off of a very small number of chemical backbones. Instead, we move towards a world where we either use microbes or various living organisms, or perhaps even what’s called cell-free systems, where there is no cell, it’s just a tube with some liquids in it, to create the materials that we need to replace petrochemicals, but potentially even go into new areas of functionality that petrochemicals cannot accomplish.
AZEEM AZHAR: A lot of the original research that came out of deep learning had the aspect of being a little bit like a surface-level toy project. A lot of the initial machine vision models and the generative models would create these sort of fantastic, Van Gogh-like renditions. You would see them showing up in filters on social media products. Is there a direct line between the research and the work that happened in those areas, and the kinds of breakthroughs that we’re seeing now in these more fundamental areas?
NATHAN BENAICH: I think yes and no. There are certainly computational primitives that are shared between processing images on the internet and processing images of cells, and we see that in the form of how … In a way, no, because we’d be remiss not to use or encode or represent some of the immense background information that we have on biological problems into learning systems, as opposed to trying to learn everything from data, or learn everything from scratch. I think that’s where there’s increasing entrepreneurial appetite, actually, to unite those two fields in a way that, for example, Big Pharma hasn’t been doing for a very long time, and in a way that big petrochemical and chemical companies haven’t been doing for a very long time. Because it’s not just technical capability, it’s also the philosophical approach to problem-solving, the way that teams are put together, and where resources are prioritized or deprioritized. So those similarities and differences are almost what creates a new, unique opportunity for entrepreneurship in this field, one that will actually have a shot at making a big impact, and not necessarily being caught up very quickly by incumbents.
AZEEM AZHAR: What about an example of a firm that’s done this and has actually got the results? Beyond the theory, who is making money with these technologies right now?
NATHAN BENAICH: When you think about the pharma space, the business that has demonstrated the most in this field is UK-based Exscientia, which is a drug discovery company. It focuses on lots of different data, depending on what’s available, whether it’s protein 3D structure, a high-throughput experiment of some form, or sequencing data, and effectively tries to generate new kinds of drugs that have capabilities exceeding what chemists have generated to date. We highlighted it because the number one criterion to assess whether any company in this domain has what it takes to be a long-term player in the pharmaceutical industry is whether its software has generated drugs that are actually getting tested in people. It’s the only company that has generated not one or two, but three assets in the last 12 to 18 months that are all in clinical trials, in different therapeutic areas, like neurological diseases and cancer.
AZEEM AZHAR: Mm-hmm (affirmative).
IAN HOGARTH: Can I just give you a juicy thing on the biology stuff? The thing that we highlighted around the ImageNet moment for biology is that you’ve got this huge number of research papers at the intersection of machine learning and biology. We talked a lot about drug discovery just now, but there’s this huge workflow around producing a new drug, that includes lab work, it includes other forms of research within pharma companies, and then finally, it includes things like actually manufacturing and producing the drugs. And there’s a whole host of companies springing up around that process that are doing interesting things. We profiled one called PostEra, which is doing medicinal chemistry as a service, using transformers. They’re tackling some of the stuff that happens later in the process. You’ve got a company like Causaly, which much earlier in the process is helping people inside pharma companies to more effectively analyze all the literature out there around a given target. And then you’ve got all these lab robots popping up, which are basically automating parts of the lab work, whether it’s the vision piece or actually the robotics of manipulating samples. So you’ve got this interesting thing where, throughout the entire value chain, machine learning is being applied, which I think in the medium term will cause a real productivity explosion within pharma, because all these different aspects of the value chain are being disrupted simultaneously.
AZEEM AZHAR: Is that something that happens through many different players in the value chain, or are new entrants going to be full stack in their ability to take up that opportunity?
IAN HOGARTH: The way it appears right now to me is almost like a constellation of mid-size startups sprouting up around the larger pharma companies. So you’ve got the likes of Exscientia and Recursion, who are sort of de novo machine-learning-plus-biology challenger pharmas. But you’ve also got this wave of new service providers, who are coming in, offering quite a narrow service with machine learning as a major enabler of that technology, and selling it in to all the largest pharma companies in the world. It feels like a much larger landscape of machine-learning-enabled software companies sitting around all these big biotechs.
AZEEM AZHAR: Let’s turn to the question of talent, as well. There has been a lot of discussion about the shortage of talent in the AI domain, and that there is a talent war that’s going on not just between companies, but between countries. Ian, I’d love to hear your thoughts on how real that talent war is.
IAN HOGARTH: In the day to day, if I talk to the 50-plus companies I’ve invested in that have machine learning as a core part of their offering, they are all experiencing it. Anecdotally, we see it every single day across the companies we invest in. I’d say the talent dimension is kind of complicated, because now you have so many different ways you could apply that talent. If you gravitate towards frontier research, you can work at an organization like DeepMind or OpenAI or Anthropic. If you’re much more interested in applied work, and you have a specific domain interest, you can go work on a food delivery robotics company, or one of the machine learning meets biology companies we’ve just been discussing. As the machine learning space industrializes significantly, there are just more and more niches to apply that talent. DeepMind are not really competing for the same talent that a number of the startups I’ve backed are competing for, because they’re interested in different problems. The short answer is yes, there’s still a big gap between the number of machine learning engineers one could hire, and the number that are looking for jobs. But I think it’s also become a world of niches, now, where you’re really looking for someone who’s highly specialized or interested in your domain.
AZEEM AZHAR: There’s a data point in your report that I found particularly interesting, which was that AI talent in the US had grown by about 26 or 27% in a short three-and-a-half to four-year period. Is that sufficient growth for what the industry needs?
IAN HOGARTH: The view I’ve taken for a long time is that AI remains under-hyped. If you think about the discussion we’ve had on biology, the same thing is happening across almost every area of human activity where machine learning is starting to find an application and starting to create interesting companies. So you’ve got machine learning creeping into almost every facet of human life. As a result of that, the number of jobs for talented machine learning engineers is only going to keep growing, as the application grows. It’s a sign of, I think, what’s going to happen over the next decade or two.
AZEEM AZHAR: How does that desire for talent actually play itself out? There’s so many different dynamics going on here. We’ve got every industry seemingly needing to use this technology. We’ve got the fact that this technology is general, but it’s also increasingly specialist. You also have competition between countries for that talent, and for their own homegrown talent. How does it evolve? Is it simply a case of salaries going up? Or are there interesting things that nations are doing to tackle what could be a burgeoning shortage of this skill base?
IAN HOGARTH: Back in 2018, I wrote an essay called AI Nationalism, essentially predicting how increasing capabilities of AI systems would lead to nation states playing a different role. The most obvious thing that a nation state can do when it comes to talent is just incentivizing more training, so you get funding of PhD programs, things like that. That’s all happened over the last few years, all these national AI strategies trying to do that sort of thing, including in the UK and the US. I think the big dislocation we’ve experienced is, first of all, most of these machine learning jobs are now remote jobs. This is now a global competition. Secondly, you have machines competing for human jobs, within machine learning. So there are all these machine learning systems, designed to basically reduce the number of humans needed to build a machine learning system. So AutoML being an example of that. I think the two ways in which this extreme sort of supply demand imbalance gets resolved, is hiring people in other geographies to be part of your machine learning team, and using machine learning to not have to have as large a machine learning team. And those will be the two things that bring down the pressure on the supply side over the next few years.
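As a concrete, if simplified, illustration of the AutoML idea Ian mentions, here is a sketch using scikit-learn’s built-in search. Real AutoML systems automate far more, feature engineering and architecture search included, but the pattern of the machine searching instead of the engineer hand-tuning is the same.

```python
# Toy AutoML: software, not a human, searches over model configurations.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = load_iris(return_X_y=True)

# The search tries every configuration with cross-validation and keeps the
# best, work that would otherwise occupy a machine learning engineer.
search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [10, 50, 100], "max_depth": [2, 4, None]},
    cv=5,
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```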
AZEEM AZHAR: We are really automating the automators, in order to shore up the labor shortage.
IAN HOGARTH: We’re trying to, yeah. I think that in some ways you can look at these large pretrained models as a good example of that. If some people work on a very, very large pretrained model that has an ability to generalize, and then make it available for everyone to use, that means a startup doesn’t have to build their own large pretrained model; they can use it and gain those performance benefits. I think we’re finding different ways to basically take machine learning engineers out of the loop, because they are so scarce and so expensive.
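A minimal sketch of that reuse pattern, using the Hugging Face transformers library as one concrete example; the specific task and the default model it downloads are illustrative choices, not anything named in the conversation.

```python
# Pulling a shared pretrained model off the shelf instead of training one.
from transformers import pipeline

# Downloads a pretrained sentiment model on first use; no in-house training
# run, no large machine learning team required.
classifier = pipeline("sentiment-analysis")
print(classifier("This MLOps tooling saved us months of work."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```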
AZEEM AZHAR: It actually does remind me of when I read the history of the Ford production system, back at the turn of the 20th century. There were a very small number of engineers in any given factory, and a very large number of operators who worked on the factory floor, handling each of the production steps. Additionally, outside of the factory, of course, there were a number of mechanics who could maintain the internal combustion engine itself. It almost seems like we’ve seen this picture play out before with technologies that are sophisticated, which is that only a small number of people can really design them, create them, and instantiate them. And then there is a wider spectrum of lower skilled, lower trained people who can keep them running.
IAN HOGARTH: Exactly. If you think about software development, the last couple of decades, it’s just gotten vastly more productive. You’ve got specialized languages, you’ve got so much custom infrastructure around writing software, and machine learning hasn’t really had that. Actually, I would say the space has been kind of not very productive, because the infrastructure around being a machine learning engineer hasn’t really evolved. But in the last few years, it’s really changed, so there’s this whole wave of companies doing MLOps, which is all the infrastructure that makes you more productive when you are building a machine learning system. I think that Nathan and I found the year before this, in the State of AI Report, that 25% of the fastest growing machine learning GitHub repos were for MLOps, and we’ve seen across our investing that there’s a burgeoning number of companies building tools for machine learning engineers to be more productive. So I think you’re spot on with your analogy to Ford, Azeem, as kind of this pre-productivity phase where you have a factory with a lot of people in there, and it’s all a bit messy. As things industrialize, you have a much more streamlined factory with a lot more tooling that saves work.
AZEEM AZHAR: It speaks perhaps to just where we are in the evolution of this technology, which is that although we’re a decade into this current wave, from the academic breakthroughs of 2010 and 2011, we’ve seen AI seemingly appear everywhere, and tens of billions of dollars have been invested, it still feels that, in terms of practical deployment of the technology, we are, to use an American baseball phrase, in the early innings of the game. Is that right?
IAN HOGARTH: I think that’s completely right. There’s a brilliant economist, that I know you’re familiar with, Azeem, Carlota Perez.
AZEEM AZHAR: Mm.
IAN HOGARTH: And she really charts the interplay between financial capital and technological revolutions. I’ve wondered for a long time what phase of her cycle we’re in with AI. I think we’re actually in the speculation phase, where we are still speculating and pouring money into trying to find use cases, in the bubble phase, if you like, of AI, where we have not even vaguely reached the top of it yet. I think the number of attempts to invest in really, really frontier projects is a good example of that, whether it’s the funding of projects like OpenAI, or DeepMind, or Anthropic. You have people putting large sums of money into very, very speculative, long-term research directions. There are tons of discoveries happening, a huge amount of capital coming in, but it’s still insanely early, if you think about the aspirations that these researchers have, and how intelligent they want to make these systems in the long run.
AZEEM AZHAR: That lays out, actually, a question about what the nature of this technology really is. Again, going back to fundamentals, if we think about other general purpose technologies, like the telephone or electricity, or the internal combustion engine, there was a period where they were largely figured out. When we take that analogy and we look at where we are with AI, one of the things that strikes me is that a lot of the hay that has been made over the last 10 years has been, in a way, tweaking and exploiting a lot of the same type of approach. That there is still significant research, significant theoretical work that can be done, that elevates the capabilities of this technology. We could be in one of two different places. One is, we’ve sort of figured it out, and we just have to keep going the way we are going, and we will create enormous industries. The other is, actually there are some significant theoretical breakthroughs that might still be required, and that are outside the purview of pure commercial exploitation.
NATHAN BENAICH: Doing this report, it’s always fascinating to see where research is and where industry is. In many cases, they’re actually sometimes quite far apart, in the sense that industry is applying, from a research standpoint, fairly basic techniques, but still capturing a lot of economic value, and creating a lot of economic value for their customers. Solutions like fraud detection, which protect payments online, are probably not as complicated, technically speaking, as multi-agent reinforcement learning, or pick whatever sexy buzzwords you want from your favorite arXiv paper, but they create a significant amount of economic value. So, in that sense, we still have a lot more to run on just implementing good, reliable machine learning systems that are not going to hit the cover of Nature, but are going to create a lot of economic value for consumers and businesses. Then we have domains which might need more breakthroughs in fundamental learning capabilities, which are potentially the more scientific domains or the more, quote unquote, deep tech domains, or areas like autonomous driving, et cetera, that are still further out and need more development.
AZEEM AZHAR: Apart from autonomous driving, what can’t we do with the AI technologies that we have today?
NATHAN BENAICH: It’s hard to know what we can’t do, versus what we know we can do very well. So things we can do very well are in incredibly controlled environments, where things don’t change a lot, and where the future is not too hard to forecast, it’s sort of one of several things. But as soon as you need to properly interact with the real world, and all of its nuances and peculiarities, and potential futures that are impossible to forecast, then that’s where you need a system that generalizes well, that can learn from small amounts of data, that is interpretable, and that propagates this notion of: I’m not so certain about this, and that has implications for how I make the next prediction. All those problems are still research-grade problems. So any industrial solution that needs hints of those ingredients is going to take a little bit longer.
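One common way to get that “I’m not so certain about this” signal is to read disagreement across an ensemble as predictive uncertainty. Here is a minimal sketch on synthetic data; it is one illustrative technique among several (Bayesian methods, conformal prediction), not necessarily the one Nathan has in mind.

```python
# Ensemble disagreement as a crude uncertainty estimate, on synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + rng.normal(0, 0.1, size=200)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Inside the training range the trees agree; far outside it they diverge,
# flagging predictions the system should not act on confidently.
for x_new in [0.5, 10.0]:
    per_tree = np.array([t.predict([[x_new]])[0] for t in model.estimators_])
    print(f"x={x_new:5.1f}  mean={per_tree.mean():+.2f}  std={per_tree.std():.2f}")
```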
IAN HOGARTH: Anecdotally, there’s this funny thing that happens whenever a leading AI researcher has children. They sort of suddenly realize quite how unsophisticated the systems are. You get these great anecdotes about, my three-year-old can do this thing that the cleverest AI system on the planet can’t do yet, in terms of generalizing from sparse data, or forming an association between a sound and an image and a word, all at the same time. I think common sense reasoning, there’s just a vast long list of things that machines can’t do that humans can, for the time being.
AZEEM AZHAR: Is it like one of those technologies that are always five years away, because the frontier keeps moving as we learn more, as we develop the technology further, and we start to realize how big the problem really is? This is what seems to have happened with autonomous driving. In 2016 or 2017, Elon Musk said that by the end of 2018 or 2019 we’d have cars driving autonomously from New York to LA. It’s a pretty tough ask to get them to drive autonomously even in well-mapped cities. Is that just a function of how research works, that as you peel back the onion, you realize there are many, many more layers to work your way through?
NATHAN BENAICH: Totally. I think that also opens the opportunity of saying: this small subset of problems is actually really valuable and useful in itself, and can be solved with the tools we have available today, so can we wrap a solution around that, and expose it as a solution to the market? I think that’s what we’re seeing: last-mile delivery, or teleoperated vehicles doing delivery in dedicated zones, emerging as actual short-term product opportunities that have only emerged because we tried to solve the grand challenge of AV on central London streets.
AZEEM AZHAR: We’ve narrowed the problem down by putting these different constraints on it. I want to turn to one last topic. One of my previous guests wrote a book called AI Superpowers, Kai-Fu Lee, AI Superpowers: China, Silicon Valley, and the New World Order, in which he described a geopolitical tension that was emerging, because this technology was so powerful. What are we learning about that hypothesis?
IAN HOGARTH: I think that in some ways the dispiriting facts of 2021 are the things that Kai-Fu and I have been worrying about, which essentially boil down to this: machine learning, as a general-purpose technology with dual-use concerns, can trigger race conditions between states and great power conflict, as states come to see machine learning as a differentiating technology for themselves. I think the section we did this year on military AI, even for me, who’s been predicting some of this stuff, and worrying about this stuff for a while, was still pretty alarming: we are now using machine learning systems in production, in a military capacity. That was a line that was crossed in this last year. So you’ve got multiple examples of where-
AZEEM AZHAR: Where was that happening?
IAN HOGARTH: I’ll walk you through the three examples one by one. The first example was in Israel, where they used an AI guided drone swarm, in the Gaza attacks. There were swarms of drones that were controlled by a single operator, that were coordinating together. Israeli military intelligence called that the first AI war.
AZEEM AZHAR: Mm-hmm (affirmative).
IAN HOGARTH: You also had the US, where they used an AI copilot on a U-2 spy plane. Finally, there was this test in the US of an autonomous Skyborg, so basically mission autonomy with an unmanned air vehicle. The idea there was, rather than replacing human pilots, you’re providing situational awareness and survivability, but there were a number of attempts to basically take the human more and more out of the loop in these weapons systems. You’ve got these three examples where we started using this stuff in production, and the amount of investment that’s going into military AI is huge. That, I think, does suggest the things that Kai-Fu and I have been worrying about are starting to materialize a little bit. It’s unclear where it goes from here. Obviously, it also forces a conversation about whether we should be doing that, and what kind of broader conventions are needed around autonomous weapons. Ultimately, a big motivation for Nathan and me spending a couple of months every summer writing this report and putting it on the internet for free is to raise awareness of these topics, so there can be a more informed conversation about the state of this technology, and how entrepreneurs, policy makers, and politicians should be thinking about it.
AZEEM AZHAR: There’s a couple of different issues, I think, in my mind, around the sort of geopolitical dimension of it. There is the idea that you end up with the militaries wanting to compete on the basis of their ability to use this technology, because it’s helpful. If it helps you identify anomalous cancer scans, it can also help you identify the movement of troops from satellite images more quickly. If you can use it to optimize the delivery of same-day groceries, and the routes the drivers take, you can use it to optimize the logistics and supply lines to troops in the field. So there are all these applications that can be used. That’s part of the dynamic. I guess the other part of the dynamic is about the application of these technologies within industry, in the form of economic competition. And then the third area you might have is, are there particular choke points that one adversary can squeeze to weaken the other’s ability to act in this area? I think specifically around AI, we’ve discussed the one that relates to talent, and a lot of people build their AI skills in America and then go back to some other country. But AI also requires tons of computational power, and we know that a lot of the compute power is produced by the chip manufacturers in Taiwan, using equipment that is made by a single company in the Netherlands, ASML. You have this really interesting interplay. I’m just curious about what you observe in terms of how nations are now engaging on those issues. What’s different today, compared to where we were a couple of years ago?
IAN HOGARTH: A good example of that would be Microsoft. It’s a very, very successful US technology company, and a big reinvention has happened there. But Microsoft just secured a 22-billion-dollar contract with the US government, and has effectively become a defense prime in the process, by supplying a huge number of HoloLens headsets to military personnel. You’ve got this interesting example where you’ve got the commercial context of HoloLens being used for all sorts of work purposes, but it’s actually also a real asset if you are the US Army, thinking about how you do next generation warfare. The same is happening with computer vision, with natural language processing, with cyber security. In all these areas, your strength in the national economic domain crosses over into the national military domain. That is what ultimately sets these race conditions in motion, because in order to compete militarily, you have to be able to better compete economically, and what you really need is access to some of these choke points, where you have a thing that other countries don’t. But I think in the longer term, it just means that the onshoring of a lot of this core industry is going to happen in any great powers that find themselves in conflict. So you see it with China’s desire to build up their semiconductor base. You see it with the US’s desire to bring advanced manufacturing back on shore. You’re basically getting this kind of doubling down on national strength. Ultimately, for most of us humans on planet Earth, that presents a risk, because you’re having large actors trying to move a technology forward in an adversarial relationship. Ultimately, the spillover there will be less attention on AI safety, and more of a chance that these advanced AI systems aren’t well-aligned with humanity’s goals.
AZEEM AZHAR: It’s clear that this kind of competition can create all sorts of risks. It also strikes me that some of these risks end up being broader than risks for individual nations, or individual blocs, because our systems still remain very, very interconnected, and escalation between automated systems that goes out of control could also create lots of risks. With that in mind, Ian, what can we do?
IAN HOGARTH: One thing that really struck me, when we were compiling the report this year, is that the machine learning community is starting to emphasize that we need to be investing more effort in AI safety research. There was a great survey from Cornell, Oxford, and U Penn, that basically looked at a bunch of top machine learning researchers, and asked them what we should be doing. An overwhelming majority, 68% of them, thought we should be investing more effort in AI safety research. That was up from 49% in 2016. So you’ve got this sense that the community feels we’re not investing enough. We did some primary research this year, where we looked across the leading AI research labs: how much effort are they currently making in this space? What we found was that there are fewer than 100 researchers, across the top seven labs we identified, places like FHI, Stuart Russell’s group, DeepMind, OpenAI, Anthropic, working on this area called AI alignment, which is a field of research that explores how we can make sure that these powerful AI systems have goals that are aligned with humanity’s. We’ve got maybe 100 people across these organizations, let’s say 200 in the world. That to me feels a little bit like working really hard on nuclear fission without having any people thinking about how one might control a chain reaction. I think the biggest thing that could really tilt this for the better is if we start to pay off some of that safety debt, and have more people working in that area, more organizations committing a meaningful fraction of headcount towards AI safety and AI alignment research, and governments starting to say: if we are going to be encouraging an acceleration of this technology, if we are going to be using it in a military context, we need to support a significant research effort in AI alignment and AI safety. Rather than just funding PhD places across the board, let’s fund some PhDs explicitly focused on making this go well for humanity.
AZEEM AZHAR: What headline would you most like to see in the 2022 edition of your State of AI Report?
IAN HOGARTH: Something like, Open Source Strikes Back. Because the thing that really got me the most excited in this report was an open source alternative to the work that DeepMind and OpenAI have been doing, an organization that is a lot more decentralized, working on the same problems. I think it would be something about an open source effort gaining massive traction and starting to weaken some of these nationalistic effects.
AZEEM AZHAR: Fantastic. Nathan, the same question to you, what headline would you most like to see in the 2022 edition of the State of AI Report?
NATHAN BENAICH: I think I’ll one-up our statement from last year, that biology is experiencing its AI moment, and just state that science, more broadly, is experiencing its AI moment, and hopefully we’ll see many more fundamental breakthroughs that, much more rapidly than in the past, generate industrial-grade proof points that the technology has huge promise.
AZEEM AZHAR: Ian and Nathan, thank you so much for your time today.
NATHAN BENAICH: Thanks, Azeem.
IAN HOGARTH: Thank you for having us.
AZEEM AZHAR: Well, thank you for listening. Look, if you’ve enjoyed this conversation, I have spoken to so many brilliant scientists, investors, and founders in the field of artificial intelligence. It’s literally the who’s who, so I strongly recommend that you go back through the archive, all the way to season one, and dig through many of those past episodes. To stay in touch, subscribe to this podcast or my newsletter, at https://www.exponentialview.co/. This podcast was produced by Mischa Frankl-Duval, Marija Gavrilov, and Fred Casella. Bojan Sabioncello is our sound editor, and Exponential View is a production of E to the Pi I Plus One Limited.