Calum Chace writes about two singularities. The technological singularity is the point at which technology moves so fast that humans can’t keep up any more and an artificial general intelligence takes over. The economic singularity (which Calum says will happen sooner) is the point at which technology takes over the majority of the work we do and there isn’t enough work left for us.
Both of these greatly affect the future of jobs and what humans might be doing. We keep being promised a future of leisure, but we seem ever farther away from it.
Calum explains why it’s important to answer some tough questions now so that they don’t surprise us later on.
What we learned from this episode
-Why millennials and Gen Z could end up being the greatest generation.
-Humans survived the industrial revolution because we had our brains to fall back on, but the horse didn’t do so well. What will we fall back on when all cognitive work falls to machines?
What you can do right now
-We can’t make a foolproof plan for the future, but we can be prepared for the big decisions we need to make.
-We need to start dealing with the looming economic singularity immediately.
Key Quotes
“Science fiction is actually philosophy in a fancy dress.”
“I think a better solution is an economy in which humans don’t do jobs. And we find some other way of generating income, more access to resources for the humans. And humans do whatever we want to do.”
Links mentioned
-Calum’s website: www.pandoras-brain.com
-“Pandora’s Brain”, “Surviving AI”, “The Economic Singularity”, and “Our Jobless Future” by Calum Chace
-“The Technology Trap” by Carl Frey
-“Permutation City” and “Diaspora” by Greg Egan
Today, our guest is Calum Chace. He’s the author of “The Economic Singularity”. And this is Work Minus Jobs. Hi, Calum. How are you?
Hi, Neil. I’m very well. How are you?
I’m excellent. It’s always great to get into these topics: singularities, the far future, automation, and AI. So, this is going to be a great conversation. But first I want people to know a little bit about you. Give us some background on who you are and the types of things you write about.
I was in business for 30 years, and retired inadvertently, involuntarily, in 2011. But that worked out very well because, as I don’t play golf, I pursued another hobby, which is a strong interest in artificial intelligence. Back in 2011, very few people were thinking about that. But I had long thought that AI was going to have a huge impact on all of us, and that it was going to come sooner than people thought. And so, I wrote a novel about it because I wanted to get other people thinking about it. And how do you get people thinking about something? You make a Hollywood movie. Unfortunately, I wasn’t in a position to make a Hollywood movie. So, I wrote a novel in the hope that it might get optioned. It didn’t. And some friends said, “It’s a good book. But there are a couple of nonfiction books hiding in it. You should take them out.” So, I did take them out and published them separately. And so, I’ve written three main books on the future of AI: one fiction, two nonfiction.
And what’s the name of your fiction book?
The fiction book is called “Pandora’s Brain”, and it’s about the arrival of the first super intelligence on the planet. The two nonfiction books are called “Surviving AI”, which is about the technological singularity and super intelligence, and “The Economic Singularity”, which, not surprisingly, is about the economic singularity, that is, joblessness.
One topic I wanted to get into later, but may as well talk about now, is science fiction in general. I’ve recently become aware of the great fun that’s there in science fiction, these really deep topics that people wrestle with, and not just, as you said, “hey, here’s what’s coming next,” but hiding it in a story, letting the story drive the narrative. What’s your background in all that?
Well, I’ve read science fiction from when I was small. And in fact, I think science fiction has a great deal in common with philosophy, and philosophy is what I did at university. I like to say that science fiction is actually philosophy in a fancy dress. Philosophy tries to answer the most fundamental questions we ask as humans, things like: What is truth? What is belief? What is knowledge? Do we exist at all? And it does that by conjuring up thought experiments. Most thought experiments are very similar to the ideas, the possible worlds, that science fiction authors write about. So, I think there’s a strong commonality there. And that’s really why I wrote a science fiction novel about super intelligence, because I wanted people to think more about the huge impact that AI is going to have on us all in the future.
So, I want to go deep into this economic singularity. So, explain what you mean by that term.
So, the term singularity is borrowed from math and physics. It means a point in a process where a variable becomes infinite. The classic example is a black hole: at the center of a black hole, the gravitational field becomes infinite. What happens then is that the laws of physics break down and everything changes. It was first applied to human affairs by a chap called John von Neumann, who was one of the founding fathers of modern computing and a brilliant Hungarian polymath. What he meant by it was that technology was moving so fast that there would come a point where humans couldn’t keep up, and he called that point the singularity. I take it to mean simply the biggest possible kind of change you can have. So, it’s more impactful than a revolution or a disruption or a transformation: everything changes. Now, the term has often been applied to the arrival of super intelligence. That is when we create an artificial general intelligence which has all the cognitive abilities of an adult human. And because computers can be improved, they can be speeded up, they can be expanded, and we can’t, the AGI, artificial general intelligence, will go on to become a super intelligence. And quite quickly, it’ll be many times smarter than the smartest human who ever existed.
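(As an editorial aside, not part of the conversation: a standard textbook illustration of the mathematical sense of the term is the Newtonian field of a point mass, where the “variable becoming infinite” that Calum describes is the field strength as the distance shrinks to zero.)

```latex
% Newtonian gravitational field strength of a point mass M at distance r.
% The variable g(r) grows without bound as r approaches 0: a singularity.
\[
  g(r) = \frac{GM}{r^{2}}, \qquad g(r) \to \infty \ \text{as}\ r \to 0
\]
```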
That’s called the technological singularity. I mean, obviously, we don’t know for sure it’s going to happen. Quite a lot of people are skeptical about it. Seems to me it will, it’s just a question of when. Some people think it’ll be quite soon, 20 years or so. Personally, I think it’ll be quite a bit longer than that but probably this century.
I have named another singularity, the economic singularity, because I think people underestimate the impact of joblessness, which, if it’s going to come, will come well before super intelligence. And that will happen when machines get to the point where they can do pretty much everything that we do for money cheaper, better, and faster than we can. That doesn’t mean that humans will be obsolete. It just means we won’t be able to earn a living. Now, it might be that if and when machines take over all the jobs that we currently do (and we don’t know for sure that it’s going to happen), we will somehow create some new jobs that for some reason machines can never do. And we might want to get into that; I’m a bit skeptical of it. So, I think we need to think about what a successful economy looks like if there are no jobs for humans. Lots of jobs for robots and for software, but no jobs for humans. And I think our job, this generation and the next generation, is to work out how to make that a wonderful world. And I think we can do that.
I like some of your writing where you talked about the greatest generation, a term applied to the people who lived through the World War II era. But you’ve actually said that the decisions made in the next 20 to 30 years, as millennials and Gen Z become empowered and start to make these decisions, will pretty much determine the fate of humanity, correct?
Yeah, I think so. The nature of a singularity is that everything changes. So, the generation naming is quite fun and, coincidentally, very apt. The greatest generation was the generation that fought in the Second World War, and many of them survived the Great Depression before that. Then there were the baby boomers, and I was born right at the end of the baby boom period. The generation that came after that was much smaller than the baby boomer generation, and nobody could quite figure out what they were for, so they got called Generation X. And that meant that the millennials, who were born between 1980 and 2000, were Generation Y, simply because Y comes after X. And the generation after that, which includes my son, as it happens, were born after 2000, and they’re called Generation Z. Now, I think this is terrific, because Generation Y, in a sense, will have to figure out why we are here, since they won’t be doing jobs towards the end of their careers. And Generation Z may be the last generation whose members need to die, the last generation of what we would recognize as normal human beings, because humans will start to change out of all recognition once AI and other technologies like nanotechnology, really advanced prosthetics, and very advanced biochemistry take hold. So, purely by happenstance, Generation Y and Generation Z may be very well named.
And I think it’s interesting, too, that there seems to be a lack of foresight. We started with X and then Y and Z; you can see it coming, there are no more letters after Z in the naming convention. That really reflects a lot of our attitudes toward these things: yeah, there’s probably a problem coming up, but we don’t really want to come up with any solutions right now, which further shifts the burden onto future generations. What comes after Z? Nobody knows.
I guess they’ll probably go back to the beginning and become Generation A of some new species, for whom, if not immortal, death would at least be optional. And they would be, in many ways, probably largely unrecognizable to us.
Well, I’m going to bring up a common argument, which looks back to the Industrial Revolution: “We brought in all this equipment, these machines, they took over farming. Now, we have very few people involved in that industry in terms of manpower, but we’re still doing fine. We still have all these jobs.” Why do you think it’s a bad argument to project that onto the future and say, “Well, we’ll figure out other work to do”?
Yeah, I don’t think it’s a bad argument. It is a fact. And John Maynard Keynes, back in the 1930s, a very smart man, the leading economist of his time, thought that by now, by 2000, we would all be working very few hours a day and we would have a leisure society. He was wrong; it didn’t happen. I’m considerably less smart than John Maynard Keynes, and back in 1980, I thought that by 2000 machines would have taken over a lot of jobs, so I’ve been banging on about this for a long time. And I was wrong; they haven’t. We are, in the UK and the US at least, nearly at full employment, although a lot of people think that a lot of the jobs we’ve got are lousy jobs, or gig jobs, or self-employment that isn’t really a job at all, because people are just pretending to be working. But we are nearly at full employment. So, it’s not a bad argument. However, what we have coming now is a different type of automation. In the past, almost all the automation we’ve seen has been mechanization. And as you say, the agricultural industry is a classic example. In 1800, 80% of working Americans worked on a farm. That number went down to 40% by 1900, and now it’s down to about 1%. And that’s because the machines took over the muscle jobs, which were what most people did. That was not disastrous for humans, because humans had another set of skills to offer: cognitive skills. And now the grandchildren of those farm laborers are working in shops and offices and making podcasts. It didn’t work out quite so well for the horse, because the horse didn’t have anything else to offer once the muscle jobs were taken. There were 21 and a half million horses working in America in 1915, and that was peak horse. Now there are 2 million horses in America, most of them effectively pets. So, that’s very, very significant technological unemployment.
What we have coming next is a different type of automation: cognitive automation. The machines are coming for our cognitive jobs. And we don’t yet know whether we’ll come across some third set of skills after the physical skills and the cognitive skills. Some people talk about spiritual skills or human skills; maybe we’ll all do caring jobs that the machines can’t do. I’m skeptical of that, because what we are seeing in economies around the world is that machines can do caring jobs. Japan is at the sharp edge of the graying of society; the Japanese are quite technophile, and they don’t allow much immigration, so they’ve got a shortage of people to look after their elderly. And so, they’re using a lot of robots. And it turns out that elderly people rather like being looked after by robots. Because all you have to do when you’re providing care to a human is provide a semblance of care and empathy, deliver the services that are needed, and show a complete level of respect. And it turns out machines can often do that better than humans. I mean, some humans are brilliant at it, but quite a lot of humans are not very good at it, and machines are always reasonably good at it. So, I’m skeptical that there’s some new caring economy in which we’re all each other’s therapists and doctors and nurses. I think a better solution is an economy in which humans don’t do jobs, and we find some other way of generating income, more access to resources, for the humans. And humans do whatever we want to do. I think that’d be really rather wonderful: the true leisure society. That’s what we should be aiming for.
So, you can easily picture a future where grandchildren are asking us, “You used to work every day? What was that like?” They wouldn’t even have a concept for that.
Exactly. Yeah, in a sense, I’m actually living proof that this is possible, because my interest in AI has become a sort of job. I take my writing seriously, and I spend a lot of time going around the world giving talks, which I thoroughly enjoy. But I have retired, so I’m doing a job which isn’t really a job. I’m working; I think humans will always work, we’ll always have projects. But arguably it’s not a job. Certainly I’m not an employee and I don’t take orders from anybody. And I think that’s possibly a decent model for people everywhere.
So, let’s assume that this world is possible, where we can actually get to a place where we don’t have to have jobs, so to speak, jobs that we’re tied down to and that determine how we get resources. What are the decisions these new generations need to make as we take control of the technology and push the policies in different ways? Obviously, there are a couple of ways we could go. We could just let it all play out and make really bad decisions that end up destroying our species. But we could also make great decisions that bring out the best. So, what are the decisions you feel people need to make over the next 20 years?
I think in the first instance, we need to take the issue a lot more seriously. I don’t think we’re likely to be able to plan a roadmap for the journey. I do have a proposal, but it’s not detailed, and it may well not be the right one. The future is, in some ways, foreseeable; you can see broad trends. And one broad trend we really must talk about, in order to put this in context, is exponential growth, the exponential growth of technology. I hope your audience isn’t bored yet with hearing about exponential growth, because it’s not going to stop, as in, they’re not going to stop hearing about it. And it’s really, really important to understand it and the power that it creates. Most people have heard that their smartphone has more compute power than NASA had when they sent Neil Armstrong to the moon; add up all of those computers in Houston and they had less compute power than the smartphone in your pocket. And that is true, but it’s out of date. A good toaster now has more compute power than NASA had then.
So that’s an enormous improvement since the late 60s. And as you move along an exponential curve, the rate of change stays the same, but the amount of change keeps doubling if it’s an exponential with a base of two. It means that in 10 years’ time, the machines we have will be 128 times more powerful than the ones we have today. In 20 years’ time, they’ll be 8,000 times more powerful. In 30 years’ time, they’ll be a million times more powerful. Now, that’s why I say that it’s very likely that in 30 years’ time, not in 5, not in 10, but in 30 years’ time, machines will take over pretty much all the jobs we currently do for money. This exponential growth in the performance of computers is known as Moore’s law. Moore’s law has evolved throughout its life; it’s had different underlying causes. Some people say that it’s dying or dead, but it’s not. In fact, if anything, it’s going slightly faster than it has for a while, because software is also going through a Moore’s law of its own. So, you can foresee that we’ll have very, very powerful computers. How exactly that will manifest probably depends, to a large degree, on decisions that we take. And also, things never work out quite the way you expect. I don’t suppose anybody at the end of the 19th century could forecast whether we were going to be using DC or AC for electricity. Nobody would have foreseen 40 years ago that we’d be carrying around our most powerful technology, artificial intelligence, in a telephone, because at the time a telephone was a rather ugly device the size of a small dog, and it was tied to a wall.
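(To make the arithmetic behind those multipliers explicit: the figures are consistent with compute doubling roughly every 18 months, the classic Moore’s law cadence. The doubling period is our assumption for illustration; Calum doesn’t pin it down here. A minimal sketch:)

```latex
% Number of doublings in t years, assuming one doubling every 1.5 years: n = t / 1.5
% 10 years: n is about 7 doublings  -> 2^7  = 128
% 20 years: n is about 13 doublings -> 2^13 = 8,192     (about 8,000)
% 30 years: n = 20 doublings        -> 2^20 = 1,048,576 (about a million)
\[
  \text{multiplier}(t) = 2^{\,t/1.5}, \qquad
  2^{7} = 128, \quad 2^{13} \approx 8{,}000, \quad 2^{20} \approx 10^{6}
\]
```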
So, the precise way things pan out is unforeseeable. I don’t think we can plan a roadmap to the future and make wise or unwise decisions as we go along. But I think we should be thinking about it seriously, because if you don’t plan, even though all plans change as soon as the rubber hits the road, then nothing happens at all, and you get chaos and very bad outcomes. There’s an irony in that. I talked a bit about the technological singularity, the arrival of super intelligence. There are four existential risk organizations, two in the States, two in the UK, which have a growing band of very bright people working out how to deal with the technological singularity. The big challenge there is to make sure that the first super intelligence on this planet really, really likes humans, and understands humans better than we understand ourselves. So, we are addressing that issue, which is rather surprising, since it’s quite a long way off. There’s a decent number of people working at it; we need more, and they need more money, but we’ve started. We are not addressing the economic singularity, despite the fact that it will come a lot sooner. And one of the reasons why is that the people running the tech companies, and I don’t think these are bad people, have persuaded themselves that joblessness is not a real problem. Eric Schmidt, for instance, chairman of Google, or Alphabet, likes to say he’s a job elimination denier; he doesn’t believe it’s possible. Well, he may be right that it won’t happen; maybe something else will happen instead. But to say that it’s not possible is really complacent and dangerous. We have to get past that, and we need to set up some analogues of those existential risk organizations to look at the subject of the economic singularity.
The solution that people who do take the problem seriously tend to come up with is universal basic income. And there’s a bit of hand-waving that goes on, particularly in Silicon Valley, where people say, “Oh, well, if machines take over all the jobs, we’ll just introduce universal basic income.” The trouble with that is that if goods and services remain as expensive as they are now, then to fund a universal basic income you’re going to have to tax the people who’ve still got jobs, and there probably will be a few, and the people who’ve got assets, and they’re called Mark Zuckerberg and Bill Gates, very, very heavily. And they will run away to the Cayman Islands and hide; that’s what rich people do if you tax them too hard. So, I don’t think that will work. And then there’s a really nasty problem in the idea of universal basic income, which is the little word in the middle of it: basic. It means that what we’re doing, at best, if it works, is making everybody poor. And that would be disastrous, and a failure. We have to make everybody rich. We have to make everybody rich without them having to do a job.
The only solution that I can think of is to make everything cheap. If you make all the goods and services you need for a very good standard of living very cheap, then you don’t have to tax the rich people very much at all. And that, I think, makes it all possible. It’s called the economy of abundance, or the Star Trek economy. And it sounds like complete madness the first few times you hear it. But it seems to me the only way to make a pretty much jobless economy a happy outcome. So, I think we need a series of think tanks, a series of institutes, to come up with some other solutions, because I really hope that isn’t the only possible one; to analyze whether they are plausible; and to work out how we nudge things in the direction of whichever ones look plausible, rather than in any of the less optimal directions. Obviously, different countries, different jurisdictions, different states in America, and so on will try different solutions, and some will work, some won’t. And hopefully, we’ll all narrow in on a consensus view that solutions A, B, and C work, and we should all adopt one of those. If we do that, we’ll have a good outcome. What I think would be really bad is if we just don’t start thinking about it at all until it starts to happen. That’s the thing that strikes me as a real danger.
That, I think, is a failure of us as humans: we tend to put things off. There are people who plan well, but they seem to be outliers. As a people, we tend to say, “Oh, we’ll deal with that when it comes.” And this is definitely something we need to plan ahead for. So, speaking of that, let’s leave our listeners with a recommendation, something they can be thinking about. We talked about philosophy, and we talked about science fiction. We’re at a stage where people are going to start programming self-driving cars to make that infamous trolley decision: do you swerve and kill the one kid, or do you swerve and kill the 30 adults? We have to decide that. It’s not just a fun thought experiment anymore. So, what would you leave listeners with in terms of resources, if they want to get started in science fiction as a way to broaden their minds about these things? Where should they start?
Well, I apologize in advance: obviously, they should read my book, “The Economic Singularity“. It goes into a lot of depth about all of this. I actually offer a free, shorter version of it called “Our Jobless Future” at my website; if you sign up for my newsletter, I send you a free copy. My website is www.pandoras-brain.com. So, I think that’s a pretty good resource. There’s a whole load of books that have been written about the subject. There’s a chap called Carl Frey, who was one of the authors of a report back in 2013 which got the current wave of conversation about AI and automation going. And he’s just produced a book called “The Technology Trap”, which is about how the last wave of technological disruption, the industrial revolution, played out, and how that was really quite difficult. So, that would be another good book to read. But I suppose the most important recommendation I would make is to think seriously about this question: if machines continue to get twice as powerful every 18 months or so, if Moore’s law continues, do you think that in 30 years’ time there could be massive, widespread joblessness? And if so, what are we going to do about it? I think more people should be taking that question seriously.
So, if you could invite any living or deceased science fiction writer and bring them on to the most important think tank on this, who would it be?
Greg Egan. He’s a brilliant science fiction writer. His early books particularly, “Permutation City” and “Diaspora”, are really extraordinary. There are very few people who write successfully, I think, about a post-super-intelligence world, and he’s one of the few. And he’s remarkable in a number of ways. One is that there are no photos of him on the internet. He’s managed somehow to keep his face off the internet.
That itself is a great feat. That’s good. Well, great. Calum, thanks so much for being on the show. You mentioned your website and the book that’s there. Anything else you’d like to leave us with?
No, I think we’ve covered it.
Okay, great. Well, thanks so much for being on the show. We appreciate it. It’ll be good to explore this topic in more depth later on. But thanks so much for sharing all your insights.
My pleasure. Great fun.
Calum Chace is an author, speaker, and respected commentator on Artificial Intelligence and its likely future impact on individuals, societies, and economies.
After three decades of working with organizations such as the BBC, BP, and KPMG, during which he was a marketer, a strategist, and a CEO, he switched career gears to focus on one of his most intense interests: artificial intelligence. Calum Chace writes both fiction and non-fiction books on the possibilities and problems of increased use of, and reliance upon, machines that are capable of learning like humans.
In Calum’s widely acclaimed non-fiction book Surviving AI: The Promise and Peril of Artificial Intelligence, he examines what artificial intelligence is, where it comes from, and where it’s going, and provides a layperson’s guide to the likelihood of strong AI and superintelligence (machines that possess intelligence far surpassing that of the brightest and most gifted human minds). In his book The Economic Singularity, he explores the prospect of widespread technological unemployment.
In his latest book, published in 2017, Artificial Intelligence and the Two Singularities, Calum Chace argues that through the course of this century, the exponential growth in the capability of AI is likely to bring about two ‘singularities,’ or points at which conditions are so extreme that the normal rules break down. The first is the economic singularity, when machine skills reach a point that renders many of us unemployable and requires an overhaul of the current economic system. The second is the technological singularity, when machine intelligence reaches and then surpasses the cognitive abilities of the adult human, reducing us to the second smartest species on the planet.
In the emerging, active debate over AI, there are as many people excited by the possibilities as there are warning of disaster. As well as taking a brief historical look at the development of the new world of deep learning AI, Calum’s talks provide evidence for both sides of the argument and guide audiences through this strange new world. Ultimately, he makes the case that AI could turn out to be the best thing ever to happen to humanity, making our future wonderful almost beyond imagination. But, he cautions, our future welfare and prosperity depend on our determination to learn about and meet head-on the challenges that AI presents.