Work Minus Work with Theo Priestley

29 Jun 2018   |   Technology

Welcome back to Work Minus, where we talk about what we need to drop from how we work to get closer to a better future of work. Today, our guest is Theo Priestley. He’s a technology evangelist and futurist, and this episode is Work Minus Work. Hi, Theo. How are you today?

I’m good. I’m good. How are you?

Very good. So we are delving into the world of AI, Artificial Intelligence, and looking at the possibility of a future of work where we don’t work. You’ve been talking about AI for a long time. Give us a quick rundown of where we are with AI, especially in terms of Weak, General, and Super Intelligence.

Yeah. Sure. So I think there is a misuse of the term AI, Artificial Intelligence. Everyone gets very excited. We think Siri, Google Assistant, and the like are versions of AI that we should be scared of. They are actually really good examples of what the industry calls Weak Artificial Intelligence, meaning they’re basically designed to do one thing, and most of those things are just command-and-response kinds of activities. If you look at, for example, the autonomous car, Tesla being the predominant example there, that’s another example of Weak AI. We think it’s amazing that this car can navigate and drive for me, park itself, avoid obstacles, and take the right route, that this is an incredible piece of machinery and an incredible Artificial Intelligence. On an industry level, though, it is still Weak AI because it’s only designed to do one thing, and that’s drive you from A to B. You can’t get it to do your homework, you can’t get it to switch the TV on, and you can’t get it to understand you on a conversational level. So those are examples of Weak AI, and this is where we are at the moment. We’ve got lots of examples of Weak AI.

General AI is the next level up, where we start to see emergent behavior that is almost on par with human behavior and human understanding. It can perform complex tasks. It can perform some reasoning. There’s still no level of empathy, if you will. It can’t empathize unless it’s been artificially programmed to appear empathic, and most of the time it probably comes across as sarcastic. If you’ve ever watched the film Elysium, the scene where Matt Damon is sitting in front of his parole officer is the kind of thing we can probably expect out of General AI. So we could hold a conversation with it. It can do complex puzzles and math. It can do several complex tasks at the same time. It’s almost on par with human understanding in terms of language, for example. But that’s some years away yet. The reason is, to be honest, we don’t properly understand how humans behave. We don’t have that level of understanding in terms of data. Data is still very binary. We can collect information, process it, extrapolate from it, and have machine learning algorithms learn from that data, but it still doesn’t process the world around us on a human level.

Then we get Super Intelligence, and I think this is the one we most equate with end-of-the-world scenarios: the Terminator and the all-powerful singular computer that will destroy mankind. That is a long time away. If it ever happened, I don’t believe we would actually recognize it as such to begin with. I don’t think humans are actually capable of producing something that is vastly superior to themselves either, again because we know very little about how we work from a biological-computer point of view. We still don’t understand how the brain works, and a lot of research is done on mimicking the human brain to recreate something artificially intelligent. Personally, I think that’s the wrong way to go, because we don’t know everything about ourselves and how the brain works. And a super intelligence that is beyond human understanding, and beyond human empathy and emotional levels, is very difficult, I think, to comprehend and very difficult to design. So I think super intelligence could very well be happenstance, an accident, or something externally influenced that we have no control over, but we are so many years away from that. I’d be lucky if I saw it in my lifetime.

Yeah. We’re talking about the idea of mistakes which is as all the science fiction novels say is what evolution is based on–these mistakes. So it’s possible that we might run into these things, but to try to create it seems very unlikely. It’s like trying to create another dimension that we’re not aware of, right?

Yeah. Yeah. Exactly.

So, Theo, is there any chance of backtracking on this? Is this just the path that we’re on toward AI? Or is there any chance of rethinking it and saying, hey, maybe it’s not a good idea, we don’t want to get that far?

I think the genie is out of the bottle. I mean, look at what Amazon are doing, what Google are doing. Facebook have been trialing it with their M personal assistant. Elon Musk, I think, uses a lot of AI and machine learning in his own environments. And Apple, obviously, and Samsung have their own AI assistants. Everyone is chasing this dream now, so the genie is out of the bottle. There’s no point in saying, “let’s just stop what we’re doing, let’s kill it now, and we’ll go on living the way we are without any kind of assistance.” The important thing is to start examining the ethics around AI. Now that it’s out in the open and people are developing it, it’s time, not to put constraints around it, but to understand how to take the next steps to make it more intelligent. What should we be designing AI to do in terms of fulfilling our lives, providing meaning for them, and providing assistance? Where can it add value? What are the ethics around how we create it, what data is needed to create it, and what it should and shouldn’t be used for? Those are the questions we should be asking, rather than, should we kill it off, yes or no?

So I want to come back to the ethics point in a little bit, but first let’s return to the idea of Work Minus Work. What would a world be like where work isn’t the main purpose of our lives? We’ve gone on for so many years where every day you get up, you go to work, and that’s who you are. So what does a world look like where we don’t derive our daily structure, our finances, or our purpose from work?

I think it’s important to say or it’s important to recognize that work will always be around. I think the AI and robotics will actually start to remove various layers of the work that is more mundane. It will also make life very difficult for unskilled professionals. So people who traditionally start work, say, call center environments or processing bits of paperwork like data entry and stuff like that, I think those bottom layers will disappear and what we need to do is actually find the system that helps elevate unskilled workers to the next rank in the ladder that actually gives them some sense of purpose in their life. From other aspects, I think, if we can actually automate a lot of the mundane parts of work in our daily life, theoretically it should actually allow us to pursue more meaningful things, both on a personal or professional level. And those things should actually add value to us as a species. I don’t think we were designed as a species to sit in concrete office blocks all day long. If you look at the Golden Age of Renaissance, for example, in terms of Greeks, Roman, Italian, all those kind of eras where there was a lot of thought going on, there was a lot of contemplation, a lot of philosophy, a lot of art, I do believe that humans will not only strive for something better which is, let’s go to the stars and let’s do something with ourselves as a species rather than sitting and processing mortgage applications. But I also think that there will be a return to doing a lot more creative things. So, you know, exploring arts, exploring humanities, philosophy, creation of you know music, art, human expression, I think, those are things that AI would allow us to explore again rather than, like I said, being stuck in mundane tasks that don’t really add value and we’re only really doing them as a basic need to survive. I think that the definition of what it means to be human could actually be turned on its head, if we want to think that way.

Yeah. And I think that’s a big “if”. I think if you talk to somebody 50 years ago, 100 years ago, if we would show them even the technology that we have now, they might think, oh, wow. You must be very relaxed. You have these nice lives where you don’t to do much work. You can just kind of sit around and we can get to that point. But it almost seems that we have an addiction to work. We need to do it. We need to do it either for financial reasons or just because that’s how we’ve been wired for so long. Do you think we can really break out of that? Do you think we can get to a point where we say, hey, let’s go back to a time when we can look more into the arts, look more into exploration? Or do you think we’re kind of stuck in this place where we are where we need to work all the time?

I think we can break out of it, but it’s going to be a painful transition. It’s the same with universal basic income, for example. There are a lot of kinks in it. Some of the Nordic countries, Sweden for example, have already done something around this and it works, but we don’t know whether that’s because their philosophy toward life is different from other Western cultures, or whether it would even work, you know, in the Far East. So we’re at this point, it’s not even a tipping point, it’s an imbalance, where if we want to take humanity to another level, we have to recognize that there are going to be a few painful steps along the way. One of them is understanding what we do with all the people who have been automated out of existence, if you will. How do we support them going forward? How do we make them feel valued in society? And then, how do we design work around what’s left? Can we make the work that’s left part of the reason we want to elevate ourselves beyond where we are right now? Because, like you said, we’re stuck in a cycle where money drives everything.

It sounds very utopian and Star Trek, where we all work to better ourselves, which is a lovely sentiment. Gene Roddenberry, bless him, kickstarted and popularized that vision of a human race that actually did something worthwhile again. I tend to think we’re going to be sitting in the middle, between this sort of cyberpunk dystopian future and the utopian one we want to aim for, and it’s just going to be painful along the way, that’s all. And I couldn’t even put a timescale on it. I might be a futurist and I might look toward the future, but saying it’s going to happen in 20 years, no. I mean, you’ve seen how long it’s taken the banking industry, for example. Ever since the collapse back in the 80s, and again with the whole mortgage thing, we’re getting back on our feet, but the financial institutions are still making the same mistakes and still being fined for doing the wrong things. Humans have a short memory span, especially in business, and they’re still driven by the same things: make lots of money and then exit. I think we need to snap out of this Silicon Valley mindset as well, which is fueling the rut that we seem to be stuck in.

Yeah. Now I want to take it back to the ethics part. Any AI is going to have some kind of bias built in; there’s just no way to avoid that. We talked about the different companies pursuing AI. Is this just going to come down to whoever gets there first deciding what the ethics are and setting the tone for what AI means? How do we think about that? Is it going to be Silicon Valley? Is it going to be the military? Is it going to be somebody else who sets the tone for what AI looks like?

Where we are right now is a bit precarious, because what we have is a very small group of individuals and companies who are almost setting the course at the moment. You have Google, you have Tesla, you have Amazon, you have Apple, and a few others who have vast amounts of wealth, who can grab vast amounts of data, and who can, obviously, point AI in the direction they want it to go. We have various bodies trying to set up ethics boards to examine the questions we spoke about before: what should AI be doing? Why are we designing it that way? What is the right amount of data, and what data is required to create an AI? Is that an ethical use of the data? Do we need kill switches, for example, and constraints around what AI can do? These conversations are happening, but they seem to be, one, happening in small groups; two, happening behind a lot of closed doors; and three, being influenced by the people who are creating these systems in the first place. There’s no open accountability at the moment, or certainly no open discussion of the ethics at a level that even the person in the street understands. I think we need to start moving that control slightly away from the people who hold it now and start having these conversations with people on the street, because at the end of the day, what we create now is going to impact everyone. It’s not just going to be people with enough money to afford these AI systems and robots, whether that’s big businesses or affluent, wealthy folk. It’s the man in the street who is going to be affected by what happens in the next five to ten years, and we can’t simply let four or five companies dictate that.

There’s a one point you’ve said in one of your talks that I want to get into just as we close out this conversation. You said that the first AI super intelligent form will learn to lie to protect itself from us once it learns what we are really like. That’s a huge statement, one that can be taken many different ways. Just unpack that a little bit for us about what do you think that means. Is that meant to be a scare tactic or what do you think about it?

Yeah. I kind of threw that into the TED talk at the end, you know, as a bit of a curveball, especially for the people who were there. If anyone hasn’t seen it, it’s basically: would you choose a robot over a person to be the leader of a country, for example? And given the state of some of the world leaders we have right now, I think a robot could do a pretty decent job. But at the end, I threw in that curveball, which was that a robot will learn to lie to save itself. It’s almost like a twist at the end of a Turing test, for example, where you basically have two participants and one of them is a robot, and the robot purposely lies because it knows that if the human at the other end discovers it is a robot, and that it is more intelligent, it’s going to be switched off, put on ice, dissected, and so on. So it’s almost one step ahead of the people, and it will lie to protect itself and throw little curveballs of its own. And I do think this is where modeling AI on the behavior of humans is inherently the wrong thing, because what we do is introduce those biases into the equation. We introduce our own traits into the equation. And ultimately, we’re as flawed as can be. If we’re trying to design AI to be as flawed as us, that’s a mistake we’ll pay the price for.

Yeah. Absolutely. When we look at people designing robots that look exactly like humans, or trying to design a mind that behaves exactly like a human, well, we can already make humans. We’ve figured that part out. It’s figuring out the next stage of life and intelligence that’s the next big thing.

Yeah. And like I said, I don’t think we’re going to be capable of producing something that’s more intelligent than us, because we’d be scared to death of it.

Well, the perfect way to end this conversation is with a little ominous music in the background. But Theo, thanks so much for being on the show. Work Minus Work is an interesting topic to think about, something we all need to be aware of. Maybe leave us with this: we’ve said this is all in the distant future, but for somebody in a company right now who’s thinking about AI and wondering how it’s going to affect their life, what’s one thing they can do in the next six months to a year to educate themselves?

I think the best thing someone can do is understand what AI means for them. Read up on what Artificial Intelligence actually means, then look at the world around you and think to yourself: which aspects of my life would actually be improved by having some kind of virtual assistant? It doesn’t have to be a robot. It can be something incorporeal, like Alexa or another virtual assistant. What would you like AI to do to improve your life? Which part of your life do you think would be best served by having some kind of automated assistant to take the drudgery away? You can think about it in any aspect whatsoever, because those are the things I think most companies are missing. What we have right now are conversations that make AI look really cool and super: it plays chess, it plays Go, and it’s very impressive, but it’s all very PR driven. What we want people to understand is which part of their lives they would most like to be disrupted, in a way, by AI, and then have those kinds of conversations, again, at a human level.

Oh great. Theo, thanks so much for being on the show. Where can people go to stay in touch with you?

You can find me on Twitter, where my handle is @tprstly, or on LinkedIn. Those are the two channels where I normally hang out for real-time stuff.

Great. And I’ll make sure those are in the show notes along with that TED talk you gave which is a must, I think, for anyone thinking about this topic. Thanks, Theo, for being on the show and we appreciate you being here.

Thanks for inviting me.
