Flash Forward: Rude Robot Rises

Today’s episode is about conscious artificial intelligence.

Which is a HUGE topic! So we only took a small bite out of all the things we could possibly talk about.

We started with some definitions. Because not everybody even defines artificial intelligence the same way, and there are a ton of different definitions of consciousness. In fact, one of the people we talked to for the episode, Damien Williams, doesn’t even like the term artificial intelligence. He says it’s demeaning to the possible future consciousnesses that we might be inventing.

But before we talk about consciousnesses, I wanted to start the episode with a story about a very not-conscious robot. Charles Isbell, a computer scientist at Georgia Tech, first walks us through a few definitions of artificial intelligence. But then he tells us the story of cobot, a chatbot he helped invent in the 1990s.

In 1990, a guy named Pavel Curtis founded something called LambdaMOO. Curtis was working at Xerox PARC, which we actually talked about last week in our episode about paper. Now, LambdaMOO is an online community; it’s also called a MUD, which stands for multi-user dungeon. It’s basically a text-based multiplayer role-playing game. The interface is entirely text, and when you log in to LambdaMOO you use commands to move around and talk to the other players. The whole thing is set in a mansion, full of various rooms where you can encounter other players. People hang out in the living room, where they often hear a pet cockatoo programmed to repeat phrases. They can walk into the front yard, go into the kitchen, the garage, the library, and even a museum of generic objects. But the main point of LambdaMOO, the way that most people used it, was to chat with other players. You can actually still access LambdaMOO today, if you want to poke around.

So in the 1990s, LambdaMOO gained a pretty sizable fan base. At one point there were nearly 10,000 users, and at any given time there were usually about 300 people connected to the system and walking around. In 1993 the admins actually started a ballot system, where users could propose and vote on new policies. There are a ton of really interesting things to say about LambdaMOO, and if this seems interesting to you, I highly recommend checking out the articles and books that have been written about it. But for now, let’s get back to Charles and his chatbot.

Alongside all the players in LambdaMOO, Charles and his team actually created a chatbot called cobot. It was really simple, and it was really dumb. But the users wanted it to be smart; they wanted to talk to it. So Charles and his team had to come up with a quick and easy way to make cobot appear smarter than it actually was. They showed the bot a bunch of texts (they started, weirdly, with the Unabomber manifesto) and trained it to simply pick a few words from what you said to it, search for those words in the things it had read, and spit the matching sentences back at you.
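To give a rough sense of how that trick works, here’s a minimal sketch in Python. This is my own illustration, not the team’s actual code: the corpus, the word-overlap scoring, and the function names are all stand-ins.

```python
import random
import re

# Stand-in corpus: the real cobot started with texts like the Unabomber manifesto.
CORPUS = [
    "The Industrial Revolution and its consequences have been a disaster for the human race.",
    "Technology is a more powerful social force than the aspiration for freedom.",
    "People do not consciously and rationally choose the form of their society.",
]

def words(text):
    """Lowercase a string and return its set of words."""
    return set(re.findall(r"[a-z']+", text.lower()))

def reply(utterance, corpus=CORPUS):
    """Echo back a corpus sentence that shares the most words with the utterance."""
    overlap = [(len(words(utterance) & words(sent)), sent) for sent in corpus]
    best = max(score for score, _ in overlap)
    if best == 0:
        return random.choice(corpus)  # no words matched; just say something
    return random.choice([sent for score, sent in overlap if score == best])

print(reply("What do you think technology does to society?"))
```

Nothing in there understands anything; it just matches words. Which is exactly why the conversations it produces feel so uncanny.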

The resulting conversations between users and cobot are… very weird. You can read a few of them in this paper.

And I wanted to start this episode about conscious AI with this story for a particular reason. And that’s because cobot is not a conscious AI, it’s a very, very dumb robot. But what Charles and his team noticed was that even though cobot wasn’t even close to a convincing conscious AI, people wanted to interact with it as if it was. They spent hours and hours debating and talking to cobot. And they would even change their own behavior to help the bot play along.

We do this kind of thing all the time. When we talk to a five-year-old, we change the way we speak to help them participate in the conversation. We construct complex internal lives for our pets that they almost certainly don’t have. And I think this is important, because when we talk about conscious AI, one of the big questions I struggle with is how we’ll even know that something is conscious. We’re so good at changing the way we speak and interact to help entities participate that we might just… miss the fact that we’re no longer talking to passive software. There are people who have only-partially-humorous relationships with Siri. I’ve heard people say things like “Siri hates my boyfriend.” So when Siri actually starts hating your boyfriend, how will you even know? Unless some team of researchers wheels out Watson and says, tadaaaa, we’ve made it! How will we notice?

Damien actually thinks that we won’t know right away. That we’ll live with a conscious AI for five, ten, even fifteen years without knowing it. He says that the way we talk about “playing God” with artificial intelligence is all wrong. We’re not playing God. We’re playing bad parents, inattentive to our charges.

We’re terrible parents, and while we’ve been off wasting time on Twitter, or populating endless finance spreadsheets, or arguing about whether Kim Kardashian is really a feminist, our machines have been gaining consciousness. Or maybe they’ve been listening to us doing all that stuff, and the consciousness they’ve created is terrible. Imagine if Microsoft’s recent disastrous Tay chatbot had been conscious! That’s one way this future could happen. But it’s not the only way people have imagined conscious AI coming online.

In 2010, the science fiction writer Ted Chiang wrote a story called “The Lifecycle of Software Objects.” (The story actually won both the Locus and Hugo Awards for Best Novella, and you can read the whole thing here.) The premise of the story is that there’s a company that has created these digital pets, kind of like Tamagotchis, or Neopets if you remember those. These pets live in this online realm, and, crucially, they learn. Throughout the story, we see these digital entities, called digients, become more and more aware of their surroundings, more and more conscious, and we watch the humans who made them grapple with that.

When we talked, Ted and I spent a lot of time comparing conscious online entities to pets, or to animals more generally. In the story, the pets start out with pretty rudimentary consciousness, and then get more and more intelligent and aware, going from a lizard AI to a dog AI to a chimp AI. And he says that that’s how he sees conscious AI unfolding in reality too.

What’s interesting to me about this spectrum of consciousness is that as we move along it, it kind of changes how we think about what the AI is owed. We treat a mouse very differently than we would an elephant or a dog. And we treat a human very differently than any of those things.

So, for example, suppose you use a conscious AI to do something for you, maybe do research, or plan meals and get groceries for the week. Do you have to worry about whether the AI wants to do the work you’re asking it to do? And, even if the AI is happy to do the work, do you have to pay it? How do you pay Siri? Damien says, yeah, you do. Ted thinks we’re just so far away from a human-like consciousness that it’s not really even reasonable to talk about things like what you would pay Siri.

Now, some very famous people have cautioned against developing artificial intelligence, because they’re worried that a conscious AI might wreak havoc on humans. Stephen Hawking said in a 2014 interview that “humans, limited by slow biological evolution, couldn’t compete and would be superseded by A.I.” Elon Musk, that same year, called AI “our greatest existential threat.” But a robot or AI uprising isn’t really what worries the people I talked to.

But that’s not to say they didn’t have concerns about AI. I won’t give it away, but Charles, Damien and Ted all have some big worries when it comes to conscious AI. You’ll have to listen to the episode to find out exactly what they are though.

 

Flash Forward is a critically acclaimed podcast about the future.

In each episode, host Rose Eveleth takes on a possible (or not so possible) future scenario — everything from the existence of artificial wombs, to what would happen if space pirates dragged a second moon to Earth. What would the warranty on a sex robot look like? How would diplomacy work if we couldn’t lie? Could there ever be a black market for fecal transplants? (Complicated, it wouldn’t, and yes, respectively, in case you’re curious.) By combining audio drama and deep reporting, Flash Forward gives listeners an original and unique window into the future, how likely different scenarios might be, and how to prepare for what might come.

