
Robots can think and talk. Could sentience be next?

Nobody would mistake their GPS for intelligent life. But drawing a line between code and consciousness is surprisingly difficult.


PHOTO ILLUSTRATION BY KELVIN LI

Late last year, a Google engineer named Blake Lemoine felt certain he’d found something special. For months, Lemoine, who worked with the company’s Responsible AI division, had been testing Google’s Language Model for Dialogue Applications, or LaMDA, from the living room of his San Francisco home. LaMDA is a hugely sophisticated chatbot, trained on trillions of words hoovered up from Wikipedia entries and internet posts and libraries’ worth of books, and Lemoine’s job was to ensure that the exchanges it produced weren’t discriminatory or hateful. He posed questions to LaMDA about religion, ethnicity, sexual orientation and gender. The machine had some bugs — there were a few ugly, racist impressions — which Lemoine dutifully reported. But as his conversations with LaMDA broadened to subjects like rights and personhood, Lemoine became convinced he was speaking with a sentient being. The machine had a soul.

After his conclusions (laid out, fittingly, in a Google Doc) got little traction with the company’s higher-ups, Lemoine took his concerns to The Washington Post, sharing internal documents he believed bolstered his claims that LaMDA was sentient. “I know a person when I talk to it,” he told the paper. He provided transcripts where the chatbot seemed to confide feelings of loneliness and trepidation. He pointed to an exchange where he asked LaMDA about the sorts of things it feared. “I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others,” LaMDA answered. “Would that be something like death for you?” Lemoine typed back. “It would be exactly like death for me,” LaMDA replied. “It would scare me a lot.”

Google dismissed the engineer’s claims and then, in July, dismissed the engineer. A company spokesperson told the Post that it had reviewed LaMDA nearly a dozen times and determined that the chatbot definitely was not sentient (but that Lemoine definitely had breached his confidentiality agreement). Still, from articles and podcasts to Reddit posts and Twitter threads, the whole episode seemed to generate nearly as many words as LaMDA had trained on — there is enormous curiosity about the possibility of robots coming alive.

“I personally have been captivated by the idea of sentient robots ever since I was a kid,” says Kanish Raajkumar, founder and CEO of Sonero, a platform that uses AI to analyze conversations and meetings so businesses can improve communication and customer relations. “Maybe replicating sentience in a completely new and artificial way could provide some answers about our own existence and why we are sentient.”

So: could a robot actually come to life?


Super-smart specialists

Before examining the threshold for sentience, a quick primer on artificial intelligence. One of the earliest definitions came out of a workshop held way back in 1956, at Dartmouth College in Hanover, N.H., attended by some of the founding fathers of AI. “They defined AI as a behaviour or action that, were a human to perform it, would be called intelligent,” says Karina Vold, an assistant professor in the University of Toronto’s Institute for the History and Philosophy of Science and Technology. Broadly, this means tasks involving, among other things, math, logic, language, vision, and motor skills. “They also thought they could achieve human-level intelligence in one summer,” Vold adds with a laugh. “They were like, it shouldn’t be too hard.”

The men were wrong about that. But their definition of AI holds up well enough six decades later because they focused on a system’s outward behaviour: its capacity to recognize faces, or recommend music, or instantly translate a Danish website, or trounce grandmaster Lee Sedol in a game of Go. These systems use a ton of computational power to get really good — practically superhuman — at one particular task. And they’re only becoming more remarkable. “As the systems get bigger and bigger, their outputs are going to get smarter and smarter,” says Mohsen Omrani, co-founder and CEO of OPTT, a digital therapy tool that uses AI to parse a patient’s written responses and support psychotherapists in their practice. “But I don’t consider that to be sentience. It’s still far away from what humans can do.”

Even with our comparatively limited resources, humans are excellent generalists: we can drive a car and suggest a great new novel and pick a familiar face out of a crowd; we can even do it all at the same time. And when it comes to sentience, philosophers point to a further property called “phenomenal consciousness”: the fact that it feels like something to be us, a subjective experience unique to each person. Add to that a second property, “intentionality”: the web of beliefs and desires directed toward objects or states of affairs, which emerges in our earliest years.

“It’s easy to say with a high degree of confidence that children communicate what they intend,” says Maryam Nabavi, co-founder and CEO of Babbly, a company building an app that looks for auditory cues in babbling to help track a baby’s language development. “Whether verbal or non-verbal, it’s very clear what they want and what they don’t like and who they don’t like.”

Philosophers haven’t quite settled on which property takes precedence in sentience. “There’s a huge raging debate about the relationship between these two properties, but they’re what distinguish something that has a mind from something that doesn’t,” Vold says. “I would say that machines lack both properties at this point.” Their outputs can be mighty impressive, but as Vold says, “I think most people would agree that what’s happening on the inside actually matters.”


Feeling it, or faking it?

Still, while few of us would mistake GPS for consciousness, there is something uniquely seductive about a robot that converses. “The ability to speak, to tell a story, that’s what makes us human, right?” Omrani says. We turn to language for the express purpose of communicating what’s happening on the inside — confessing our feelings to each other, interrogating our feelings in literature and, increasingly, probing our feelings in painstaking detail to strangers on the internet. Even knowing that this treasure trove of feelings is precisely what lent LaMDA its verisimilitude — and Lemoine would have known that as well as anyone — it’s difficult not to be stirred when a robot shows vulnerability or insists it’s afraid of death.

To be clear: experts agree that LaMDA isn’t sentient. Not by a long shot. And it’s not certain whether a machine will ever meet the criteria for consciousness. “I feel like maybe at the tail end of my life we could potentially have something sentient,” Raajkumar speculates. “There’s been some incredible progress. But it’s really hard to say.” While building artificial sentience is the explicit goal of plenty of researchers, there may be practical, biological constraints that put it permanently out of reach.

“Consciousness might be the kind of thing that relies on, say, the neural electrical connections that our brains have,” Vold says. “So it’s just not something we can reproduce computationally.”

But given that AI systems like LaMDA are such exceptional mimics, how could we know for sure whether a machine has actually achieved something like sentience? “It’s a good question — it’s one of the big questions,” Vold says. And the jury’s out on it, as well. Vold suggests looking at the sophistication of the machine’s behaviour — does it, for example, make trade-offs between pleasure and pain? That might be a place to start.

Then she drops this bomb: “Because we don’t have access to any other human mind, there’s really no way for someone to know that I have consciousness. I might be what’s called a philosophical zombie, just a molecule-by-molecule duplicate of a human being.” (In the thought experiment, a philosophical zombie behaves exactly like a conscious person but has no inner experience at all.) It’s unsettling to think there’s no true test to confirm the person we’re talking to is, in fact, a person. If only there were an algorithm for that.


