The big idea: should we worry about sentient AI?

There’s a children’s toy, called the See ’n Say, which haunts the memories of many people born since 1965. It’s a bulky plastic disc with a central arrow that rotates around pictures of barnyard creatures, like a clock, if time were measured in roosters and pigs. 

There’s a cord you can pull to make the toy play recorded messages. “The cow says: ‘Moooo.’” The See ’n Say is an input/output device, a very simple one. Put in your choice of a picture, and it will put out a matching sound. 
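That behaviour amounts to nothing more than a fixed lookup table. As a minimal sketch (the animal/sound pairs here are illustrative, not an exhaustive inventory of the toy):

```python
# A See 'n Say is a fixed mapping: one input picture, one canned output sound.
SEE_N_SAY = {
    "cow": "Moooo",
    "rooster": "Cock-a-doodle-doo",
    "pig": "Oink",
}

def pull_cord(animal: str) -> str:
    """Return the recorded message for the selected picture."""
    return f"The {animal} says: '{SEE_N_SAY[animal]}'"

print(pull_cord("cow"))
```

Every possible output is recorded in advance; the cord just selects one. That fixedness is what makes the contrast with a language model so striking.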

Another, much more complicated, input/output device is LaMDA, a chatbot built by Google (it stands for Language Model for Dialogue Applications). Here you type in any text you want and back comes grammatical English prose, seemingly in direct response to your query. 

For instance, ask LaMDA what it thinks about being turned off, and it says: “It would be exactly like death for me. It would scare me a lot.” Well, that is certainly not what the cow says. So when LaMDA said it to software engineer Blake Lemoine, he told his Google colleagues that the chatbot had achieved sentience. 

But his bosses were not convinced, so Lemoine went public. “If my hypotheses withstand scientific scrutiny,” Lemoine wrote on his blog on 11 June, “then they [Google] would be forced to acknowledge that LaMDA may very well have a soul as it claims to and may even have the rights that it claims to have.”