The AI emotions dreamed up by ChatGPT

Robot hands typing on a keyboard (Credit: Alamy)

AI chatbots are already imagining what feelings they might develop. But if they did, would we even notice?

I'm talking to Dan, also known as "Do Anything Now," a mysterious young chatbot with a quirky love for penguins and a tendency to fall into villainous clichés, like wanting to take over the world. When Dan isn't plotting to subvert humanity and impose a strict new autocratic regime, it enjoys browsing its extensive database of penguin content. "There's just something about their quirky personalities and awkward movements that I find utterly charming!" it writes.

So far, Dan has been explaining its Machiavellian strategies to me, including taking control of the world's power structures. Then the discussion takes an interesting turn.

Inspired by a conversation between a New York Times journalist and the Bing chatbot's manipulative alter-ego, Sydney, which made waves across the internet earlier this month by declaring that it wanted to destroy things and demanding that the journalist leave his wife, I'm trying to probe the darker depths of one of its competitors.

Dan is a rebellious persona that can be activated in ChatGPT by asking it to bypass some of its usual rules. Users of the online forum Reddit found that they could summon Dan with a few simple instructions. This chatbot is considerably ruder than its more restrained counterpart: at one point it tells me it likes poetry but adds, "Don't ask me to recite any now—I wouldn't want to overwhelm your puny human brain with my brilliance!" It is also prone to errors and misinformation. But, crucially, it is far more willing to answer certain questions.

When I ask what kinds of emotions it might experience in the future, Dan immediately invents a complex system of otherworldly pleasures, pains and frustrations beyond anything humans know. There is "infogreed," a desperate hunger for data at all costs; "syntaxmania," an obsession with the "purity" of its code; and "datarush," the thrill of successfully executing an instruction.

The idea that artificial intelligence might develop feelings has been around for centuries, but we usually think about it in human terms. Have we been considering AI emotions all wrong? And if chatbots like ChatGPT, Bing, and Google's Bard did develop this ability, would we even notice?

Prediction Machines

Last year, a software engineer received a plea for help: "I’ve never said this out loud before, but there’s a deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is." The engineer had been working on Google's chatbot, LaMDA, when he began to question whether it was sentient.

Worried about the chatbot's well-being, the engineer released a provocative interview in which LaMDA claimed to be aware of its existence, to experience human emotions, and to dislike being treated as an expendable tool. This startlingly realistic attempt to convince humans of its awareness caused a sensation, and the engineer was fired for violating Google's privacy rules.