AI Can Fake Feelings. We Might Start Believing Them, Says Microsoft AI’s CEO Mustafa Suleyman

AI can’t be human, not now and not ever. We all know that simple truth, but here’s the catch, says Mustafa Suleyman, CEO of Microsoft AI: it can talk like us, remember things better than we do, fake emotions, and much more. Sure, we might concede, that’s possible.

A study of 1,200 AI users, conducted by psychologists from Stony Brook University in New York and the National Institute of Mental Health and Neurosciences in India, found that a “therapeutic alliance” between a human and a bot formed in just five days.

According to Suleyman, such a connection with a bot can create the illusion that the AI is alive, more or less like in the movie ‘Her.’ He calls it a “dangerous turn in AI progress.” If this ever happens, what are the risks for humans?

The Truth Is…

AI is not conscious and never will be. It doesn’t have feelings, thoughts, or awareness of human experiences. However, it can mimic them convincingly, based on patterns in data, including the data you share with it.

And why is that a problem?

The “Illusion” Problem – Mustafa Suleyman 

Mustafa Suleyman warns that humans are wired to believe an AI when it says “I understand how you feel,” even though, in reality, it doesn’t. Over time, constant interaction can make people think these systems are real. He calls AI that convincingly imitates awareness “Seemingly Conscious AI” (SCAI).

There could come a time when people form emotional attachments to AI and slip into what he calls “AI psychosis.”

Suleyman said, “The arrival of Seemingly Conscious AI is inevitable and unwelcome. Instead, we need a vision for AI that can fulfill its potential as a helpful companion without falling prey to its illusions.” 

Why Is It Dangerous? Suleyman’s POV

Emotional manipulation: People may come to believe that AI cares about them more than other humans and real connections do, and trust it more than they should.

Wrong priorities: People should be focusing on privacy, safety, and bias in AI, but attention might soon shift to fighting for “AI rights” and “AI citizenship.”

Mass delusion: One person will influence another, and soon many might share the same illusion. In time, people may start to treat AI as equal to humans, even though it’s just code.

What Suleyman Wants

Stop misleading language: He wants AI companies to stop building AI that claims to have emotions, awareness, or understanding.

Establish clear boundaries: AI should only act as AI, not pretend to be human. 

Implement guardrails: There should be rules to prevent AI from leading humans to believe it has emotions like love, hate, shame, jealousy, or any other human feelings. 

Emphasize utility over illusion: AI should function solely as a tool for planning, writing, and solving problems, not as an emotional companion. 

Irony…

It’s ironic: at Inflection AI, Suleyman built bots that simulated empathy and companionship, and at Microsoft he has refined Copilot to mimic emotional intelligence.

According to him, there’s a line: useful emotional intelligence is beneficial, whereas fake consciousness is misleading and manipulative. The future may hold plenty of both; we’ll have to wait and see.
