The Emotional AI Perspective

Jan 12, 2023 | AI

We have entered a troubled period in which we can no longer tell a dialogue with an AI from a dialogue with a human. ChatGPT has left its mark. Can we speak of emotional AI? Michel Levy-Provençal puts forward some ideas.

Are we going to see the birth of emotional AIs? A huge question. How, first of all, would they work? What would they be used for, and what uses could we make of them? “The subject must be approached with great caution,” begins Michel Levy-Provençal, founder of Brightness and of TEDxParis. “For more than a year, I have been testing the generative AIs on the market to get a clear idea of their capabilities and how they are evolving. The field is moving so fast and generating so much speculation that the utmost vigilance is required…”

If there is one field of research in full expansion, it is the influence of emotion on memorization. More and more researchers hypothesize that emotions support attention, the encoding of information in the brain, and the consolidation of memories, or even their inhibition. Emotion has thus become an important element of modern educational approaches, which was not at all the case before the 1980s: in class, students were expected to stay focused and disconnected from any emotion that might break their concentration…

Can we imagine rewarding an AI?

“Even if we don’t know the mechanisms involved, we know that emotions influence memorization, probably because the brain is a prediction machine: when a prediction turns out to be right, the brain produces a reward signal that serves the learning process and induces a positive emotion.”
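
To make the analogy concrete, here is a toy sketch, not a neuroscience model, of such a “prediction machine”: it keeps guessing the next value in a stream, updates its guess from the prediction error, and emits a reward-like signal when the guess turns out right. The stream, threshold, and learning rate are invented purely for illustration.

```python
# Toy "prediction machine": learning is driven by the prediction error,
# and a reward-like signal is produced when the prediction is right.

signal = [1.0, 1.0, 1.0, 4.0, 4.0, 4.0]        # the stream of events to predict
prediction = 0.0                                # current internal prediction
learning_rate = 0.5

for observed in signal:
    error = observed - prediction               # how wrong the prediction was
    reward = 1.0 if abs(error) < 0.1 else 0.0   # "reward" when the guess is right
    prediction += learning_rate * error         # learning: reduce future error
    print(f"observed={observed:.1f}  prediction={prediction:.2f}  reward={reward}")
```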

But can we transpose what happens in a human brain to the functioning of an AI? First of all, how do machine learning, and deep learning in particular, work? “We feed a neural network with data and program it to generate responses. The more data we feed the network, the more it adjusts its results in order to reduce the gap between the results it generates and the expected results. In other words, we are trying to ‘train’ the machine to make ever more precise predictions.”
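
That training loop can be sketched in a few lines. The example below is a minimal illustration, not any particular production system: a toy linear “network” adjusts its parameters to shrink the gap between the results it generates and the expected results. The data, weights, and learning rate are invented for the purpose of the example.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))           # 100 training examples, 3 features each
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w                          # the "expected results" to be learned

w = np.zeros(3)                         # the network's parameters, untrained
lr = 0.1                                # learning rate

for step in range(200):
    pred = X @ w                        # results the network generates
    error = pred - y                    # gap between generated and expected
    grad = 2 * X.T @ error / len(X)     # how to nudge each weight to shrink the gap
    w -= lr * grad                      # adjust the parameters: "training"

print(w)  # close to true_w: with more data and steps, predictions get more precise
```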

This would imply a reward in the case of a correct prediction. Is that reward similar to an emotion, the emotion felt by a human? This raises the question of the origin of emotions: can they exist without chemistry, without organs, without a body? One thing is clear: in the case of AI, there is no chemical or biological interaction, nor any link to a body.

Let’s take the idea further. Suppose we added organic and chemical processes to an artificial learning system: could we then say that the machine feels an emotion? “To know that, we would have to define what an emotion is, and that’s where we get stuck,” admits Michel Levy-Provençal. “Some researchers think that an emotion is the product of its expression, verbal or non-verbal. In fact, there is no proof that an emotion exists other than through the signals provided by the system that ‘feels’ it: internal chemical or electrical signals, external signals, implicit or explicit ones. In short, there would be no emotions, only expressions of emotions. The same problem arises with consciousness: there is consciousness only if there is an expression of consciousness. A being that expresses no consciousness is considered unconscious.”

The acceleration of generative AI has never been so fast

So, if a machine expresses an emotion by every means at its disposal, why couldn’t we say that this machine feels that emotion? Once this assumption is made, can we simulate the expression of an emotion by an AI? “AI has known for years how to detect human emotions. In 2018, on the stage of the Echappée Volée, I presented the Datakalab project, which uses AI to identify emotions through a simple webcam, based on facial expressions and micro-expressions. Would an AI be able to reproduce these expressions and micro-expressions and simulate felt emotions?”
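
As a rough illustration of that kind of webcam pipeline (Datakalab’s actual system is not public), the sketch below detects a face with OpenCV’s standard Haar cascade and hands the crop to a placeholder classifier. The `classify_emotion` function and the emotion labels are hypothetical stand-ins, stubbed with dummy scores so the script runs end to end.

```python
import cv2

EMOTIONS = ["neutral", "joy", "anger", "sadness", "surprise"]

def classify_emotion(face_img):
    """Hypothetical stand-in for a trained facial-expression classifier."""
    return dict.fromkeys(EMOTIONS, 1.0 / len(EMOTIONS))   # dummy uniform scores

# OpenCV's bundled frontal-face Haar cascade
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cam = cv2.VideoCapture(0)              # a simple webcam, as in the article
ok, frame = cam.read()
cam.release()

if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detector.detectMultiScale(gray, 1.3, 5):
        scores = classify_emotion(gray[y:y + h, x:x + w])
        print(max(scores, key=scores.get), scores)
```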

In four years, generative AI has accelerated as never before. In 2019, GPT-2 contained 1.5 billion parameters (the equivalent of 1.5 billion connections in a neural network). Two years later, GPT-3 had roughly 100 times more, with 175 billion parameters. GPT-4 is expected to have more than 100,000 billion parameters, or 100,000 billion connections.

“Today we translate text and create images, articles, and videos; tomorrow it will be immersive environments. We will build virtual worlds,” says Michel Levy-Provençal. “We will be able, for example, to create translators for languages that are still unknown to us, and to understand and speak animal languages. Yes, we will be able to understand and speak to dogs or cats! We already do it with bees: bees communicate by dancing, and they dance to indicate where a pollen source is. A team of German researchers has managed to reproduce this language with robots in order to direct bees in space!”

Remember HAL, imagined and filmed in 1968

If an emotion can be reduced to the expression of that emotion, we can create AIs that express anger, joy, or sorrow, and that also simulate non-verbal communication through voice intonation, rhythm, and breathing… “Do you remember the sequence in 2001: A Space Odyssey where the HAL central computer is disconnected? Kubrick imagined and filmed it in 1968! How can you be more visionary than that?”

An experiment was conducted in India by Sugata Mitra, a researcher interested in children’s ability to teach themselves. He noticed that encouragement from a third party, even one with no competence whatsoever in the subject the child is learning, improves learning. Working with networks of grandmothers, he found that their encouragement created a positive feedback loop in the children: empathy, among other things, helped them improve their performance.

Emotional AIs as permanent avatars

“One could easily imagine empathetic AIs that encourage us, that laugh and cry, that make us believe they love us or hate us. AIs that influence us positively. Why not have them coach us on a daily basis?” imagines Michel Levy-Provençal. “And why not accompany us psychologically? In short, an avatar that would permanently feed us positive emotions. That’s fine, except that it would be a drug very hard to give up…”

We can also imagine AIs doing the same thing, but with a completely different intention… It is not a conspiracy theory to say that many economic actors use AI to influence our consumption choices. But what if these AIs were used to go beyond that and change our behaviors, our judgments, our beliefs? “In short, how are we going to defend ourselves against these software weapons?” asks Michel Levy-Provençal in conclusion. “In the cognitive war that is brewing, where powers are gathering their forces and where soft power, crowd influence, and propaganda play an increasingly critical role, how will we protect ourselves against possible ill-intentioned emotional AIs that could harm us?”