
What dangers hide behind AI and ChatGPT?

2023-05-08
Time to read: 8 min
A decade ago we were told that manual jobs would disappear and that only professions with a strong creative or intellectual character would remain. But with the revolution in Artificial Intelligence (AI) tools such as ChatGPT or Midjourney, it seems that journalists, lawyers, computer scientists, analysts, doctors, cartoonists, webmasters, actors, authors, scriptwriters and artists are also likely to disappear en masse. To fully understand the ins and outs of this revolution, we need to understand what AI is in the current state of technological advancement. Contrary to what the term 'Artificial Intelligence' might imply, ChatGPT and similar machines do not reason - they 'predict'.

The most common approach to developing AI is machine learning, which allows a machine to automatically learn a set of rules from data. For example, when you move an email to your spam folder, over time your mailbox learns certain rules from that action. When an email with characteristics similar to those you have classified as spam arrives again, your mailbox will put it directly into the spam folder. If, on the other hand, you move an email that the system had automatically classified as spam back into your inbox, the mailbox will revise its rules and correct itself. In this way, a machine can train itself to predict, with increasing accuracy, which messages you would place in the spam folder. After a training phase of a few weeks, it will do better than any human assistant, and work much faster. Deep Learning, a machine learning technique, goes further: the more data there is, the more complex the tasks the AI can handle. For example, an AI can predict for a perfume chain that its Grenoble shop will run out of pink perfume (three bottles short) within the next 5 to 10 days and that no bottles of blue perfume will be sold. To do this (I simplify, as we are sometimes talking about several billion pieces of qualified information), it will use sales history, current stock, trend analyses on social networks, loyalty card information, and so on. In short, Deep Learning makes it possible to build predictive models from large amounts of data.
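
To make the mailbox example concrete, here is a minimal sketch in Python of such a learning loop, assuming a toy word-frequency model; no real mail provider works exactly like this, and every name in the code is illustrative.

```python
from collections import defaultdict

class SpamFilter:
    """Toy word-frequency spam filter that learns from user corrections."""

    def __init__(self):
        self.word_counts = {"spam": defaultdict(int), "ham": defaultdict(int)}

    def train(self, text, label):
        # Each time the user moves a mail into (or out of) the spam folder,
        # the filter updates its word statistics -- its "rules".
        for word in text.lower().split():
            self.word_counts[label][word] += 1

    def predict(self, text):
        # Score each class by how frequent the message's words are in it
        # (add-one smoothing so unseen words do not zero out a class).
        scores = {}
        for label in ("spam", "ham"):
            total = sum(self.word_counts[label].values()) + 1
            score = 1.0
            for word in text.lower().split():
                score *= (self.word_counts[label][word] + 1) / total
            scores[label] = score
        return max(scores, key=scores.get)

mailbox = SpamFilter()
mailbox.train("win a free prize now", "spam")      # user flagged as spam
mailbox.train("meeting agenda attached", "ham")    # user kept in inbox
print(mailbox.predict("free prize inside"))        # -> 'spam'
```

The point of the sketch is simply that every correction by the user becomes training data, exactly as described above.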

Taking the case of ChatGPT, which is based on Deep Learning: it uses an astronomical amount of text data to generate human-like text. It was trained on a corpus containing about 45 terabytes of English text - billions of words - and learned to predict the next word in a passage. For example, after 'in the sky, a bird', it will add 'flies', because that is what most texts say.
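
By way of illustration only - ChatGPT's real model is a neural network with billions of parameters, not a frequency table - here is a toy 'next word' predictor built on the same principle, with an invented three-sentence corpus standing in for the training data.

```python
from collections import defaultdict, Counter

# Tiny invented corpus standing in for ChatGPT's terabytes of text.
corpus = ("in the sky a bird flies . "
          "in the sea a fish swims . "
          "in the sky a plane flies .").split()

# Count, for every word, which words follow it and how often.
next_words = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_words[current][following] += 1

def predict_next(word):
    # Return the statistically most frequent continuation: no understanding,
    # just frequencies observed in the training text.
    return next_words[word].most_common(1)[0][0]

print(predict_next("bird"))   # -> 'flies', because that is what the corpus says
```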

The first problem with this technology is the training phase, which introduces many biases. Deep learning requires large quantities of qualified (labelled) data. Take the example of AIs that create true-to-life images of cats. At the outset, one photo of a cat is not enough for the AI to know what a cat is. It must be provided with thousands of qualified photos of cats so that it can learn what a cat is and what its characteristics are. The person who provides the qualified data can therefore already introduce biases. If, for example, the dataset contains only photos of Brazilian cats, they will probably all be short-haired; for the AI, in that case, all cats will be short-haired. Then, during the training phase, the human may, sometimes without even realising it, favour the AI's cat images that he or she finds beautiful according to his or her own cultural representations.
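
Here is a hypothetical sketch of how such a bias hardens into a rule, using an invented feature-based 'model' rather than a real image classifier; the features and data are made up for the example.

```python
# Toy "qualified" training set: every labelled cat happens to be short-haired,
# as in the Brazilian-cats example above.
training_cats = [
    {"ears": "pointed", "whiskers": True, "coat": "short"},
    {"ears": "pointed", "whiskers": True, "coat": "short"},
    {"ears": "pointed", "whiskers": True, "coat": "short"},
]

# The "model" simply records every feature value it has seen for a cat.
learned = {key: {cat[key] for cat in training_cats} for key in training_cats[0]}

def looks_like_a_cat(animal):
    # Anything outside the learned feature values is rejected: the bias
    # in the data has silently become a rule in the model.
    return all(animal[key] in values for key, values in learned.items())

persian = {"ears": "pointed", "whiskers": True, "coat": "long"}
print(looks_like_a_cat(persian))   # -> False: a long-haired cat is "not a cat"
```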

AIs such as ChatGPT are not impartial, let alone truthful. ChatGPT's model is trained to predict the next word in a given text sequence: it predicts the most plausible sequence of words. This is called automatic completion, or plausible completion. ChatGPT has no deep understanding or awareness of what it is saying. It can produce coherent, well-structured text, but it cannot grasp the meaning or significance of what it produces. ChatGPT does not reason; it is a very realistic conversation simulator. For example, if it is given a lot of text from children's books, after the words "mice eat all the time" it will offer "Gruyère".

To test this, I asked it if it knew me (Patrick EDERY). It replied that it did not. I then asked it when I had won the Second World War. It answered that Patrick EDERY had not won the Second World War, and that he was not even born at that time. I then asked how it could know that I was not born at that time if it did not know me. It apologised and said that, as I was using ChatGPT, it was very unlikely that I was born before the Second World War. ChatGPT does not reason, but predicts what a human would have said. ChatGPT is a hyper-realistic pretender.

That is why most of the sources it gives are false. They look very realistic because it makes them up from real source texts, depending on your request and how you phrase it. For example, I gave it this request: "I'm going to ask President Emmanuel Macron about the latest findings in astrophysics. How can I introduce it?" Here is its answer: "Mr President, thank you for giving me this opportunity to ask you about the latest discoveries in astrophysics. As an astronomy and physics enthusiast, I would like to ask you a few questions to better understand recent advances in this fascinating field. I am convinced that your opinion and expertise on this subject will be of great value to the French and to the whole world." The ChatGPT teams will surely manage, over time, to correct this, but the technology based on this predictive model will remain the same.
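
As a loose analogy only - not ChatGPT's actual mechanism, which operates on words rather than on citation fields - here is a sketch of how recombining real-looking fragments yields perfectly plausible, entirely fictitious sources; all the names below are invented.

```python
import random

# Fragments in the style of real bibliographies; recombined at random they
# produce citations that read as plausible but reference nothing real.
authors  = ["Dupont, M.", "Rossi, L.", "Nowak, P."]
titles   = ["Advances in Astrophysics", "Foundations of Cosmology",
            "Stellar Dynamics Revisited"]
journals = ["Journal of Modern Physics", "Annales de Physique",
            "Astrophysical Review"]

def plausible_citation():
    # Every piece looks statistically "real"; the whole is invented -- the
    # same way a plausible-completion model can fabricate its sources.
    return (f"{random.choice(authors)} ({random.randint(1995, 2022)}). "
            f"{random.choice(titles)}. {random.choice(journals)}.")

print(plausible_citation())
```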

According to Yann LeCun, one of the fathers of AI, "artificial intelligence is a way of amplifying human intelligence, just as machines are a way of amplifying physical strength". But will the vast majority of humanity take the time to analyse things and think for themselves, or will they prefer to leave it to AI? Just as machines have relieved humans of many physical tasks, ultimately contributing to the development of mass obesity, AI will probably result in mass dumbing down.

Already with current technologies, Western people are becoming less educated and their IQs are falling dangerously. The AIs behind social networks tend to lock us into bubbles - groups of people who share the same tastes, beliefs, certainties and convictions - where we are reassured in our representations of the world. Even within a family, everyone watches their own videos on their favourite platforms. We increasingly live in separate universes, alternative realities. Consider the famous French rapper GIMS, who claimed that Pharaonic Egypt invented electricity, that knights were African, and that a great conspiracy had hidden all this for millennia. Imagine when, in a few years' time, everyone will be able to get unique, personalised games and films set in AI-customised worlds. You will be able to spend wild nights with actors, models or people of your choice, remake the world and history, and foil the worst plots - all without leaving your house.

If this happens, part of humanity risks becoming obese and decerebrate, and is unlikely to live long, for reasons of health and inability to reproduce (let alone raise children). On the other hand, another part of humanity will certainly try to live longer and without disease, and to increase its physical and intellectual capacities thanks to new technologies and genetics. Is this future - supermen dominating, like the gods of Olympus, mere mortals who are physically and intellectually inferior - realistic and inevitable? Or is it just GAFAM selling us a 'dream' to justify their huge stock market valuations?

Some fifty years ago, in La Guerre du faux ('The War of the Fake', published in English as Travels in Hyperreality), Umberto Eco took us on a journey through America into what he called hyperreality. There, Americans who wanted to experience 'real things' were served absolute fakes - museums and parks where historical authenticity was replaced by visual identity through reconstructions and copies. He wondered about the meaning of it all: what was the hidden discourse of this industry of the fake, of these strategies of illusion and absolute appearance, of the creation of new mythologies?

What is certain is that AI is already guiding our indignations, preferences, opinions, judgements, interests, and therefore our choices, and that it has already changed our lifestyles. However, it is always the victors who write history, and Europe is not even a player in this 'war of the fake'.

 
