"The more we know AI, the less we undergo it"

Born in 1968 in New York, Antonio Somaini is a professor of theory of cinema, media and visual culture at the Sorbonne Nouvelle University. Since September 2022, he has also been a senior member of the Institut Universitaire de France. He is co-editor of the latest issue of the annual journal Transbordeur, “Photography and Algorithms” (Éditions Macula), and the curator of the exhibition “The World According to AI” at the Jeu de Paume, until September 21.

What is at stake in your exhibition at the Jeu de Paume?

The aim is to see how, over the past ten years, artists have been reacting to the growing presence of AI algorithms and models, which are infiltrating every stratum of society and culture, the economy and science, technology and military operations… I think these artists are currently at the forefront of the attempt to understand how AI is transforming our visual relationship to the world. An exhibition of this kind also requires providing reference points and explanations of what certain terms, obscure to many people, actually cover.

Starting with the expression “artificial intelligence” itself, something of a nebula…

It was the mathematician and computer scientist John McCarthy who coined it, in 1955, in his proposal for the Dartmouth Summer Research Project on Artificial Intelligence, which was held a year later and which established AI as a field of research in its own right. Since then, the term “artificial intelligence” has often changed meaning, in step with the development of computer technologies. Today I prefer to use the acronym “AI,” which is more abstract, because the expression “artificial intelligence” harks back a little too much to the idea of an intelligence that would become similar to human intelligence, when in its current state AI is essentially a matter of mathematical models that carry out increasingly complex operations and rely on extremely energy-intensive computing processes. To train these models, considerable amounts of data are collected on the Internet without the consent of their producers. In addition, millions of underrepresented and exploited people in the Global South perform online tasks that are necessary for training AI systems and for moderating the content they generate. It is an often hidden world that the works of Agnieszka Kurant and Hito Steyerl attempt to make visible, by rendering perceptible the contribution of these “click workers.” It is important to take the measure of the material and environmental dimensions of AI, and to map these latent spaces.

Gwenola Wagon, Chronicles of the Black Sun, 2023. Work produced with the support of the Hangar Y, in partnership with the Observatoire de Paris-PSL.

© Gwenola Wagon

What do you mean by latent spaces?

This expression refers to the abstract mathematical space in which billions of pieces of data (texts, images and sounds found on the Internet) are encoded and transformed into vectors, and from which, through different mathematical operations, new images and/or new texts emerge. In a culture increasingly permeated by AI models, latent spaces play a key role in the processing and transformation of the massive quantities of visual and textual content stored on the Internet. It is therefore a matter of understanding how they work, given their deep impact on our relationship with images and on our visual culture. Each AI model developed by one of the major tech companies (Google, Meta, OpenAI, etc.) has its own latent space. These companies often give no information about the architecture of their models or the data used to train them, because they are in competition. The inaccessibility of latent spaces is also linked to the fact that these abstract mathematical entities have dimensions that no researcher could ever explore in their entirety, nor could anyone reconstruct the chain of calculations that leads, for example, from a prompt in ChatGPT to the generation of an image. Everything that happens there is too complex and involves too many calculations and inaccessible parameters.
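To make the idea more concrete, here is a minimal, purely illustrative Python sketch: a toy, untrained autoencoder with made-up dimensions, nothing like the scale of real models. Data is encoded into low-dimensional vectors, and new samples can be decoded from arbitrary points of that latent space.

```python
# A toy "latent space": an untrained autoencoder that maps 784-dimensional
# inputs (e.g. flattened 28x28 images) down to 8-dimensional vectors.
# Illustrative only; the dimensions and architecture are made up.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 8))
decoder = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 784))

x = torch.rand(16, 784)   # a batch standing in for images
z = encoder(x)            # each input becomes an 8-dimensional latent vector
x_hat = decoder(z)        # reconstructions decoded back from the latent space

# "Navigating" the latent space: decode the midpoint between two encoded
# points into a new sample that corresponds to no original input.
z_mid = 0.5 * (z[0] + z[1])
new_sample = decoder(z_mid)
print(z.shape, new_sample.shape)  # torch.Size([16, 8]) torch.Size([784])
```

Real text-to-image models work in far higher-dimensional latent spaces and are trained on billions of examples, which is exactly why no one can survey them exhaustively.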

Which may seem terrifying, anxiety-provoking…

It is inevitably anxiety-provoking, because AI profoundly redefines the place of the human and leads us to rethink the line between human and non-human, between what comes from our own thought, our ideas and our beliefs, and what is automatically produced by these systems. It is the human being who is called into question, repositioned, reframed. This is very worrying in terms of its impact on the world of work, for example, but at the same time, the more we know these systems, the more we can take part in these vast transformations rather than simply be subjected to them.

What types of relationships and ideologies are reflected in these latent spaces?

A distinction must be made between AI models released as open source (freely accessible), such as Stable Diffusion, a text-to-image model for generating images from prompts, which can be downloaded and modified, and those over which the companies that market them keep complete control, exercising censorship on the use of certain words or certain images: this is the case, for example, of OpenAI’s ChatGPT. At the Jeu de Paume, several artists show us that we can seize these models in order to understand them, divert them, and do something other than what the major tech companies impose. Adam Harvey and Jules LaPlace’s “Exposing.ai” project thus presents a detailed cartography of the image datasets on which AI facial-recognition and biometric-analysis systems are trained. The artists and musicians Holly Herndon and Mat Dryhurst, who campaign for a collective, open and fair AI, are working on a new freely accessible text-to-image model, Public Diffusion, trained on rights-free images or on images for which people have given their consent. Other artists use open models such as Stable Diffusion to explore their latent spaces and bring out images that visualize possible pasts and counterfactual histories: this is the case of Grégory Chatonsky, whose installation presents fragments of possible lives he could have lived.
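As an illustration of what “exploring a latent space” can mean in practice, the sketch below interpolates between two random points in Stable Diffusion’s latent space and decodes an image at each step. It assumes the Hugging Face diffusers library, a CUDA GPU, and a publicly hosted Stable Diffusion v1.5 checkpoint; the prompt and file names are invented.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed checkpoint name
    torch_dtype=torch.float16,
).to("cuda")

gen = torch.Generator("cuda").manual_seed(0)
shape = (1, 4, 64, 64)  # latent shape for 512x512 images in this model family
a = torch.randn(shape, generator=gen, device="cuda", dtype=torch.float16)
b = torch.randn(shape, generator=gen, device="cuda", dtype=torch.float16)

prompt = "a photograph of a possible past"
for i, t in enumerate(torch.linspace(0.0, 1.0, steps=5)):
    latents = torch.lerp(a, b, t.item())  # a point on the path from a to b
    image = pipe(prompt, latents=latents).images[0]
    image.save(f"latent_walk_{i}.png")
```

(For Gaussian latents, spherical interpolation is often preferred to the linear interpolation used here, but the principle, walking through the space from which images emerge, is the same.)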

Grégory Chatonsky, The Fourth Memory, 2025, installation: generative film, 3D prints, digital prints, robot, aluminum, stones, variable dimensions. AI models and programming languages: Stable Diffusion XL, animated, naughty, Llama 3.2 7B, Python 3.11. Training data: LAION-5B, Visual Contagions (under the direction of Béatrice Joyeux-Prunel), the artist’s personal archives.

© Grégory Chatonsky

Today these systems work from data that humans provide them, but what happens when they work with images and/or texts made by AI?

This is a big question. In an article published in the journal Nature in July 2024, researchers showed that training AI models on data generated by AI can cause “model collapse,” a deterioration of the models. It is therefore in the interest of AI companies to try to distinguish texts and images generated by AI from those that are not. This task, however, is becoming more and more difficult, because images and texts are increasingly “hybrid”: they are the result of increasingly complex collaborations between humans and AI models.
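The mechanism can be sketched in a few lines of Python, under toy assumptions (a one-dimensional Gaussian standing in for a generative model; the sample sizes and generation counts are made up): each generation is “trained” on the previous generation’s synthetic output, and the spread of the original data is progressively lost.

```python
# Toy illustration of "model collapse": a trivial "model" (a Gaussian fit)
# is repeatedly refitted to its own synthetic samples. With small samples,
# the fitted standard deviation performs a downward-biased random walk,
# so the tails of the original human-made distribution disappear first.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=0.0, scale=1.0, size=20)  # the original "human" data

for generation in range(1, 201):
    mu, sigma = data.mean(), data.std()     # "train" on the current data
    data = rng.normal(mu, sigma, size=20)   # next generation sees only synthetic data
    if generation % 40 == 0:
        print(f"generation {generation:3d}: mean={mu:+.3f}, std={sigma:.3f}")
```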

How do you see the future?

It is difficult to make forecasts, because everything is changing so fast. But I think we all need to make a real literacy effort with respect to this new context: trying to understand how these AI models work and what the terms and the language mean, in order to be able to analyze them and, as far as possible, master them. Because we are in the middle of a major technological turning point, comparable to the arrival of the Internet in the 1990s and, if we go further back, to the invention of photography or of printing. In its current state, AI is still deeply human: everywhere, there are choices made by humans. Will this still be the case in five or ten years? We may fear that some of these systems will become autonomous to the point of prioritizing their own survival or their own improvement, and that this will escape the companies that produce these models. But we are not there yet.
