The recent controversy involving Giorgia Meloni has once again brought a fast-growing technology into the spotlight: deepfakes. The issue began when AI-generated, sexualized fake images of the Italian leader circulated online, depicting situations that never happened. She is not the first female politician to be targeted by sexually explicit deepfakes, but hers is the most recent high-profile example. After confirming the images were false, she used the moment to warn that this kind of manipulation “can happen to anyone.”
Her response underlined a key issue: while public figures may have platforms to respond, ordinary individuals are often more vulnerable. For a private person, a deepfake can spread within their social or professional circles before they even become aware of it. Without media attention or verified platforms, it is much harder to correct the record. There are also practical barriers. Removing harmful content often requires navigating complex reporting systems on platforms, and even then, copies may continue to circulate.
Why do deepfakes matter?
Beyond the technology itself, the real concern lies in the implications of deepfakes for society. These tools blur the line between reality and fabrication, making it harder for people to trust what they see online. This erosion of trust is one of the most serious risks identified by experts.
From a social perspective, deepfakes can cause significant personal harm. Victims may face reputational damage, harassment, or emotional distress, especially in cases involving non-consensual explicit content. Studies show that this type of misuse is widespread and disproportionately affects women.
Politically, the implications are even broader. According to research, deepfakes can be used as tools of disinformation, potentially influencing elections, public opinion, and international relations. Even the mere possibility of fake content can create confusion, allowing real evidence to be dismissed as false.
What can be done?
Scientific consensus suggests several solutions: improving digital literacy, investing in detection technologies, and strengthening regulation. For example, the European Union is working on policies such as the AI Act to manage the risks of artificial intelligence.
Deepfakes are no longer a future problem; they are already here. The case of Giorgia Meloni shows that anyone, from world leaders to ordinary citizens, can become a target. Scientific evidence indicates that this issue will continue to grow, making critical thinking and digital awareness essential tools in today’s information environment.
Editor of Daily 27.
Predoctoral researcher at the Department of Sociology at the University of Barcelona.