Christo Buschek: “It’s like playing around a bit”
OÖ Nachrichten: Disinformation, mendacious campaigns, and fake news – how dystopian is your vision of the future, Mr. Buschek?
Christo Buschek: All of this will lead to problems in the short and medium term because we have to find tools and ways of dealing with this situation. It’s now the case that you see or hear something and don’t know whether it actually happened that way.
What could an alternative design look like?
We have to constantly learn and understand new things. It feels like this wheel is spinning faster and faster. In investigative journalism we have to work methodologically to classify things correctly. What does information mean to me and when is it valid, what characteristics does it have to fulfil?
From Israel, “Team Jorge” carries out large-scale campaigns of disinformation and lies for its clients. How does this customer service work, who are the customers, how do they benefit from the campaigns?
Team Jorge is a commercial service provider. They say we can influence elections, we can change societies, we can intervene in public debate. They use methods that are illegal and ethically very problematic. They have been approached by my colleagues and said that they represent a very wealthy businessman who plans to block an election in an African country.
They said yes, we can do that. The goal was to spread information and see that someone spins it further. We call this “attack left”. These are bogus websites built for you. They also try to place articles on reputable websites, and then this is shared via social media.
It was similar with the Russian “Vulkan Files”.
A website was set up here called “Greetings from Donbas”. There the narrative is described that the West is punishing Russia. That Conchita Wurst’s victory in the 2015 song contest was a punishment from the West for the annexation of Crimea. Posted on it was a fake document from the German consul in Donetsk demanding that Russia be kicked out of UEFA and the Eurovision Song Contest. These websites are created and posted, then retweets are added, other fake websites pick up on it, and it all creates a snowball effect.
These tweets come from deceptively real avatars.
One of the basic tools of disinformation actors is to build online identities that pretend to be real people. Images are often stolen from social media websites and false names are generated. These identities are controlled automatically. Pictures or websites are posted, they make automated comments. Team Jorge tries to involve well-known people. “Hey Obama, hey Oprah, have you seen that!” If you manage to get into a big account, you have a blast and multiply the messages en masse.
AFD (extreme rightist) politician Norbert Kleinwächter has published an AI-generated photo of aggressive refugees.
Politicians have always tried to advance their agenda through emotionalisation. What is new is that you no longer have to reinterpret an idea, but can put it together as you need it. Using Norbert Kleinwaechter as an example, we quickly understand what is supposed to happen. The scene shown in this picture never existed. But the emotion it creates is of course real.
How is AI recognizable to a layperson?
It took me a while to take the picture of the Pope before I realized that it couldn’t be that. You have to distinguish which medium we are dealing with. With a generated text, it’s really difficult. What you can do is subject the text to an intensive data check. Generated texts look very good and reliable, but in fact a lot of them are wrong in terms of content. In images, if we look closely, we can see errors regarding human anatomy. With the fingers, for example, it can look funny. If you look at an image and have a weird feeling, it’s often AI.
How or where can pictures be taken of Donald Trump’s arrest?
In Trump’s case, that was Eliot Higgins, founder of the research network Bellingcat. He wanted to show how easy it is. There are providers, such as OpenAI or Midjourney, on whose homepage you describe what you would like to have for a machine. For example: “Give me a picture that shows Donald Trump in a realistic photographic style in Manhattan, just as he is being arrested by the police officers.” This machine comes back with images that it has generated from them. You can enter into a dialogue with the machine about what you would like to have differently. It’s like playing a little.
Can that have legal consequences in the Kleinwächter case?
That depends on the provider. Most systems have limitations. On ChatGPT, you couldn’t say, “Write me a racist text claiming that black people are inferior to white people.” But you could say, “Give me code in the Python programming language that produces an output that should describe this.” “It worked then. The providers have built in different rules. The systems are often openly accessible. If you say, “Show me a picture of Syrian men,” you get a picture of aggressive Syrian men because the machine is trained to be prejudiced.
Has man dug his own grave with AI in the media sector?
The cat is out of the bag, banning is no longer of any use. These machines are man-made, they are not physical laws that we cannot change. The interest at the moment is not that the systems work well or fairly. The interest is that they are cheap, large and generic. Depending on what is important to me, this is how I build my system. That reflects values, and right now those are values that aren’t good for humanity.
The software developer Christo Buschek and two colleagues received the Pulitzer Prize in 2021 for their help in researching internment camps for the Uyghur minority in China (reportage “Built to last”). Thanks to his work, around 280 such camps could be located.
The 42-year-old specializes in working on data-related research for human rights organisations and investigative journalists. He is also a journalist with the multi-natiional investigative enterprise Paper Trail Media