MILEN members and other experts met to discuss how media organizations have been using AI to fact-check misinformation. The event was co-hosted by MILEN, DW Akademie and the British University in Egypt.
Jochen Spangenberg is Deputy Head, Research and Cooperation Projects, at Germany’s international broadcaster Deutsche Welle. The unit operates at the intersection of journalism, IT, and media science.
Charlie Beckett is the founder of Polis, the journalism think-tank at the London School of Economics and Political Science, and a professor in the LSE Department of Media and Communications. He currently leads its Journalism and AI project.
Tamar Kintsurashvili is a MILEN expert and the executive director of the Media Development Foundation in Georgia. She is an associate professor of media ethics and propaganda research methods, and runs the fact-checking platform Myth Detector and the Myth Detector Lab.
Rabiu Alhassan is the director and managing editor at GhanaFact, the first fact-checking organisation in Ghana to become a signatory to the International Fact-Checking Network.
Naglaa Elemary is a MILEN expert and consultant for the Middle East and North Africa. She founded the research center at the faculty of Mass Communication in the British University in Egypt, and is the lead international expert for the UNESCO award-winning Media and Information Literacy project in Jordan.
Amaral: Often when we speak about artificial intelligence, it seems we’re referring to rocket science. How can we demystify the use of AI in fact-checking?
Spangenberg: What I find very important is to come to some understanding of what artificial intelligence is. It covers a lot of areas, and sometimes people are not sure what is included and what is not. What I often say is that there are different areas that are considered part of artificial intelligence. For example, machine learning is one. Data analysis of all kinds. Fields in human language technology… These all fall under the banner of AI, but when we talk about particular applications, it really helps to name what it is.
Let’s take social media analysis. When we analyse a huge number of posts on a certain network, what kind of analysis are we doing? Is it language processing, or is it data analysis? That’s an important place to start.
Another point that can be taken from the literature is that, on one hand, there is a technological definition of what AI is, and on the other, there’s a human element to it. That refers to technology performing work that has previously been performed by humans. Which leads to one of the big questions: “Is artificial intelligence going to replace humans?” I have a very clear opinion. No, it’s not going to replace the work of humans, but rather the contrary. It’s going to free them to do other things. Used in a sensible way, it is a huge and fantastic opportunity for journalists and media organizations.
Amaral: Awesome! Let’s keep going. Charlie, you have released the Journalism AI report, the result of a one-year collaboration between Polis – the journalism think-tank at the London School of Economics and Political Science – and the Google News Initiative. Can you tell us how journalists and news organizations are trying to adapt to new technologies?
Beckett: The first thing we have to say… the dirty secret is that there is no such thing as artificial intelligence. The idea of a robot that can operate entirely on its own and make its own decisions does not exist.
As Jochen was alluding to, AI is a series of technologies. It’s about machine learning, it’s about algorithms, it’s about data. Often we use this umbrella term for things that encompass quite routine uses. At the moment, most of it is doing quite repetitive tasks, at scale. Often it’s doing things that humans can’t do, like sifting through vast datasets for investigative journalism. Or it can do very specific tasks, like translation, for example.
The key thing that we revealed when we did the survey on how newsrooms around the world are using AI, was that it is across all aspects of news production. Again, the headlines were on “robots writing stories”, but actually I would argue it’s most important in the relationship with the reader / viewer / user, through personalizing and recommending content.
That means it’s systemic, because it impacts all aspects of news production. It means that anybody connected to a news organization, all the way from the journalist to the marketing people, needs to pay attention to the possibilities AI can provide.
Amaral: Indeed. It’s very visible for those of us who are not tech people how that can free up time so journalists can undertake more in-depth projects and pursue different stories. One person who will have to do a lot of that in the coming days is Tamar. Georgia is facing elections this weekend (31st Oct). How are you putting AI into practice in your fact-checking process?
Kintsurashvili: Human work is not replaceable. We usually use artificial intelligence for data analysis purposes. It is helpful in terms of saving our time and identifying problems. Myth Detector recently joined Facebook’s third-party fact-checking program, which gives us access to a lot of virally distributed data. It’s impossible to rely only on traditional ways of analysing content now that social media is so broad and all citizens are producers of media content.
It was quite interesting for us to discover information bubbles we had not seen before. Individuals, and not only traditional media, were spreading disinformation related to coronavirus in different groups. The Facebook program allows us to access this virally disseminated content and identify the problem. But the next step is debunking and verifying the content, and human work is essential in this regard.
We then label such content and notify people who share the disinformation why that content is false or misleading. It’s not about removing the content, but providing alternative information to those who were misled.
We started this program recently, in mid-September, and some people approached us. They had shared the information without knowing it was false, and then deleted it themselves. It helps in terms of media literacy as well. Some media outlets translated a coronavirus story from English, and Facebook restricted their distribution of advertising. In this regard, the machine learning approach and the algorithms are helpful.
Amaral: Thank you, Tamar. There is a lot in there we need to discuss: media literacy, AI used to track larger disinformation campaigns… Maybe we could get another practical example from Rabiu, who started GhanaFact just last year. What are the opportunities and challenges in this field?
Alhassan: Interestingly, artificial intelligence used for fact-checking seems to be the shiny new thing now. Since we launched in August last year, we have been verified by Poynter’s International Fact-Checking Network (IFCN). But for us in Ghana, for us in Africa, it’s important to pay attention to the basic ways people share disinformation.
If you look at Facebook, WhatsApp, Twitter, Instagram, these are the ways people share disinformation. We are in a situation now where restrictions due to COVID-19 have been lifted, and life appears to be returning to normal. In the initial cases of coronavirus, people questioned if it even existed. And now since cases are diminishing, people have stopped using face masks and social distancing. This only highlights the importance fact-checking has right now. There are basic things we need to deal with, things that fact-checkers can use to flag misinformation.
Going through large amounts of data to put out simple fact-checked reports, for example. Some days ago, we had to go through a hundred and ten reports in order to debunk misinformation related to a cholera outbreak in Ghana. For this, it would be important to use some tools to go through the data so you can quickly debunk misinformation. There are two sides of the issue. Fact-checkers have to balance the two, in terms of doing a fair job and reaching a wider public.
Do the simple things, but also add the shiny bits of using AI for fact-checking.
Amaral: I know Naglaa has been part of the jury for the latest grant Poynter awarded to fact-checking initiatives involving AI. Can you share with us some interesting ideas and common trends that you see happening around the globe?
Elemary: From the competition we had, and the work we did in selecting projects across the world, it was very clear that, again, we have a huge gap between North and South. I think it’s clear from what we heard from our colleague from Ghana: there’s much more reliance on human fact-checkers in our part of the world, while all of the projects that applied for the grant were either from the USA or Europe. This is something we need to think about. Is it for the benefit of everyone or not?
From the perspective of our part of the world, I have to raise two or three points. There is a technology gap, but there is also an ideology gap. We can see this in Facebook’s policy on hate speech. It’s very interesting to read what Mark Zuckerberg said about Facebook’s policy following the US elections. He said, for example, that he has to protect the integrity of the democratic process. What does it mean when we see this coming from a tech giant? We saw so many accounts being blocked for hate speech, while for us, it was our point of view on causes we believe in.
Also, there is a gap within the same country. It’s interesting to see who the fact-checkers are in our part of the world. Yes, there are fact-checkers like our friend from Ghana, but the most active in the fact-checking process are really the governments, those in power, or businessmen. There is a huge gap between those who can use AI in the fact-checking process more easily than us and civil society.
I’d like to hear from our colleagues what they think about these issues. There is always a human intelligence behind artificial intelligence, and we need to remind ourselves of that.
Kintsurashvili: Naglaa raised some very important points about the abuse of these technological novelties in transitional democracies and authoritarian countries. We are detecting cases where even a non-existing person is communicating with the public. That’s why we need fact-checkers and journalists to follow these trends.
As for how to differentiate real fact-checkers from government “fact-checkers”: we are part of the IFCN, like Rabiu. We are transparent with our methodologies. We are certified fact-checkers whose quality is ensured by the Poynter Institute. We are accountable, with correction policies and a code of conduct.
Alhassan: Naglaa couldn’t have said it any better. We face some unique challenges operating in Ghana. There is always a polarized debate over whether one belongs to “our” party or the opposition’s party. You will find that we operate from a very sensitive place. Whatever fact-checking report you put out, it is read as speaking for or against someone. That’s where being a signatory to the IFCN comes in. That’s where transparency comes in. We have it all laid out on our platform: transparency of funding, sources, methodologies, etc.
Beyond that, when we think about resources and who has the opportunity to use AI… We operate as a not-for-profit social enterprise, largely funded by some key organizations laid out on our platform, so we cannot compare ourselves with the government in terms of resources. But it should be noted that a fact-checking report put out by a government ministry and one put out by an independent organization signatory to the IFCN are two completely different things.
Amaral: Those are super important points. When we talk about transparency, we also talk about the dilemma of ethics. Not only can the humans behind technology be biased, but the technology itself can be biased as well. From a developer’s point of view, Jochen?
Spangenberg: Yes, there are those who provide AI solutions, and a number of matters and issues come with that. You mentioned ethics, which is very important. But bias is an important issue too. What kind of bias is built into an algorithm? Because, after all, these technologies are developed by human beings.
We have also developed a number of solutions, one of them a plug-in that provides video verification and video analysis. That’s the provider side. Then there is also the user side. There are so many tools out there, many of them freely available. From geolocation tools to data recognition… they are all out there and can support you in your verification efforts.
What is often missing, even among the media and journalists, is that journalists are accustomed to these tools: that they know how to use them, and what they are or are not good for. This is very important too. Sometimes I am surprised that there are still journalists out there who do not know how to perform a reverse image search.
It’s important to learn and to know. And as a second step, to be aware that the results of these tools can also be biased. This awareness needs to be there, and there is still a long way to go. This does not only apply to journalists, but also to the public at large. Anyone with an internet connection and a computer could become a fact-checker.
As this is a media literacy event, we need to do a lot more in terms of teaching and bringing this into schools, and into the professional media landscape as well. More needs to be done, especially at the ground level, with people who are being bombarded with disinformation.
Amaral: A great point. The lack of skills was also one of the main points brought up by the Journalism AI report. Do we need more Media and Information / Artificial Intelligence Literacy?
Beckett: One of the key issues identified, especially when you’re thinking about access to the technology from the news organization’s point of view, is that access is obviously limited by how much money you’ve got. There are inequalities between news organizations across the world. But I think it’s interesting to look at previous technologies. Take mobile telephony, for example. It has had a profound impact on journalism. It has made journalism so much easier to do.
It’s very interesting that in a region like Africa, for example, mobile journalism has been exceptionally effective, despite the lack of resources in some areas. In a way, certain media organizations and journalists were even more innovative in using mobiles. So there is a kind of hope that these technologies might have a similar effect.
But you’re quite right that one of the key problems is understanding it. We’re all put off because, if you’re like me, we have zero technological expertise. I always think of the analogy of a car. I don’t necessarily understand how cars work, but I can drive. But you do need at least an introduction.
A little advert: we created a one-hour training course, free online. It’s just an introduction to machine learning, designed by journalists for other journalists. Even if you’re never going to operate a tool in an in-depth way, you still need to understand the tech, because it’s going to have some impact on your news organization.
To address all those issues that previous speakers mentioned about inequality, bias and how powerful people can misuse the tech, you need to understand the tech first. Everyone needs to have a basic literacy about this. And that is accessible to everyone. You don’t need huge resources to get a little bit of learning.
Amaral: How can a regular journalist get started on this?
Elemary: One of our colleagues here mentioned how journalists sometimes lag behind the average person. That is a fact in our region. We see a lot of journalists who think it’s all about reporting and writing, not verification. The biggest problem is their attitude. The first step is to change that. We need journalists to change their minds and understand that it’s not enough to be a TV anchor or a reporter. They need to be fact-checkers too.
I would also like to raise a question from the side of our MILEN network. We have a lot of experts from different backgrounds, and in countries such as Egypt and India, for example, there are a lot of people who can’t read or write. It strikes me every time that even people who don’t know how to read are users of smartphones. They use YouTube, Facebook, WhatsApp like us. So maybe we need to rethink the concept of literacy and who needs to work with MIL. It’s not only for young people, or those who can read and write, but for everyone who is using the technology.
Kintsurashvili: I fully agree with Naglaa. There is a problem with journalists who think they don’t need new technologies. That’s why we decided to work with non-journalists, young people. We run media literacy programs with the support of DW Akademie and our approach is that we equip them with the necessary data investigation tools. We are not imposing what is true or false. We are just equipping them with skills to distinguish what is or is not falsified. Our slogan is “discover truth yourself”.
We have more than 100 alumni so far, and they are very good fact-checkers. They are teaching media literacy themselves and promoting critical thinking. They also contribute to our fact-checking portal, and it is another way we can address the problem of preparing media consumers.
Alhassan: It’s crucially important that journalists appreciate that times have changed and they need to rise to the occasion. You need to know how to use digital tools to verify and fact-check information that is being shared.
For us at GhanaFact, we executed the Fact-Checking for Peace project in the lead-up to the elections. We are going around the country helping around thirty media organizations (traditional media, radio and TV) to set up fact-checking desks in their newsrooms. We trained editors, reporters and journalists to use digital tools for fact-checking. We are drawing on our strengths and collaborating to address misinformation, to reach people both online and offline.
This is another way to encourage people to open themselves up for media literacy and fact-checking and verification.
Amaral: That is exciting news for the future. One thing that you are all talking about is collaboration.
Spangenberg: Collaboration is an important point. Often people do similar things in isolation, duplicating effort and wasting resources. That’s why we have developed a collaborative verification platform called Truly Media, which is being used by numerous organizations. For example, internally within Deutsche Welle now, but also by human rights organizations and many others.
Through DW Akademie we are also making that available to various other countries. We had a project in Myanmar… It’s being used in the run up to the elections in Mongolia as well. And also in Georgia at the moment. The aim is that different media organizations join forces and work collaboratively on this platform. Sharing not only their expertise, but also their findings, in order to be faster, more efficient and to serve the public better.
Kintsurashvili: We started using Truly Media this year, and it’s a really useful platform incorporating open source investigation tools. It’s really helpful to work in a collaborative manner. Several of our students collaborate with our editors and fact-checkers to produce a single piece of work. Sometimes you have to respond rapidly, especially in COVID times, when we are socially distanced and working from home.
We have identified many coordinated misinformation campaigns using the platform and will be publishing results soon.
Beckett: I think the collaboration point is really important. Obviously collaboration is a nice word. We all like the idea of working together. But it is also a real challenge. The work that Jochen and other people are doing is fantastic. We have seen some incredible collaboration around investigative journalism globally.
And it goes right to the point where we’re having to see collaboration within news organizations. We are having to see the data people talking to the marketing people, talking to the editorial people. And very importantly, sharing knowledge between news organizations. We don’t all want to have to reinvent the AI wheel. There’s a lot we can learn, not just about the tech itself, but about how you use it. What works? What is most reliable? There is always the danger of newsrooms falling behind, but also the whole news industry can fall behind. This technology is being developed by big tech companies, and they are usually doing it for very rich industries. So the news industry has to work together.
I’m really encouraged. In our network, we are literally running a collaboration experiment. We’ve had forty journalists from completely different organizations around the world working together. And what’s exciting about that is how much they are learning from each other. They are sharing their experience and their insights. It’s taking a really interesting turn. Something born out of necessity, because we can’t do it on our own. But I think that’s quite exciting, if I can be optimistic for a second.
This event itself is great evidence of that. We’ve got so much to learn by listening to colleagues from around the world, even if they’re not in journalism themselves.
Amaral: Please be optimistic! We definitely need more optimism towards the future. Now we have a question from the audience.
Julius Endert: Some weeks ago, we read about GPT-3, this powerful AI able to generate endless texts. And it seems to me that fake news will be endless. What does this mean for fact-checking? Will we see a battle of AIs in the future? One side producing fake news, and the other trying to check it?
Spangenberg: We are already there. It is always an arms race. Someone develops something that can be used for good or evil. In the video sphere, we have that. It used to be very complicated to produce authentic-looking deepfakes. It was labor-intensive and required a lot of computing power. Now it can be done on an ordinary machine and can create fairly authentic-looking videos. That will be done on a bigger scale, and so the effort required to debunk it will increase as well.
Often those who are on the debunking side are a little behind. I think that will continue to be the case.
Amaral: It seems the more we talk about technology, the more the human factor behind it is what is going to make the difference.
Spangenberg: That is my point here. At the end of the day, the human element is always required, needed, as someone to take responsibility. For example, if you decide that something is manipulated or not. Someone has to take the editorial decision and also stand up for it.
Of course we will make mistakes; that’s in the nature of life. But those won’t be because of an algorithm or a technology, but due to the humans behind it. Responsibility is a very important point when talking about this.
S. Batrawy: I would like to ask if media literacy people can join forces with health literacy people. There are teams working in isolation, especially in this COVID-19 time. How can we get them together to raise awareness?
Elemary: It goes without saying that we need to cooperate with those who have the knowledge. We, as journalists and media literacy people, have the channel, but we don’t have the information. The information has to come from trusted sources, such as those in the health industry. At least in the Arab World we haven’t seen such projects, but they need to be put in place.
Back to the idea of cooperation: cooperation also means cooperation between media people and trusted sources.
Kintsurashvili: This discussion is proof that we need cooperation in many ways. We need to learn more about each other. Use each other’s resources, such as the IFCN’s Coronavirus Alliance. Such things save time, resources and give us insights into global trends.
Elemary: Artificial intelligence is a great opportunity. Like anything else, we need to learn how to use it wisely for the benefit of the community. All these ideas are much needed, given the pace at which technology evolves, in order to counter any negative sides.
Spangenberg: I think there are many challenges when we talk about technology. But there are also so many opportunities out there. We just have to grab them. But if you look at what is happening in many countries around the world, this is not very encouraging. I wonder if artificial intelligence can solve the problem. I don’t think it can. But what it can at least support is education, education, education. We have to make sure that the current generation, the younger ones, is prepared. They are eager. Maybe there is hope for the future.
Alhassan: For us, the basics are crucial. AI seems to be the shiny new thing, and if we concentrate only on that, we will take our eye off the ball. It’s important we invest in it because it will play a role in the future, but we should still pay attention to the basics. Largely, on the African continent, you need the human resources sitting behind the computer, identifying problems and helping flag misinformation. That’s the core of the issue.
Beckett: I agree with Rabiu there. We know that when you are fact-checking, you can have the facts, you can be scientific. But unless there is a human context, all the fact-checking in the world won’t have an impact. The same goes for AI: we shouldn’t leave it to the tech companies, we shouldn’t leave it to data scientists. Especially in journalism, it’s important that everyone in news organizations, including readers, has some kind of understanding of this, so we can shape it in a way that serves human beings rather than just the technology.
This article was originally published on the Media and Information Literacy Expert network (MILEN) website.