Damian Tambini, an expert in media and communications regulation, argues that we need to address misinformation more systemically: a system that is optimized for truth and designed with the involvement of civil society.
Dr. Damian Tambini is an associate professor in the Department of Media and Communications at the London School of Economics (LSE), where he also serves as Programme Director for the MSc Media and Communications (Governance). An expert in media and communications regulation, Tambini also provides policy advice to the British government. He is the author or editor of several books, including "Digital Dominance: The Power of Google, Amazon, Facebook and Apple" (Oxford University Press 2018), which he co-edited. We met him at the Internet Governance Forum (IGF) in November 2019 in Berlin, after a panel discussion titled "Disinformation Online: Reducing Harm, Protecting Rights."
DW Akademie: Damian, you said in your talk that we need a new systemic approach for dealing with disinformation. Could you briefly explain why you think this is necessary?
Tambini: We're obviously moving from a communication system based on broadcast and the press to a completely different communication system with very different features. Communication in the press was based on professional journalism, which was autonomous from the state but had very clear rules and an ethics associated with it. In social media, the distribution algorithm is automated, and it seems to facilitate the distribution of other kinds of content, whereas journalism facilitates truth. Of course, journalists also seek resonance, but within limits: they are discriminating when it comes to the people they speak to and how they check facts. Algorithms don't do that; they tend to reward things which are shared by more people, or which are noisier and have more emotional resonance. We need to think systemically about how to shape media systems, and not only try to block different kinds of content and disinformation. Meaning that, rather than a debate about why and how to block and take down harmful content, we should think about how to have a system which is optimized for truth, as with broadcasting and the press.
What does this mean in concrete terms?
We need to think about lots of different policy levers. At the level of the competition framework: do we need more platforms to compete, or are we looking at a regulated monopoly? What kind of information are platforms obliged to provide to users about how their algorithms select some content and filter other material out? Should there be new kinds of procedures? When should platforms be obliged to take down certain kinds of content? What should be left to them to decide? What should involve a judge? What should be industry-wide rules, and what should be left to company-specific rules?
There are lots of discussions about this going on. In Germany you have NetzDG, [Netzwerkdurchsetzungsgesetz] for example, which is part of this, and that changes the incentives slightly so that when something is clearly illegal, it's taken down more quickly. But I think there are a number of other things that need to be done. For example, with things that are not illegal but may be harmful to certain people—or that certain people might consider to be harmful. My overall view is that we need to bring together the different policy levers. We need to do so in a way which is not controlled by any one political interest or political party and which involves civil society. So if we're designing new rules for these systems, we need those rules to be trusted and those rules to be more evolved through involving civil society at a nation state level. It's an extremely difficult policy process.
But NGOs, governments and the EU are making huge efforts and spending possibly millions of dollars on fact-checking initiatives. What are your thoughts on tackling misinformation with the fact-checking approach?
I think there are a lot of benefits in this. Some of them are not the direct benefits of reducing exposure to misleading content; a lot of the benefit really lies in consciousness-raising and making people aware. The platforms, too—Facebook, WhatsApp, and Google or Alphabet and YouTube—are trying to develop a new ethics of responsibility. So they're saying: "Okay, we will work with third-party fact-checkers, so it's not us making the decisions. If we decide something is fact-check worthy, we'll either label it or bring it down in the rankings." I think it can be effective. It needs to be done carefully, and above all transparently.
Because fact-checking can have perverse consequences. A lot of people, not just conspiracy theorists, see something labeled and think, "Oh, they're trying to hide something." We need a lot more research on what those labels actually do, but I think it is worth doing. Is your view that there's too much money being spent on fact-checking?
No, we are trying to find out what impact fact-checking has.
I am trying to think of evidence and research, for example, the Information Disorder report by Claire Wardle for the Council of Europe. The distinction between disinformation, which is deliberately and maliciously misleading, and misinformation, which is merely sloppy, is useful. But I differ from her view slightly, because what the policy community and the European Union do is say: let's narrow it down, talk only about deliberate disinformation, and try to stop that. I think you actually need to broaden it and think systemically about how media systems work and how they can be optimized for things which are true. Historically, that has been done by journalism. We need to think about the role of artificial intelligence, the role of algorithms, the role of independent fact-checkers, and the overall environment of choice, competition and informed consumers. It is an optimization model.
This interview has been edited for length and clarity.