"Our democracy is only as good as the information that voters have"

Interview

A conversation with Carlos Hernández-Echevarría from Maldita.es, a Spanish independent journalistic platform focused on countering disinformation and monitoring public discourse through fact-checking and data journalism techniques.

Reading time: 12 minutes

Disinfo Talks is an interview series with experts who tackle the challenge of disinformation through different prisms. Our talks showcase different perspectives on the various aspects of disinformation and the approaches to countering it. In this installment we talk with Carlos Hernández-Echevarría, a journalist at Maldita.es, a non-profit foundation and media outlet that fights misinformation and promotes transparency through fact-checking and data journalism.

Tell us a bit about yourself, how did you become interested in disinformation?

Carlos Hernández-Echevarría: Before joining Maldita in 2020, I worked in television news for 15 years, gaining experience as a reporter, a correspondent, and an editor. I was lucky because two very good friends who used to work with me in television started Maldita. From the earliest stage, it had a focus I liked a lot in the fight against disinformation, which was to provide quality content, solid information that actually had a shot at going as viral as disinformation itself. It's wonderful that we have reached the point of governing ourselves as democracies and seeing very clearly that things are better when everybody has a say. Our democracy is only as good as the information that voters have.

Why is disinformation dangerous for democracies? How much of a problem does it pose? Should we be worried about the future of our democracy, given the extent of false information being spread online?

Disinformation is an existential threat to our democracies and to our way of life. The whole system we've built to govern society is based on the notion that the common decision is the best way forward. However, if that process becomes full of toxicity, if it's based on false information, then people cannot make informed choices. We are currently still in a bad place. That said, I'm an optimist and I think there's a lot we can do. It took a very long time for most of the world to enjoy the privilege of self-government. In a large part of the world, people don't yet have this opportunity, but we do. We have democracies – we can go to the voting booth and elect our leaders to govern us in systems that have built-in safeguards against abuses of power. If we don't protect that process from disinformation, then the results are going to end up being quite similar to not having democracy at all.

What are some of the challenges that are unique to Spain?

In Spain, we are in a very politically polarized situation right now. Disinformation breeds polarization and polarization breeds disinformation; it's a vicious cycle. In Spain, we have a particular problem, especially when it comes to over-reliance on and confidence in private messaging. The polarization level is closer to Brazil or the US than to other countries in Europe. This has a lot to do with trust and how people inform themselves about current events. WhatsApp is very popular and also serves as a source of information, even about the pandemic. The private nature of these networks makes them difficult to monitor.

Are you able to identify the source? Who is behind these channels?

Normally, when you are a fact-checker, the first thing you learn is that there are things that cannot be checked, because they are not facts but pure opinions. What we work with are facts. Our focus is always on the content, because that's what can be fact-checked. But we also place a special focus on bad actors. We keep an eye on them because they have a track record of disseminating disinformation. The first time we hear about a particular piece of disinformation is because someone sent it to us; but this is obviously someone who cared enough about disinformation to actually alert us. The actual identification of bad actors takes place through our monitoring of public platforms, then going back to see who posted a particular item of disinformation for the first time. Obviously, I have a clear picture of the motivation of the person spreading false information, but I'm not after the motivation. What I'm interested in as a fact-checker is being able to determine and state with certainty that ‘this is not supported by facts,’ or ‘it is.’

Could you tell us a bit about how you perceive the role of your organization in terms of coming up with solutions?

Our organization is a non-profit foundation that fights disinformation in a number of ways. Fact-checking is only one of them. We also do data journalism and other activities to explain reality more broadly. Regarding fact-checking, while it is obviously not the whole ballgame, it is an important part of the solution. Fact-checkers need to be very smart with formats: to absolutely respect the quality of the information, yet also make that piece of information, the debunk, attractive enough to be shareable and even viral. Part of this has a lot to do with relying on your community, or building one. We have thousands of people who continuously send us content to verify. That's a treasure, because the very people who give us things to work on (to see how viral something is, to assess how urgent it is to intervene) are the same people through whom we disseminate the debunk, the quality information, and say: "Please, do share this." It's not only about republishing what you already have. It's a more creative and strategic process of creating a debunk that works, something that is as inviting to share as a hoax is.

Ultimately, the solution is not so much about fact-checking as about media literacy and education. It's about people believing less and being more critical: you receive something and you resist the urge to share it. This has a lot to do with personality biases and traits. People should be able to stop and think when they encounter a catchphrase or something they really believe in. We try to design and provide technological tools to help people cultivate awareness and fact-check, but the biggest difference is at the level of individual behavior – pausing for a moment to think critically and deciding "I'm going to stop," or maybe deciding to Google it first. The decision to wait would be more useful than anything else. This behavior modification could make some difference if taught in schools and through public education campaigns, instilling in people the awareness that not everything they receive is going to be true.

How do you view the type of technology employed by social media platforms? Do you think that it could ever replace your work, or is it more of a complementary measure?

At this point, I don't think automated fact-checking is a thing. There are obviously many ways in which technology helps our work, especially when it comes to detection. There are some serious efforts being made to detect items that are going viral and have the potential to be disinformation; but right now, there's no machine in the world that can assess whether they qualify as disinformation. Machines still do not have the capability to identify sarcasm, humor, intention. I'm talking in general terms. Maybe artificial intelligence will develop to learn from what has been declared untrue in the past, analyze language, and then apply that to analyze content. But in the end, it has to be a person, a journalist, who, according to a very clear methodology, decides to put a stamp on it and says: "This is disinformation." I don't see any automated way to do it right now. In fact, I don't even see it happening in the near future, because it butts heads with particular issues regarding freedom of speech. What we want to do as fact-checkers at Maldita is to be able to talk about methodology. We want to be crystal clear with a debunk. Technology is part of this – we believe in the power and potential of technology. We have an engineering team in house. Our AI chatbot on WhatsApp is a good example. We also now have a machine that knows how to match any content that comes in and can draw the kinds of graphs necessary for understanding why it is necessary to start investigating. That frees up a lot of our team's time to actually do what is critical: the investigative part.

What is the most sensational or persistent piece of disinformation that you have come across?

We have a whole category of hoaxes and disinformation pieces that we call "family hoaxes," because they keep coming back: things that we verified as false two or three years ago and that keep resurfacing. For example, every time an election comes around, items circulate telling people that their mail-in vote is not going to count. We actually saw that massively in Spain, even before Donald Trump made it popular in the US. It comes back, and it changes. It's funny in a way, because when the government changes, the intent of the hoax changes. They just change a few names and locations and say, "This friend of the Prime Minister is actually managing the Postal Service right now; he's going to make sure your vote doesn't count." We have seen that against conservative governments and we have seen that against progressive governments. It keeps coming back every time there's an election. It's actually very harmful, because there is no basis to it. It would require a massive conspiracy; it is not credible and there is no proof at all, but it goes to the heart of the democratic system. It generates narratives about how the elected government is illegitimate because it didn't actually win, because of mail fraud and all that. It comes back every time.

But it's not only the things that are recurrent; it's also the things that normally don't receive attention. If you don't have a community-centered approach that alerts you to those issues, you might miss them, and they can be a serious problem for many people. Our mission is for fact-checking not to rely only on the highly informed. You need to look at the rest of society, at people who actually need you and don't have as many ways to inform themselves.

How do you measure the impact of your work? How do you know if you're actually making a difference?

We have a department that focuses on impact, to try to see how effective we are. There are some very serious academic studies on disinformation that tell us what we have believed all along: once people see the fact-check, they become much less credulous when presented with disinformation. Studies also show that debunks actually affect the way disinformation spreads. Maybe people do not go as far as to say "this is not true," but they become much more cautious before sharing content. Overall, the academic evaluations done in the field provide ample proof that fact-checking works.

Our challenge, as mentioned, rests in going beyond the usual realm of fact-checking and aiming to make an impact on society overall, by addressing the worries and doubts of the population at large. That's a process, but I think this community-centered approach is a step in the right direction. It involves listening a lot to what people are actually curious about and what truly concerns them. Let them tell you what they care about. In the beginning, we were engaging largely with political content, because it was crucial to target people in positions of power and hold them accountable. But we have progressed, and most of what we do now is not even remotely political. Often, it has to do with frauds, scams, and medical and scientific disinformation. People actually want to know if drinking salt water is going to prevent the coronavirus. I'm happy that we initially focused on monitoring politicians; but our society involves much more than politics, and given that these worries reach beyond Spanish society, we need to broaden our approach.


Carlos Hernández-Echevarría was a journalist for 15 years at laSexta, a Spanish television channel, where he worked as a reporter, correspondent, and executive producer of news programmes. Carlos is a Fulbright Scholar with a Master's degree in Elections and Campaign Management from Fordham University. He is also an analyst of US politics for elDiario.es, Historia y Vida, and El Orden Mundial. He currently works at Maldita.es.

Maldita.es is a non-profit foundation and media outlet that fights misinformation and promotes transparency through fact-checking and data journalism techniques, providing tools, technology, and information. It was the only Spanish organization appointed by the European Commission to take part in the high-level group on fake news and disinformation, and it is a signatory to the Code of Principles of the International Fact-Checking Network (IFCN).

The opinions expressed in this text are solely those of the author/s and/or interviewees and do not necessarily reflect the views of the Heinrich Böll Stiftung Tel Aviv and/or its partners.