We’re living in an era of “new proximity” where information is readily available to make us feel more connected to each other and the world around us. In theory, this should also instantaneously make us more informed, right? Not so. The way we get news and information through the prism of algorithms in places like Facebook, Twitter, and Google is a paradox of sorts.
This “new proximity” (a totally non-scientific term that I have coined for the purposes of this article) is simultaneously bringing us closer together and pushing us further apart. It’s also bringing us closer to, and further from, accurate news and information.
At a time when the biggest tech companies in the world are struggling to control the spread of misinformation through poisonous algorithms, a new Irish start-up called Neva Labs is hoping to make a virtue of the algorithm in the fight against fake news.
Neva Labs has its work cut out for it, though. The problem of misinformation runs much deeper than sporadic cases of our friends unintentionally sharing “alternative facts” on their social media feeds, or diving down a rabbit hole of echo chambers because of over-personalisation.
The scary amount of trust we place in sites like Google and YouTube to be oracles of truth was exemplified this week in the aftermath of the Las Vegas shooting. The name “Geary Danley” wrongly topped Google search results after a 4chan-inspired conspiracy theory identified him as the perpetrator of the crime that left 59 people dead and 527 injured.
We expect Google to instantly deliver all of the answers. But we forget that artificial intelligence can be outsmarted, and that gamed algorithms (as in the 4chan example) can have catastrophic consequences.
This algorithmic disease is widespread and not confined to search engines. On YouTube, for example, conspiracy theory videos claiming the Las Vegas shooting was a hoax were promoted on the site, causing huge hurt to the loved ones of the victims.
An algorithm was also responsible for pushing the misspelled hashtag #LasVagasShooting to the top of Twitter on Sunday. This could be perceived as a small error. But even small mistakes erode trust and undermine expectations of accuracy. As Mashable’s Business Editor Jason Abbruzzese aptly points out: “Algorithms alone are not yet good enough to determine when something is a mistake or on purpose. Twitter’s system couldn’t tell that ‘Vagas’ was a mistake rather than a pun or a purposeful reference. That’s not really the algorithm’s job.”
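Abbruzzese’s point is easy to see in code. A minimal sketch (this is an illustration, not Twitter’s actual system): a single edit-distance check can flag “LasVagasShooting” as being one character away from “LasVegasShooting”, but the number alone cannot reveal whether that one character is a typo or a deliberate pun.

```python
def edit_distance(a: str, b: str) -> int:
    """Classic dynamic-programming Levenshtein distance:
    the minimum number of single-character insertions,
    deletions, or substitutions turning a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(
                prev[j] + 1,                 # delete ca
                curr[j - 1] + 1,             # insert cb
                prev[j - 1] + (ca != cb),    # substitute ca -> cb
            ))
        prev = curr
    return prev[-1]

# "Vagas" is a single substitution away from "Vegas" -- trivially
# close enough to flag as a likely misspelling, yet the distance
# says nothing about the author's intent.
print(edit_distance("LasVagasShooting", "LasVegasShooting"))  # 1
```

Flagging the string is the solved part; classifying intent is the part Abbruzzese says is “not really the algorithm’s job.”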
How can trust be restored?
It’s clear that AI, algorithms, and personalisation are part of the problem. But they could also be the solution. Step forward Neva Labs, which aims to use artificial intelligence to burst the filter bubble and bring truth and trust back into how we consume news.
Neva Labs is the brainchild of Mark Little and Áine Kerr, who both have a deep understanding of journalism and social platforms, having worked as news journalists in Ireland, at the start-up Storyful, and then in leadership roles at Twitter and Facebook, respectively.
They’ve set up an experimental team in Dublin and say they’re hoping to launch their first products next year. “I think the future is going to be a Spotify or Netflix for news,” Little told The Irish Times. “We are really focusing on the individuals’ experience of news. This is not just about saving journalism.”
Like Spotify and Netflix, the product that Neva Labs eventually releases is likely to be subscription-based, and there is evidence to suggest that people are willing to pay for quality over quantity.
It all sounds really idealistic, but if they manage to pull it off, it could revolutionise the way in which we distribute and receive information. And it could restore truth, and more importantly trust, in the news media landscape.
So how exactly do Neva Labs plan to use AI to tackle this challenge, when major tech companies are already failing to fix it? I caught up with Kerr to hear more:
Anne-Marie Tomchak: How do you intend to use algorithms for positive impact?
Áine Kerr: What we’ve learned is that people want more filters and controls, they want personalised recommendations and an aggregation of news content. We know people are overwhelmed with the amount of news and information, and indeed misinformation. We also know that people’s trust in news and information has been severely tested. With that in mind, we want to enhance people’s existing social media experiences and build a conscious layer of control and filters over them.
Our key differentiator will be the control the individual gets over the personalisation of their news consumption and the improvement in their return on attention. It has more in common with health and fitness apps than most existing news products. It will also help people spot inefficiencies, weaknesses, and vulnerabilities to manipulation and misinformation.
If successful, we hope people will spend less time scrolling and more time clicking on information and news uniquely relevant and interesting to them. They will be protected against misinformation and manipulation. And they will be open to receiving a more diverse range of information sources.
What is the biggest challenge facing news organizations and consumers today?
The issues facing the industry are complex and diverse, ranging from inconsistent revenue models to information overload. But one fundamental issue is the breakdown in trust. You need only look at the Reuters Institute Digital News Report from earlier this year, which showed that just one in four people said social media did a good job sorting fact from fiction.
Mark (Little) and I would contend that fake news is ultimately a symptom of a much wider societal issue. If trust is to be restored, we collectively must question what a more transparent, public-centric journalism looks like, what participatory journalism means, and what a news experience looks like where the individual is in control of the AI, their filters, and their sources. There’s a trust gap now between news organisations and consumers. But we believe there’s a way to close it by giving people control over their news experience; an experience that is personalised but also challenges them with new sources, ideas, and perspectives.
Will a monetised model of receiving ‘truth’ lead to a greater social gap between those who can afford to be ‘enlightened’ and those who remain ‘manipulated’?
We hope we can build something that is affordable. We want to play our part, with the industry as a whole, in restoring trust, reducing polarisation, creating engaged communities, and connecting people to local and relevant news and information. To do that, we will build partnerships with stakeholders across the education, research, advertising, publishing, data, technology, and social media sectors.
Your venture comes at a critical time. This week alone there were multiple examples of misinformation spreading after the Las Vegas shooting.
We’ve both worked inside platforms and we know how seriously they take this issue. I spent the past year working on the issues of misinformation, polarisation and empowerment of users inside Facebook where they adopted a three-pronged approach: disrupting financial incentives to create fake news; creating new features that reduce the sharing of misinformation and partnering with third party fact-checkers; and focusing on training and tools for the community such as tips and tricks on how to spot fake news.
I say all of this to demonstrate that I know social media platforms take their role seriously when it comes to the spread of misinformation. They recognise there is no one solution and it is instead going to take a diverse range of tests and solutions to tackle the issue.
There is a risk that Neva Labs, while well meaning, will ultimately reinforce yet another echo chamber. How can personalisation ever work without being tainted by negative agents?
As you’ve hopefully now seen, we want to provide people with personalisation in order to help them deal with the overwhelming stream of news and information. But we also hope to challenge them, introduce people to new sources and new ideas, and ultimately improve the diversity of their news and information experience.
With Storyful, we often talked about the idea of the “Human Algorithm,” that you needed humans making sense of technology and tapping into communities, the people closest to a story, in order to make sense of what was happening. Storyful was a unique newsroom relying on the power of proprietary technology and expert journalism to find signal in the noise, verify the information and distribute it to the world’s media. We will be reflecting on those Storyful learnings as we go forth with this new startup.
What about the role of educators in ensuring people are media literate?
There’s a responsibility on all of us to give people the skills, tools and information they need to be their own arbiters of truth, to be able to spot fake news, to be the person in their community who is quick to debunk something and warn off others. News literacy isn’t something that is taught only in schools. Publishers can play their part with more transparency about how the sausage is made, more visual cues to help differentiate between a factual piece of reporting and an opinion piece, more outreach to their communities to ensure the news and information is relevant to them, that they see themselves in the content. That means thinking about issues of diversity in newsrooms, building new ways of doing transparent and participatory journalism that is public-centric.
For all of us as an industry, we need to increasingly focus on the demand for news and information, and not just the supply of it. Our tendency is naturally to dedicate huge amounts of our time to finding content, creating stories, and distributing those stories in the hopes of building audience and ultimately monetising it.
Too often, we probably don’t think enough about the demand side: how we can better connect with communities and what they need from journalists and publishers. Getting the balance between supply and demand right is now critical.