We’ve probably all heard about “filter bubbles” by now – the idea that, despite having ready access to an effectively unlimited cache of information on any number of topics, people often end up in social media echo chambers. This doesn’t just happen through intentional selection (i.e., choosing to follow things online that reflect your pre-existing worldviews and biases), but also through the predictive power of algorithms that serve up news and headlines on your Facebook wall or Twitter feed based on past search history, clicks, location, and other factors.
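To make that feedback loop concrete, here’s a deliberately toy sketch (my own illustration, not any platform’s actual ranking algorithm; the topic labels and function names are invented) of how ranking stories purely by a user’s past clicks narrows a feed over time:

```python
from collections import Counter

def rank_feed(stories, click_history):
    """Rank stories by how often the user has clicked each topic before.

    A toy model: real feed-ranking systems weigh far more signals
    (location, social graph, dwell time, ...), but the feedback loop
    is the same -- past engagement boosts similar content.
    """
    topic_clicks = Counter(story["topic"] for story in click_history)
    return sorted(stories, key=lambda s: topic_clicks[s["topic"]], reverse=True)

# Five clicks on one kind of story and that topic dominates the feed;
# each new click then reinforces the ranking -- a filter bubble in miniature.
history = [{"topic": "politics-left"}] * 5 + [{"topic": "sports"}]
feed = rank_feed(
    [{"topic": "politics-right", "headline": "..."},
     {"topic": "sports", "headline": "..."},
     {"topic": "politics-left", "headline": "..."}],
    history,
)
print([s["topic"] for s in feed])  # ['politics-left', 'sports', 'politics-right']
```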

A cool demonstration of this: drawing on research from Facebook data scientists, the Wall Street Journal created an interactive tool that shows the kinds of stories that “very conservative” or “very liberal” Facebook users are likely to encounter. Look at the tag “inauguration,” for example, and you’ll either see stories discussing how the Women’s March drew much bigger numbers than Trump’s inauguration, or bogus headlines about the CIA apparently releasing photos of the inauguration that support Trump’s inflated numbers. The latter is obviously from the “very conservative” feed, and is of course demonstrably false.

You could say that filter bubbles are the natural outcome of our current media landscape – the result of algorithmic selection and age-old, subconscious cognitive biases. Take this to the next level (add some intentional misinformation, unintentionally funded through ads from big brands and allegedly amplified, at least in part, by concerted propaganda efforts), and you get fake news.

Much has been written about the phenomenon of “fake news,” misinformation that’s packaged to resemble credible journalism and spread online to large audiences; many have argued that it even influenced the outcome of the 2016 U.S. presidential election (others are less sure about that).

Conspiracy theories, hoaxes, and urban legends have always been fixtures of the internet (did you know that putting your iPhone in the microwave recharges its battery in minutes??). The difference with the 2016 ‘fake news epidemic’ is that deliberately constructed misinformation went viral, spreading across Facebook, seemingly reinforced by Google search results.

Buzzfeed reports that the top fake news stories received more shares and comments on Facebook in the last three months of the U.S. presidential campaign than top stories from the New York Times, the Washington Post, and CNN. Yes, not too long ago there would have been something ironic about a Buzzfeed report being widely cited in the war against misinformation and fake news sites, but hey, this is where 2016 has taken us.

Fake news stories about Clinton being involved in a child sex ring, or Democrats wanting to impose ‘Islamic Sharia law’ in Florida, spread across the web and were given more attention and credibility after being repeated by Trump and people associated with his campaign.

What kind of responsibility, if any, do companies like Google or Facebook have to stop this spread of misinformation?

Pushing back against this isn’t easy. After all, the features and affordances of social and digital media that fuel the spread of fake news are the very same characteristics that have allowed, for example, protestors and activists to challenge hegemonic media narratives. Lots of research and commentary has lauded social media’s capacity for presenting ‘alternative narratives’ that subvert the dominance of the mainstream news media; some blog posts on this very site have explored those issues. So how do you clamp down on fake news without censoring or creating barriers for activism, protest, civic engagement, and ‘real’ journalism?

Google, Facebook, and other companies and organizations have already begun experimenting with and rolling out tools to combat fake news.

Google reportedly banned 200 publishers from its AdSense network, after updating a policy that targets sites with misleading content. Filtering out advertisers that make false claims was already standard practice, but Google now explicitly prohibits impersonating news sites (for example, some fake news publishers used “.co” domain extensions to resemble news sites that end in “.com”). In addition to this crackdown, Google has begun incorporating a fact-check feature into some news pages. French newspaper Le Monde, meanwhile, has put together a database that lists over 600 fake news websites.
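As a rough illustration of that “.co” trick, a checker only needs to notice when an unfamiliar domain wraps a well-known news domain. This is a hypothetical sketch (the domain list and function are mine, not Google’s actual enforcement logic):

```python
# Hypothetical lookalike-domain check -- not Google's actual AdSense
# enforcement code, which weighs many more signals than the hostname.
KNOWN_NEWS_DOMAINS = {"abcnews.com", "nytimes.com", "washingtonpost.com"}

def looks_like_impersonation(domain: str) -> bool:
    """Flag domains such as 'abcnews.com.co' that tack an extra
    extension onto a genuine news domain."""
    return any(
        domain != real and domain.startswith(real + ".")
        for real in KNOWN_NEWS_DOMAINS
    )

print(looks_like_impersonation("abcnews.com.co"))  # True: '.com' plus a bogus '.co'
print(looks_like_impersonation("abcnews.com"))     # False: the genuine site
```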

Beyond these tech-focused solutions, some have argued that the best way to tackle the fake news problem is through education, making sure that students are taught philosophy, critical thinking, and media literacy. Technological solutions can only take us so far; if we rely on algorithms and programs to identify fake news, we’re outsourcing our critical thinking and essentially recreating the problem.
