The robots are polarising how we consume news – and that's how we like it

An article recently published in the American Journal of Political Science claims to have found proof that the internet is fuelling polarisation. The article uses data from 2004 to 2008 to show that those with better internet access consume more partisan media, and that greater exposure to biased news sources makes people more hostile to opposing political viewpoints.

The authors conclude that partisan animus would be two percentage points higher if all US states had implemented policies that resulted in greater broadband adoption.

But is internet access making us less tolerant? Are we better off without it?

There is good reason to be concerned about changes in news media distribution and the impartiality of news sources. However, digital inequality is not the way to understand or measure it.

Polarisation and the human condition

Between 2004 and 2008, when the data were collected, Americans were actively seeking out the media they consumed, either by going directly to online news sites or through search. It is possible that those who preferred their media in paper form experienced the same reinforcement of their views by reading and purchasing more of that same media.

What this research shows is that we like to spend time looking at things that make us feel sure of ourselves. As moral psychologist Jonathan Haidt has shown, polarisation grows out of our evolutionary groupishness and our desire to build self-narratives that correspond with grand political narratives in order to bind ourselves to others:

[A]n obsession with moral righteousness (leading inevitably to self-righteousness) is the normal human condition. It is a feature of our evolutionary design, not a bug or error that crept into minds that would otherwise be objective and rational.

The internet is one means by which we enact self-righteousness. In other words, it does not cause us to be anything, but it does enable who we are or want to be.

Studies of the digital divide show that not only does digital inclusion fall along socioeconomic lines, but those with lower socioeconomic status are less likely to seek out opportunities online that might help them improve their life circumstances (such as job-seeking or online education).

Expecting that greater internet access will lead to a more tolerant or equal society goes against what we know of internet use: the way we use the internet conforms to our social world and capacities.

So, what is the relationship between internet access and partisan hostility? The issue we should be looking at is not our own selections and biases, but what happens when those are enacted for us by machines.

Media ethics and algorithmic selection

Today, four in ten Americans get their news from Facebook, among other sources. News delivery via social media works on a business model that exploits the same need for self-validation that Haidt has identified.

To deal with the noise of the internet, Facebook has over time developed algorithms that can select and order information based on signals such as likes, reading time, shares and comments.
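To make that mechanism concrete, below is a minimal sketch of signal-weighted ranking; the Post structure, signal names and weights are illustrative assumptions, not Facebook's actual system.

from dataclasses import dataclass

@dataclass
class Post:
    title: str
    likes: int           # engagement signals named in the article;
    read_seconds: float  # the exact features Facebook uses are not public
    shares: int
    comments: int

def engagement_score(post: Post) -> float:
    # Toy relevance score: a weighted sum of engagement signals.
    # The weights are arbitrary placeholders for illustration only.
    return (1.0 * post.likes
            + 0.1 * post.read_seconds
            + 2.0 * post.shares
            + 1.5 * post.comments)

def rank_feed(posts: list[Post]) -> list[Post]:
    # Order posts so the most-engaged-with content appears first.
    return sorted(posts, key=engagement_score, reverse=True)

Ranking by engagement alone is precisely what rewards content that confirms what readers already believe, because that is what they like, share and comment on.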

In June, Facebook announced it would promote content shared by friends and family. This means users are more likely to see news content from people in their networks than that offered directly by news outlets.

And, after word got out that humans curated the news classed as “trending”, Facebook recently said it would drop people from the process and move to “a more algorithmically driven process”.

The problem with this is that fake stories can get through, and stories that trend can be of dubious provenance, designed to attract clicks rather than to inform.

One way of seeing the problem is that it is the media ethics of those producing such stories that needs fixing, not the behaviour of media consumers. Possibly in recognition of this, Facebook said it would deprioritise posts with headlines that withhold information and create misleading expectations, such as those ending in:

You won’t believe what happens next.
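A crude way to picture that deprioritisation is a heuristic that down-weights posts whose headlines match known clickbait phrasings. The pattern list and penalty below are invented for illustration and say nothing about how Facebook's actual classifier works.

import re

# Hypothetical patterns; the real detection system is not public.
CLICKBAIT_PATTERNS = [
    r"you won't believe",
    r"what happens next",
    r"this one trick",
]

def clickbait_penalty(headline: str, penalty: float = 0.5) -> float:
    # Return a multiplier that down-weights a post if its headline
    # matches a clickbait pattern, and leaves it untouched otherwise.
    lowered = headline.lower()
    if any(re.search(pattern, lowered) for pattern in CLICKBAIT_PATTERNS):
        return penalty
    return 1.0

Such a multiplier could simply be applied to the engagement score in the earlier ranking sketch.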

Tweaking the algorithm is an easy way for Facebook to respond. It is not a media outlet but a social networking site; therefore, it gets to shrug off responsibility.

As Tarleton Gillespie of Microsoft Research points out, social media platforms have long insisted on their own impartiality – particularly when it comes to legal responsibility. However, they will also behave like public institutions when it suits them – by, for example, policing obscenity, hate speech and fundamentalist videos.

The ideal human-machine interaction?

Facebook has taken our own behaviour – our insatiable appetite for what we already believe – and automated it.

In Our Robots, Ourselves, robotics expert David A. Mindell observes that the most advanced technologies are not those that stand apart from people, but those that are designed to help us by being embedded in, and responsive to, human networks.

Mindell argues we need to change the way we think about robots: rather than fearing automation, we need to understand how to work with it.

Perhaps there is a need for a platform that deliberately challenges our existing views – one where we consciously control the algorithm through transparent and meaningful interaction, steering it one way when we want to be challenged and the other way when we want to be comforted.
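As a sketch only: if each post carried a hypothetical score for how strongly it agrees with the reader's existing views, such a user-controlled dial might look like this.

from dataclasses import dataclass

@dataclass
class ScoredPost:
    title: str
    engagement: float  # predicted engagement, as in the earlier sketch
    agreement: float   # hypothetical: 1.0 confirms my views, 0.0 challenges them

def rank_with_dial(posts: list[ScoredPost], challenge: float) -> list[ScoredPost]:
    # 'challenge' is a user-set dial in [0, 1]:
    #   0.0 reproduces a comfort-seeking feed (favour agreement);
    #   1.0 favours posts that push back against the reader's views.
    def score(post: ScoredPost) -> float:
        viewpoint_term = (1 - challenge) * post.agreement + challenge * (1 - post.agreement)
        return post.engagement * viewpoint_term
    return sorted(posts, key=score, reverse=True)

The point of the dial is transparency: the reader, not the platform, decides when to be comforted and when to be challenged.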

Rather than feeling nostalgic for an era when a small number of news outlets did the quality control for us, the answer may be to create algorithms that help us to be suspicious, questioning and aware of other viewpoints.

Ellie Rennie receives funding from the Australian Research Council and Telstra. She sits on the Board of Directors of the Community Broadcasting Foundation and EngageMedia.

This article was originally published on The Conversation. Read the original article.
