Technological Traps and Disinformation

Our mental shortcuts push us to believe information that confirms our beliefs. They make us assume that celebrities are telling the truth. And they get us to share posts that trigger our emotions, without stopping to think. But in a world where information is mainly consumed online, the Web environment and its algorithms also contribute to this phenomenon. Here’s how.
The role of algorithms 

Algorithms are sequences of operations and calculations that govern how a system works. On social media and search engines, they customize content before the user is even exposed to it, and they promote posts that generate a lot of reactions. Algorithms determine the advertising, posts, articles and photos you see as you browse. 

For example, Google’s ranking system, of which PageRank is the best-known component, sorts the results that appear on the search engine. Several factors determine a page’s priority: the user’s language and location, the website’s security, how well the content is optimized for search engines, the page loading speed, the quality of the content, and its adaptation to mobile devices. 
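
To make the idea concrete, here is a minimal sketch, assuming a handful of invented signals and weights (this is not Google’s actual code), of how several factors can be combined into a single ranking score:

```python
# Purely illustrative: a toy scoring function that combines several ranking
# signals (names and weights invented for this example) into a single score.
# A real search engine weighs far more signals in far more complex ways.

def rank_score(page):
    weights = {
        "matches_user_language": 2.0,  # user's language and location
        "near_user_location": 1.0,
        "uses_https": 1.5,             # website security
        "seo_optimized": 1.0,          # content optimized for search engines
        "fast_loading": 1.0,           # page loading speed
        "content_quality": 3.0,        # quality of the content
        "mobile_friendly": 1.0,        # adaptation to mobile devices
    }
    return sum(weight * page.get(signal, 0.0) for signal, weight in weights.items())

pages = [
    {"title": "Page A", "matches_user_language": 1, "uses_https": 1, "content_quality": 0.4},
    {"title": "Page B", "matches_user_language": 1, "uses_https": 1, "content_quality": 0.9,
     "fast_loading": 1, "mobile_friendly": 1},
]

# Results are listed in descending score order: the highest-scoring page comes
# first, whether or not it is truly the most useful one for this particular user.
for page in sorted(pages, key=rank_score, reverse=True):
    print(page["title"], round(rank_score(page), 2))
```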

However, the first result isn’t always the one that will best meet our needs, nor is it necessarily the most appropriate for our search. Very relevant content may be hidden behind the first results.  

According to a 2019 analysis of five million Google search results, Internet users click 10 times more often on the first result than on the tenth. If a result moves up one position, its click-through rate increases by 30.8%. The first result obtains 31.7% of the clicks on a page containing 10 results.  
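
As a rough back-of-the-envelope illustration of those figures, assuming (purely hypothetically) that the 30.8% relative difference applies uniformly between every pair of adjacent positions:

```python
# Rough model of the cited figures: position 1 receives 31.7% of clicks, and
# each step down the page is assumed (hypothetically) to cost 30.8% of the
# click-through rate relative to the position above it.
top_ctr = 0.317
gain_per_position = 1.308

ctr_by_position = [top_ctr / gain_per_position ** i for i in range(10)]
for position, rate in enumerate(ctr_by_position, start=1):
    print(f"Position {position}: ~{rate:.1%} of clicks")

ratio = ctr_by_position[0] / ctr_by_position[9]
print(f"Position 1 attracts about {ratio:.0f} times more clicks than position 10")
# Roughly 11x under this simple model, in line with the "10 times more often"
# reported by the study.
```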

The reason is position bias. This is a cognitive bias that makes people prefer the information at the top of a list, regardless of its value or its real importance. Needless to say, this can greatly limit the value of a search. 

Customized content 

The algorithms behind your feeds give priority to the content most likely to generate interest, rather than showing everything in chronological order. The photos, videos and posts that appear first have therefore been sorted for each user: the algorithms choose what you see… and what you don’t see.  
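
As a minimal sketch of the difference, here is the same handful of invented posts ordered chronologically and then by a made-up engagement score (no real platform uses a formula this simple):

```python
# Toy feed: identical posts, two orderings. The engagement score below is
# invented for illustration; real platforms combine many more signals.
posts = [
    {"text": "Local council meeting recap",  "hour_posted": 9,  "likes": 4,   "shares": 0},
    {"text": "Outrage-bait celebrity rumour", "hour_posted": 11, "likes": 950, "shares": 300},
    {"text": "Fact-check of a viral claim",   "hour_posted": 13, "likes": 40,  "shares": 5},
]

chronological = sorted(posts, key=lambda p: p["hour_posted"], reverse=True)
by_engagement = sorted(posts, key=lambda p: p["likes"] + 3 * p["shares"], reverse=True)

print("Chronological:    ", [p["text"] for p in chronological])
print("Engagement-ranked:", [p["text"] for p in by_engagement])
# The emotional post jumps to the top of the engagement-ranked feed even though
# it is neither the most recent nor the most informative item.
```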

These algorithms change very often. But here’s what we know about Facebook, Instagram and YouTube. 

Facebook determines what users see according to their interaction habits (what they like, what they read, what they click on). It also relies on the media used in the post (photo, video, live) and the popularity of the post. A relevant article that doesn’t correspond to what you usually read is very likely to be “hidden” from you. 

Instagram, which belongs to Facebook, gives priority to content published by accounts with which the user frequently interacts.  

The videos recommended by YouTube, and those that appear in the platform’s search results, also depend on the relationship between the user and the source. If a user already likes a YouTube channel and spends a lot of time on it, videos by the same creator will be recommended to that user. If an Internet user has recently become interested in a particular subject, similar videos will appear in their search results and recommendations. The algorithm also takes the user’s demographic data into account.  
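
Here is a minimal sketch, using invented data and weights (not YouTube’s real algorithm), of how affinity with a channel and recent interest in a topic could be combined to rank recommendations:

```python
# Purely illustrative recommendation scoring based on two of the signals the
# article mentions: time already spent on a channel and recent search topics.
watch_minutes_by_channel = {"ChannelA": 240, "ChannelB": 5}
recent_search_topics = {"gardening"}

candidates = [
    {"title": "Gardening for beginners", "channel": "ChannelB", "topic": "gardening"},
    {"title": "New upload from ChannelA", "channel": "ChannelA", "topic": "cooking"},
    {"title": "Unrelated travel vlog",    "channel": "ChannelC", "topic": "travel"},
]

def recommendation_score(video):
    affinity = watch_minutes_by_channel.get(video["channel"], 0) / 60  # hours watched
    topic_boost = 2.0 if video["topic"] in recent_search_topics else 0.0
    return affinity + topic_boost

# Videos from a favourite channel and on recently searched topics rise to the top.
for video in sorted(candidates, key=recommendation_score, reverse=True):
    print(video["title"], round(recommendation_score(video), 2))
```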

Position bias can also pose a problem here. Say somebody sees a fake news item claiming that COVID-19 isn’t a major problem, based on videos showing empty emergency rooms in certain hospitals. If that user simply searches for “COVID-19 hospitals” and lives in a region that wasn’t hit hard by the pandemic, they will probably click on one of the first results, which only covers their own region, and look no further. In this case, the first result isn’t the most relevant one. 

The data void  

Like all search engines, Google abhors a vacuum. It constantly seeks to provide results for a search, even if they aren’t relevant. People with bad intentions can twist the situation to their advantage. 

A data void occurs when there is a very limited number of search results for a specific term or news item, or when the only information available is unreliable, false or extremist. This data void is often exploited to manipulate search results.

Extremist groups often use the latest news to steer Internet users to their content. That’s because these items haven’t generated many serious articles yet, but are triggering a lot of searches. Let’s say an event happened involving COVID-19, an ambulance and the Jacques-Cartier Bridge. Extremist groups will use these terms to exploit the data void. They know that many people will search for these terms before the first legitimate articles are published. The content, while obviously false, will be shocking and encourage people to make cash donations, for example.  

Information bubble 

The filter bubble can lock people into an intellectual environment by always offering content likely to correspond to their values, beliefs, opinions and ideas. Activist Eli Pariser conceptualized this phenomenon. 

Not just anything gets into the bubble. We tend to ignore or block whatever annoys us: the arguments of political parties we don’t support, information that could anger us, data that goes against our beliefs or preconceived ideas. We click more often on information that confirms what we want to hear, and some algorithms, which sort information as we browse, favour the items that generate the most engagement. 
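
A minimal sketch of that feedback loop, with entirely invented numbers, shows how a feed that learns from clicks can drift toward content the user already agrees with:

```python
# Toy simulation of a filter bubble: the feed adapts to clicks, and clicks go
# mostly to content that confirms the user's views. Every number is invented.
import random

random.seed(1)
mix_shown = {"confirms_my_views": 0.5, "challenges_my_views": 0.5}

for day in range(30):
    shown = random.choices(list(mix_shown), weights=list(mix_shown.values()), k=20)
    clicks = {"confirms_my_views": 0, "challenges_my_views": 0}
    for item in shown:
        # Confirmation bias: agreeable content is clicked far more often.
        if random.random() < (0.7 if item == "confirms_my_views" else 0.1):
            clicks[item] += 1
    # The algorithm nudges tomorrow's mix toward whatever was clicked today.
    total_clicks = sum(clicks.values()) or 1
    for kind in mix_shown:
        mix_shown[kind] = 0.8 * mix_shown[kind] + 0.2 * clicks[kind] / total_clicks

share = mix_shown["confirms_my_views"] / sum(mix_shown.values())
print(f"After a month, roughly {share:.0%} of the feed confirms the user's views")
```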

Let’s say some people were already opposed to vaccination before the pandemic. Their current searches will probably show them a large number of false and negative posts, but little substantiated scientific content on the subject.  

This means a large part of the information available on social media is “hidden” from us. More than ever, it’s important to make an effort to access a wide range of information that reflects all relevant points of view. 

Instant sharing 

News on COVID-19, whether it’s true or false, spreads fast. Part of its popularity is due to instant sharing features.  

Retweet (Twitter), Share now (Facebook), Reblog (Tumblr), Share (Instagram) and other similar buttons are functions that require little effort by the user.  

Given that confirmation bias pushes us to trust information quickly if it confirms our beliefs, instant sharing can be dangerous. 

On Twitter, people retweet false information twice as often as true information, an MIT study reported. A Cornell University study concludes that on Reddit, 73% of the links shared are rated positively or negatively before they are opened. 

Beware! You can easily pump up the volume on a false and dangerous message.  

Here’s some good news. Twitter recently started testing a new feature to counter the problem. If an Internet user is about to retweet an article without having opened it, a notification asks whether they would like to read it before sharing.  
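
A minimal sketch of that kind of nudge, with hypothetical function and field names (not Twitter’s actual code):

```python
# Hypothetical sketch of a "read before you retweet" nudge: if the user is
# about to reshare a link they never opened, show a prompt instead of sharing.
def handle_retweet(tweet, user_opened_link):
    if tweet.get("has_link") and not user_opened_link:
        return "PROMPT: Want to read the article before retweeting?"
    return "RETWEET: shared"

print(handle_retweet({"has_link": True}, user_opened_link=False))
print(handle_retweet({"has_link": True}, user_opened_link=True))
```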
