Greek elections, neural nets and the “Echo chamber” effect

Petros Demetrakopoulos
7 min read · May 28, 2023


Photo by Arnaud Jaegers on Unsplash

Last Sunday, the governing center-right party “New Democracy” (ND) won the Greek national elections by a wide, and somewhat unexpected, margin over the main opposition center-left party “SYRIZA”.

Although the polls had correctly predicted the reelection of ND and almost everyone expected it, almost no one anticipated that the gap between the two parties would exceed 20 percentage points.

Following the election outcome, a considerable number of people in my social media network expressed strong discontent and skepticism regarding the electoral process or the vote-counting procedures. These were mostly people in my age group, Millennials and early Gen-Zers (the generations that largely “reside” on social networks and spend a considerable amount of their time on them), who share a political or ideological affinity with “SYRIZA” or the broader left. Some of their reactions were enraged and even inappropriate in terms of political culture. They found it very hard to accept that “SYRIZA” had suffered such a heavy loss, as it contradicted the prevailing sentiment they had perceived within society and social media before the elections. This is worrisome because, in the months before the elections, there were no polls or other data suggesting that ND would lose.

This situation is indeed a cause for concern, as some of the factors contributing to this effect have been extensively studied and are well documented. One notable factor is that people who lean towards left-wing political ideologies traditionally show a greater inclination to openly discuss and express their political beliefs than their counterparts on the center or right of the spectrum, so they tend to have a more prominent and vocal presence. Furthermore, their exposure to information and opinions that reinforce their existing political convictions creates a reinforcement loop that isolates them from alternative perspectives, because they tend to befriend people with similar beliefs and consume media with a specific political direction. This phenomenon is commonly referred to as the “Echo chamber” effect, by analogy to a room where people hear their own voices amplified by echoes.

Extensive research has consistently demonstrated that the echo chamber effect is amplified within social media platforms (this YouTube video explains it very clearly). This is primarily due to users’ tendency to follow and engage with others who share similar traits and viewpoints. As a result, users are often exposed almost exclusively to tweets, stories, or posts that align with their own beliefs on significant social, political, and ideological matters. Unfortunately, people tend to extrapolate linearly from the sample they see, so social media users may erroneously assume that the political perspectives they encounter within their social media circles are representative of the broader society in similar proportions. And things on social media tend to be even worse than that, due to the technical details of the algorithms and systems that suggest content to users.

Let’s get a bit technical

On top of that, social media platforms do not suggest content to users in a random, unbiased way. They deploy very complex systems and algorithms that take into account a vast number of parameters, often including historical data about the content a user has engaged with. Most of the time, what social media platforms try to maximize is the time users spend on them, not how well-informed users become about social or political matters.

But let’s see how content is suggested on one of the most famous social media platforms, Twitter.

How Twitter recommends and shows tweets to users

In early 2023, Twitter open-sourced the recommendation algorithm it uses to suggest tweets to users. The algorithm selects, for each user, only the tweets that are relevant and engaging for that specific user. Open-sourcing it made it possible for developers with a sufficient level of understanding to gain a really deep insight into how content is suggested to users by a social media platform.

While I will avoid as many technical details as possible, I will try to explain how the core of the recommendation algorithm works. The heart of the algorithm is called “Heavy Ranker”. It is a neural network that scores the relevance of each candidate tweet to the specific user it has to suggest tweets to. It takes into account thousands of parameters, such as whether the user who originally posted a candidate tweet gets many likes or retweets, the similarity of the candidate tweet to tweets the user previously engaged with, and so on. In case you want to take a deeper dive into the exact features used, you can follow this link. The tweets with the highest relevance scores are then shown on the user’s timeline.
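To make the idea concrete, here is a deliberately simplified sketch of scoring and ranking candidate tweets. The feature names and weights below are invented purely for illustration; the real Heavy Ranker is a large neural network with thousands of learned features, not a hand-weighted sum.

```python
# Toy sketch of ranking candidate tweets by a relevance score.
# NOT Twitter's actual Heavy Ranker: features and weights are invented.

# Hypothetical per-tweet features, already scaled to 0..1
candidate_tweets = {
    "tweet_a": {"author_popularity": 0.9, "similarity_to_history": 0.2, "recency": 0.8},
    "tweet_b": {"author_popularity": 0.4, "similarity_to_history": 0.9, "recency": 0.5},
    "tweet_c": {"author_popularity": 0.1, "similarity_to_history": 0.1, "recency": 0.9},
}

# Hypothetical weights; in the real system these are learned from past engagement data.
weights = {"author_popularity": 0.5, "similarity_to_history": 1.5, "recency": 0.3}

def relevance_score(features: dict) -> float:
    """Weighted sum of features, standing in for the neural network's output."""
    return sum(weights[name] * value for name, value in features.items())

# Score every candidate and show the highest-ranked tweets first,
# mirroring how the top-scoring candidates end up on the user's timeline.
ranked = sorted(candidate_tweets.items(), key=lambda item: relevance_score(item[1]), reverse=True)
for tweet_id, features in ranked:
    print(tweet_id, round(relevance_score(features), 3))
```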

As the fancy term “neural network” may mean nothing to you, let’s dive a bit deeper and try to explain, as simply as possible, what neural networks are, how they are trained, and why data is crucial for a neural network’s output. In our case, that output is the set of tweets shown to a user out of all possible tweets on Twitter.

What are neural networks and how do they work?

In layman’s terms, a neural network is like a group of little workers, called neurons, that work together to solve problems. Each neuron is responsible for a small part of the problem, and they all work together to find the answer.

Neurons are connected to each other, just like friends holding hands in a circle. They pass messages to each other, saying things like “I think this tweet is relevant!” or “I’m not sure, let’s ask the next neuron.”

To teach the neural network how to solve a problem, we show it lots of examples. In our case, if we want it to distinguish relevant from irrelevant tweets, we show it many relevant tweets and say, “These tweets are relevant to this user.” We do the same for the irrelevant tweets. The network looks at the tweets and learns to recognize the characteristics that make a tweet relevant for a specific user. These characteristics may relate not only to the content of the tweet but also, as mentioned above, to metadata such as engagement (likes, replies, retweets, etc.).

When the neural network sees a new tweet, it goes through all the neurons, and each one decides whether the tweet is relevant or not. In cases like the one we are examining, this can be quantified, so the network outputs a number expressing how relevant a tweet is. The neurons communicate with each other, and finally the network gives its best estimate of the tweet’s relevance. The more examples we show the neural network, the better it gets at distinguishing relevant from irrelevant tweets. It’s like practicing a lot to get really good at something.
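As a toy illustration of this training process, here is a minimal, self-contained sketch of a tiny neural network learning from labeled examples. The features (scaled likes, retweets, and similarity to the user’s past activity) and the labels are made up for the example and are far simpler than anything the real system uses.

```python
# Minimal sketch: training a small neural network to score tweet "relevance".
# Features and labels are invented; this only illustrates learning from examples.
import numpy as np
from sklearn.neural_network import MLPClassifier

# Each row: [likes (scaled), retweets (scaled), similarity to the user's past tweets]
X_train = np.array([
    [0.8, 0.7, 0.9],   # engaging and close to the user's interests
    [0.6, 0.5, 0.8],
    [0.9, 0.8, 0.7],
    [0.1, 0.0, 0.2],   # low engagement, unrelated topic
    [0.2, 0.1, 0.1],
    [0.0, 0.1, 0.3],
])
y_train = np.array([1, 1, 1, 0, 0, 0])  # 1 = "relevant to this user", 0 = "irrelevant"

# A small feed-forward neural network: the "group of little workers"
model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
model.fit(X_train, y_train)

# A new, unseen tweet: the network outputs a number expressing how relevant it thinks it is.
new_tweet = np.array([[0.7, 0.6, 0.85]])
print("Estimated relevance:", model.predict_proba(new_tweet)[0][1])
```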

Wait, this article was supposed to be about the result of the Greek elections, right? How are neural networks relevant?

Here is where the interesting stuff begins. As mentioned at the very end of the previous paragraph, the more examples we show the neural network, the better it gets at distinguishing relevant tweets from irrelevant ones. However, this only holds if the data is balanced and representative of the real world. The neural network needs to be trained on a sufficient number of different kinds of tweets, ideally posted by accounts from across the political spectrum. Otherwise, the resulting neural network will be what we formally call biased: because the data it was trained on was not representative enough, it learned only that subset of the data, while the real world follows a different distribution.

In our case, this means that if the network is mostly trained on tweets supporting or expressing a specific political wing, with no significant amount of tweets from the opposite political wing, then the algorithm is trained on biased data that does not represent the real situation in the rest of society.
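To see how this plays out, here is a minimal sketch of a model trained on an imbalanced dataset. All numbers, features, and the political “camp” flag are invented purely for illustration, and the model is a simple logistic regression rather than Twitter’s actual neural network.

```python
# Minimal sketch of how an imbalanced training set produces biased scores.
# Features, labels and the "camp" flag are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Features per tweet: [engagement level, comes_from_overrepresented_camp]
X_train = np.array([
    [0.90, 1], [0.85, 1], [0.80, 1],   # engaging tweets from the dominant camp -> marked relevant
    [0.20, 1], [0.30, 1],              # boring tweets from the dominant camp -> marked irrelevant
    [0.25, 0],                         # the *only* example from the other camp, and it was irrelevant
])
y_train = np.array([1, 1, 1, 0, 0, 0])

model = LogisticRegression().fit(X_train, y_train)

# Two new tweets with identical engagement, differing only in which camp they come from.
new_tweets = np.array([[0.85, 1],    # dominant camp
                       [0.85, 0]])   # under-represented camp
scores = model.predict_proba(new_tweets)[:, 1]
print("Dominant-camp tweet relevance:     ", round(scores[0], 3))
print("Under-represented tweet relevance: ", round(scores[1], 3))
# The second score is lower purely because the model never saw enough
# relevant examples from that camp, not because the tweet itself is worse.
```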

As you may have already understood, neural networks trained on biased data are very prone to making biased suggestions. The algorithm had not seen enough examples of tweets composed by center or right-wing supporters, and thus it simply had not learned how to suggest them.

And how do you know that “Heavy Ranker” is biased, and what data it has been trained on?

For the last 10 years, I have been closely following the Greek Twitter community. Throughout that decade, it was more than apparent that there was a very heavy imbalance between left-wing inclined accounts and accounts representing the rest of the political spectrum.

The imbalance was so extreme that the distribution of Twitter accounts by political wing was far from the actual distribution of political wings in society. The left-wing accounts, which still make up the vast majority of the Greek Twitter community, also tweeted at an enormous rate, especially during the years of the financial crisis that hit Greece, and often in very toxic language. These, then, were the data the network was mostly trained on.

Over all these years, the algorithm has been trained on biased data that does not represent the real situation in the rest of society. As mentioned in the first section, left-wing inclined people show a greater inclination to openly discuss and express their political beliefs than their counterparts on the center or right of the spectrum. As center and right-wing inclined accounts were fewer by orders of magnitude (and the tweets expressing these political views were obviously far fewer as well), the algorithm did not have enough data to treat tweets expressing these views equally, so it probably suggested more of the left-wing inclined tweets. In the era of social media and data-intensive systems, this imbalance massively impacted the quality and the variance of the suggested tweets.

Conclusion

Consequently, left-wing inclined users who observed such an imbalance of tweets and posts in their social media timelines, further amplified by the “Echo chamber” effect and the biased suggestion algorithm, likely developed the perception that this prevailing trend accurately reflected the broader societal landscape. The reality, however, was far from what Twitter was suggesting to them.



Petros Demetrakopoulos

💻 Code-blooded, 🌏 Traveler, Lifelong learner 📚. Currently studying Data Science and AI at TU/e, Eindhoven, NL. https://petrosdemetrakopoulos.github.io