An initial thesis

Recommendation algorithms* and moderation rules in social media shape our societies.

Their impact is hard to overestimate, be it positive or negative. Much of the polarization, fear and insecurity we experience in our pluralistic societies today is fueled by the use (and abuse or exploitation) of social media recommendation algorithms. They can make social media highly addictive and psychologically distressing. They can increase insecurity, fear, hatred or paranoia and trigger a radicalization of positions.

Nevertheless, we depend on them: an open public discourse online can hardly be imagined without rules of recommendation and moderation, because only they can shape a huge cacophony of posts into a more or less meaningful forum.

*Please bear in mind that algorithms can be established and executed either by people or by machines.

.

Why is that so?

Recommendation algorithms …

• …compose the ingredients of our daily (social) media cocktail.

• …are, in contrast to blatant censorship, a more subtle, invisible but effective form of intervention.

• Their main target is not individual surveillance, but they know us all too well.

• By favoring content that agitates us and encourages interaction, they tend to promote hatred, fear and dissent (see the sketch after this list).

• If put under commercial or political control, they can predict and influence our behavior.

• By providing simple explanations and group self-affirmation, they reinforce ideologies and easily radicalize people.

• By polarizing our societies, they undermine a solution-oriented public discourse and endanger fair democratic competition.
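
To make this mechanism tangible, here is a minimal, purely illustrative sketch in Python of how an engagement-maximizing ranker could work. The post fields and weights are invented for illustration; this is not any platform's actual code. The point is structural: the score rewards predicted reactions of any kind, so agitating content wins by design.

    from dataclasses import dataclass

    @dataclass
    class Post:
        text: str
        p_click: float  # hypothetical model predictions in [0, 1]
        p_reply: float
        p_share: float

    def engagement_score(post: Post) -> float:
        # Invented weights: reactions that keep users on the platform
        # longest count most. No term asks whether the predicted
        # reaction is joy, outrage or fear.
        return 1.0 * post.p_click + 2.0 * post.p_reply + 3.0 * post.p_share

    def rank_feed(posts: list[Post]) -> list[Post]:
        # The feed simply surfaces whatever is most likely to provoke
        # a reaction, whatever that reaction may be.
        return sorted(posts, key=engagement_score, reverse=True)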

.

What shall we do?

The social media platforms themselves are well aware of their impact. They investigate it and probably know they could do better in terms of the common good. But that is not their aim: bound to their business model, they feel obliged to keep their money machine running.

We, as individuals, as civil society and as policymakers, are struggling with these effects. If we want to contain them and prevent them from causing (further) harm, we need strong measures to get involved and prevail.

.

TRAP tries to get the whole picture, and more

The idea of TRAP is to discuss this thesis and investigate ways to improve things. 

Not so much on a scientific level (which few will read), but rather through an inspiring, heuristic approach. It may range from a series of interviews to a well-funded, bold cooperation across organizations and disciplines.

That’s why we want to start by interviewing experts and possibly move on to rounds of moderated gatherings later. In the end, there could be a policy paper, revised by various participants, and, if the outcome is worth it, a campaign to promote it. Let’s see how far this gets us!

.

What questions do we pose?

• Is the initial thesis accurate in the first place? What are the arguments against it?

• What do scientific studies and evidence say about it?

• What projects or initiatives already exist to improve things?

• Are there experiments to prevent the harmful effects? For example, are there alternative algorithms that promote common sense, friendliness, respect and understanding (see the sketch after this list)? If so, why are they not in use? Or are they in use somewhere, and with what effects?

• How are the algorithms written, tested, maintained, automated, monitored, manipulated and kept secret within the social media companies? To what extent are they self-learning?

• Why does Twitter, for example, feel so much more aggressive than Instagram, and why is it nevertheless a main stage for political opinion and a major source for press coverage?

• There are hints that the internet turned sour and toxic with the increasing use of recommendation algorithms. Or was it just that extremists learned to make use of them?

• What gives people the impression that they are sharing exclusive information and “thinking for themselves” while in fact they are following large-scale manipulations, up to the scale of mass cults?

• What other psychological, societal or macroeconomic effects are connected with the concept of “maximum involvement”, within individuals, families, work, friendships, communities, nations and the globe (e.g. self-esteem, cultural exchange, procrastination, stimulation of ideas, lack of personal interaction, spread of information, lack of sleep, social connection, dogmatism, etc.)?

• What do the platforms themselves think about all this, after having evolved from friendly start-ups into opinion machines?

• What alternative business models could allow platforms to operate without the current hazardous effects?

• In which ways would regulation from outside make sense, be it in the shape of monitored algorithms, a quality seal, public-law institutions, NGOs, boycotts, state regulation, etc.?

• How are political interests dealt with? Is there a justifiable common ground that could be considered a “middle-of-the-road agenda” (such as a human rights policy), or is it all about domestic rules of conduct?
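
One candidate answer to the question about alternative algorithms is so-called bridging-based ranking, which rewards content that finds approval across opposing camps instead of within just one; a far more elaborate variant of this idea underlies Twitter's Community Notes. The following toy sketch in Python uses invented approval rates, not a production algorithm:

    def bridging_score(approval_camp_a: float, approval_camp_b: float) -> float:
        # Toy version of bridging-based ranking: content scores high only
        # if BOTH opposing audience clusters rate it positively. Inputs
        # are hypothetical approval rates in [0, 1]; one-sided applause,
        # however loud, is capped by the other side's rating.
        return min(approval_camp_a, approval_camp_b)

    # A divisive post loved by one camp (0.9) and rejected by the other
    # (0.1) scores 0.1; a respectful post that both camps merely like
    # (0.6 each) scores 0.6.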

.



Please feel free to contact me via email about this project.
Best regards, Peder Iblher