A memorandum with seven examples and seven proposals.

The last few months have shown that our democracies can face serious crises of confidence as a result of media-driven polarization. If we want to further protect freedom of expression and fundamental rights, we have a lot of work ahead of us: We need to clarify on a broad basis what exactly “harmful content” is, and whether and how we should regulate it. We need to clarify if and where freedom of expression has limits. And we need to muster the courage to shape our discourse and media spaces in a meaningful way. To that end, I’d like to offer a few suggestions for discussion.

“We are not afraid to follow truth wherever it may lead, nor to tolerate any error so long as reason is left free to combat it.”

This quote by Thomas Jefferson (1743 – 1826) has it all. Yes, we think instinctively, everyone should be allowed to say anything! In the long run, truth will prevail — and good ideas, whose time has come, will prevail even faster. Who could possibly question that, except perhaps tyrants, blinkered ideologues or people who have something to hide!?

And yet: I have never met anyone who maintains such a radically open position on prolonged reflection.

  • Recently I had a discussion on climate change. Someone complained that scientists with minority opinions were hushed up or defamed. “Why shouldn’t people even be allowed to spread medical misinformation about Corona? They can be disproven if they are wrong!” But the same person told me 10 minutes later that he found it absolutely unacceptable that in mosques in western countries preaching is done in Arabic. Freedom of speech?
  • Someone found it a good idea to publish the explosive diplomatic US Cablegate dispatches unedited — even if that would endanger e.g. informants of the USA. War is based on lies — so the logic goes — and the fewer lies the better. Even WikiLeaks initiator Assange didn’t want to go that far, but then followed suit once the data had been leaked. He was much criticized for this, even by his supporters.
  • Some libertarians believe that credit agencies and companies should have absolute freedom to collect data and analyze it as needed. They reject any encroachment on freedom of information, whether by the state or by big tech corporations. On the other hand, they think cryptocurrencies and encrypted messengers are great — after all, the nasty tax authorities should be denied access to their financial data. So they decide the question of data protection solely on the principle of “What works?”
  • In 1949, Jefferson’s quote above was chiseled into the Berlin America Memorial Library. Meanwhile, in the US, it was being undercut by McCarthy’s anti-communist persecutions. The discrimination against Black or homosexual people, the disinformation spread in the various wars and intrigues of the USA, and the NSA scandal stand against the promised freedom of speech to this day.

So does the call for freedom of expression only serve to enforce one’s own worldview? Is the “freedom of the dissenter” really as important to us as we claim?

Example #1: Storming the Capitol

“We are more than three million people. A sea of red, white and blue.” With euphoria, U.S. veteran Ashli Babbitt bid farewell in a final video message, heading towards the Capitol and to her senseless death. Firmly believing she was part of a great revolution, she became a tragic antihero of the uprising against Joe Biden’s confirmation as president. After months of online hate and persistent false claims, she was willing to give her all to the cause.
But it wasn’t three million — it was a few tens of thousands of amped-up Trump supporters who brought this nightmare to American democracy. Shortly thereafter, various of Trump’s social media accounts were shut down. He had caused this uproar and had not convincingly distanced himself. Had Rupert Murdoch (Fox News) and the social platforms not dropped Trump, there could easily have been a civil war with fanatical and armed militias — based on blatant and demonstrable lies.

Image: YouTube

So, what is “harmful content”?


But even for sincere pluralists, there are obvious reasons to curtail freedom of expression and to suppress certain information, images, content, symbols or statements. After all, there is content that is generally considered so harmful that its deletion or even punishment is considered right and generally accepted.

For example, because it …

  • openly incites violence
  • wants to eliminate pluralistic democracy
  • is extremely polarizing or incites hatred and terror
  • discriminates against members of groups or minorities
  • violates people’s privacy or dignity
  • promotes defamation or slander or exposes people
  • constitutes an insult, humiliation, harassment or bullying
  • constitutes stalking, doxing or other harassment
  • is dangerous disinformation, e.g. in the proximity of elections
  • endangers public peace and order
  • involves an attempt to defraud (scam)
  • serves the purpose of stock market manipulation or betting fraud
  • serves corruption, organized crime or drug trafficking
  • contains medical or legal misinformation
  • is motivated by sadism or sexual violence against children
  • depicts shocking violence and/or makes it accessible to minors
  • depicts pornography and/or makes it available to minors
  • promotes self-injurious behavior
  • causes animal suffering
  • leads to environmental damage
  • disturbs the peace of the dead
  • causes epileptic seizures
  • violates trademark, property or copyright laws (seemingly for some people the most crucial point of all!)

or perhaps (and here it becomes controversial — why should one not be allowed to do that?) …

  • promotes violence or depicts it in an unnecessarily realistic way (e.g. in computer games)
  • offends religious feelings, national pride or the like (blasphemy, attacking state leaders, gurus etc.)
  • spreads a degrading concept of humanity (sometimes subliminally, e.g. in pornography)
  • tags people or issues death lists (i.e. without preparing a concrete crime of its own)
  • causes unnecessary fears (e.g. spreads paranoid delusions that are subjectively “true”)
  • incites a riot or political insurgency (the legitimacy of which may be disputed in each case, violence, etc.)
  • reports on terror (and thus serves its rationale of spreading fear or triggering counter-reactions)
  • questions unborn life (information or debates on abortion)
  • gives instructions for suicide
  • promotes idealised bodies (which can encourage anorexia)
  • advertises products or drugs that are harmful to health
  • provides instructions on how to build weapons, bombs etc.
  • appears otherwise offensive to a social minority (puritanism, offensive terms, …)

… etc.

Then again, there are often enough reasons not to block content that is perceived as harmful.

These could be

  • social and war reporting (content serves to inform)
  • investigative journalism or whistleblowing (reprehensible practices are uncovered, and the transgression is thus legitimised)
  • legitimate criticism (e.g. of brand manufacturers or public figures)
  • art (a serious artistic or literary value — possibly controversial)
  • recognisable satire or irony
  • value judgements, borderline expressions etc. which are still covered by freedom of expression
  • protection of oppositional activity (e.g. to protect dissidents from rigged reasons for deletion)

… etc.

A long list to which I will refer several times later on.

This much is clear: whatever is so harmful that our society declares it illegal may and should be removed from public media. And, vice versa, everything that is not illegal is subject to freedom of expression and should thus not be removed.

However, as we shall see, any weighting affects freedom of expression. Any prioritisation is a privilege, and any restriction of scope is a form of discrimination. Content that is legal but undesirable can be kept down based on internal guidelines. This soft power of social networks is difficult to control. Yet this is the very mechanism that makes the economics of attention so powerful, as Shoshana Zuboff describes in her book on Surveillance Capitalism. It has the automatic tendency to promote especially absurd or disputed positions that would normally remain at the edges of society – which may lead to unwanted radicalisation or a polarisation of society.

Yet, it is rare to succeed in proving actual content interference or discrimination that is also relevant enough to concern fundamental rights. An obligation of large platforms to maintain a “balance” would hardly be feasible at the moment. However, it is being discussed in Europe, e.g. through supervision similar to that to which independent public media are subject.

The goal — I guess everyone can agree on this — is a tolerant and diverse, but also responsible media world. But who is responsible for it: the public audience? The private platforms with their community standards? Or, finally, the judges? All three need to be harmonised: balanced jurisprudence, a public sense of justice and the consent of the corporations. In any case, we need a common understanding shared by more or less the entire society. And if even one of the content categories listed above is deemed so illegal that intervention is justified (which I assume), then we need something like regulation. You may call that censorship. Then we need bases for control and decision-making, and appropriate measures and action.

And we must ensure that these measures are not being abused. Because every government that slides into totalitarianism will look for reasons to legitimise the deletion of opposition content. It happens every single day. If summoning a “terrorist threat” doesn’t work, one can still try to use copyright law etc.

So, before we venture into this difficult terrain, let’s understand a few more terms.

 

Example 2: TikTok’s throttling

The video platform TikTok has throttled the accounts of people who fall outside the norm. Because they are severely overweight, disabled or homosexual, their reach was restricted. Allegedly, this was done to protect them from bullying by making them less visible. But if you look at how minorities are treated in TikTok’s home country China, you might also suspect discrimination. When the measures became public in Germany, they were widely criticised. The platform then promised improvement and to handle the issue more sensitively.

Image: Nathan Anderson on Unsplash

To clarify some terms

1. Objectivity, balance or consensus?

Evaluating content often requires an objective judgement, or at least a consensus on “truth” – which is difficult to establish. We live in times of rapid exchange of visually compelling news that often overwhelms us. A hysteria based on a barefaced lie can be sparked within minutes. A character assassination is quickly launched, but impossible to disprove in time. So we need a compass that tells us quickly and with relative certainty what should be promoted or tolerated and what should be kept down. Time is a critical factor here.

So where do the lines run between ordinary contributions and harmful content, between legitimate criticism and dangerous propaganda? Can we agree on verifiable criteria that are binding? What is satire, what is irony and what is an insult? Which scientific findings may be considered sufficiently certain and which irrelevant or refuted? And don’t these standards change all the time?

In fact, many things are normal today that were considered unthinkable just a few years ago — and vice versa. Headscarves used to be very common in Europe; today they are highly controversial. Maybe one day we will find photoshopped models obscene, but vulgar expressions completely normal. Those who once saw a woman’s place as at home may now be outraged by women’s oppression in Arab countries. Perhaps at some point, advertising for sugar will be considered more dangerous than terror propaganda — which would be perfectly justifiable in terms of the number of victims. We don’t know; things are in flux.

That is why freedom of expression should not simply be about consensus or majorities. Rather, it needs the understanding of the majority that dissent is necessary (democracy, pluralism, protection of minorities, rule of law). But it needs a consensus on what kinds of content (see above) should not be granted the status of protected opinion. Because they are too destructive or dangerous. This consensus can only apply “until further notice“, because history shows that it is often time-bound.

2. Free thinking — is that possible at all?

Individual and critical thinking has a long tradition in Europe. “Sapere aude — have the courage to use your own intellect!” was the ancient motto Immanuel Kant adopted for the Enlightenment. More than the Buddhists, Confucians or Muslims in Asia, enlightened Europeans saw themselves as detached individuals. Freeing oneself from all bondage through one’s own efforts became the credo of the social revolutions since 1789.

But consciousness in the Enlightenment is also conditional: “Being determines consciousness” taught Marx. And Freud followed up: Our early childhood socialisation determines our subconscious. So even individualists are nothing that they wouldn’t somehow have become. New concepts of political re-education soon followed, from Goebbels’ National Receiver to Mao’s Cultural Revolution. This was countered by a pluralistic humanism that called for a robust democracy.

Until today, the changing influences are massive: whether at home, school, advertising, entertainment, news or social media — ideas and ideologies are pouring in on us from everywhere. Our culture and identity are changing faster than ever. Subcultures and peer groups are available to choose from, sometimes with seductive offers. Those who claim that they want to “shape and invent themselves” are under an illusion.

But perhaps something else is meant: Thinking should not be guided by foreign interests. It should serve the maximum attainable knowledge and not the influence of others. Self-determination can only arise where self-awareness exists and has discovered and formulated its own interests.

Since the internet came into existence, our thinking has changed. It has become networked and accelerated. New ideas chase around the globe and establish clusters of followers. Sometimes this gives rise to almost hermetically sealed bubbles, entire cults or parallel societies. People can no longer perceive a delusion once it surrounds them as naturally as the air they breathe. That is more or less the business model of cults and conspiracy myths.

When we collide with people from such exotic mental worlds, we consider them to be extraterrestrials – incomprehensible morons who apparently do not live on this planet. They seem to be “controlled”, they don’t even think for themselves, but follow a powerful ideology. Like ourselves, presumably — except that from our point of view, our world of ideas is less extreme, less closed and gives more room to our perceived self-determination.

 

Example 3: #lockitalldown locked down?

In the Corona pandemic, some German actors posted a series of sarcastic statements mocking the government’s health measures. Under the hashtag #allesdichtmachen (“lock it all down”), the videos caused controversy. The corresponding channel on YouTube quickly became very popular and should, of course, be easy to find. After two days, there was a great uproar: “YouTube deletes #allesdichtmachen! Our freedom of expression is being infringed!” — What had happened?

Nothing. The channel was still online. However, it had been superseded in the keyword search by other media, some of which reported critically on the campaign and were presumably considered more reputable and relevant by the algorithm. The fans of the campaign felt duped and complained big time. After about a week, the search result had corrected itself and #allesdichtmachen could again be found at the top of the search results. Or was it YouTube decision-makers who had a hand in manipulating opinion in the interests of the federal government? Possible, but unlikely, because the videos weren’t that explosive — and of course not illegal. In my opinion, the advocates of freedom of expression have fallen prey to an illusion of control, while the algorithm has simply done its job, regardless.

3. Can we avoid framing?

 

There is a lot of ranting about framing, which means deliberately putting a fact in a certain light through clever choice of words. Does one speak of “nuclear power” or “atomic power”? Of “right-wing extremists” or of “national conservatives”, of “communists” or “leftists”? Unconsciously, we all do it — keeping in mind that every choice of words has some kind of presupposition or connotation.

Raw facts and data alone make no sense to our brains; at best they lead to sensory overload. Only when they are filtered and put into context do patterns emerge — and then these form an overall picture. Of course, evaluations, premises and specific questions are involved. Therefore no such thing as neutral reporting exists — at best, a balanced position is feasible. So the question can only be: Who controls our information, the story, the framing? Whom do we trust, and how often do we become a little suspicious? And honest with ourselves that our knowledge is only relative, acquired and simplified.

“Where there is mixing and grinding, there is also cheating” says an old country lore. In terms of media, one could say: “Where there is editing and curating, there is opinion-making”. So framing always takes place, just like evaluation, weighting, selection. There are always interests at work. Be it ideology, economic advantages or simply delivering an interesting story.

But how do we protect ourselves from manipulation and massive fraud?

  • One clue is the self-interest of others. Putting the community’s interest before one’s own, the long-term advantage before the short-term: that is what being ethical means. So I can ask: What interests does a journalist, a blogger, an advertising agency, a lobbyist, a politician represent? Does that make sense, and in whose interest is it? Whom do I trust, and to what extent?
  • Another clue is the openness of results. When a subject is investigated, how open or one-sided is the result? Sometimes it is obvious that a pre-defined, desired result is being delivered. But with scientific statistics, for example, this is not easy to see. Plausibility and dissenting voices, fact checks and verifiable sources create trust and can help in the assessment.
  • And finally: Knowing that the really big fraud only very rarely exists. Conspiracies almost always come to light — the more people involved, the faster. And not everything that looks like a conspiracy at first glance actually is one.

So media literacy is part of the answer. Books like “The hype machine” or “The Truth-Seeker’s Handbook: A Science-Based Guide”, the occasional reading of fact checks or reports on forum networks and troll factories should provide enough background knowledge to protect oneself to some extent against manipulation. But — is that enough?

4. What is public — what is private?

The internet has created a fluid transition between private and public communication. That’s why a clear distinction is difficult today, but actually important. Because the privately spoken word, the picture or video sent among friends, must be protected. The public media, on the other hand, are subject to control because harmful content meets a mass audience there.

There is a good reason why an encrypted smartphone should remain just as unobserved as a private word spoken on the beach. Because all attempts to systematically break up this privacy will sooner or later lead to a threat to freedom of expression. If today it may be a matter of combating child pornography (as the EU has recently decided), tomorrow it will be terrorist content that must be found. From Kurdish, Catalan or soon perhaps Scottish separatists. Not to speak of Hong Kong democracy activists, Greek investigative journalists, Saudi atheists or German whistleblowers. In a time when one almost attracts attention when one does not have a smartphone, this technology must not be turned against us (Asimov).

(Mind you, we are talking about warrantless mass surveillance and weakening of systems. It is understandable that the police are frustrated when some surveillance potentials are not used. Targeted measures, such as intrusion into suspicious groups, are a completely different issue. They can be legitimised by the courts. And by the way, this approach is also more successful.)

Now some people communicate in encrypted chat groups with up to 200,000 loyal followers. We can already speak of a medium there. It makes a difference whether 50 people see a terror propaganda video or tens of thousands. The former is bad (and may also have viral effects), but cannot be prevented if we respect confidentiality — see above — except by undercover investigators or whistleblowers. The other is a mass media dissemination of content, which should be open to regulation.

Now, social networks are a hybrid form of private and public communication. Here, a platform provider has the moderator role. In Germany, from a certain size on (number of visitors), the provider is obliged by law to intervene and block illegal content. The platform is allowed to have its own community standards, but these must comply with the freedom of expression and must not be discriminatory. The practice looks different: As a rule, the terms of use are interpreted narrowly and overblocking often occurs. Even a visible nipple (if female!) can ring the alarm bells on Instagram. From my point of view, however, this should mainly apply to accounts that have crossed the threshold of private communication, but this distinction is not made.

Example 4: A kiss stirs anger

When Amed S. published a series of photo montages on Facebook and Instagram, he certainly knew that not everyone would like them: One motif showed a kiss with Mohamed H., the two mounted against the background of the Kaaba in Mecca, decorated with a gay pride flag. Fundamentalist Muslims immediately covered the pictures with hate and anger, including death threats. Especially in Pakistan, the images were frequently reported as harmful content. But instead of recognising that this was a deliberate act of intimidation, Facebook and Instagram closed Amed’s accounts — acting exactly as the Islamist anti-gay bullies intended.
Facebook has thus followed a paradoxical calculus: The more offended a group acts, the more “public order” is disturbed – but the blame falls not on the supposedly offended, but on the post. S. did not want to take this lying down. Supported by the Giordano-Bruno-Foundation, the Institute for Worldview Law and the Initiative for Freedom of Opinion, he took the arduous legal route. And succeeded: “I am overjoyed about this important sign for freedom of opinion on the internet,” he commented on his victory. “A religious mob must not be allowed to prevail on Facebook with its misanthropic ideas! A kiss is not a crime!”

Image: private / ifw

We can’t do without rules – deal with it!

However wonderful they were — the days of cheerful information anarchy on the internet are gone. It’s no longer the weirdos and tinfoil hats, but savvy agencies and professional demagogues who master the keyboard of disinformation. You can observe that in yourself: If you have inwardly decided to follow certain narratives — e.g. in the first weeks of the Corona pandemic or before the US elections — you will easily get stuck in a bubble of self-satisfied confirmation. This can lead to sharp polarisation. Yet a democracy has to tolerate this, because after all, much is a matter of interpretation or can be understood as a discussion of values, which is of course vital. Even sheer disinformation calls for counter-speech and enlightenment, and must be tolerated up to a certain point, because otherwise we would be too quick to censor.

The critical point is when there is a serious threat to essential protected assets (see above) or to plural society as a whole. In this case, the desirable media competence and self-responsibility are no longer enough. When it comes to these values, our information spaces cannot do without regulation — just like other areas, such as road traffic, environmental protection, urban planning, education or food production. Because the individual is sometimes too selfish, too malicious or simply too stupid to be granted such destructive power via mass media. Even if it sounds downright authoritarian: if we agree that some of the content listed at the beginning is illegal and unacceptable, then we must commit to suppressing it. This is exactly what is meant by a “well-fortified democracy”.

Regulation of our media is therefore legitimate and must also have clear limits. But how should this look and function in practice?

A moderation shall be applied

Article 5 of the German Basic Law states that “censorship does not take place”. But what is meant here is merely the prohibition of state pre-censorship:

Everyone is allowed to express their opinion, but can be held responsible afterwards if they violate laws in doing so. The consequences can be confiscation and indexing of the work in question or punishment of the person. (…) A pre-selection by private bodies as to whether contributions are published or not (e.g. in a newspaper editorial office or online forums) is therefore not censorship in the sense of the Basic Law and is unobjectionable under constitutional law.
At most, in the course of the so-called indirect third-party effect of fundamental rights, the significance of Article 5 of the Basic Law also comes into play indirectly between private parties, depending on the circumstances. However, this is then an instrument of interpretation for other laws, not a direct application of the prohibition of censorship from the Basic Law.
(Source: Wikipedia, German)

One thing we should certainly not do now is arbitrarily empower anyone to censor our media. But that’s exactly what is happening: the state is demanding that social network platforms delete banned content. If necessary, already at the upload stage. Great: A few random soldiers of fortune of the internet age, who became billionaires with finesse and a lot of chutzpah, decide about our freedom of expression! This is not how we had imagined it.

So, a “moderation” does take place and it’s not subject to any real legal control. In the belief that they are always standing up for the good cause, the platforms implement their own house rules as they see fit. Deletions are done quickly, but legal action against them is often lengthy. Even one day for an objection can be too long, if we consider election campaigns or flaring conflicts. And: a full deletion will still be noticed, but a weighting, a restriction of coverage, an adjustment of the algorithm can have equally explosive effects.


The criteria and implementation are currently far too clumsy

So we have to find a social consensus on what must not be said, broadcast, disseminated in public. And what must be allowed to be said on a large, public platform. So far, so difficult the theory. But the practice is even more complicated: where and how should regulation (control, cleansing, evaluation, censorship) intervene and by whom?

We can see every day that it doesn’t work very well. Or rather: Much works well — but the anger when something is misjudged is justifiably great. It’s not as simple as letting an AI clean out our debate rooms. Content moderators are also often overwhelmed. The effort to do it well would be high, and no one wants to bear the costs. While, on the other hand, nothing less than our freedom of expression is at stake.

 

Example 5: Facebook incitement in Myanmar and India


In some countries, Facebook is virtually the only website used by the general population. In Myanmar, for example, the military has had resounding success with a massive smear campaign on Facebook against the Rohingya Muslim minority since 2013. The ethnic group is not recognised there and has been systematically marginalised as “stateless”. Resentment, illegal detention, torture, rape and murder are commonplace. Displacement peaked in 2017, and since then some 1.5 million Rohingya have been living abroad, often in temporary shelters. The exclusion has been flanked by massive defamation, hate speech and open calls for genocide on Facebook. The platform has failed in a big way to take the inciting content off the net, because over the years it never built up a corresponding infrastructure in Burmese.
India has also seen pogroms against minorities after rumours spread that they were kidnapping children on motorbikes. A video of such a scene was circulated via WhatsApp, but it was actually a staged scene from a film set. For weeks, the police did not succeed in clearing up the matter — the messages were embellished and shared further. Incited mobs eventually lynched a whole number of completely innocent people. An excess of violence triggered solely by fake news.

So how can it work?

In order to remedy the difficult situation our freedom of expression is in as social media matures, I propose the following concepts:

1. A separation of private and media communication

In the past it was clear: a phone call is private, a letter to the editor is public. But today we have hybrid forms such as profiles, forums or chat groups in which communication is quasi-private, but potentially with a large impact radius. So when we say
a) private must remain protected and
b) media must be moderated,
then we cannot avoid drawing an artificial line. Arbitrarily, I propose a range of 50 participants:

a) Anyone who communicates with up to 50 people is subject to full confidentiality of encryption and ePrivacy, including metadata. There you can say and share what you want in a protected space. If there are calls for murder and manslaughter here, you have to hope for a whistleblower. Or good police work with a judicial reservation, because even private spaces are not free of law. Anything else we would have to accept as collateral damage to our freedom of expression. Because we know that it could just as well have been shared on the darknet or at a meeting in the forest, surveillance cannot eradicate evil.

b) Channels that reach an audience of more than 50 people are considered media and are generally subject to public inspection and regulation. Depending on the degree of dissemination and the dangerousness, harmful content (see above) must be taken off the net more or less quickly. (Reporting and reaction times, warnings and filters are dealt with in detail below). A messenger service would then have to limit closed chat groups to 50 people, operators of larger forums would be obliged to moderate.
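The two rules above amount to a simple classification. As a minimal sketch (the threshold of 50 and the function names are my own illustrative assumptions, not part of any law or platform API):

```python
# Hypothetical sketch of the proposed 50-participant rule.
# The threshold value and all names are assumptions for illustration.

PRIVACY_THRESHOLD = 50  # participants; above this, a channel counts as a medium

def classify_channel(participant_count: int) -> str:
    """Classify a channel as 'private' (full encryption and ePrivacy apply)
    or 'media' (subject to public inspection and moderation)."""
    if participant_count <= PRIVACY_THRESHOLD:
        return "private"
    return "media"

def moderation_required(participant_count: int) -> bool:
    """A messenger group of 200,000 would count as a medium; a family chat would not."""
    return classify_channel(participant_count) == "media"
```

The point of such a hard cut-off is not that 50 is the “correct” number, but that any enforceable rule needs a bright line somewhere between the family chat and the 200,000-follower channel.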

2. Reporting by laypersons must work

Reporting lines in the major networks are currently awkward and inconsistently designed. What is needed is a uniform process in which it can be stated in a low-threshold, quick and precise manner why a piece of content is illegal and should be removed. Or, as a complaint option, why the post should not be subject to blocking. In this way, a warning can be given after just one or two reports and a verdict can be obtained from the crowd. Soon we will have several judgements this way – and perhaps a reporting war. If the matter is controversial, a qualified person should look at it, and within a short time.

However, this swarm intelligence has one big catch: the sheer amount of harmful material is hardly tolerable for laypeople without filtering. Hordes of content moderators (“cleaners”) risk their mental health to protect us from beheading videos and images of child abuse. This work is unbearable; within months, these people are finished. One hope lies in AI, but it currently makes more mistakes than would be permissible in terms of freedom of expression. It is not for nothing that we have always fought against upload filters. Are we lying to ourselves here? Low-threshold objection options, in combination with a better-trained AI, could be a solution in the long term. But the topic remains explosive. The question of storing “training material” alone is difficult to answer ethically.
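The reporting flow described above — a warning after a report or two, a crowd verdict, escalation of controversial cases to a qualified person — could be sketched roughly as follows. All thresholds and state names here are invented for illustration; a real system would tune them empirically:

```python
# Minimal sketch of the layperson reporting flow proposed above.
# Thresholds and state names are illustrative assumptions, not a real platform API.

from dataclasses import dataclass

WARN_AFTER = 2        # removal reports needed before a public warning is attached
ESCALATE_AFTER = 5    # total votes at which a contested case goes to a reviewer

@dataclass
class ReportedPost:
    post_id: str
    remove_votes: int = 0   # reports arguing the content is illegal
    keep_votes: int = 0     # complaints arguing the post should stay up

    @property
    def state(self) -> str:
        total = self.remove_votes + self.keep_votes
        if total >= ESCALATE_AFTER and min(self.remove_votes, self.keep_votes) > 0:
            return "escalate_to_reviewer"   # controversial: a qualified person decides
        if self.remove_votes >= WARN_AFTER:
            return "warned"                 # crowd warning shown on the post
        return "open"

def report(post: ReportedPost, wants_removal: bool) -> str:
    """Register one layperson report and return the post's new state."""
    if wants_removal:
        post.remove_votes += 1
    else:
        post.keep_votes += 1
    return post.state
```

Even this toy version shows the “reporting war” problem mentioned above: once both sides vote in numbers, every contested post lands on a human reviewer’s desk, which is exactly where the time pressure bites.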

3. Recognised accounts ensure integrity

There are methods to assess the quality of accounts, their internet karma. Freshly opened, few followers: such an account should not be trusted with authoritative judgements. Many warnings for harmful content, possibly membership in a closed ring: obvious troll accounts could be given a fair warning, followed by time penalties, reduced reach, less weight in reporting and, in extreme cases, a block. Conversely, profiles whose reports have been confirmed receive a karma bonus and are at some point considered qualified users.

Note: The criterion must not be the interaction rate, nor any good behaviour or “mainstream” opinion. Only behaviour in relation to demonstrably harmful content according to the criteria developed (see above) is registered. Those who simply mock their political opponents or appear vulgar to others may make themselves unpopular, but they have a right to do so — and may well have an intact sense of judgement.

Such a system of account weightings should be simple, transparent and uniform, i.e. implementable as open source and valid on every major forum for its respective accounts.

4. Court-authorised persons must decide

The very next escalation step requires a legally authorised person to decide, not in terms of platform policies but in terms of the law. Between the split-second judgement of the crowd or qualified users on the one hand, and a legal decision that takes weeks on the other, two more levels would have to be added. This would require a completely new type of decision-maker: lay judges who have passed a kind of exam in a crash course. Not someone poring over files in an office, but people who earn a few extra bucks from their home office during rush hours; people who know our culture and the slang of the various subcultures. Here, too, there must of course be complaint procedures. But the higher the instance, the longer a sound judgement will take.

This would result in the following hierarchy in the decision on harmful content:

So far:

  • Users (report)
  • Cleaners (decide within a day)
  • Courts (decide on objections within months and years)

Proposal:

  • Users: report and judge (marking, incl. possibility of objection and counter-opinions in seconds)
  • Qualified users: decide (release, warning or temporary blocking within one hour)
  • Adjudicator: decides in controversial cases (approval or blocking with the possibility of appeal within one day)
  • Lay judge: decides in the event of an objection (unblocking or further blocking with the possibility of an objection, if possible within one day)
  • Regular courts: final decision (takes 3 months to 2 years, release or further blocking with path through the instances)
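The proposed ladder could be modelled as a simple escalation table, where each objection moves a case one instance up. Level names and time budgets are taken from the list above; the data structure and helper function are illustrative assumptions.

```python
# (instance, decision, time budget) per escalation level, per the proposal above
ESCALATION = [
    ("users",           "report and judge",       "seconds"),
    ("qualified users", "release/warn/block",     "one hour"),
    ("adjudicator",     "approve or block",       "one day"),
    ("lay judge",       "unblock or keep block",  "one day"),
    ("regular courts",  "final decision",         "3 months to 2 years"),
]

def escalate(level: int):
    """Return the next instance after an objection, or None at the top."""
    nxt = level + 1
    return ESCALATION[nxt] if nxt < len(ESCALATION) else None

# an objection against a qualified-user block (level 1) goes to the adjudicator
print(escalate(1)[0])
```

The design mirrors ordinary court instances: cheap, fast decisions at the bottom, slow and authoritative ones at the top, with each level filtering out the bulk of cases.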

Example 6: Böhmermann and the Lex Soraya


Insulting a majesty or president is punishable by imprisonment in some countries; in Turkey, this offence is prosecuted with increasing severity. In 2016, the German satirist Jan Böhmermann deliberately mocked Turkish President Erdoğan in an invective poem, which led to diplomatic animosity. Is that allowed? It soon became clear: in a free country, it is. Foreign politicians have no say over what the press in our country does, as long as no legal interests recognised here are violated. And a public figure must live with harsh criticism; Ms Merkel can tell you about that. To settle the matter once and for all, the outdated §103 StGB was soon repealed by the German parliament.
The case was reminiscent of the affair surrounding the so-called “Lex Soraya” of 1958: at that time, the German tabloid press had run riot with separation rumours about the Persian dynasty, incurring the wrath of the Shah of Persia and triggering a diplomatic crisis. The cabinet actually planned a law that would prevent such crises through censorship. But the German Press Council and the Federal Council quickly intervened, and the bill ended up in the wastepaper basket.

Image: US Department of Defense / Wikimedia

5. Provide assistance

Social media phenomena also follow people’s needs, fears and reflexes, and these should be taken seriously.

Every measure “against” a user should therefore be accompanied by a specific (cross-platform) offer of help. Why are there so many fakes on my topic? How can I turn my anger into something constructive? Where can I turn if I notice paedosexual tendencies in myself? How can I leave a criminal network? Why does this content make me liable to prosecution? This sounds “well-intentioned”, and it is. Anyone who considers this paternalism does not have to take part.

6. Legal — tolerable — illegal

In addition to the categories of “legitimate expression” (no interference) and “illegal content” (must be deleted), we should admit that there is a third category: content that is tolerable but not worthy of promotion to a mass audience. Such content must remain accessible, but should not be pushed. This could include, for example, claims of responsibility from terrorists or documentation of cruel war crimes. No responsible editorial team would put something like that on page one, but no such awareness exists in social media to date.
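In ranking terms, the three categories could translate into something like a reach factor that a recommender system is allowed to apply. This is a sketch under stated assumptions: the class names follow the text, but the numeric factors and the idea of encoding them this way are invented for illustration.

```python
from enum import Enum

class ContentClass(Enum):
    LEGAL = "legitimate expression"  # no interference
    TOLERABLE = "tolerable"          # accessible, but never amplified
    ILLEGAL = "illegal"              # must be deleted

def reach_factor(c: ContentClass) -> float:
    """How strongly a recommender may amplify content of this class."""
    if c is ContentClass.ILLEGAL:
        # illegal content is removed upstream and never reaches ranking
        raise ValueError("illegal content is deleted, not ranked")
    # tolerable content stays findable via direct links and search,
    # but gets zero algorithmic push
    return 1.0 if c is ContentClass.LEGAL else 0.0

print(reach_factor(ContentClass.TOLERABLE))  # 0.0
```

The factor 0.0 captures the “page one” analogy: the story exists in the archive, but no editor puts it on the front page.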

Here we leave the purely legal area of black and white and enter the explosive grey area of weighting content. At present, this weighting is driven by interests: the content-related or profit interests of the platforms, or the communication goals of those who know how to use them skilfully. Do we dare to intervene here in a judgemental way? How do we protect this area from mistrust, conceit and particular interests? Because, as I said, where there is mixing and grinding …


7. Revise business models

If we can regulate the alcohol or cigarette industries or tanning salons because we consider their products potentially harmful — why should we stand by and watch social media algorithms fuel polarisation and hatred? This is about business models. At the end of the day, the big platforms have to show their cards: What do they push, what do they suppress? Where is there overblocking or structural discrimination? Do they promote hate and disinformation to an extent that poses a threat to democracy as a whole and justifies regulation?


At the moment, what prevails is whatever is most exciting, whatever brings the most impact, involvement and screen time. Algorithms could be used to gradually counteract this, even beyond deletions: they could just as well promote what encourages constructive discussion and solutions, i.e. de-radicalise. Such an intervention in the engine room of the large social media platforms would require clear criteria and the most neutral supervision possible to ensure balance and diversity. And we would have to beware of a politically coloured “public spirit” so as not to gamble away valuable trust.

No one should earn their money by holding us in thrall for as long as possible with basic negative reflexes such as fear, hatred and insecurity. But how else are platforms supposed to finance themselves? Whether one defines large platforms as public (because they must remain independent of government!), finances them with broadcasting fees instead of advertising, cuts margins, asks users to pay, or perhaps only accounts with very high reach: that too would be worth a heated discussion.

Example 7: Radical Islamism — appeasement or defiance?


A whole series of horrific murders, attempted murders and death threats in Europe can be attributed to radical Islamists. Freedom of expression itself is often in the crosshairs. The film director Theo van Gogh and the Charlie Hebdo cartoonists were murdered; the cartoonist Kurt Westergaard was also attacked. The author Salman Rushdie has lived under the constantly renewed death threat of Iranian mullahs since 1989. Enlightened Muslims like Seyran Ateş or the Islam critic Hamed Abdel-Samad have to live under constant threat in the middle of Germany. “I do not share your opinion, but I would stake my life that you should be allowed to express it.” — this quote attributed to Voltaire has a literal meaning in relation to political Islam.
It should be clear that we must stand up for freedom of expression here. “Religious feelings” are not subject to legal protection in secular states. And yet there are always voices calling for appeasement out of misunderstood tolerance or the fear of being seen as hostile to Muslims: “Why do we always have to be so provocative?” Well, maybe because radical Islamists will not settle for small concessions; on the contrary, they will interpret them as a sign of weakness and will never be satisfied. The secular understanding that religious views are diverse and ultimately a private matter is fundamental to a free society.

Image: Charlie Hebdo

Not a perfect world — but maybe we will get our act together

The proposals mentioned are not necessarily new, nor do they offer any guarantee of security. And if we did implement them? Some libertarians would see them as paternalism, and market liberals as the sign of a nanny state. They would probably even lead to new injustices: some harmless troublemakers would continue to end up in the virtual padded cell of limited reach, and some reporting cartels would succeed in silencing unwelcome critics.

But clear boundaries and better functioning processes could decisively help social media to outgrow their current adolescence. The danger of democracy and cohesion in plural societies breaking down due to hate and targeted disinformation would be reduced. We could face the next crisis with more trust in each other, with better functioning debates and more constructive solutions. And at the same time, the unobserved, dissident spaces would be preserved — without which we might as well give up democracy and freedom of thought altogether.

//