The Anon Right-Wing Terrorism of Halle

The Nazi attacker of Halle referred to himself in his video, which he had uploaded to Twitch.tv, as “Anon”, the username of all users of image boards such as 4chan or 8chan. This terror attack is therefore very likely another case of stochastic terrorism from the net, in which perpetrators radicalized themselves on image boards, just as in the attacks in Christchurch and El Paso before it.

„My name is Anon and I think the holocaust never happened…[indecipherable]…feminism is the cause of declining birth rates in the West which acts as a scapegoat for mass immigration, and the root of all these problems is the Jew…[indecipherable].“ (Shiraz Maher)

The exact background of this attack remains to be seen, but this Nazi terrorism is an online meme and always follows the same pattern in the same constellation of livestreaming/video recording, allegiance to Anon/meme culture, and targeted, extreme, murderous violence against a minority. This is gamified political terror aimed at maximum attention through maximum disturbance. It would hardly surprise me if a manifesto turns up here as well; the homemade weapons are a clear indication of long-term planning.

This attack was neither the first (Dylann Roof, Christchurch, El Paso), nor will it be the last. 4chan and 8chan are radicalization accelerators, and we have to think about how to deal with this form of the “free internet”. 8chan was taken offline after the most recent attacks, but a relaunch under a new name is planned.

To do that, we also have to talk about what stochastic terrorism means in the context of chan websites and trolling. The term “stochastic terrorism” comes from a 2011 post on the blog Daily Kos and originally meant: “the use of mass media to incite attacks by random nut jobs—acts that are statistically predictable but individually unpredictable”, i.e. the targeted radicalization of a group of people through mass media with the aim that one element of this group crosses a line and resorts to violent means.
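The core of that definition, an aggregate number of attacks that is stable in expectation while the individual actor remains unpredictable, can be illustrated with a minimal Bernoulli-trial sketch (all numbers here are invented purely for illustration):

```python
import random

random.seed(42)  # reproducible illustration

def crossings(population: int, p_act: float) -> int:
    """Number of individuals who 'cross the line', each acting
    independently with small probability p_act."""
    return sum(random.random() < p_act for _ in range(population))

# Ten runs over a hypothetical audience of 100,000 with p_act = 0.005%:
runs = [crossings(100_000, 0.00005) for _ in range(10)]

# The aggregate hovers around the expected value 100_000 * 0.00005 = 5
# (statistically predictable), while *which* trial produces an actor
# differs from run to run (individually unpredictable).
print(runs, sum(runs) / len(runs))
```

The point is purely statistical: nothing in the model identifies who acts, only roughly how many will.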

Where in the US a deranged reality-TV “president” stokes this radicalization openly and, in my view, deliberately, the right-wing madness in this country is produced much more by cold strategists like Höcke with his Kubitschek whisperer and his protégé Sellner. Not quite as shrill as MAGA hats, but pseudo-intellectual instead, and in combination with the radically free, right-wing net underground extremely contagious. The new German right-wing online hardness committed a terror attack today, and only chance prevented a bloodbath.

Since El Paso, Trump has had to answer questions about how he contributes to the radicalization of right-wing terrorists, and since today those questions also apply to all German right-wing trolls who spread right-wing ideas, ironically or not; to YouTubers like Shlomo, who soften right-wing content for young and impressionable kids; to the right-wing quarter of Sifftwitter; and of course to key figures like Sellner, who today actually claimed he had fought his whole life against his own murderous Nazi ideology. All of these people are part of a swarm that contributes to the radicalization of murderers under the guise of a radically libertarian freedom of speech on image boards that once formed the heart of net culture.

Online radicalization on Nerdcore:

The Viral Terrorism of Christchurch
The PewDiePipeline: How Edgy Humor Can Lead to Violence
Hotwheels on 8chan: The Story of a Chan Radicalization and the Ideology of Anonymity
El Paso and 8chan's Gamification of Right-Wing Terror
Digital Fascism as an Emergence of the Free Internet
Infinite Evil: Documentary on 8chan and the Right-Wing Terrorist Attack of Christchurch

——————————————

Follow Nerdcore.de on Twitter (https://twitter.com/NerdcoreDe and https://twitter.com/renewalta) and Facebook (https://www.facebook.com/Crackajackz); if you like my work, you can support me financially here (https://nerdcore.de/jetzt-nerdcore-unterstuetzen/).

The Psychological Consequences of Facebook's Content Moderation

After The Verge reported earlier this year on the catastrophic consequences of content moderation for Facebook's employees, the Guardian has now spoken with moderators from the Berlin team. Here too, the daily confrontation with hundreds of depictions of violence, sexual abuse and hateful speech makes the employees paranoid and affects their political attitudes.

Lately I ask myself more and more often whether the internet as such is lost, and at what point the “largest social experiment of humanity” must be called off as failed because it endangers the public. I am interested in how much of the malice is triggered by the medium itself. “People are bastards”, sure. But what role does the competition for attention play? How much does visibility increase the malice on display? And which kinds of human malice, exactly?

FB has to hire massively more moderators; this is currently the most important job in the world, and the work is completely misjudged because the principle of the division of labor is not correctly assessed. These are not “content” moderators but “violent shit” moderators. They are security staff who only intervene in a reported “emergency”. They do not review “content” but only “reported content”, so the share of violence they see is disproportionately higher. Hence the psychological consequences. Content moderators are therefore much closer to the equivalent of prison guards or wardens in psychiatric institutions. This work needs corresponding labor-law regulation, corresponding training requirements and adequate pay, and the workers must be relieved psychologically through massive additional hiring. Pronto.

Guardian: Revealed: catastrophic effects of working as a Facebook moderator

A group of current and former contractors who worked for years at the social network’s Berlin-based moderation centres has reported witnessing colleagues become “addicted” to graphic content and hoarding ever more extreme examples for a personal collection. They also said others were pushed towards the far right by the amount of hate speech and fake news they read every day.

They describe being ground down by the volume of the work, numbed by the graphic violence, nudity and bullying they have to view for eight hours a day, working nights and weekends, for “practically minimum pay”. […]

The Verge report appeared to trigger reforms. Moderators in Berlin said after the article was published there had been immediate interest from Facebook’s head office in their workload. Previously, they had been required to moderate 1,000 pieces of content a day – more than one every 30 seconds over an eight-hour shift.

In February, an official from Facebook’s Dublin office visited, John said. “This person after this meeting decided to take off the limit of 1,000. We didn’t have any limit for a while, but now they have re-established another limit. The limit now is between 400 and 500 tickets.” The new cap – or number of tickets – was half that of the previous one but still required workers to achieve about a ticket a minute. However, that volume of work was what their American colleagues had faced before the reforms. […]
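The workload figures quoted above check out with simple arithmetic (a throwaway sketch using only the numbers from the article):

```python
# Moderation workload figures from the Guardian report, verified by arithmetic.
SHIFT_SECONDS = 8 * 60 * 60  # eight-hour shift

old_cap = 1000                       # tickets per day before the reform
print(SHIFT_SECONDS / old_cap)       # → 28.8 seconds per ticket,
                                     #   i.e. more than one every 30 seconds

new_cap = 450                        # midpoint of the new 400-500 range
print(SHIFT_SECONDS / new_cap / 60)  # → ~1.07 minutes per ticket,
                                     #   roughly "a ticket a minute"
```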

While the moderators agreed such work was necessary, they said the problems were fixable. “I think it’s important to open a debate about this job,” he said, adding that the solution was simple – “hire more people”.

——————————————


Lecture by Armin Nassehi: For Which Problem Is Digitalization a Solution?

September 5, 2019 17:00 | #Allgemein #Media #Social Media #Soziologie #synch

Sociologist Armin Nassehi with a lecture at the Humboldt Institute for Internet and Society about his book Muster – Theorie einer digitalen Gesellschaft: “Digital technology has revolutionized the world in just a few years: our relationships, our work and even the outcome of elections – everything seems to follow completely different rules.”

The sociologist Armin Nassehi starts from a techno-sociological intuition: a particular technology can only be successful if it solves a fundamental problem. So if digitalization manages to unfold such transformative potential, the question has to be asked: “For which problem is digitalization a solution?” The answer will point out, among other things, that modern society could be called “digital” in a peculiar way even before computer technology.

Previously on Nerdcore:
Editable Patterns
Armin Nassehi's Patterns of the Digital Society

——————————————


Source Hacking: Four Net-Culture Techniques of Media Manipulation

Good paper by Joan Donovan and Brian Friedberg on a combination of long-known net-culture techniques, which they summarize under the keyword “source hacking” and examine in the context of targeted media manipulation through well-known examples from Pizzagate to Charlottesville. It is techniques like these that establish the editability of all digital content; I would regard “source hacking” as second-order edits, while coding, hacks, or manipulations via Photoshop or deepfakes constitute first-order edits.

None of the techniques presented here is really new, and their roots lie in old practices of net culture, but in recent years they have gained a semi-ritualistic quality on boards like 4chan, and in this combination they can be found at almost every major news event. The goal of the media manipulators is always to place propaganda in the coverage of mainstream media in order to prevent left-wing or progressive policies and/or to help right-wing politicians into power.

Anyone who wants to know how trolls and Nazis cooperate on platforms like 4chan to manipulate public opinion and journalism gets a nice summary of their strategies with this doc.

Data & Society: Source Hacking: Media Manipulation in Practice

Online media manipulators often use specific techniques to hide the source of the false and problematic information they circulate. Joan Donovan and Brian Friedberg label this strategy “source hacking.” Typically used during breaking news events, source hacking targets journalists and other influential public figures to pick up falsehoods and unknowingly amplify them to the public.

In Source Hacking: Media Manipulation in Practice, Donovan and Friedberg use case studies to illustrate four main techniques of source hacking:

– Viral Sloganeering: repackaging reactionary talking points for social media and press amplification
– Leak Forgery: prompting a media spectacle by sharing forged documents
– Evidence Collages: compiling information from multiple sources into a single, shareable document, usually as an image
– Keyword Squatting: the strategic domination of keywords and sockpuppet accounts to misrepresent groups or individuals

These strategies are often used simultaneously, and make it difficult to find proof of coordination. While each technique is effective on its own, their ultimate value comes from “buy-in from audiences, influencers, and journalists alike.”

These four tactics of source hacking work because networked communication is vulnerable to many different styles of attack, and finding proof of coordination is not easy to detect. Source hacking techniques complement each other and are often used simultaneously during active manipulation campaigns. These techniques may be carefully coordinated, but often rely on partisan support and buy-in from audiences, influencers, and journalists alike. Viral sloganeering allows small groups of manipulators to receive disproportionate mainstream coverage by encouraging those exposed to their slogans to seek further information online. Forged leaks are seeded by manipulators and set the stage to defame public figures. Similarly, the creators of evidence collages amplify falsified documents and propaganda to sway journalistic coverage and prompt audiences to self-investigate. Keyword squatting allows manipulators to impersonate individuals and organizations, creating false impressions of their targets’ goals and allowing for controlled opposition.

Manipulators who use the techniques illustrated here rely on quick deployment and prior organizing experiences to coordinate participation. Manipulation campaigns that gather on one platform to plan an attack on another are designed to give the impression of large-scale public engagement. This adversarial media environment requires both journalists and platform designers to think with the tools of information security and open source intelligence to spot when they are being manipulated. Greater attention to the coordination of manipulation campaigns across platforms is the most productive way to guard against their reach. Only through careful attention to the data craft used to create disinformation can these campaigns be debunked in a timely manner.

——————————————


Against Moral Social Media Panics

A somewhat short-sighted commentary by Milton Mueller at the Cato Institute about a moral social media panic, which simply ignores the alarming studies of recent years and, astonishingly, cites as an argument one of the very factors that make social media an outrage catalyst in the first place: hypertransparency. According to the author, this means that we suddenly see phenomena that previously remained invisible, for example gossip or bullying and vandalism. But the text then ignores the effects of social media on precisely those phenomena, the competition for clicks and likes in the attention economy; it ignores the accelerating effect of networking; it ignores the effects of greater reach; it ignores the collapse of gatekeepers.

The text is at its most astonishing when it does note “large effects of hypertransparency on the dialogue about regulating communication”, but those “large effects” suddenly vanish for all other social phenomena and play no role there anymore. The text likewise conceals the role of emotional manipulation through the medium-inherent tendency toward exaggeration in that same attention economy, and the forgery of content through the fundamental editability of all digital content.

The text also reveals its superficiality when it talks about the “cuddle hormone” oxytocin released through networking and “likes”. Had Mueller done his homework, he would know that oxytocin is responsible not only for “trust, empathy, and generosity” toward the in-group, but also for increased aggressiveness and exclusion toward the out-group, which can explain polarization and excessive outrage in social networks (previously on NC: Das Oxytocin-Web).

Likewise, Milton Mueller does not seem aware of the dualistic mode of existence of communication when he talks about “storable, searchable records” of human activity. Under these conditions, oral speech acquires the character of publication, which subjects human communication to new conditions. It seems to me that Mueller massively underestimates the effects of this.

The article gains ground where it correctly assesses, for example, the role of bots and the Russian “meddling” in the 2016 US election. The number of bots says nothing about their impact, and the success of the Russian media campaign of 2016 was not a manipulated election but a further polarization of the public through faked, hyper-one-sided political “memes”, for instance for Black Lives Matter or the gun lobby, as can be seen in the database of Russian Facebook ads.

At the end, I agree with the author's diagnosis that what is “broken” about the net is exactly the same thing that makes it so successful: its efficiency in the discovery and exchange of information among interested participants at a previously unknown scale. That is also why I think state intervention simply fizzles out (as in “The Net interprets censorship as damage and routes around it”) and the new psychological and sociological conditions will play out as they will under these new conditions. Moral or immoral panics will not change that either. The parameters here are, as the text rightly says: hypertransparency and visibility, but also globalization and emotionalization, hyperpoliticization, a capture of all social processes under mass-media conditions, and so on.

I consider this commentary interesting as a media-philosophical discussion of the libertarian “hands off” stance toward the reshaping of societal synchronization through social mass media, one that suffers in many places from its selective ignorance and can therefore deliver only a partially correct diagnosis of current events. As a note: the Cato Institute is a conservative think tank financed by the Koch brothers.

Cato Institute: Challenging the Social Media Moral Panic: Preserving Free Expression under Hypertransparency

The human activities that are coordinated through social media, including negative things such as bullying, gossiping, rioting, and illicit liaisons, have always existed. In the past, these interactions were not as visible or accessible to society as a whole. As these activities are aggregated into large-scale, public commercial platforms, however, they become highly visible to the public and generate storable, searchable records. In other words, social media make human interactions hypertransparent. […]

Hypertransparency generates what I call the fallacy of displaced control. Society responds to aberrant behavior that is revealed through social media by demanding regulation of the intermediaries instead of identifying and punishing the individuals responsible for the bad acts. There is a tendency to go after the public manifestation of the problem on the internet, rather than punishing the undesired behavior itself. At its worst, this focus on the platform rather than the actor promotes the dangerous idea that government should regulate generic technological capabilities rather than bad behavior.

The psychological claims also seem to suffer from a moral panic bias. According to Courtney Seiter, a psychologist cited by some of the critics, the oxytocin and dopamine levels generated by social media use generate a positive “hormonal spike equivalent to [what] some people [get] on their wedding day.” She goes on to say that “all the goodwill that comes with oxytocin — lowered stress levels, feelings of love, trust, empathy, generosity — comes with social media, too … between dopamine and oxytocin, social networking not only comes with a lot of great feelings, it’s also really hard to stop wanting more of it.”9 The methodological rigor and experimental evidence behind these claims seems to be thin, but even so, wasn’t social media supposed to be a tinderbox for hate speech? Somehow, citations of Seiter in attacks on social media seem to have left the trust, empathy, and generosity out of the picture. […]

What is “broken” about social media is exactly the same thing that makes it useful, attractive, and commercially successful: it is incredibly effective at facilitating discoveries and exchanges of information among interested parties at unprecedented scale. As a direct result of that, there are more informational interactions than ever before and more mutual exchanges between people. This human activity, in all its glory, gore, and squalor, generates storable, searchable records, and its users leave attributable tracks everywhere. As noted before, the emerging new world of social media is marked by hypertransparency.

——————————————


[Memetics Links 29.8.2019] The changing debate culture in social media; Dirk Baecker: digitalization and the next society; the seventh function of language

Where’s the anger on Facebook these days? A lot of it is on far-left sites: Arash Barfar of the University of Nevada, Reno, analyzed comments on political Facebook posts and found that posts on hyperpartisan Facebook pages received “significantly less analytic responses from Facebook followers” than mainstream news sites did, with “greater anger and incivility” appearing in the comments on far-left sites.

Rene DiResta: „The internet is basically a series of factions competing for amplification from algorithms at this point. Pump up your guy, downrank theirs. Coordinate for an algorithmic boost or trend where possible. Marketing by way of manufactured consensus.“ –> ANGRY FANS KEEP WRECKING PODCASTS WITH ONE-STAR REVIEWS

• Konrad Adenauer Stiftung: The changing culture of language and debate in social online media – a literature review of the causes and effects of uncivil communication

• Lecture: Dirk Baecker: Digitalization and the Next Society

• Spektrum der Wissenschaft: Does language really cause violence? In Laurent Binet's satirical 2015 novel »Die siebte Sprachfunktion« (The Seventh Function of Language), the plot revolves around a mysterious property of language whose mastery gives a person unlimited rhetorical power over other people. Besides the six known functions of language that Roman Jakobson described in his 1960 communication model, the seventh function is accessible only to a small circle of initiates. That makes it so coveted that people commit murder after murder for it.

• A realistic scenario of memetic warfare with virals, hacking and deepfakes: What Cyber-War Will Look Like.

Sailors stationed at the 7th Fleet’s homeport in Japan awoke one day to find their financial accounts, and those of their dependents, empty. Checking, savings, retirement funds: simply gone. The Marines based on Okinawa were under virtual siege by the populace, whose simmering resentment at their presence had boiled over after a YouTube video posted under the account of a Marine stationed there had gone viral. The video featured a dozen Marines drunkenly gang-raping two teenaged Okinawan girls. The video was vivid, the girls’ cries heart-wrenching, the cheers of Marines sickening. And all of it fake. The National Security Agency’s initial analysis of the video had uncovered digital fingerprints showing that it was a computer-assisted lie, and could prove that the Marine’s account under which it had been posted was hacked. But the damage had been done.

There was the commanding officer of Edwards Air Force Base whose Internet browser history had been posted on the squadron’s Facebook page. His command turned on him as a pervert; his weak protestations that he had not visited most of the posted links could not counter his admission that he had, in fact, trafficked some of them. Lies mixed with the truth. Soldiers at Fort Sill were at each other’s throats thanks to a series of text messages that allegedly unearthed an adultery ring on base.

The variations elsewhere were endless. Marines suddenly owed hundreds of thousands of dollars on credit lines they had never opened; sailors received death threats on their Twitter feeds; spouses and female service members had private pictures of themselves plastered across the Internet; older service members received notifications about cancerous conditions discovered in their latest physical.

Leadership was not exempt. Under the hashtag #PACOMMUSTGO a dozen women allegedly described harassment by the commander of Pacific Command. Editorial writers demanded that, under the administration’s “zero tolerance” policy, he step aside while Congress held hearings.

There was not an American service member or dependent whose life had not been digitally turned upside down. In response, the secretary had declared “an operational pause,” directing units to stand down until things were sorted out.

Then, China had made its move, flooding the South China Sea with its conventional forces, enforcing a sea and air identification zone there, and blockading Taiwan. But the secretary could only respond weakly with a few air patrols and diversions of ships already at sea. Word was coming in through back channels that the Taiwanese government, suddenly stripped of its most ardent defender, was already considering capitulation.

The machine always wins: what drives our addiction to social media: Social media was supposed to liberate us, but for many people it has proved addictive, punishing and toxic. What keeps us hooked?

Why does 8chan exist at all? Because 1 asshole wants to.

• Good riddance: YouTube takes Identitarian leader Sellner's video channel away.

In the Garbage Can of the Internet: Fascist Memes, Permanently Ironic Influencers and Real Hate

A self-critical defense of left identity politics that acknowledges the contradictions in its own ideology. It can stand as is, but I am not convinced.

Identity Politics Versus Independent Thinking: Tocqueville stressed the importance of preserving, within the larger democratic order, islands of culture devoted to the undemocratic values of excellence and truth. These could be, he thought, enclaves for protecting the independence of mind that a democracy like ours especially needs. Today our colleges and universities are doing a poor job of meeting this need, and the idea of diversity is at least partly to blame.

Why Reddit Is Losing Its Battle with Online Hate: Nithayanand believes that if Reddit consistently banned all communities violating its policies, the company would make it harder for the site’s worst users to find new homes and keep spreading bigoted, homophobic, and sexist messages, promoting violence, and otherwise breaking Reddit’s rules. This would be a significant change from Reddit’s usual practice of taking action, if at all, on communities at inconsistent points in their evolution. Paper (PDF): To Act or React? Investigating Proactive Strategies For Online Community Moderation

Reddit, with wigs and ink: The first newspapers contained not high-minded journalism but hundreds of readers’ letters exchanging news with one another.

In The Structural Transformation of the Public Sphere (1962), Jürgen Habermas argued that print enabled the establishment of an arena of public debate. Print first made it possible for average people to come together to discuss matters of public concern. In turn, their knowledge and cooperation undermined the control that traditional royal and religious powers had over information. Habermas pointed to the early 18th century as a crucial moment of change. At that time, newspapers and periodicals exploded in both number and influence. ‘In The Tatler, The Spectator and The Guardian the public held a mirror up to itself,’ Habermas noted of the impact of a trio of periodicals by the journalists Joseph Addison and Richard Steele. The new publications allowed readers to shed their personal identities as rich or poor, male or female. Instead, in print they could enter into conversation as anonymous equals rationally engaging with the topics of the day.

What exactly is The Epoch Times? And why should you care? Here are a few reasons, from the NBC report:

– it’s the biggest spender (aside from Trump’s own campaign) on pro-Trump Facebook ads – more than $1.5 million in the last six months
– it has one of the biggest social media followings of any news outlet – in April, its videos combined for around 3 billion views on Facebook, YouTube and Twitter
– it is closely tied to Falun Gong, a Chinese spiritual community with the stated goal of taking down China’s government, and whose founder “has railed against what he called the wickedness of homosexuality, feminism and popular music while holding that he is a god-like figure who can levitate and walk through walls”
– it sees Trump as a key ally and has become a favorite of the Trump family – Trump’s Facebook page has posted Epoch Times content and Donald Trump Jr. has tweeted several of their stories too
– its network of news sites and YouTube channels has made it a powerful conduit for the internet’s fringier conspiracy theories, including anti-vaccination propaganda and QAnon – the overarching theory that there is an evil cabal of “deep state” operators and child predators out to take down the president – to reach the mainstream
– it “sees communism everywhere: former Secretary of State Hillary Clinton, movie star Jackie Chan and former United Nations Secretary General Kofi Annan were all considered to have sold themselves out to the Chinese government”, according to Ben Hurley, a former Falun Gong practitioner who helped create Australia’s English version of The Epoch Times before leaving in 2013
– “It is so rabidly pro-Trump,” Hurley said, referring to The Epoch Times. Devout practitioners of Falun Gong “believe that Trump was sent by heaven to destroy the Communist Party.”

——————————————


Infinite Evil: Documentary on 8chan and the Right-Wing Terrorist Attack of Christchurch

Infinite Evil – The Incubators of Online Hate is a new documentary by the New Zealand news site Stuff.co.nz about 8chan and the right-wing terrorist attack of Christchurch. It comes with a few interactive features, of which “The Anatomy of a Tweet” is surely the most interesting, and an article about The alleged Christchurch shooter's digital footprint.

Speaking in the documentary are, among others, 8chan founder Fredrick “Hotwheels” Brennan; game developer Brianna Wu, one of the best-known targets of Gamergaters; extremism researcher Becca Lewis of Data&Society (whose studies I have linked on Nerdcore several times); and Robert Evans of Bellingcat, who writes there what may be the best reporting on the net about the role of the internet underground in online radicalization (The El Paso Shooting and the Gamification of Terror; Shitposting, Inspirational Terrorism, and the Christchurch Mosque Massacre; Ignore The Poway Synagogue Shooter’s Manifesto: Pay Attention To 8chan’s /pol/ Board).

Here is the trailer; the documentary itself is available only online on the website of Stuff.co.nz.

March 15, 2019, changed everything about our understanding of the real world impact of online hate. A shooter opens fire at two Christchurch mosques – killing 51 people. On an online image board called 8chan, the shooter’s actions were anticipated, celebrated and amplified.

What was this site, also known as Infinity Chan? Who’s responsible for the racist, Islamophobic, anti-Semitic, misogynist material it spreads?

Stuff Circuit journalists traveled to the United States and the Philippines in search of answers. The result of their investigation is a video documentary, Infinite Evil, available now.

This accompanying site includes three interactive elements as part of the investigation: what a viral meme tells us about online hate, how tech companies responded to the Christchurch attacks, and an extended interview with 8chan founder Fredrick Brennan.

——————————————


[Memetik-Links 27.8.2019] Gamergate is everywhere; the resilience of “hate networks”; the Weaponization of Context

• Lars Fischer in Spektrum der Wissenschaft on a new paper about the resilience of “hate networks”: Hass-Netzwerke sind selbstheilend (“hate networks are self-healing”): “A research group analyzes why Facebook & co. struggle so much to fight extremist communities – and proposes unusual strategies against online hate.”

Fig. 1 | Global ecology of online hate clusters

Johnson’s team proposes four options. First, it would be more effective and easier to systematically go after small and very small hate cells, in order to prevent the formation of larger groups. Second, instead of banning whole groups, platforms should ban randomly selected members of different groups, to keep the network from reorganizing.

Two further strategies rely on having the networks attacked by other, opposing groups of users: on the one hand, groups hostile to the extremists should be promoted by the platform – on the other, Johnson’s team suggests setting extremists with differing views against one another.

From the paper:

Interconnected hate clusters form global ‘hate highways’ that—assisted by collective online adaptations—cross social media platforms, sometimes using ‘back doors’ even after being banned, as well as jumping between countries, continents and languages. Our mathematical model predicts that policing within a single platform (such as Facebook) can make matters worse, and will eventually generate global ‘dark pools’ in which online hate will flourish. We observe the current hate network rapidly rewiring and self-repairing at the micro level when attacked, in a way that mimics the formation of covalent bonds in chemistry. This understanding enables us to propose a policy matrix that can help to defeat online hate, classified by the preferred (or legally allowed) granularity of the intervention and top-down versus bottom-up nature.

Remember: NONE of these apply to radicalizing sites like 8chan. Measures like these can only be applied by platforms like Facebook or Twitter.
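The first two intervention options can be illustrated with a toy sketch. This is my own simplification, not the paper’s actual dynamical network model: the cluster sizes, member IDs and both function names are invented for illustration; clusters are plain sets of member IDs.

```python
import random

def ban_smallest_clusters(clusters, n):
    """Option 1 (my paraphrase): remove the n smallest hate cells first,
    so that small cells cannot merge into larger groups."""
    return sorted(clusters, key=len)[n:]

def ban_random_members(clusters, n, rng):
    """Option 2 (my paraphrase): ban n randomly chosen members across all
    clusters instead of whole groups, leaving no single large gap for the
    network to reorganize around."""
    members = [(i, m) for i, c in enumerate(clusters) for m in c]
    for i, m in rng.sample(members, min(n, len(members))):
        clusters[i].discard(m)
    return [c for c in clusters if c]  # drop clusters that are now empty

rng = random.Random(0)
# Hypothetical ecology: three small cells and one large group (32 members).
make = lambda: [set(range(i * 100, i * 100 + s))
                for i, s in enumerate([3, 4, 5, 20])]

small = ban_smallest_clusters(make(), 2)   # drops the size-3 and size-4 cells
rand  = ban_random_members(make(), 8, rng) # removes 8 members network-wide
print(sum(map(len, small)))  # 25 members left, concentrated in 2 clusters
print(sum(map(len, rand)))   # 24 members left, thinned out across clusters
```

Both strategies remove comparable numbers of accounts, but they leave very different structures behind, which is exactly the point of the paper’s argument about rewiring and self-repair.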

• A good article on the “weaponization of context”: Misinformation Has Created a New World Disorder: Our willingness to share content without thinking is exploited to spread disinformation.

The most effective disinformation has always been that which has a kernel of truth to it, and indeed most of the content being disseminated now is not fake—it is misleading. Instead of wholly fabricated stories, influence agents are reframing genuine content and using hyperbolic headlines. The strategy involves connecting genuine content with polarizing topics or people. Because bad actors are always one step (or many steps) ahead of platform moderation, they are relabeling emotive disinformation as satire so that it will not get picked up by fact-checking processes. In these efforts, context, rather than content, is being weaponized. The result is intentional chaos.

Gamergate comes to the Classroom: educators face new challenges: teaching responsibly, while also safeguarding themselves from the very kids they hope to help. “You develop this self-preservation intuition,” Ruberg tells The Verge. “You have to know what’s happening so that you know how to protect yourself.” As misinformation and hate continues to radicalize young people online, teachers are also grappling with helping their students unlearn incorrect, dangerous information. “It has made a lot of us teachers more cautious,” they say. “We want to challenge our students to explore new ways of thinking, to see the cultural meaning and power of video games, but we’re understandably anxious and even scared about the possible results.”

Everything is Gamergate: The article doesn’t cover every facet of the phenomenon, but it states what I consider the ground-zero moment of our brave new era very clearly: “Steve Bannon, at the time Breitbart’s chairman, saw Gamergate as an opportunity to ignite a dormant, internet-native audience toward a focused and familiar cause: that feminism and social justice had spiraled out of control. ‘I realized Milo could connect with these kids right away,’ Mr. Bannon told the journalist Joshua Green in 2017. ‘You can activate that army. They come in through Gamergate or whatever and then get turned onto politics and Trump.’”

From this point on, the right had hijacked net culture and launched a successful PsyOp, one that found plenty of attack surface to exploit, thanks in part to the superficial and emotionally driven coverage in the mainstream media.

• I don’t like Whitney Phillips’ left-identitarian take on these phenomena, but she’s not wrong: It Wasn’t Just the Trolls: Early Internet Culture, “Fun,” and the Fires of Exclusionary Laughter.

A collection of several hundred late-2000s internet memes posted to image sharing site Imgur, and subsequently linked to on Reddit, provides a perfect example (“Late 2000s imagedump (352 images),” 2019; “Late 2000s imagedump,” 2019). Appropriately, the first comment reads, “A more simple time.” Ha ha ha, here’s a grainy photo of two photoshopped cows. “Moo,” one cow’s dialogue box says. “You bastard, I was going to say that!” says the second. Here’s a guy with his mouth photoshopped over both his eyes. Here’s two cats photoshopped to look like they’re playing a handheld videogame console. “LET ME SHOW YOU MY POKEMONS!” the cat says. This image is ensconced in an additional text frame, which at the top reads “Pokemons,” and at the bottom, “Let me show you them.” Here’s a bear running onto a golf course. The top caption indicates that this is golf course rule enforcement bear. The bottom caption concludes that you are fucked now. Here’s some kittens making sassy faces at each other.

Like I said: a simpler time, but oh right, also, the thread begins with a lighthearted meme about Hitler. And continues with dehumanizing mockery of a child with disabilities. And more sneering mockery of an old man hooked up to an oxygen tank. And date rape. And violence against animals. And fat shaming. And homophobia. And racism. And pedophilia. And how hilarious 9/11 was. And women as unfeeling, inanimate sex objects. With multiple examples of the last seven.

If this were a collection, specifically, of 4chan memes, that flagged itself as representing early trolling subculture, many would nod and say, yes, they really were pieces of shit back then. Those trolls! They helped elect Donald Trump you know.

But this was not a collection, specifically, of 4chan memes. This was a sampling—and as a person who has studied such things for the better part of a decade, I can attest that it is a representative sampling—of what was often described as “internet culture,” or simply “meme culture,” from about 2008 to 2012. Such a broad framing belies the fact that internet/meme culture was a discursive category, one that aligned with and reproduced the norms of whiteness, maleness, middle-classness, and the various tech/geek interests stereotypically associated with middle-class white dudes. In other words: this wasn’t internet culture in the infrastructural sense, that is, anything created on or circulated through the networks of networks that constitute the thing we call The Internet. Nor was it meme culture in the broad contemporary sense, which, as articulated by An Xiao Mina (2019), refers to processes of collaborative creation, regardless of the specific objects that are created. This was a particular culture of a particular demographic, who universalized their experiences on the internet as the internet, and their memes as what memes were.

• The Institute of Art and Ideas: The Dark and the Internet (they changed the title to „Should The Internet Be Censored?“ later, presumably for clicks): „Most of us like to think that people are good, yet the anonymity of the internet has enabled an epidemic of abuse. Watch Ella Whelan, Yasmin Alibhai-Brown and Nigel Inkster debate whether the internet should be censored.“

YouTube’s Plan To Rein In Conspiracy Theories Is Failing: Conspiracy theorists have capitalized on YouTube’s change to its algorithm by using it to rally their bases for grassroots promotion.

• Netzpolitik: Radikalisierung durch YouTube? Großzahlige Studie zur Empfehlung rechtsextremer Inhalte: “Does YouTube promote the spread of politically extreme positions? A new large-scale study suggests that YouTube users do in fact radicalize over time – and that YouTube’s recommendation algorithms contribute to this.”

The filter bubble of right-wing influencers

• I bet Facebook sits on a treasure trove of data for serious memetic studies, which they use to suck your soul: Facebook’s Ex-Security Chief Details His ‘Observatory’ for Internet Abuse: Alex Stamos’ Stanford-based project will try to persuade tech firms to offer academics access to massive troves of user data.

• The most absurd shitshow in the culture wars: Knitting’s Infinity War, Part III: Showdown at Yarningham. Social Justice Knitters seem to me the most tyrannical, harsh, unfriendly, antagonistic assholes on the illiberal left.

• Buzzfeed on the latest variant of the victim-playbook: Andy Ngo Has The Newest New Media Career. It’s Made Him A Victim And A Star.

The Illinois Artist Behind Social Media’s Latest Big Idea: Instagram and Twitter are removing the numbers of likes and retweets from public view. But it began with a man named Ben Grosser.

How does hype actually work? Fever curves of consumer society: When influencer Masha Sedgwick posts “Mon Paris in Berlin” and praises a YSL perfume, her followers want to bathe in that scent. The Instagram photo of Katy Perry’s engagement ring also went viral. Banksy’s shredded picture “Love is in the bin” even adorned a carnival float. And a nearly forgotten brand like “ellesse” is suddenly cult again. How do these fever curves of consumer capitalism arise? How do you manufacture a hype?

——————————————



Digital Fascism as an Emergent Phenomenon of the Free Internet

A new study by Maik Fielitz and Holger Marcks on radicalization processes in social media: Digital Fascism: Challenges for the Open Society in Times of Social Media. The study argues that modern fascism no longer follows the top-down dynamic of the past, in which a small group radicalizes around a threat scenario, gains power, and tips society from that position of power. Instead, a small group radicalizes around a threat scenario, then replicates that threat scenario memetically (i.e. makes it go viral), thereby pulling even unsuspecting users into its propagandistic logic.

Telepolis: Der braune Algorithmus

Of course, digital right-wing extremism still has central leadership figures and party structures that influence this process. “Their task is less to direct an organized mass than to push the right emotional buttons on social media, so that unsuspecting users reproduce far-right content and unconsciously become part of the fascist dynamic.” Accordingly, the leaders of parties like the AfD and of organizations like the Identitarian Movement no longer coordinate the process at large. Rather, they use micromanagement techniques to influence public debates and steer them to the right.

I read Fielitz and Marcks’ study as saying that digital fascism is an emergent property of the processes of the free internet itself, which forms an ideal biotope for diffuse fears and threat scenarios among in-groups. For these in-groups, right-wing thinking is an answer as irrational as it is logical, and this viral right-wing thinking is fueled by emotional storytelling from the right-wing camp.

From the paper:

dispersed digital (sub-)cultures create new counter-publics that go far beyond the familiar logic of far-right organizations. They strongly correspond with the fear-mongering that is being reproduced by a patchwork of beliefs in which contradictory influences converge into myths of an endangered community that is forced to take radical action. The narratives of victimhood and imperilment are key to understanding the enhanced mobilization of such emotions. These myths of menace are easily compatible with the cultural pessimism that permeates mainstream and radical right-wing ideologies. Therefore, it is crucial to analyze how they diffuse in the digital infrastructures that connect the more organized forms of the far right with the dispersed potential of fascist dynamics. […]

Social media, in turn, goes further. It offers every individual a dirt-cheap service structure to spread content effectively, ready-made and not demanding any skill. Even the access to an audience is included in this service, even for individuals who provide nothing more than a dull commentary that would formerly have failed to qualify as a reader’s letter. All this not only accounts for political actors, but also for clueless individuals. As “prosumers” they not only consume (manipulative) information, but (re-)produce it by sharing it uncritically if they lack the expertise to classify the information at hand properly.

The far right is a major profiteer of this opening up of plural information. Classical fascism was already gaslighting successfully by using new media for spreading manipulative information. As a response, the open societies developed protection mechanisms against this, such as journalistic or ethical standards for knowledge production, disarming the far right, whose agenda stands and falls with society’s susceptibility to making truth random. Social media levers these mechanisms out, thus giving the far right its most important weapon back to unleash alternative perceptions. Bypassing established routines and institutions of knowledge production, it can easily spread its manipulative content.

That structures of social media are also immanently beneficial for the far right is due to its instrumental relation to truth. While other political actors are bound to ethical constraints in dealing with information, in the fascist rationale there are basically no limits that would sacrifice political ambition for the sake of the factuality of events. Leaders of the Identitarian Movement, for instance, admit openly, that “[w]e need a moral justification of our position much more urgently than proof of its factual correctness!” (Sellner 2017: 218). And this need is also satisfied by social media itself, as it contributes to an erosion in the intersubjective understanding of truth and thus to an “epistemic crisis” (Benkler et al. 2018: 3).

As mentioned above, dramatic events are more salient in human perception, and, at the same time, offensive material attracts more attention. Fear-mongering content is hence only more likely to migrate from one platform to the next. Promising more clickbait (and revenues), it also gets prioritized in the algorithm-based curation of users. In this way, social media keeps pushing the diffusion of “post-truth” forward, which the fascist rationale then builds on.

Metric Manipulation and the Logic of Numbers

Taking the techniques described above a step further, we can observe a symbiosis of far-right manipulation strategies with a business-like competitiveness over followers and attention. […]

In the German-speaking context, the technique has been further elaborated by a far-right network called Reconquista Germanica (Bogerts/Fielitz 2019). Several thousand far-right activists and self-considered trolls gathered on an encrypted discussion board to coordinate manipulation efforts that worked in favor of the AfD party. On central command, hordes of far-right activists targeted the mainstream discussion boards in social media in the disguise of anonymity. Besides these methods of outnumbering, they were also involved in hijacking hashtags and the harassment of politicians, including the doxing of personal information that had already led to the withdrawal of representatives from politics. Organizations like the Identitarians and the AfD have welcomed the flood of comments, memes and bots to marginalize opponents and to manipulate discourses. They also encourage online activists to bring discord into discussions and challenge opponents with disruptive tactics and transgressive appearances. Trolling as a tactic in particular reflects the ambivalence of the internet (Philips 2018). Double meanings, in-joke humor, irony and invective build the cornerstones of a subtle practice, where activists hide behind fake profiles to sidetrack, frustrate or (in the best case) neutralize critics, contributing to a discursive metric that makes far-right tropes look common. […]

4. Grasping the Intangible: The Fluidity and Ambivalence of Digital Fascism

Relating far-right agency in social media to structures of social media, the above section has shown that Acker’s argument has a plausible core. Social media does not simply offer opportunities for far-right actors to spread their worldviews, but offers opportunity structures that are particularly beneficial for far-right agency.

Moreover, social media itself (re-)produces orders of perception that are prone to the fascist rationale. This is plausible if we understand palingenetic ultranationalism as the core feature of fascism and corresponding myths of menace as constitutive for fascist dynamics. After all, social media enables an allocation and selection of information that unleashes perceptions of imperilment in particular, thus doing the emotional work that, in classical fascism, had to be done by a regimented party structure.

Digital fascism can thus be considered a family-like variation of fascism in which the fascist core feature draws dynamics directly out of social structures in the digital world.

——————————————
