Content moderation internally acknowledged as a hazardous job, workers must sign PTSD clause

Casey Newton has been digging into the working conditions of content moderators at The Verge for years. In interviews he found that the work, by now outsourced to consulting firms, has produced mental health issues in countless cases, that moderators work under arguably illegal conditions, and that in the past they had to endure this psychological strain with practically no meaningful support from psychologists or mental health programs. It is a bit as if the Zuckerbergs of the world had planned their new digital cities (a.k.a. platforms) without a sewage system and then outsourced the plumber, who picks up mental health issues doing overtime for minimum wage.

Now the consulting firm Accenture has handed its YouTube content moderators a document to sign in which the employees acknowledge the hazards of the workplace and the possibility of developing post-traumatic stress disorder (PTSD). The document was distributed to the moderators shortly after Newton's articles were published; in it, the company shifts responsibility for mental health onto its underpaid employees while at the same time formally acknowledging the emotional toll of the job.

In a newsletter, Newton now writes that this document was handed not only to YouTube moderators but also to moderators of other platforms in the US and Europe; apparently Accenture is using this agreement to prepare for upcoming court cases. The first moderators have already filed lawsuits against their former employers. As a possibly unintended side effect, Accenture effectively admits with this document that moderating online content is, in general, a hazardous occupation that can impair mental health.

I consider content moderation the most important job of our time. Moderators are the plumbers who ensure that the internet is not, at least on the surface, flooded with violence and porn. Just as the invention of sewage systems during industrialization curbed disease and enabled an enormous societal leap, it is content moderation systems that make the net a little safer for all of us. Jobs in this field need to be massively expanded and better paid, while companies guarantee their workers' safety and provide adequate support measures.

The Verge: YOUTUBE MODERATORS ARE BEING FORCED TO SIGN A STATEMENT ACKNOWLEDGING THE JOB CAN GIVE THEM PTSD

“I understand the content I will be reviewing may be disturbing,” reads the document, which is titled “Acknowledgement” and was distributed to employees using DocuSign. “It is possible that reviewing such content may impact my mental health, and it could even lead to Post Traumatic Stress Disorder (PTSD). I will take full advantage of the weCare program and seek additional mental health services if needed. I will tell my supervisor/or my HR People Adviser if I believe that the work is negatively affecting my mental health.”

The PTSD statement comes at the end of the two-page acknowledgment form, and it is surrounded by a thick black border to signify its importance. It may be the most explicit acknowledgment yet from a content moderation company that the job now being done by tens of thousands of people around the world can come with severe mental health consequences.

“The wellbeing of our people is a top priority,” an Accenture spokeswoman said in an email. […]

The PTSD form describes various support services available to moderators who are suffering, including a “wellness coach,” a hotline, and the human resources department. (“The wellness coach is not a medical doctor and cannot diagnose or treat mental health disorders,” the document adds.)

It also seeks to make employees responsible for monitoring changes in their mental health and orders them to disclose negative changes to their supervisor or HR representative. It instructs employees to seek outside help if necessary as well. “I understand how important it is to monitor my own mental health, particularly since my psychological symptoms are primarily only apparent to me,” the document reads. “If I believe I may need any type of healthcare services beyond those provided by [Accenture], or if I am advised by a counselor to do so, I will seek them.”

The document adds that “no job is worth sacrificing my mental or emotional health” and that “this job is not for everyone” — language that suggests employees who experience mental health struggles as a result of their work do not belong at Accenture.

From Casey Newton's newsletter: How tech companies should address their workers' PTSD.

First, invest in research. We know that content moderation leads to PTSD, but we don’t know the frequency with which the condition occurs, or the roles most at risk for debilitating mental health issues. Nor have they investigated what level of exposure to disturbing content might be considered “safe.” It seems likely that those with sustained exposure to the most disturbing kind of photos and videos — violence and child exploitation — would be at the highest risk for PTSD. But companies ought to fund research into the issue and publish it. They’ve already confirmed that these jobs make the workforce ill — they owe it to their workforce to understand how and why that happens.

Second, properly disclose the risk. Whenever I speak to a content moderator, I ask what the recruiter told them about the job. The results are all over the map. Some recruiters are quite straightforward in their explanations of how difficult the work is. Others actively lie to their recruits, telling them that they’re going to be working on marketing or some other more benign job. It’s my view that PTSD risk should be disclosed to workers in the job description. Companies should also explore suggesting that these jobs are not suitable for workers with existing mental health conditions that could be exacerbated by the work. Taking the approach that Accenture has — asking workers to acknowledge the risk only after they start the job — strikes me as completely backwards.

Third, set a lifetime cap for exposure to disturbing content. Companies should limit the amount of disturbing content a worker can view during a career in content moderation, using research-based guides to dictate safe levels of exposure. Determining those levels is likely going to be difficult — but companies owe it to their workforces to try.

Fourth, develop true career paths for content moderators. If you’re a police officer, you can be promoted from beat cop to detective to police chief. But if you’re policing the internet, you might be surprised to learn that content moderation is often a dead-end career. Maybe you’ll be promoted to “subject matter expert” and be paid a dollar more an hour. But workers rarely make the leap to other jobs they might be qualified for — particularly staff jobs at Facebook, Google, and Twitter, where they could make valuable contributions in policy, content analysis, trust and safety, customer support, and more.

If content moderation felt like the entry point to a career rather than a cul-de-sac, it would be a much better bargain for workers putting their health on the line. And every tech company would benefit from having workers at every level who have spent time on the front lines of user-generated content.

Fifth, offer mental health support to workers after they leave the job. One reason content moderation jobs offer a bad bargain to workers is that you never know when PTSD might strike. I’ve met workers who first developed symptoms after a year, and others who had their first panic attacks during training. Naturally, these employees are among the most likely to leave their jobs — either because they found other work, or because their job performance suffered and they were fired. But their symptoms will persist indefinitely — in December I profiled a former Google moderator who still had panic attacks two years after quitting. Tech companies need to treat these workers like the US government treats veterans, and offer them free (or heavily subsidized) mental health care for some extended period after they leave the job.

Not all will need or take advantage of it. But by offering post-employment support, these companies will send a powerful signal that they take the health of all their employees seriously. And given that these companies only function — and make billions — on the backs of their outsourced content moderators, taking good care of them during and after their tours of duty strikes me as the very least that their employers can do.

Previously on Nerdcore:

Facebook's content moderators are catching conspiracy theories
Content moderators and community managers are the most important jobs on the internet
The psychological consequences of Facebook's content moderation

Shallow depth of field as a photojournalistic meme: The Rise of the Blur

A smart essay by Dushko Petrovich in n+1 about a technical-aesthetic meme in photojournalism that picture desks have been using more and more often as an illustrative device since Trump's election: shallow depth of field.

n+1: Rise of the Blur – A specter is haunting photojournalism—an actual, visible specter!

Photographers fundamentally work with a very reduced set of tools: they have (1) the subject or event to be photographed, which they shoot (2) with a chosen focus and (3) from a chosen perspective. And since Trump, editorial image-makers have been framing their subjects more and more often behind dark, out-of-focus objects in the foreground that partially obscure the actual subject and make it appear diffuse. (US) picture desks now render their photographs more nebulous and "ungraspable", lending them a subtly surreal aesthetic, than they did during the Bush or Obama presidencies.

Petrovich traces the emergence of this meme back to two causes. On the one hand, professional photographers are reacting to the omnipresence of camera technology in our smartphones, which lets us produce professional-looking photos at the tap of a button, photos that nonetheless often remain technically undemanding. Where the iPhone can soften the background for more influencer-compatible Instagram shots, a professional photographer has the full creative arsenal of the depth-of-field technique at their disposal.

On the other hand, and far more interestingly, this aesthetic meme is of course a reaction to a myriad of phenomena: the open hostility of the Trump administration toward the press, the collapsing societal consensus on all kinds of facts and events, the diffuse sense of threat posed by unpredictable presidents or by climate change, and finally the explosive increase in complexity brought about by networking and heightened visibility on social media.

Petrovich describes the depth-of-field meme in modern photojournalism as the aesthetic expression of an almost Lovecraftian existential dread, an existential threat that lies (seemingly) out of focus, beyond our grasp: a totally visible, globalized, hyperactive and at the same time insecure world, illustrated in blurry, nebulous, intangible photographs, with the media's picture desks acting as a seismograph for the emotional state of the people within it.

the blur functions best as an insinuation. It has an element of gossip about it. I want to say it is like a whisper or a murmur, but in fact, the blur functions wordlessly. When your mind tries to link it to “government corruption” or “systemic collapse,” the blur retreats.

That’s because it doesn’t illustrate the particular phenomenon, so much as our feelings—our dread—about what is going on outside of view, outside of our control. This might be why the blur is so often positioned like a velvet curtain. Nestled inside the relentless clarity of our Super Retina displays, the dark swath of pixels also comes as a kind of relief. The blurs are weirdly sensual. The framing is voyeuristic. We have access, the photographers seem to be saying, but we don’t have access. Unlike the other pervasive abstraction of our time—the black rectangle of government redaction—the blur is being inserted by the journalists themselves. It isn’t self-censorship though—it’s weirder than that. If anything, the blur has the heat and the blush of pure expression. Each instance, however, employs the tactic of plausible deniability. “A dark, looming figure just happened to block more than half of this otherwise perfect shot,” the photo says. News outlets have tended to parcel the blurs out one at a time, perhaps in an effort to preserve visual variety. But there’s another possibility: if you put all these pictures together, they accumulate to something monstrous, almost obscene.

SINCE 2016, the foreground blurs have grown not only in frequency but also in size. What was once a subtle framing became an interruption, then an encroachment; now the fuzziness regularly suffocates the subject. Trump, of course, has been pictured this way almost daily, with a dark blur closing in on him from all or most sides. His henchmen, too: Mulvaney, Barr, Bolton. But many others get the smudge treatment. In the past few weeks, in the Times alone, Adam Schiff had a black and red blur swirling all around him, to accompany a warning: “Democrats: Don’t Overreach on Impeachment.” Michael Bloomberg, entering the presidential race as a moderate, was put in the middle of gargantuan blurs coming at him from all sides. Pete Buttigieg was shown stroking his chin behind a crowd of blurry forms to illustrate a piece about how he has “quit playing nice.” […]

And yet even as the blur metastasizes, Trump has regained his position at its vanguard. As he faced impeachment, Trump was no longer surrounded by blurs. He had, instead, become the blur. In a notable case of paper-of-record and magazine-of-record alignment, the New York Times and the New Yorker used extremely similar images to chronicle the extent to which the President found himself hemmed in by the impeachment inquiry. […]

It’s hard to say what will come next. Maybe more flames. It’s possible that the bright orange photos of Australia’s ravenous fires—arid Turner landscapes for the anthropocene—will also render all these blurs quaint and modest.

The Internet of Beefs: a conflict theory of the Multiplayer Online Battle Arena formerly known as The Web


„We are beefing because we no longer know who we are, each of us individually, and collectively as a species.“

Venkatesh Rao with a new conflict theory of the multiplayer online battle arena, which he calls the "Internet of Beefs". In it, armies of "mooks" throw themselves into the culture wars of their networks, which organize themselves and in which semi-aristocratic, harsh hierarchies have formed. The mook naturally wants to climb to the top of his network's hierarchy and uses the sharpest, most hurtful language available to quite literally annihilate his opponent (usually another mook): "delete your account" and so on. And through their sheer mass, the mooks form a market from which the so-called "knights" (opinion leaders all the way up to politicians and celebrities) skim attention out of the "outrage capital" that the mooks generate.

According to Rao, the Internet of Beefs is a self-sustaining system whose sole purpose is to generate conflict, because these conflicts are the only genuinely meaning-making interactions within the network. The internet thus becomes, in Rao's account, a web of endless conflicts that are waged purely for their own sake and never resolved: an infinite shitshow formerly known as the internet, with its conflict-generating outrage economy.

Ribbonfarm: The Internet of Beefs

The Internet of Beefs, or IoB, is everywhere, on all platforms, all the time. Meatspace is just a source of matériel to be deployed online, possibly after some tasteful editing, decontextualization, and now AI-assisted manipulation. […] Conflict on the IoB is not governed by any sort of grand strategy, or even particularly governed by ideological doctrines. It is an unflattened Hobbesian honor-society conflict with a feudal structure, at the heart of which is an involuntarily anonymous, fungible, angry figure desperate to be seen as significant: the mook.

The semantic structure of the Internet of Beefs is shaped by high-profile beefs between charismatic celebrity knights loosely affiliated with various citadel-like strongholds peopled by opt-in armies of mooks. The vast majority of the energy of the conflict lies in interchangeable mooks facing off against each other, loosely along lines indicated by the knights they follow, in innumerable battles that play out every minute across the IoB.

Almost none of these battles matter individually. Most mook-on-mook contests are witnessed, for the most part, only by a few friends and algorithms, and merit no overt notice in either Vox or Quillette. Beyond a local uptick in cortisol levels, individual episodes of mook-on-mook violence are of no consequence. In aggregate though, they matter. A lot. They are the raison d’être of the IoB. […]

In a real conflict, you would seek an overwhelming advantage, and ideally, to win without firing a shot, at no cost. On the IoB, knights seek balanced matches, actual fighting, and no outcome, at the highest cost possible. You cannot predict the course of a culture war by trying to understand it as a military conflict. You can only predict it by trying to understand it as the deliberate perpetuation of a culture of conflict by those with an interest in keeping it alive. […]

The IoB is, at its core, a vast zone of wartime fan fiction generated by copypasta mooks LARPing knightly patterns of conflict, and attempting to write themselves into the end of history as heroic Mary Sues, one meme at a time.

——————————————

Follow Nerdcore.de on Twitter (https://twitter.com/NerdcoreDe and https://twitter.com/renewalta) and Facebook (https://www.facebook.com/Crackajackz); if you like my work, you can support me financially here (https://nerdcore.de/jetzt-nerdcore-unterstuetzen/).

[Das Geile Neue Internet 3-1-20] Taylor Fang on the internet as a psychosocial construction kit; Liu Cixin's Wandering Earth and the young right in China; deadly anti-vaccination campaigns in Pakistan

Shortly before Christmas, Twitter and Facebook took down a network of fake accounts with a combined reach of 55 million accounts. It comprised "610 Facebook accounts, 89 Facebook Pages, 156 Groups and 72 Instagram accounts" whose profile pictures were generated with StyleGAN ("This Person Does Not Exist" and so on). More on the analysis of the profile pics here: Fake Faces: People Who Do Not Exist Invade Facebook To Influence 2020 Elections.

The network was run by the Epoch Media Group, a network of pro-Trump media outlets with a background in the Chinese sect Falun Gong. The full report: #OperationFFS: Fake Face Swarm – Facebook Takes Down Network Tied to Epoch Media Group That Used Mass AI-Generated Profiles.

A very good essay by student Taylor Fang in Technology Review about the internet as a psychosocial construction kit in which we create different identities, give expression to the various facets of our personality, and juggle the social roles we play.

Teenagers are disparaged for not being “present.” Yet we find visibility in technology. Our selfies aren’t just pictures; they represent our ideas of self. Only through “reimagining” the selfie as a meaningful mode of self-representation can adults understand how and why teenagers use social media. To “reimagine” is the first step toward beginning to listen to teenagers’ voices.

Meaning—scary as it sounds—we have to start actually listening to the scruffy video-game-­hoarding teenage boys stuck in their basements. Because our search for creative self isn’t so different from previous generations’. To grow up with technology, as my generation has, is to constantly question the self, to split into multiplicities, to try to contain our own contradictions. In “Song of Myself,” Walt Whitman famously said that he contradicted himself. The self, he said, is large, and contains multitudes. But what is contemporary technology if not a mechanism for the containment of multitudes?

Anti-Fascists Are Waging a Cyber War — And They’re Winning – Inside the world of antifa researchers as they build an online army to battle far-right extremism: “When the Pittsburgh synagogue shooting happened, the second [the killer’s] name hit the news, I almost threw up,” Molly Conger recalled. “I remember thinking, ‘I’ve read his posts, under his real name, and I just moved on. You can’t tell the difference between shitposting trolls and mass murderers. They all talk the same.”

Killer content: disinformation campaign derails Pakistan’s national anti-polio drive – Misleading videos online provoked mass hysteria and attacks on health workers: The Express Tribune reported that the mass hysteria created by the campaign led to over 908,381 families refusing to vaccinate against polio. […]

Liu Cixin's Wandering Earth and the young right in China: Wandering Earth: A Magic Mirror Reflecting the Monstrosity of the Prometheans and Young Cyber-Nationalists.

Yup: Digital connectivity has turned the “social factory” into a global battlefield: cyberwar increasingly takes the form of precisely targeted psychological warfare. The aim is not merely to deliver specific messages to manipulate specific individuals but also to create an increasingly paranoid enclosure in which no one knows what is going on around them. Dyer-Witheford and Matviyenko call this the “veritable fog machine” of digital war. Because of how capital has woven digital networks into everyday life, this fog can engulf any aspect of society or the economy and transform it into a battlefield.

Popdust on The Verge's Terror Queue: This Content Is Dangerous: Trauma in the Age of YouTube: Google researchers are experimenting with using technological tools to ease moderators’ emotional and mental distress from watching the Internet’s most violent and abusive acts on a daily basis: They’re thinking of blurring out faces, editing videos into black and white, or changing the color of blood to green–which is fitting: blood the color of money.

Video anonymization with deepfake tech: This startup claims its deepfakes will protect your privacy

Social media and protest participation: Evidence from Russia: on average, a 10% increase in VK penetration leads to a 4 percentage point higher chance of a protest taking place and a 19% larger protest.

THE INVENTION OF “ETHICAL AI”: How Big Tech Manipulates Academia to Avoid Regulation

‘Virtue Signalling’ May Annoy Us. But Civilization Would Be Impossible Without It

‘Boomerspeak’ Is Now Available for Your Parodying Pleasure: The verbal stylings of the boomer generation—dot dot dots, repeated commas, mid-sentence caps—crystallized into a distinct genre this year.

Gerrymandering in Social Network-Strukturen: How social networks can be used to bias votes: Evidence is stacking up that a small number of strategically placed bots can influence the choices of undecided voters.

Political hashtags like #MeToo and #BlackLivesMatter make some people doubt the stories they’re attached to: “This is a load of crap on a number of levels.” “This article reads ‘FAKE NEWS.’” “I don’t believe this post is backed with any real knowledge or fact.” A simple hashtag intended to boost a post’s audience on social can also prime audiences to read it through an emotional, partisan lens.

Meta-study finds no link between social media use and depression: "social media use only predicts a small change in well-being over time." The article then describes a Facebook experiment in which users showed mental health improvements after deleting the app, which apparently stemmed from the withdrawal of news in their feed. Online media thus generate a minimal but constant level of stress. See also: A digital detox does not improve wellbeing, say psychologists.

Kids on Instagram have eating disorders more often: Behaviours related to disordered eating were reported by 51.7% of girls, and 45% of the boys, with strict exercise and meal skipping to lose weight or prevent weight gain being the most common. […] The more social media accounts, and greater time spent using them, were associated with a higher likelihood of disordered eating thoughts and behaviours, says lead author Dr. Simon Wilksch, a Senior Research Fellow in Psychology at Flinders University. The study is believed to be the first to examine the relationship between specific social media platforms and disordered eating behaviours and thoughts in young adolescents.

Groupthink: Small distinctions, large effects: when and why individuals begin to identify themselves as members of a particular group. Clearly this sort of ‘groupthink’ can set in instantaneously. “Even slight differences between groups, such as wearing a green scarf instead of an orange one, enable us to distinguish our own group from another,” says Misch. “People are extraordinarily receptive to signals that appear to imply a readiness to cooperate.”

Geert Lovink – Digitaler Nihilismus: Thesen zur dunklen Seite der Plattformen (Amazon): Facebook, Twitter, Instagram, Tinder and co. – all the clicking, scrolling, swiping and liking leaves us drained of meaning in the end. Sadness has become a design problem; the highs and lows of melancholy have long been coded into the social media platforms. Geert Lovink offers a critical analysis of the current controversies around social media, fake news, toxic viral memes and online addiction. He shows that the search for a grand design must be considered a failure, and that this has led to a depoliticized internet research that neither practices radical critique nor points to real alternatives.

Papers

Stating the obvious: unregulated spaces (i.e. spaces without netiquette or similar rules) lead to more cyberbullying. No shit, Sherlock.

Yet another study confirms that people with "strong opinions" prefer aggressive debates and thereby come to dominate them: Staying silent and speaking out in online comment sections: The influence of spiral of silence and corrective action in reaction to news: „The results suggest that the opinion climate formed by news comments influenced the opinions and comments of participants, providing evidence that those who hold strong opinions are more likely to comment when they perceive the opinion climate to be oppositional rather than supportive to their worldview.“

People shift their identities according to ideological conditions: small but significant shares of Americans engage in identity switching regarding ethnicity, religion, sexual orientation, and class that is predicted by partisanship and ideology in their pasts, bringing their identities into alignment with their politics.

Related: citizens are more strongly attached to political parties than to the social groups that the parties represent. In all four nations, partisans discriminate against their opponents to a degree that exceeds discrimination against members of religious, linguistic, ethnic or regional out‐groups. This pattern holds even when social cleavages are intense and the basis for prolonged political conflict.

——————————————


ContraPoints on Canceling

The great Natalie Wynn in a long, 100-minute video on cancel culture and how it differs from criticism. Canceling works across three levels: a concrete instance of misconduct (X did Y, level 1) is turned into a general pattern of misconduct (X always does Y, abstraction to level 2), which in turn is construed as an ad hominem attack (X is Y, level 3). Criticism would have started at level 1 and perhaps included level 2; that's it, the rest is bullying.

From the comments:

„I’m a non-binary trans woman and I was cancelled 7 years ago by my local leftist activist community. I had a lot of suicidal thoughts as the way literally everyone I knew and depended on as my community turned on me. It’s really hard to not have your thoughts race with anxiety in that situation. I believed I was the horrible human being they said I was. I didn’t even know first-hand what anyone said. I only heard third-hand rumors. To see this manifest in online spaces now I know what the consequences are and it may be a long time before people figure out that cancel culture is toxic bullying.“

An interesting passage from the video, at 1:11h:

Jesus Christ, the situation here is that any cis person who defends me or even associates with me in any way will be labeled a transphobe. Any binary trans person who associates with me will be branded an enbyphobe, and any non-binary people who associate with me will be ostracized from their own community. So on the internet, I find myself increasingly alone. I'm isolated by the harassment, and that is ultimately the point.

What Natalie describes here is what I called, in talks three years ago, the fulfillment of postmodernism through the net, in which every word becomes a weapon, because through hypervisibility we generate every conceivable situation of discrimination: someone always feels attacked, sometimes rightly, sometimes not, depending on how the language is interpreted or meant and on whether that even matters. For one person the word "blue" is antagonistic, for another "red"; together we create a space in which one can no longer talk about colors without someone feeling attacked. Multiply that by four billion users and add decontextualization and the editing capabilities of the digital: the ultimate deconstruction of language.

——————————————


[Das Neue Geile Internet 20.12.2019] False memories from fake news; Atomwaffen leak analyzed; The Man Who Reads 1,000 Articles a Day (and who is not yours truly)

👾 Bellingcat analyzes a leaked dataset from the Nazi terror group Atomwaffen Division, which is also active in Germany and has threatened Green Party members of the Bundestag with death: Transnational White Terror: Exposing Atomwaffen And The Iron March Networks. Among other channels, this terror org recruits its Nazi offspring through gaming communities. Well, there you go.

To understand the rise of Iron March’s role in Atomwaffen’s rise from Central Florida teens to transnational terror networks, we tracked down an early member of Atomwaffen who has since left the group and spoke to us under condition of anonymity.

“It started off with these gaming groups. We would just play video games, talk shit… we would talk politics and history and they seemed obviously nationalist leaning, but I was like, ‘ok,’” he said. The former member continued, “I guess they added me to the Atomwaffen group… it started off all on Skype.” Soon, the group branched out through Iron March, recruiting members from Orange County to Colorado to a North Carolina military base.

Our source mentioned that, as well as video games, Trump’s 2015 presidential campaign became “part of the way for us to connect, like, nationalism, patriotism, and fascism.” Despite the original Atomwaffen members’ disdain for some aspects of Trump, they found anti-liberalism a common denominator. Atomwaffen also often bonded over misogyny and relationships, mixing extreme hatred and potential violence towards women with inability to maintain relationships.

🧠 Nieman Lab on the production of false memories through emotionally triggering fake news: Galaxy brain: The neuroscience of how fake news grabs our attention, produces false memories, and appeals to our emotions. The rules of this emotion-selected information distribution form the rules of the new mechanisms of societal canonization (cf.: Digital-media networking of the masses as a new mechanism of societal canons). Studying these new rules of information spread, grounded in neural and psychological processes, is what I understand as memetics.

The novelty and emotional conviction of fake news, and the way these properties interact with the framework of our memories, exceeds our brains’ analytical capabilities. Though it’s impossible to imagine a democratic structure without disagreement, no constitutional settlement can function if everything is a value judgement based on misinformation. In the absence of any authoritative perspective on reality, we are doomed to navigate our identities and political beliefs at the mercy of our brains’ more basal functions.

The capacity to nurture and sustain peaceful disagreement is a positive characteristic of a truly democratic political system. But before democratic politics can begin, we must be able to distinguish between opinions and facts, fake news and objective truth.

☝️ A clear ideological staging ground for such recruitment is the "ironic" joking of YouTube stars like PewDiePie, who is certainly not a Nazi and does not actually share far-right ideas, but whose seemingly harmless transgressions prepare the soil in which real Nazis can then plant their fantasies of violence and doom. YouTube's algorithms play a decisive role in this: How Hate Makes Money: YouTube’s Hypocrisy on Hate Speech.

more insidious than YouTube’s lack of tangible action is the way in which irony and nihilism have come to define our digital modes of communication, ultimately feeding into outrage culture, cancel culture, and the capitalist absurdity of brands making dad jokes on Twitter. Is PewDiePie really joking? Does it matter, when his clickbait videos were in the same playlists as right-wing personalities like Alex Jones (who offered him a guest slot on Infowars, which Kjellberg declined)? Roose notes, “Edgelords—people who post offensive things online for attention—had always existed on message boards like 4chan. But YouTube brought them out of the shadows and turned provocation into a viable career path.” He adds, “On YouTube, there were few rules and no lawyers looking over creators’ shoulders — which is precisely why millions of young people went there, to find the kind of stuff they couldn’t get on TV.”

The topic is, of course, far more complex than this one aspect. To make matters worse, a process of historicization of the events surrounding the Second World War is also underway: over the coming years the now-young generation will collectively "archive and file away" those events, much as we today look back on the Napoleonic Wars or the Thirty Years' War. Collective memory reaches back roughly 80 years, so this process is beginning exactly in our present time.

It will be the task of this now-young generation to communicate the collective memory of the Shoah, the crimes of the Nazis and the historically singular industrial mass murder, which makes it unique even among genocides, in an appropriate way and to keep it in that collective memory. In other words, the exact opposite of what the AfD demands with its historically idiotic call for a "180-degree turn in the culture of remembrance".

☠️ For a few years now I can only laugh when someone still posts "ah, the internet" on the internet: The Decade the Internet Lost Its Joy. If only someone had warned us! (Also: I miss The Awl.)

At the beginning of 2015, Alex Balk, then-editor of the now-defunct website the Awl, wrote a post of advice for young people in which he supplied three laws about the internet. The first: “Everything you hate about The Internet is actually everything you hate about people.” The second: “The worst thing is knowing what everyone thinks about anything.” But Balk’s third law was most prescient, especially as we end this miserable decade: “If you think The Internet is terrible now, just wait a while.” He went on: “The moment you were just in was as good as it got. The stuff you shake your head about now will seem like fucking Shakespeare in 2016.” Reader, we’ve waited a while, and today it seems indisputable that Balk’s law has held: The 2010s is the decade when the internet lost its joy.

🙀 A new study sees no connection between social media use, the existence of echo chambers and support for right-wing ideas: Right-Wing Populism, Social Media and Echo Chambers in Western Democracies (PDF, discussion on Marginal Revolution).

Many observers are concerned that echo chamber effects in digital media are contributing to the polarization of publics and in some places to the rise of right-wing populism. This study employs survey data collected in France, the United Kingdom, and United States (1500 respondents in each country) from April to May 2017. Overall, we do not find evidence that online/social media explain support for right-wing populist candidates and parties. Instead, in the USA, use of online media decreases support for right-wing populism. Looking specifically at echo chambers measures, we find offline discussion with those who are similar in race, ethnicity, and class positively correlates with support for populist candidates and parties in the UK and France. The findings challenge claims about the role of social media and the rise of populism.

🐤 Twitter releases a dataset of accounts that spread disinformation and formed the core of a state-sponsored disinformation campaign out of Saudi Arabia.

❗️ Report: Industry responses to computational propaganda and social media manipulation

Overall, no major changes to terms and policies directly related to computational propaganda were observed, leading to the conclusion that current terms and policies provide plenty of opportunities to address these issues. The language of the terms and policies relating to users and advertisers tends to be widely drawn, offering flexibility for creative interpretation and different degrees and forms of enforcement. The major change indicated by the official blogs of the companies is that they have ramped up their enforcement activities, often through a combination of new automated efforts and increased investment in human content moderation.

🤝 Ah! A colleague: The Man Who Reads 1,000 Articles a Day. Robert Cottrell curates the newsletter The Browser, which I read daily in its free blog form a few years ago.

👩‍🎨 Is Instagram Changing Art? (Youtube): „Many of us who make and appreciate art spend loads of time on Instagram. How is it changing the way we interpret and interact with art? And is it actually changing the art that gets made?“

📲 One Nation, tracked: Twelve Million Phones, One Dataset, Zero Privacy. An investigation into the smartphone tracking industry from Times Opinion.

EVERY MINUTE OF EVERY DAY, everywhere on the planet, dozens of companies — largely unregulated, little scrutinized — are logging the movements of tens of millions of people with mobile phones and storing the information in gigantic data files. The Times Privacy Project obtained one such file, by far the largest and most sensitive ever to be reviewed by journalists. It holds more than 50 billion location pings from the phones of more than 12 million Americans as they moved through several major cities, including Washington, New York, San Francisco and Los Angeles.

Each piece of information in this file represents the precise location of a single smartphone over a period of several months in 2016 and 2017. The data was provided to Times Opinion by sources who asked to remain anonymous because they were not authorized to share it and could face severe penalties for doing so.

👎 The right's fear of supposed "over-foreignization" is nothing but an artificially manufactured, platform-amplified viral moral panic, and the spreading of rumors on Facebook is nothing but digital gossip in its vilest form: Immigration panic: how the west fell for manufactured rage: But the greatest facilitator of race-hatred against refugees isn’t a tabloid; it’s Facebook. Researchers at the University of Warwick recently studied every anti-refugee attack – 3,335, over two years – in Germany. They found that among the strongest predictors of the attacks was whether the attackers are on Facebook. The social network aids the dissemination of rumours, such as that all refugees are welfare cheats or rapists; and, unmediated by gatekeepers or editors, the rumours spread, and ordinary people are roused to violence. Wherever Facebook usage rose to one standard deviation above normal, the researchers found, attacks on refugees increased by 50%. When there were internet outages in areas with high Facebook usage, the attacks dropped significantly.

🤖 Algorithmic Bias, once more: Federal study confirms racial bias of many facial-recognition systems, casts doubt on their expanding use: Asian and African American people were up to 100 times more likely to be misidentified than white men, depending on the particular algorithm and type of search. Native Americans had the highest false-positive rate of all ethnicities, according to the study, which found that systems varied widely in their accuracy.

💀 The long-running British online magazine The Inquirer, founded in 2001, is shutting down.

🦠 The perfect content moderators: People with aphantasia, the inability to call up mental images, might be naturally resilient to post-traumatic stress disorder.

🤖 A good article with a very good, accessible explanation of deepfake tech: I created my own deepfake—it took two weeks and cost $552.

An autoencoder is structured like two funnels with the narrow ends stuck together. One side of the network is an encoder that takes the image and squeezes it down to a small number of variables—in the Faceswap model I used, it’s 1024 32-bit floating-point values. The other side of the neural network is a decoder. It takes this compact representation, known as a “latent space,” and tries to expand it into the original image.

Artificially constraining how much data the encoder can pass to the decoder forces the two networks to develop a compact representation for a human face. You can think of an encoder as a lossy compression algorithm—one that tries to capture as much information about a face as possible given limited storage space. The latent space must somehow capture important details like which direction the subject is facing, whether the subject’s eyes are open or closed, and whether the subject is smiling or frowning.

But crucially, the autoencoder only needs to record aspects of a person’s face that change over time. It doesn’t need to capture permanent details like eye color or nose shape. If every photo of Mark Zuckerberg shows him with blue eyes, for example, the Zuck decoder network will learn to automatically render his face with blue eyes. There’s no need to clutter up the crowded latent space with information that doesn’t change from one image to another. As we’ll see, the fact that autoencoders treat a face’s transitory features differently from its permanent ones is key to their ability to generate deepfakes.
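For readers who want to poke at the idea, here is a minimal sketch of the encoder/decoder structure described above, written in PyTorch. It is not the actual Faceswap architecture; the 64×64 input, the layer sizes and the 1024-dimensional latent vector are illustrative assumptions.

```python
# Minimal autoencoder sketch of the idea described above (not the real
# Faceswap model): an encoder squeezes a face image into a small latent
# vector, a decoder tries to reconstruct the image from it.
import torch
import torch.nn as nn

class FaceAutoencoder(nn.Module):
    def __init__(self, latent_dim: int = 1024):
        super().__init__()
        # Encoder: 3x64x64 image -> latent vector (the "lossy compression")
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # -> 16x16
            nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), # -> 8x8
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )
        # Decoder: latent vector -> reconstructed 3x64x64 image
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128 * 8 * 8),
            nn.ReLU(),
            nn.Unflatten(1, (128, 8, 8)),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1),  # -> 16x16
            nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),   # -> 32x32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),    # -> 64x64
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

# Training minimizes reconstruction error; for a face swap, one shared encoder
# is paired with two decoders (one per person), so the latent space captures
# pose and expression while each decoder supplies person-specific appearance.
model = FaceAutoencoder()
faces = torch.rand(8, 3, 64, 64)   # stand-in for a batch of cropped faces
loss = nn.functional.mse_loss(model(faces), faces)
loss.backward()
```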

Which Face Is Real?: Our aim is to make you aware of the ease with which digital identities can be faked, and to help you spot these fakes at a single glance.

🙃 Karl Kraus and the Birth of Fake News:

Karl Kraus was the son of a wealthy Viennese paper merchant. Born a Jew, 145 years ago this week, he renounced his religion of birth, converted to Roman Catholicism, only to renounce that faith in turn. When asked why, Kraus attributed his departure from the Catholic Church to “anti-Semitism.” Kraus was a funny man.

Kraus loved paradoxes and published a magazine, Die Fackel, full of them. “An aphorism can never be the whole truth; it is either a half truth or a truth-and-a-half,” he wrote. Kraus also gave popular stage performances, in which he played piano, read Shakespeare’s sonnets, and acted out parts from his monumental masterpiece, the 800-page play, Die letzten Tage der Menschheit, usually translated as The Last Days of Mankind. […]

Deceptions—whether white, black, or rainbow-colored—accumulate. When they are formulated as social dogma, they’re represented forcefully by media as orthodoxy, which we must wholly accept, as good citizens, or entirely reject, as immoral fools, traitors, and lunatics. As Russell says, “Kraus responded to agitation for Anschluss, annexation by Germany, declaring that “the hypnotic power of newsprint was creating a ‘counterfeit reality’ in which ‘nothing is real except for the lies.’”

Bertolt Brecht wrote of Kraus, on hearing of his death in 1936: “As the epoch raised its hand to end its own life, he [Kraus] was the hand.” Brecht, presumably referred to the 19th century: The era Kraus foresaw ending was our own, of lies spread by digital platforms with unprecedented velocity and social penetration and enforced as a single compounded orthodoxy.

——————————————


Study on why people spread false news

A new study on the spread of fake news: users share false news neither knowingly nor because they are pursuing a political agenda. The researchers point to distraction by other psychological motivations that override people's actual preference for "real, true news" and keep them from doing their own fact-checking.

Unfortunately, the study does not investigate these "distracting" psychological motivations any further, and they interest me intensely. My guess is a mix of cognitive biases and info-bits that round out the individual "stories" of our personality structure. In other words: people spread false news, without really paying attention to its truth value, not because it serves a political agenda but because it supports the shaping of our (digital) identity and the "narrative" of our personality.

Paper: Understanding and reducing the spread of misinformation online

The spread of false and misleading news on social media is of great societal concern. Why do people share such content, and what can be done about it? In a first survey experiment (N=1,015), we demonstrate a disconnect between accuracy judgments and sharing intentions: even though true headlines are rated as much more accurate than false headlines, headline veracity has little impact on sharing. We argue against a “post-truth” interpretation, whereby people deliberately share false content because it furthers their political agenda.

Instead, we propose that the problem is simply distraction: most people do not want to spread misinformation, but are distracted from accuracy by other salient motives when choosing what to share. Indeed, when directly asked, most participants say it is important to only share accurate news. Accordingly, across three survey experiments (total N=2775) and an experiment on Twitter in which we messaged N=5,482 users who had previously shared news from misleading websites, we find that subtly inducing people to think about the concept of accuracy increases the quality of the news they share. Together, these results challenge the popular post-truth narrative. Instead, they suggest that many people are capable of detecting low-quality news content, but nonetheless share such content online because social media is not conducive to thinking analytically about truth and accuracy. Furthermore, our results translate directly into a scalable anti-misinformation intervention that is easily implementable by social media platforms.

——————————————


Scientists confirm the neurobiological existence of collective memory

Humanities scholars and psychologists have long assumed that the concept of a "collective memory" is merely a scholarly metaphor and does not exist in reality. This assumption has now been falsified by a neurobiological study: collective memory demonstrably influences how individuals form their memories. Collective memories are thus demonstrably and measurably real, and the pointer to the new rules for the formation of this collective memory through networking suggests itself (cf.: Digital-media networking of the masses as a new mechanism of societal canons).

Collective Consciousness next, please.

Paper: Collective memory shapes the organization of individual memories in the medial prefrontal cortex
Medical Xpress: Collective memory shapes the construction of personal memories

In the last century, French sociologist Maurice Halbwachs declared that personal memories are influenced by their social contexts. From this perspective, the memory function of individuals cannot be understood without taking into account the group to which they belong and the social contexts related to collective memory.

Until now, these theories had never been tested by neuroscientists. Inserm researchers Pierre Gagnepain and Francis Eustache, in association with their colleagues from the Matrice project led by CNRS historian Denis Peschanski took a closer look using brain imaging techniques. For the first time, they have shown in the brain the link between collective memory and personal memories. Their innovative research has been published in Nature Human Behaviour.

Collective memory is composed of symbols, accounts, narratives and images that construct a community identity. To investigate this concept further, the researchers began by analyzing the media coverage of WWII in order to identify the shared collective representations associated with it. They studied the content of 30 years of WWII reports and documentaries broadcast and transcribed on French television between 1980 and 2010.

Using an algorithm, they analyzed this unprecedented corpus and identified groups of words regularly used when discussing major themes associated with the collective memory of WWII, such as the D-Day Landings. “Our algorithm automatically identified the central themes and the words repeatedly associated with them, thereby revealing our collective representations of this crucial period in our history,” says Gagnepain.
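The press release does not spell out the algorithm, but the step it describes, pulling groups of regularly co-occurring words ("themes") out of a transcript corpus, is what topic modeling does. A toy sketch with scikit-learn on an invented mini-corpus, to make the idea concrete rather than to reproduce the study's method:

```python
# Toy topic-modeling sketch: extract groups of co-occurring words ("themes")
# from a handful of invented WWII-documentary snippets. Corpus, topic count
# and library choice are assumptions for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

transcripts = [
    "allied troops land on the normandy beaches on d-day",
    "paratroopers and landing craft reach the normandy coast",
    "the resistance sabotages rail lines during the occupation",
    "partisans of the resistance hide allied pilots from occupation forces",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(transcripts)

nmf = NMF(n_components=2, random_state=0)   # two themes for this toy corpus
nmf.fit(X)

terms = vectorizer.get_feature_names_out()
for i, component in enumerate(nmf.components_):
    top = component.argsort()[::-1][:4]     # four most characteristic words
    print(f"theme {i}:", [terms[j] for j in top])
```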

But what is the link between these collective representations of the war and individual memory? To answer that question, the researchers recruited 24 volunteers to visit the Caen Memorial Museum and asked them to view photos from that period, which were accompanied by captions.

Based on the words contained in the captions, the team was able to define the degree of association between the photos and the various themes identified previously.

If words that had previously been associated with the theme of the D-Day landings were found in the caption, for example, the photo was then considered to be linked to this theme, as well. In this way, the researchers were able to establish proximity between each of the images: When two photos were linked to the same themes, they were considered to be “close” in collective memory.
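A small sketch of that proximity step, with invented theme word lists and captions: each photo gets a theme profile from its caption words, and two photos count as "close" in collective memory when their profiles point in the same direction.

```python
# Toy version of the caption-to-theme proximity described above. Theme word
# lists, captions and photo names are invented placeholders.
import numpy as np

themes = {
    "d-day landings": {"landing", "beach", "normandy", "troops"},
    "resistance":     {"resistance", "sabotage", "partisan", "occupation"},
}

captions = {
    "photo_A": "troops wading ashore at a normandy landing beach",
    "photo_B": "landing craft approaching the beach on d-day",
    "photo_C": "partisan fighters of the resistance under occupation",
}

def theme_vector(caption: str) -> np.ndarray:
    words = set(caption.lower().split())
    # one score per theme: how many of its characteristic words the caption uses
    return np.array([len(words & vocab) for vocab in themes.values()], dtype=float)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

vecs = {name: theme_vector(text) for name, text in captions.items()}
print(cosine(vecs["photo_A"], vecs["photo_B"]))  # high: same theme -> "close"
print(cosine(vecs["photo_A"], vecs["photo_C"]))  # low: different themes
```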

Gagnepain and his colleagues then turned their attention to the perception of these photos in the memory of the individuals. They tried to find out whether the same degree of proximity between the photos was perceived in individual memories. The volunteers underwent an MRI examination during which they were asked to recall the images seen at the Memorial Museum the day before. The researchers were especially interested in the activity of their median prefrontal cortex, a brain region linked to social cognition.

The researchers thus compared the level of proximity between the photos by looking at the collective representations of WWII in the media and, via brain imaging, by looking at the individual memories that people had of these images following a visit to the Memorial Museum. The team showed that when photo A was considered close to photo B—because it was linked in the same way to the same collective theme—it also had a higher probability of triggering brain activity similar to photo B in the brains of the subjects.
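The comparison logic is in the spirit of representational similarity analysis: build one pairwise-similarity matrix from the collective theme profiles of the photos and one from the brain-activity patterns evoked by recalling them, then rank-correlate the two. A toy sketch with random stand-in data; the study's actual statistical model is more involved.

```python
# RSA-style toy comparison of "collective" and "neural" similarity structure.
# All data below are random stand-ins, not the study's measurements.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

n_photos = 6
theme_vectors = rng.random((n_photos, 3))    # stand-in collective theme profiles
brain_patterns = rng.random((n_photos, 50))  # stand-in mPFC voxel patterns

def similarity_matrix(features: np.ndarray) -> np.ndarray:
    # pairwise correlation between rows as a simple similarity measure
    return np.corrcoef(features)

collective_sim = similarity_matrix(theme_vectors)
neural_sim = similarity_matrix(brain_patterns)

# compare only the upper triangles, so each photo pair is counted once
iu = np.triu_indices(n_photos, k=1)
rho, p = spearmanr(collective_sim[iu], neural_sim[iu])
print(f"rank correlation of collective vs. neural similarity: {rho:.2f} (p={p:.2f})")
```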

This novel approach enabled indirect comparison between collective memory and individual memory. “Our data demonstrate that collective memory, which exists beyond the individual level, organizes and shapes personal memory. It constitutes a shared mental model making it possible to link the memories of individuals across time and space,” says Gagnepain.

——————————————


Digital-media networking of the masses as a new mechanism of societal canons

An interesting analysis that describes the eroding trust in institutions, media and high culture as a second secularization, identifying the same process that already took place when religion was displaced as the central, meaning-giving instance of society: Losing Faith in the Humanities – The decline of religion and the decline of the study of culture are part of the same big story.

I don't agree with this analysis. The article rests on the notion of an end of the historical process of canonization, the formation of cultural guideposts as an orientation vector for societies. The literary canon is what one is supposed to have read; the normed code of conduct, like the friendly greeting in the morning, is cultural-societal canon, just like the hundred best films of all time.

The canon is a form of collective memory, and even though the text abandons its fatalistic view over the course of the article, canonization and the constant re-formation of collective memory as a historicizing process are not at an end. They are, however, being rewired by networking and by socio-medial changes in how societal actors, from individuals to institutions, communicate. Canonization today is subject to a new "burden of diversity" that badly needs its own form of critique, because here too new mechanisms of power are manifesting themselves that cannot and must not go unexamined.

But none of this means the end of the canon, or the end of high culture as a guidepost for intellectual life and the arts. Ultimately, that is what the "critique of the old white man" actually means, and at its core this critique is entirely right: it is not a critique of white men, and certainly not of particular ethnicities or genders; it is a critique of the unquestioned adoption of anachronistic canons, which over the coming decades will be re-formed and rebuilt by a trillion voices. We are only witnessing the birth pangs.

Jan Assmann writes the following on this in his excellent 1999 book Das kulturelle Gedächtnis:

A canon answers the question: "What should we orient ourselves by?" This question becomes urgent whenever the answer is not given by the situation and cannot be found case by case, i.e. when reality exceeds the typology of situations laid down in the traditional, taken-for-granted construction of reality and the inherited "yardsticks" no longer apply. Typical situations of such disorientation through increased complexity arise from drastic expansions of the space of possibilities [cf. my talks on the explosion of options through AI, editing and the computability of all possibilities]. The shift from ritual to textual coherence [constituted one such far-reaching expansion]. Within written culture, tradition loses its self-evident lack of alternatives and becomes, in principle, changeable. The same holds, however, far beyond the realm of written culture. When suddenly much more becomes possible, for instance through a far-reaching technical or artistic invention, or negatively through the fading of traditional yardsticks such as tonality in music, a need manifests itself to prevent "anything goes" [cf.: Anything Goes Weird], a fear of a loss of meaning through entropy.

Changes in the formation of societal canons (there are many of them: Nerdcore, for instance, is part of the canon of German net culture, Picasso part of the art canon, Scorsese of the film canon, Jeff Mills of the techno canon, Hegel of the philosophical canon, and so on) have happened again and again throughout history, above all when the medial synchronization of societies changes. The oral culture of verbally transmitted myths, legends, and fairy tales was displaced by written culture, producing entirely new ways of thinking and the invention of science and philosophy; magical thinking made way for knowledge fixed in writing and handed down across generations, which in turn enabled the rise of monotheistic religions, perceived as monolithic, as orientation points for societies. A thousand years later, with the invention of the printing press and the renewed transition from written culture to the culture of mass media, these long-established monolithic religions had to give way in their turn. The new societal orientation points were the inventions of bureaucracy, accounting, data collection, the institutions of secular government, and ultimately the nation itself.

It remains to be seen which new orientation points will crystallize out of massive, global networking and the emotional selection preferences of the media networks, but whatever these societal synchronization mechanisms end up looking like: they will still produce canons.

Are the humanities over? Are they facing an extinction event? There are certainly reasons to think so. It is widely believed that humanities graduates can’t easily find jobs; political support for them seems to be evaporating; enrollments in many subjects are down. As we all know.

Even if the situation turns out to be less than terminal, something remarkable is underway. Bewilderment and demoralization are everywhere. Centuries-old lineages and heritages are being broken. And so we are under pressure to come up with new ways of thinking that can take account of the profundity of what is happening. In this situation, we need to think big.

I want to propose that such big thinking might begin with the idea that, in the West, secularization has happened not once but twice. It happened first in relation to religion, and second, more recently, in relation to culture and the humanities. We all understand what religious secularization has been — the process by which religion, and especially Christianity, has been marginalized, so that today in the West, as Charles Taylor has famously put it, religion has become just one option among a smorgasbord of faith/no-faith choices available to individuals.

A similar process is underway in the humanities. Faith has been lost across two different zones: first, religion; then, high culture. The process that we associate with thinkers like Friedrich Schiller, Samuel Taylor Coleridge, and Matthew Arnold, in which culture was consecrated in religion’s place, and that in more modest forms survived until quite recently, has finally been undone. We now live in a doubly secularized age, post-religious and postcanonical. The humanities have become merely a (rather eccentric) option for a small fraction of the population.

Cultural secularization resembles earlier religious secularization. What happened to Christian revelation and the Bible is now happening to the idea of Western civilization and “the best that has been thought and said,” in Arnold’s famous phrase. As a society, the value of a canon that carries our cultural or, as they once said, “civilizational” values can no longer be assumed. These values are being displaced and critiqued by other ostensibly more “enlightened” ways of thinking. The institution — the academic humanities — that officially preserved and disseminated civilizational history is being hollowed out, partly from within. Only remnants are left.

——————————————

Follow Nerdcore.de on Twitter (https://twitter.com/NerdcoreDe and https://twitter.com/renewalta) and Facebook (https://www.facebook.com/Crackajackz); if you like my work, you can support me financially here (https://nerdcore.de/jetzt-nerdcore-unterstuetzen/).

[Das Geile Neue Internet 19.12.2019] Viral Revolts; The Crisis of Truth; Algorithmic Monocultures

✊ Martin Gurri on what the global wave of uprisings has in common, from the Yellow Vests protests to the revolts in Hong Kong, Chile, Sudan, and Iraq: Revolt as Consumer Backlash.

The message of revolt of 2019, mediated by random factors, evidently has met a profound need of the network. In more concrete terms: when the whole world is watching, a local demand for political change can start to go global in an instant. At a certain point, the process becomes self-sustaining and self-reinforcing: that threshold may have been crossed in November, when at least eight significant street uprisings were rumbling along concurrently (Bolivia, Catalonia, Chile, Colombia, Hong Kong, Iraq, Iran, and Lebanon – with France, the Netherlands, Nicaragua, and Venezuela simmering in the background). Whether local circumstances are democratic or dictatorial, prosperous or impoverished, the fashion for revolt is felt to be almost mandatory. The public is now competing with itself in the rush to say No.

Previously: Martin Gurri on Ritual Rage on the Web, Martin Gurri on the Revolt of the Public

💩 Casey Newton on Google's content moderators: The Terror Queue. I'll repeat myself: content moderator is probably the most important job in the online industry right now. They are almost literally the plumbers of the information sanitation system, keeping our feeds free of filth and excrement. For this they are mercilessly underpaid and struggle with PTSD in droves.

– Google created a dedicated queue for videos believed to contain violent extremism and staffed it with dozens of low-paid immigrants from the Middle East. Moderators make $18.50 an hour — about $37,000 a year — and have not received a raise in two years.
– Austin moderators are required to view five hours of gruesome video per day. This comes despite the fact that YouTube CEO Susan Wojcicki promised to reduce their burden to four hours per day last year.
– Workers on the site describe feeling anxiety, depression, night terrors, and other severe mental health consequences after doing the job for as little as six months.

Previously on NC: Facebook's content moderators are catching conspiracy theories, Content moderators and community managers are the most important jobs on the internet

🙃 Kyle Chayka on Vox about what I call The Big Flat Now around here: a monoculture assembled from particular micro-tastes, which are in turn algorithmically piled up into a monolithic uniform mush. Spotify fuels niche tastes with endless playlists that deliver the same sound for hours, and Netflix offers series for every taste for hours of binge-watching the ever-same formats. The result is a parallel movement of fragmentation and homogenization (a small simulation sketch of this feedback loop follows after the quote).

Rather than the monoculture dictated by singular auteurs or industry gatekeepers, we are moving toward a monoculture of the algorithm. Recommendation algorithms — on Netflix, TikTok, YouTube, or Spotify — are responsible for much of how we move through the range of on-demand streaming media, since there’s too much content for any one user to parse on their own. We can make decisions, but they are largely confined to the range of options presented to us. The homepage of Netflix, for example, offers only a window into the platform’s available content, often failing to recommend what we actually want. We can also opt out of decision-making altogether and succumb to autoplay. […]

We thought the long tail of the internet would bring diversity; instead we got sameness and the perpetuation of the oldest biases, like gender discrimination. The best indicator of what gets recommended is what’s already popular, according to the investor Matthew Ball, a former head of strategy at Amazon Studios. “Netflix isn’t really trying to pick individual items from obscurity and get you to watch it,” Ball said. “The feedback mechanisms are reiterating a certain homogeneity of consumption.”

Instead of discrete, brand-name cultural artifacts, monoculture is now culture that appears increasingly similar to itself wherever you find it. It exists in the global morass of Marvel movies designed to sell equally well in China and the United States; the style of K-Pop, in music and performance, spreading outside of Korea; or the profusion of recognizably minimalist indie cafes from Australia to everywhere else. These are all forms of monoculture that don’t rely on an enforced, top-down sameness, but create sameness from the bottom up. Maybe the post-internet monoculture is now made up of what is aesthetically recognizable even if it is not familiar — we quickly feel we understand it even if we don’t know the name of the specific actor, musician, show, or director.
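
Purely as an illustration of the dynamic Chayka and Ball describe, and emphatically not any platform's actual recommender, here is a minimal Python sketch: a "homepage" that mostly re-surfaces whatever already has the most plays, plus a little exploration. Catalog size, window size, and exploration rate are made-up parameters; the point is only that the feedback loop concentrates consumption on a handful of items.

```python
# Minimal sketch of a popularity feedback loop - illustrative only,
# not any real platform's recommendation system.
import random
from collections import Counter

random.seed(1)

N_ITEMS = 1_000    # hypothetical catalog size
N_VIEWS = 50_000   # hypothetical number of views served
WINDOW = 20        # how many already-popular items the "homepage" surfaces
EXPLORE = 0.1      # chance that a view lands outside that window

plays = Counter({item: 1 for item in range(N_ITEMS)})

for _ in range(N_VIEWS):
    if random.random() < EXPLORE:
        item = random.randrange(N_ITEMS)                   # rare discovery
    else:
        window = [i for i, _ in plays.most_common(WINDOW)]
        item = random.choice(window)                       # popular stays popular
    plays[item] += 1

top_share = sum(count for _, count in plays.most_common(WINDOW)) / sum(plays.values())
print(f"Top {WINDOW} of {N_ITEMS} items capture {top_share:.0%} of all views")
```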

✔️ Übermedien on the dilemmas of fact-checking on Facebook, part 1: Faktencheck mit Haken: Das Facebook-Dilemma von Correctiv.

✔️ Übermedien on the dilemmas of fact-checking on Facebook, part 2: Wieso Gretas Bahn-Foto von Facebook als Fake markiert wurde.

🤝 Old wisdom newly confirmed: a study has identified a brain region that is activated by confirmation bias. For debates, the researchers suggest first finding common ground with your discussion partner.

A Nature Neuroscience study looked at participants’ brains as they made choices while considering a partner’s decisions. The researchers found that a small region toward the front of the brain called the posterior medial prefrontal cortex, associated with judging performance and mistakes, was more active during the task. Specifically, it was active when individuals were processing someone’s agreement with their opinion, but not when they were dealing with an opposing view.

🤖 Symantec analyzed an archive of 10 million tweets from 3,836 Twitter accounts attributed to the Russian "Internet Research Agency" ("Putin's troll army"): Twitterbots: Anatomy of a Propaganda Campaign. The analysis offers only a few new insights: the campaign was planned well in advance, accounts were created on average half a year before their first tweet, a handful of accounts that posed as authentic users and were operated manually received automated retweets and likes from swarms of bots, and the linked target content, often from fake-news outlets, carried both conservative and progressive talking points.
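
To make that division of labor concrete, here is a hypothetical toy sketch, not Symantec's methodology, with invented field names, numbers, and thresholds: a crude classifier that separates manually run "persona" accounts (lots of original tweets) from bot-like amplifiers (almost nothing but retweets).

```python
# Toy illustration of the persona/amplifier split described in the report.
# Field names, sample data, and thresholds are invented for the example.
from dataclasses import dataclass

@dataclass
class Account:
    handle: str
    original_tweets: int
    retweets: int

def classify(acc: Account) -> str:
    total = acc.original_tweets + acc.retweets
    retweet_ratio = acc.retweets / total if total else 0.0
    if retweet_ratio > 0.9:
        return "likely amplifier (almost exclusively retweets)"
    if retweet_ratio < 0.3 and acc.original_tweets > 1_000:
        return "likely persona (manually run, mostly original content)"
    return "unclear"

sample = [
    Account("persona_example", original_tweets=12_000, retweets=800),
    Account("amplifier_example", original_tweets=40, retweets=25_000),
]
for acc in sample:
    print(acc.handle, "->", classify(acc))
```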

🤜 Inside the hate factory: how Facebook fuels far-right profit: Guardian investigation reveals a covert plot to control some of Facebook’s largest far-right pages and harvest Islamophobic hate for profit

🙃 Facebook May Face Another Fake News Crisis in 2020. And in 2021. And 2022.: Facebook’s fight against misinformation, like its struggle with content moderation, is one that it is unlikely to truly win without fundamental changes to the platform.

☝️ Steven Shapin in an (overly) long piece on the epistemological crisis of truth and the loss of trust in academic institutions, which in his view arose not from too little science in public discourse but from too much pseudo-scientific method and from ignorance about socially generated knowledge (i.e. "Whom can I trust?", "Where and from whom do I get trustworthy information?", and so on).

The problem we confront is better described not as too little science in public culture but as too much. Given the absurdities and errors abroad in the land, it may seem crazy to say this, yet the point can be pressed. Consider, again, the climate change deniers, the anti-vaxxers, and the creationists. They’re wrong-headed of course, but, like the Moon-landing deniers and the Flat-Earthers, their rejection of Right Thinking is not delivered as anti-science. Instead, it comes garnished with the supposed facts, theories, approved methods, and postures of objectivity and disinterestedness associated with genuine science. Wrong-headedness often advertises its embrace of officially cherished scientific values — skepticism, disinterestedness, universalism, the distinction between secure facts and provisional theories — and frequently does so more vigorously than the science rejected. The deniers’ notion of science sometimes seems, so to speak, hyperscientific, more royalist than the king. And, if you want examples of hyperscientific tendencies in so-called pseudoscience, there are now sensitive studies of the biblical astronomy craze instigated in the 1950s by the psychiatrist Immanuel Velikovsky, or you can consider the meticulous methodological attentiveness of parapsychology, or you can reflect on why it might be that students of the human sciences are deluged with lessons on The Scientific Method while chemists and geologists are typically content with mastering just the various methods of their specialties. The Truth-Deniers find scientific facts and theories shamefully ignored by the elites; they embrace conceptions of a coherent, stable, and effective Scientific Method that the elites are said to violate; they insist on the necessity of radical scientific skepticism, universal replication, and openness to alternative views that the elites contravene. On those criteria, who’s really anti-scientific? Who are the real Truth-Deniers?

If you follow the claims of the Truth-Deniers, you can’t but recognize this surfeit of science — so many facts and theories unknown at elite universities, such an abundance of scientific papers and institutions, such a cacophonous chorus of scientific voices. This is a world in which the democratic “essence” of science is taken very seriously and scientific aristocracy and elitism are condemned. Why should such institutions as Oxford, Harvard, and their like monopolize scientific Truth? It’s hard to fault the principle of scientific democracy, but, as a normal practice, it’s faulted all the time.

📖 A book on Human Rights in the Age of Platforms, including a free eBook as a PDF.

Today such companies as Apple, Facebook, Google, Microsoft, and Twitter play an increasingly important role in how users form and express opinions, encounter information, debate, disagree, mobilize, and maintain their privacy. What are the human rights implications of an online domain managed by privately owned platforms? According to the Guiding Principles on Business and Human Rights, adopted by the UN Human Right Council in 2011, businesses have a responsibility to respect human rights and to carry out human rights due diligence. But this goal is dependent on the willingness of states to encode such norms into business regulations and of companies to comply. In this volume, contributors from across law and internet and media studies examine the state of human rights in today’s platform society.

The contributors consider the “datafication” of society, including the economic model of data extraction and the conceptualization of privacy. They examine online advertising, content moderation, corporate storytelling around human rights, and other platform practices. Finally, they discuss the relationship between human rights law and private actors, addressing such issues as private companies’ human rights responsibilities and content regulation.

👾 The game Metal Gear Solid 2 predicted our great new internet world pretty accurately: We are living in Hideo Kojima’s dystopian nightmare. Can he save us?

Metal Gear Solid 2 was about everything else passed on, memetics, cultural traits and social norms, and how social evolution is threatened by junk data crowding the Internet’s discourse. Crowding caused by what the game derisively called a “sea of garbage you people produce.”

Fast forward to today: Kojima’s dystopian future has become our current reality.

It’s a reality where studies show Americans even struggle to find common understanding around what caused the Civil War. Social media, a parallel digital society, has a reputation for being self-absorbed and mean. Some of those who built that space, like Facebook’s Mark Zuckerberg, fear the “erosion of truth,” but won’t take action against lies and manipulated facts spread on their platform. Governments and media constantly call into question the actuality of our lived experiences.
