Casual Observer
Social networks are full of this, yet no one lets up; the main thing is to spread moral panic and to reinforce both your own preconceptions and the identical preconceptions of your like-minded peers. It happens in every possible area, from politics through entertainment to “science” and conspiracy theories. All the more so if the subject is something inflammatory, a story that spreads like wildfire and provokes general outrage…
This also serves a social function: it “is a cheap way to signal group membership or commitment to a cause.”
An original topic, with the source below.
Rob Bauer, the chair of a NATO military committee, reportedly said, “It is more competent not to wait, but to hit launchers in Russia in case Russia attacks us. We must strike first.” These comments, supposedly made in 2024, were later interpreted as suggesting NATO should attempt a preemptive strike against Russia, an idea that lots of people found outrageously dangerous.
But lots of people also missed a thing about the quote: Bauer has never said it. It was made up. Despite that, the purported statement got nearly 250,000 views on X and was mindlessly spread further by the likes of Alex Jones.
Why do stories like this get so many views and shares? “The vast majority of misinformation studies assume people want to be accurate, but certain things distract them,” says William J. Brady, a researcher at Northwestern University. “Maybe it’s the social media environment. Maybe they’re not understanding the news, or the sources are confusing them. But what we found is that when content evokes outrage, people are consistently sharing it without even clicking into the article.” Brady co-authored a study on how misinformation exploits outrage to spread online. When we get outraged, the study suggests, we simply care way less if what’s got us outraged is even real.
Tracking the outrage
The rapid spread of misinformation on social media has generally been explained by something you might call an error theory—the idea that people share misinformation by mistake. Based on that, most solutions to the misinformation issue relied on prompting users to focus on accuracy and think carefully about whether they really wanted to share stories from dubious sources. Those prompts, however, haven’t worked very well. To get to the root of the problem, Brady’s team analyzed data that tracked over 1 million links on Facebook and nearly 45,000 posts on Twitter from different periods ranging from 2017 to 2021.

Parsing through the Twitter data, the team used a machine-learning model to predict which posts would cause outrage. “It was trained on 26,000 tweets posted around 2018 and 2019. We got raters from across the political spectrum, we taught them what we meant by outrage, and got them to label the data we later used to train our model,” Brady says.
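The paper does not publish its classifier, but the setup it describes (human raters labeling tweets, then a supervised model trained on those labels) can be approximated with off-the-shelf tools. Below is a minimal sketch in Python, assuming a hypothetical CSV of rater-labeled tweets with "text" and "outrage" columns; it is a baseline illustration, not the authors' actual model.

```python
# Minimal sketch of a binary "outrage" text classifier, assuming a CSV of
# human-labeled tweets. The file name and column names are hypothetical;
# this is not the study's actual model, just a standard baseline pipeline.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.metrics import classification_report

df = pd.read_csv("labeled_tweets.csv")  # hypothetical file of rater-labeled tweets

X_train, X_test, y_train, y_test = train_test_split(
    df["text"], df["outrage"], test_size=0.2, random_state=42, stratify=df["outrage"]
)

clf = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2), min_df=5)),  # word and bigram features
    ("model", LogisticRegression(max_iter=1000)),              # simple linear classifier
])
clf.fit(X_train, y_train)

# Compare predictions against held-out human labels, analogous to checking
# whether the model performs on par with human raters on unseen posts.
print(classification_report(y_test, clf.predict(X_test)))
```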
The purpose of the model was to predict whether a message was an expression of moral outrage, an emotional state defined in the study as “a mixture of anger and disgust triggered by perceived moral transgressions.” After training, the AI was effective. “It performed as good as humans,” Brady claims. Facebook data was a bit more tricky because the team did not have access to comments; all they had to work with were reactions. The reaction the team chose as a proxy for outrage was anger. Once the data was sorted into outrageous and not outrageous categories, Brady and his colleagues went on to determine whether the content was trustworthy news or misinformation.
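For the Facebook data, where only aggregate reactions were available, the anger reaction stood in for outrage. A hedged sketch of what such a proxy rule could look like follows; the reaction field names and the 0.5 share threshold are illustrative assumptions, not the study's actual criterion.

```python
# Hedged sketch of an anger-reaction proxy for outrage on posts where only
# reaction counts are available. Field names and the 0.5 threshold are
# illustrative assumptions, not taken from the paper.
from typing import Dict

def is_outrage_proxy(reactions: Dict[str, int], threshold: float = 0.5) -> bool:
    """Flag a post as 'outrageous' when anger dominates its reactions."""
    total = sum(reactions.values())
    if total == 0:
        return False
    return reactions.get("anger", 0) / total >= threshold

# Usage example with made-up counts
post = {"like": 120, "love": 4, "anger": 310, "sad": 12}
print(is_outrage_proxy(post))  # True: anger is the majority reaction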
“We took what is now the most widely used approach in the science of misinformation, which is a domain classification approach,” Brady says. The process boiled down to compiling a list of domains with very high and very low trustworthiness based on work done by fact-checking organizations. This way, for example, The Chicago Sun-Times was classified as trustworthy; Breitbart, not so much. “One of the issues there is that you could have a source that produces misinformation which one time produced a true story. We accepted that. We went with statistics and general rules,” Brady acknowledged. His team confirmed that sources classified in the study as misinformation produced news that was fact-checked as false six to eight times more often than reliable domains, which Brady’s team thought was good enough to work with.
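The domain classification approach labels each shared link by its source rather than by its individual claims. A minimal sketch under that assumption is below; the two listed domains are just the article's own examples, and real lists would be compiled from fact-checkers' ratings.

```python
# Minimal sketch of domain classification: label a link by its source domain
# using precompiled trust lists. The two example domains come from the article;
# full lists are assumed to come from fact-checking organizations.
from urllib.parse import urlparse

TRUSTWORTHY = {"chicago.suntimes.com"}
MISINFORMATION = {"breitbart.com"}

def classify_link(url: str) -> str:
    """Classify a shared link by its domain, not by its individual content."""
    domain = urlparse(url).netloc.lower().removeprefix("www.")
    if domain in TRUSTWORTHY:
        return "trustworthy"
    if domain in MISINFORMATION:
        return "misinformation"
    return "unlisted"

print(classify_link("https://www.breitbart.com/some-story"))  # misinformation
```

The trade-off Brady acknowledges is built into this design: a low-trust domain is counted as misinformation even on the occasions when it publishes a true story.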
https://arstechnica.com/science/2024/12/people-will-share-misinformation-that-sparks-moral-outrage/
Science, 2024. DOI: 10.1126/science.adl2829