The Overview - May 29, 2020
The Overview is a weekly roundup of eclectic content in-between essay newsletters & "Conversations" podcast episodes to scratch your brain's curiosity itch.
Hello Eclectic Spacewalkers,
I hope that you and your family are safe & healthy wherever you are in the world. :)
Here are some eclectic links for the week of May 29th, 2020. This week's theme is "COVID-19 & Misinformation."
Check out last week’s roundup HERE.
Lastly, be on the lookout for essay #11: Updated Operating Manual for Spaceship Earth on Monday (June 1st).
Enjoy, share, and subscribe!
Table of Contents:
Articles via Nautilus, Prospect Magazine, BuzzFeed News, ProPublica, Carnegie Mellon School of Computer Science, and Nature.
Book - The Misinformation Age: How False Beliefs Spread by Cailin O'Connor & James Owen Weatherall
Discussion - Noam Chomsky on the “Limits” of Knowledge & Thought with Bryan Magee
Documentary - Merchants of Doubt
Lecture - Naomi Oreskes at BYU Kennedy Center on Merchants of Doubt
Paper - How to Beat Science and Influence People: Policy Makers and Propaganda in Epistemic Networks by James Owen Weatherall & Cailin O’Connor
Podcasts - I Can’t Believe it’s Not News: A Podcast About Fake News; The Truth About Fake News; and Knowledge, Truth, and Science.
TED Talks - How we can protect truth in the age of misinformation; How you can help transform the internet into a place of trust; Fake news is about to get much worse. Here's a solution.; How to see past your own perspective and find truth; Conspiracy Theories and the Problem of Disappearing Knowledge
Twittersphere - Ben Norton & Bonnie Bley
Website - IdeaMarkets.org & Golden.com
Articles
Why False Claims About COVID-19 Refuse to Die: Tracking the information zombie apocalypse — via Nautilus
“Some of these arguments have noted that the case fatality rate in certain countries with extensive testing, such as Iceland, Germany, and Norway, is substantially lower. References to the low CFR in these countries have continued to circulate on social media, even though the CFR in all of these locations has crept up over time. In the academic realm, John Ioannidis, a Stanford professor and epidemiologist, noted in an editorial, “The harms of exaggerated information and non‐evidence‐based measures,” published on March 19 in the European Journal of Clinical Investigation, that Germany’s CFR in early March was only 0.2 percent. But by mid-April it had climbed to 2.45 percent, far closer to the original WHO estimate. (Ioannidis has not updated the editorial to reflect the changing numbers.) Even Iceland, which has tested more extensively than any other nation, had a CFR of 0.47 percent on April 13, more than 4 times higher than it was a month ago. None of this means that the WHO figure was correct—but it does mean some arguments that it is wildly incorrect must be revisited.
What do we do about false claims that refuse to die? Especially when these claims have serious implications for decision-making in light of a global pandemic? To some degree, we have to accept that in a world with rapid information sharing on social media, information zombies will appear. Still, we must combat them. Science journals and science journalists rightly recognize that there is intense interest in COVID-19 and that the science is evolving rapidly. But that does not obviate the risks of spreading information that is not properly vetted or failing to emphasize when arguments depend on data that is very much in flux.
Wherever possible, media reporting on COVID-19 developments should be linked to authoritative sources of information that are updated as the information changes. The Oxford-based Centre for Evidence-Based Medicine maintains several pages that review the current evidence on rapidly evolving questions connected to COVID-19—including whether current data supports the use of hydroxychloroquine and the current best estimates for COVID-19 fatality rates. Authors and platforms seeking to keep the record straight should not just remove or revise now-false information, but should clearly state what has changed and why. Platforms such as Twitter should provide authors, especially scientists and members of the media, the ability to explain why Tweets that may be referenced elsewhere have been deleted. Scientific preprint archives should encourage authors to provide an overview of major changes when articles are revised.
And we should all become more active sharers of retraction. It may be embarrassing to shout one’s errors from the rooftops, but that is what scientists, journals, and responsible individuals must do to slay the information zombies haunting our social networks.”
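A quick aside on the arithmetic behind those numbers: the case fatality rate is simply confirmed deaths divided by confirmed cases, and it tends to creep upward because deaths lag case confirmations. Here is a minimal Python sketch of that calculation; the case and death counts are illustrative stand-ins, and only the 0.2 percent and 2.45 percent endpoints echo the figures quoted above.

```python
# Case fatality rate (CFR) = confirmed deaths / confirmed cases.
# The snapshots below are illustrative, not actual national figures;
# only the starting and ending CFR percentages echo the quote above.
snapshots = [
    ("early March", 1_000, 2),        # CFR = 0.20%
    ("late March", 40_000, 400),      # CFR = 1.00%
    ("mid April", 130_000, 3_185),    # CFR = 2.45%
]

for date, cases, deaths in snapshots:
    cfr = deaths / cases
    print(f"{date:>12}: {cases:>7,} cases, {deaths:>5,} deaths -> CFR = {cfr:.2%}")
```

The same denominator-and-lag dynamic is why an early-pandemic CFR quoted in March can look very different by mid-April.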
—
The epidemiology of misinformation — via Prospect Magazine
“The disturbing story of how the web is weaving weird connections between hippies, Nazis, Russian agents and the rest of us to spread lies about Covid-19
Some of the onus to clean up the Covid-19 infosphere is, however, on us all: we need to become more critical consumers (and producers) of information. That may be easier said than done. After all, conspiracy theorists and contrarians (it is not always easy to tell them apart) think that they are already doing this: witness the screeds of analysis some of them conduct to “disprove” the conclusions of the Intergovernmental Panel on Climate Change. Eamonn Holmes defended his comments about the 5G conspiracy on the grounds that he had “an inquiring mind.” And advice to read sceptically and critically all too easily morphs into the claim of the former Supreme Court judge and lockdown sceptic, Lord Sumption, that “it is the right and duty of every citizen to look and see what the scientists have said and to analyse it for themselves and to draw common sense conclusions.” As he exemplifies himself, the likely result of that is cherry-picking to suit one’s prejudices. The idea that this complex science can be adjudicated by “common sense” is ludicrous, even dangerous. Real wisdom, in contrast, lies in identifying and heeding the most trustworthy opinions, while recognising too that even these might incur uncertainties and errors, and that experts won’t always agree when “the science” itself is still evolving.
A free society must of course make room for rightwing libertarianism and leftwing Luddite paranoia. The problem today is that their distorting messages are now apt to become amplified out of proportion. They have just what it takes to become viral memes: simplistic and emotive messages (“You’re going to die! You’re living in a police state!”) that require no context to do their work. Canny agents of misinformation know how to tailor it to their advantage.
Containing a misinformation epidemic is then also partly a matter of finding the right medicine. Some have suggested the idea of “inoculating” populations in advance with reliable information, so that false ideas can never get a foothold (although that is surely much harder now there is such widespread distrust of “elites”). We need agreed and enforceable standards and regulations for social media. We need diagnostic tools to rapidly identify and isolate “super-spreaders,” and “virologists” of misinformation who can find and attack its weak spots. And we need to understand why different people have different levels of immunity and susceptibility to bad ideas—and to recognise that understanding misinformation, like disease, is in many respects an inescapably sociopolitical affair. As with the Covid-19 pandemic itself, the infodemic depends on how we all behave collectively as well as individually, and demands that we think about the consequences of our own actions on others. “We need to understand human collective behaviour in a crisis,” says biologist Carl Bergstrom of the University of Washington in Seattle.
It is not only our bodily but also our societal health that is at stake. Democracy struggles, as recent years have shown us, when citizens lack any shared set of facts. What we are witnessing with the coronavirus infodemic has implications way beyond strategies for managing disease pandemics (of which there will be others). The problem has been dimly acknowledged for years now with climate change, but it was happening on too “flat a curve” to be fully recognised. For Covid-19 those same features hove into view within a matter of weeks, and the deaths, which are expected to occur gradually over many years as climate change worsens, were in this case piling up daily. “This is like climate change run very, very fast,” says Bergstrom. Let’s hope we can learn something from it.”
—
The Information Apocalypse Is Already Here, And Reality Is Losing — via BuzzFeed News
“We’ve spent more than three years preparing for an information apocalypse. Why couldn’t we stop it with the coronavirus?
In its first few days of release, “The Plandemic” — a short film filled with so much coronavirus disinformation that it has since been banned by major tech platforms — racked up more than 8 million views across YouTube, Facebook, Instagram, and Twitter, peddling outright falsehoods and conspiratorial claims about the origins of the current pandemic.
This wasn't so much the result of the film's promptly and widely debunked content as it was the professional credentials of its main character, Judy Mikovits — a disgraced research scientist with a PhD in biochemistry and a resume that includes 22 years working for the National Cancer Institute. Mikovits lent a seemingly authoritative voice to a slop bucket of virus disinformation that was already circulating. She offered a PhD endorsement of long-debunked falsehoods about the coronavirus. She was a reason to believe — so much so that she is now a bestselling author on Amazon.
Mikovits was a perplexity to regular people trying to make sense of the current pandemic. As one person wrote in an email to BuzzFeed News shortly after “The Plandemic” was released: “[I can’t] figure out how to vet this disturbing video. It is either the truth or a very clever fabrication that plays all the anti-vax notes.”
In other words, “The Plandemic” had untethered viewers from our common reality and left them unable to distinguish fact from fiction. And it had done it entirely without technological wizardry. This was the future researchers and the media warned us about. And an avalanche of fake news hearings, news literacy efforts, and investments in fact-checking infrastructure since 2016 couldn’t stop it.”
—
I’m an Investigative Journalist. These Are the Questions I Asked About the Viral “Plandemic” Video — via ProPublica
ProPublica health care reporter Marshall Allen describes the questions he asks to assess coronavirus misinformation, starting with a viral video that claims the coronavirus is part of a “hidden agenda.”
“He had me at, “I have no idea.” That sums it up. This is a vast pandemic and massive catastrophe. Our country wasn’t prepared for it, and the response by our top leaders has been disjointed. We’re restricted to our homes. Many people have lost their jobs and some are afraid or sick or dying. That makes us vulnerable to exploitation by people who will present inaccurate or intellectually dishonest information that promises to tell us the truth.
Perhaps “Plandemic” is guilty of sloppy storytelling, or maybe people really do believe the things they’re saying in the video. Or perhaps they’re being intentionally dishonest, or it’s a biased connecting of the dots rooted in personal and professional grievances. I don’t know because I can’t get inside their heads to judge their motives.
Ultimately, we’re all going to need to be more savvy consumers when it comes to information, no matter how slickly it’s presented. This may be but a signal of what’s to come in the run-up to the 2020 presidential election, when memes and ads of unknown origin come across our social media feeds. There are standards for judging the credibility of the media we take in every day, so let’s apply them.”
—
Nearly Half Of The Twitter Accounts Discussing ‘Reopening America’ May Be Bots — via Carnegie Mellon School of Computer Science
“Carley said multiple factors contribute to the surge. First, more individuals have time on their hands to create do-it-yourself bots. But the number of sophisticated groups that hire firms to run bot accounts also has increased. The nature of the pandemic matters, too. "Because it’s global, it’s being used by various countries and interest groups as an opportunity to meet political agendas," she said.
Carley's research team uses multiple methods to determine who is or isn't a bot. Artificial intelligence processes account information and looks at things such as the number of followers, frequency of tweeting and an account's mentions network.
"Tweeting more frequently than is humanly possible or appearing to be in one country and then another a few hours later is indicative of a bot," Carley said.
More than 100 types of inaccurate COVID-19 stories have been identified, such as those about potential cures. But bots are also dominating conversations about ending stay-at-home orders and "reopening America."
Many factors of the online discussions about “reopening America” suggest that bot activity is orchestrated. One indicator is the large number of bots, many of which are accounts that were recently created. Accounts that are possibly humans with bot assistants generate 66% of the tweets. Accounts that are definitely bots generate 34% of the tweets.
"When we see a whole bunch of tweets at the same time or back to back, it's like they're timed," Carley said. "We also look for use of the same exact hashtag, or messaging that appears to be copied and pasted from one bot to the next."
A subset of tweets about "reopening America" reference conspiracy theories, such as hospitals being filled with mannequins or the coronavirus being linked to 5G towers.
"Conspiracy theories increase polarization in groups. It’s what many misinformation campaigns aim to do," Carley said. "People have real concerns about health and the economy, and people are preying on that to create divides."
Carley said that spreading conspiracy theories leads to more extreme opinions, which can in turn lead to more extreme behavior and less rational thinking.”
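The signals Carley describes (inhumanly high tweet rates, impossible location jumps, copy-pasted messaging) can be sketched as a simple rule-based score. The sketch below is purely illustrative; it is not CMU's actual bot-detection pipeline, and the thresholds and field names are assumptions made up for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Account:
    """Minimal stand-in for the account features mentioned above (hypothetical fields)."""
    tweets_per_day: float
    followers: int
    countries_last_24h: list = field(default_factory=list)
    recent_texts: list = field(default_factory=list)

def bot_score(acct: Account, known_bot_copy: set) -> float:
    """Crude heuristic score in [0, 1]; higher means more bot-like.
    Mirrors the signals described above: inhuman tweet frequency,
    impossible location changes, and copy-pasted messaging."""
    score = 0.0
    if acct.tweets_per_day > 144:  # more than one tweet every 10 minutes, around the clock
        score += 0.4
    if len(set(acct.countries_last_24h)) > 1:  # "one country and then another a few hours later"
        score += 0.3
    copied = sum(1 for t in acct.recent_texts if t in known_bot_copy)
    if acct.recent_texts and copied / len(acct.recent_texts) > 0.5:  # mostly duplicated content
        score += 0.3
    return min(score, 1.0)

# Example: an account tweeting 300 times a day from two countries, reposting known bot copy.
known_bot_copy = {"Reopen America NOW #endthelockdown"}
suspect = Account(tweets_per_day=300, followers=12,
                  countries_last_24h=["US", "RU"],
                  recent_texts=["Reopen America NOW #endthelockdown"] * 4)
print(bot_score(suspect, known_bot_copy))  # -> 1.0
```

Real systems combine many more features with machine learning and network analysis, but even this toy score shows why timing, geography, and duplicated text are such telling signals.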
—
Social-media companies must flatten the curve of misinformation — via Nature
“The pandemic lays bare the failure to quarantine online scams, hoaxes and lies amid political battles.
“Moderating content after something goes wrong is too late. Preventing misinformation requires curating knowledge and prioritizing science, especially during a public crisis. In my experience, tech companies prefer to downplay the influence of their platforms, rather than to make sure that influence is understood. Proper curation requires these corporations to engage independent researchers, both to identify potential manipulation and to provide context for ‘authoritative content’.
Early this April, I attended a virtual meeting hosted by the World Health Organization, which had convened journalists, medical researchers, social scientists, tech companies and government representatives to discuss health misinformation. This cross-sector collaboration is a promising and necessary start. As I listened, though, I could not help but feel teleported back to 2017, when independent researchers first began uncovering the data trails of the Russian influence operations. Back then, tech companies were dismissive. If we can take on health misinformation collaboratively now, then we will have a model for future efforts.
As researchers, we must demand that tech companies become more transparent, accountable and socially beneficial. And we must hold technology companies to this commitment long after the pandemic. Only after several scandals did companies such as Twitter, Facebook and Google News introduce more rigorous standards to flag and disrupt bots and other tools for disinformation.
Curating knowledge is just as important as moderating content. Social-media companies must flatten the curve of misinformation.”
Books —
The Misinformation Age: How False Beliefs Spread by Cailin O'Connor & James Owen Weatherall
“The social dynamics of “alternative facts”: why what you believe depends on who you know
Why should we care about having true beliefs? And why do demonstrably false beliefs persist and spread despite bad, even fatal, consequences for the people who hold them?
Philosophers of science Cailin O’Connor and James Weatherall argue that social factors, rather than individual psychology, are what’s essential to understanding the spread and persistence of false beliefs. It might seem that there’s an obvious reason that true beliefs matter: false beliefs will hurt you. But if that’s right, then why is it (apparently) irrelevant to many people whether they believe true things or not?
The Misinformation Age, written for a political era riven by “fake news,” “alternative facts,” and disputes over the validity of everything from climate change to the size of inauguration crowds, shows convincingly that what you believe depends on who you know. If social forces explain the persistence of false belief, we must understand how those forces work in order to fight misinformation effectively.”
Additional Book: Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming by Naomi Oreskes & Erik M. Conway
Discussion — Noam Chomsky on the “Limits” of Knowledge & Thought with Bryan Magee
Documentary — Merchants of Doubt
Buy:
Stream:
http://www.documentarymania.com/player.php?title=Merchants%20of%20Doubt
Lecture— Merchants of Doubt - Naomi Oreskes - BYU Kennedy Center
Paper — How to Beat Science and Influence People: Policy Makers and Propaganda in Epistemic Networks by James Owen Weatherall & Cailin O’Connor
Abstract:
“In their recent book Merchants of Doubt [New York: Bloomsbury, 2010], Naomi Oreskes and Erik Conway describe the “tobacco strategy”, which was used by the tobacco industry to influence policy makers regarding the health risks of tobacco products.
The strategy involved two parts, consisting of (1) promoting and sharing independent research supporting the industry’s preferred position and (2) funding additional research, but selectively publishing the results.
We introduce a model of the Tobacco Strategy, and use it to argue that both prongs of the strategy can be extremely effective—even when policy makers rationally update on all evidence available to them. As we elaborate, this model helps illustrate the conditions under which the Tobacco Strategy is particularly successful. In addition, we show how journalists engaged in ‘fair’ reporting can inadvertently mimic the effects of industry on public belief.”
Conclusion:
“What is perhaps most interesting about the results we have presented is not that they show what can work, but rather the insight they provide into how those strategies work. As we have emphasized above, in our discussion on unwitting propagandists, the basic mechanism at play in our models is that the propagandist biases the total evidence on which the policy-makers update their beliefs. The fact that each result that is shared is, in some sense, “real” ends up being irrelevant, because it is the statistical properties of the total available evidence that matter. For this reason, we see that producing results favorable to industry—say, through industry funding—is not necessary to manufacture favorable public beliefs, at least in cases where the evidence is probabilistic and spurious results are possible.
On this point: one might have expected that actually producing biased science would have a stronger influence on public opinion than merely sharing others’ results. But when one compares the two treatments we consider above, there are strong senses in which the less invasive, more subtle strategy of selective sharing is more effective than biased production, all things considered, particularly when the scientific community is large and the problem at hand is difficult (or the power of generic experiments is low). The reason is that such a scientific community will produce, on its own, plenty of research that, taken without proper context, lends credence to falsehoods. Merely sharing this already-available evidence is cost-effective, and much less risky than producing one’s own research—which, after all, can cost a great deal to produce, fail to generate the desired results, and ultimately be disqualified or ignored because of the association with industry. From this point of view, producing industry science (or even outright fraud) is simply not worth it. In many cases, another more effective, less expensive, and more difficult to detect strategy is available. Indeed, if industrial interests do wish to spend their money on science, perhaps rather than producing their own results and publishing them selectively, they would do better to adopt a different strategy, which Holman and Bruner [2017] have called “industrial selection”.
The idea behind industrial selection is that there are many experimental protocols, methodologies, and even research questions that scientists may adopt, often for their own reasons. Some protocols may tend to produce more industry-friendly results than others. Industrial selection involves identifying methods already present among the community of scientists that tend to favor one’s preferred outcome, and then funding scientists who already use those methods. This extra funding works to amplify these scientists’ results, by allowing them to publish more, perform higher powered studies, and train more students who will go on to produce consonant results. Indeed, there is a strong potential feedback effect between industrial selection and selective sharing: by increasing the number of ambient industry-favorable results in a scientific literature, and then further amplifying those results by sharing them selectively, propagandists can have an even stronger effect…
We will conclude by noting a few ways in which the incentive structure of science contributes to the success of the would-be propagandist, particularly when employing the selective sharing strategy. As we noted above, selective sharing is most effective when (a) the problem is difficult (in the sense that p_B is close to 0.5); (b) the number of data points per scientist per round is small; (c) there are many scientists working on a given problem; and (d) policy makers are connected to few scientists. This situation interacts with how scientists are rewarded for their work in complex, troubling ways. For instance, although there are some incentives for scientists to make their work accessible and to try to communicate it with the public and policy makers, it continues to be the case that academic scientists are evaluated on the basis of their research productivity—and there are sometimes reputational hazards associated with “popularizing” too much. This sort of effect can limit the number of scientists to whom policy makers are connected.”
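To make the mechanism concrete: the propagandist never fabricates data, but by passing along only the real studies that happen to favor the worse theory, they skew the total evidence a Bayesian policy maker sees. Below is a toy simulation in the spirit of the paper's two-hypothesis setup; the parameters, structure, and code are my own illustrative assumptions, not the authors' model or numbers.

```python
import random

def binomial_trial(n, p):
    """Number of successes in n Bernoulli trials with success probability p."""
    return sum(random.random() < p for _ in range(n))

def update(credence, k, n, eps=0.05):
    """Bayesian update of the credence that the true success rate is 0.5 + eps
    rather than 0.5 - eps, after seeing k successes in n trials.
    (The binomial coefficient cancels in the likelihood ratio.)"""
    p_good, p_bad = 0.5 + eps, 0.5 - eps
    like_good = p_good ** k * (1 - p_good) ** (n - k)
    like_bad = p_bad ** k * (1 - p_bad) ** (n - k)
    numerator = credence * like_good
    return numerator / (numerator + (1 - credence) * like_bad)

random.seed(1)
EPS, N_TRIALS, N_SCIENTISTS, ROUNDS = 0.05, 10, 50, 200
plain_policy_maker = 0.5     # hears from a few scientists at random
targeted_policy_maker = 0.5  # additionally hears everything the propagandist shares

for _ in range(ROUNDS):
    # Every scientist runs one study of the true process (success rate 0.5 + EPS).
    results = [binomial_trial(N_TRIALS, 0.5 + EPS) for _ in range(N_SCIENTISTS)]
    for k in random.sample(results, 3):
        plain_policy_maker = update(plain_policy_maker, k, N_TRIALS, EPS)
    for k in random.sample(results, 3):
        targeted_policy_maker = update(targeted_policy_maker, k, N_TRIALS, EPS)
    # Selective sharing: pass along only the real but spurious results,
    # i.e. those that happen to favor the worse theory.
    for k in (r for r in results if r < N_TRIALS / 2):
        targeted_policy_maker = update(targeted_policy_maker, k, N_TRIALS, EPS)

print(f"credence in the true theory, no propagandist:   {plain_policy_maker:.3f}")
print(f"credence in the true theory, selective sharing: {targeted_policy_maker:.3f}")
```

With these made-up numbers the untouched policy maker converges toward the true theory while the targeted one is dragged toward the false one, even though every shared result is "real", which is exactly the dynamic the authors describe.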
Podcasts
I Can’t Believe it’s Not News: A Podcast About Fake News
“Discussing fake news one article at a time. Thoughts and opinions on how to respond to fake news. Combating the changes to our society based on disinformation and propaganda. Spread Jam. Not lies.”
—
The Truth About Fake News — BBC Academy
“The BBC’s media editor Amol Rajan asks James Ball, special correspondent at BuzzFeed News, and Mark Frankel, social media editor at BBC News, about the different meanings of 'fake news' and how journalists should respond to it.”
—
Additional podcasts:
—
Liam Kofi Bright on Knowledge, Truth, and Science — Sean Carroll’s Mindscape Podcast
“Everybody talks about the truth, but nobody does anything about it. And to be honest, how we talk about truth — what it is, and how to get there — can be a little sloppy at times. Philosophy to the rescue! I had a very ambitious conversation with Liam Kofi Bright, starting with what we mean by “truth” (correspondence, coherence, pragmatist, and deflationary approaches), and then getting into the nitty-gritty of how we actually discover it. There’s a lot to think about once we take a hard look at how science gets done, how discoveries are communicated, and what different kinds of participants can bring to the table.”
TED Talks—
How we can protect truth in the age of misinformation | Sinan Aral
How you can help transform the internet into a place of trust | Claire Wardle
Fake news is about to get much worse. Here's a solution. | Aviv Ovadya | TEDxMileHigh
How to see past your own perspective and find truth | Michael Patrick Lynch
Conspiracy Theories and the Problem of Disappearing Knowledge | Quassim Cassam | TEDxWarwick
Twittersphere —
—
Original Tweet HERE
Websites —
IdeaMarkets.org
“Centralized media institutions create “fiat narratives” by forcing self-serving interpretations of reality on the public. In the same way that fiat currencies are valuable only because governments say they are, fiat narratives are true only because media corporations say they are.
Media corporations have near-limitless power to enforce fiat narratives using tactics like these:
Limiting the scope of discussion (“Democrat vs. Republican”)
Astroturfing (paying people to imitate a grassroots movement)
Distributing airtime ($4B of free airtime to Donald Trump in 2016)
Sponsoring entertainment and media (CIA involvement in media)
Great firewalls (China)
Banning media outlets (Facebook and Twitter)
Interrupting members of US Congress to cover Justin Bieber (Video)
Scripting local news on a national scale (Video)
Many more
No amount of public outcry or reasoned dissent can change fiat narratives if doing so would threaten the power of their issuers.
In the same way that fiat currencies are valuable only because governments say they are, fiat narratives are true only because media corporations say they are.
In the same way that Bitcoin allows people to transact without trusting banks and governments to manage fiat currency, a decentralized market that measures the perceived value of ideas allows people to define reality without trusting fiat narratives created by centralized media and other reality-defining institutions.”
Watch our interview with the founder of IdeaMarkets, Mike Elias, HERE:
—
Golden.com
Golden’s mission is to collect, organize and express 10+ billion topics in an accessible way, presented in neutrally-written and comprehensive topic pages.
“An abundance of knowledge is available to us via the Internet, but navigating the cacophony of formats, designs, sources and standards is challenging. Google, Wikipedia, Bing, DuckDuckGo, Quora, StackExchange, Github, etc are our go-to places for seeking information. However, the knowledge reference products in the market today fall short of mapping, in a highly organized way, everything we collectively know as a human society and what exists on the web.
Specific examples of topics that have been hard to find canonical information on include: Consensus mechanisms, new legal instruments like the Simple Agreement for Future Tokens, new fund constructions like cryptocurrency hedge funds, Arrington XRP Capital; companies/projects like Benchling, HelloSign, CryptoKitties, Ginkgo Bioworks, Momentus Space, Brave web browser, Google AutoML, Algorand; academic areas e.g., Quantum Artificial Life and Lenia, CRISPR/Cas13, phage-assisted continuous evolution, constructor theory, to name just a small sample.
It is now possible and necessary to map everything that exists systematically and pull together academic papers, documentaries, videos, podcasts, further reading, guides and meta-data that surrounds these topics.”
That’s it for this week. Until next time - Ad Astra!
More on Eclectic Spacewalk:
Subscribe to Substack Newsletter
Listen to all podcasts on Anchor
Follow Eclectic Spacewalk on Twitter