How Amazon became an engine for anti-vaccine conspiracy theories

Search for “vaccines” on Amazon’s bookstore, and a banner encourages shoppers to “learn more” about COVID-19, with a link to the Centers for Disease Control. But the text almost vanishes amid the eye-catching book covers spreading out below, many of which carry Amazon’s orange “bestseller” badge. One top-ranked book that promises “the other side of the story” of vaccine science is #1 on Amazon’s list for “Health Policy.” Next to it, smiling infants grace the cover of the top-selling book in “Teen Health,” co-authored by an Oregon pediatrician whose license was suspended last year over an approach to vaccinations that placed “many of his patients at serious risk of harm.” Anyone Who Tells You That Vaccines Are Safe and Effective Is Lying, by a prominent English conspiracy theorist, promises “the facts about vaccination — so that you can make up your own mind.” There are no warning notices or fact checks—studies have shown no link between vaccines and autism, for instance—but there are over 1,700 five-star ratings and a badge: the book is #1 on Amazon’s list for “Children’s Vaccination & Immunization.”

Offered by small publishers or self-published through Amazon’s platform, the books rehearse the falsehoods and conspiracy theories that fuel vaccine opposition, deepening the impact of the pandemic and slowing a global recovery. They also illustrate how the world’s biggest store has become a megaphone for anti-vaccine activists, medical misinformers, and conspiracy theorists, pushing dangerous falsehoods in a medium that carries more apparent legitimacy than just a tweet.

“Without question, Amazon is one of the greatest single promoters of anti-vaccine disinformation, and the world leader in pushing fake anti-vaccine and COVID-19 conspiracy books,” says Peter Hotez, a pediatrician and vaccine expert at the Baylor College of Medicine.

For years, journalists and researchers have warned of the ways fraudsters, extremists, and conspiracy theorists use Amazon to earn cash and attention. To Hotez, who has devoted much of his career to educating the public about vaccines, the real-world consequences aren’t academic. In the US and elsewhere, he says, vaccination efforts are now up against a growing ecosystem of activist groups, foreign manipulators, and digital influencers who “peddle fake books on Amazon.”

[Image: Anti-vaccine titles dominate search results for “vaccines”; the first autocomplete suggestion is “vaccines are dangerous.” (Amazon)]

Letting the truth loose

The Seattle giant is known for a relatively minimalist approach to policing content. The goal, founder Jeff Bezos said in 1998, was “to make every book available—the good, the bad and the ugly.” Customer reviews would “let truth loose.” Amazon’s algorithms and recommendation boxes would make it a place where, as it says on its website, “customers can find everything they need and want.”

These days, they can publish everything they want, too: Amazon’s self-publishing platforms allow authors to make paper books, audiobooks, or e-books. The latter, Amazon says, “takes less than five minutes and your book appears on Kindle stores worldwide within 24–48 hours.”

Gradually, Amazon has taken a tougher approach to content moderation, and to a seemingly ceaseless onslaught of counterfeits, fraud, defective products, and toxic speech. The company says its automated and human reviewers now evaluate thousands of products a day to ensure they abide by its offensive content policies.
For books, its prohibitions are brief and vague: material “that we determine is hate speech, promotes the abuse or sexual exploitation of children, contains pornography, glorifies rape or pedophilia, advocates terrorism, or other material we deem inappropriate or offensive.” Sometimes that includes health misinformation. In 2019, the company removed a number of titles that connected autism to vaccines after Rep. Adam Schiff wrote to Bezos to say he was concerned Amazon was “surfacing and recommending products and content that discourage parents from vaccinating their children,” citing “strong evidence” that vaccine misinformation had helped fuel a deadly measles epidemic in Washington that year.

After the start of the pandemic, Amazon removed over one million fraudulent products related to COVID-19, including “cures” like herbal treatments, prayer healing, and vitamin supplements. It also pulled an unknown number of books that pushed pandemic conspiracy theories, and added banners linking customers to credible information for some search terms. January 6 led to another purge across Big Tech, and Amazon also pulled alt-right and QAnon merchandise for breaking its rules on hate speech. Later that month, it removed dozens of books promoting Holocaust denial, and finally removed the white supremacist novel The Turner Diaries. It even banned Parler from its cloud service, citing the right-wing social network’s lax content moderation.

Despite its sweeps, however, Amazon is still flooded with misinformation, and it is helping to amplify it too: a series of recent studies and a review by Fast Company show the bookstore is boosting misinformation around health-related terms like “autism” or “covid,” and nudging customers toward a universe of other conspiracy theory books. Read More …

Why the Colonial Pipeline ransomware attack is a sign of things to come

Ransomware has grown fouler than ever, but it’s also grown up. The practice of using malware to encrypt files on a victim’s devices and then demanding a ransom payment to unlock them has advanced far beyond its origins as a nuisance for individual users. These days, it’s a massively profitable business that has spawned its own ecosystem of partner and affiliate firms. And as a succession of security experts made clear at the RSA Conference last week, we remain nowhere near developing an equivalent of a vaccine for this online plague.

“It’s professionalized more than it’s ever been,” said Raj Samani, chief scientist at McAfee, in an RSA panel. “Criminals are starting to make more money,” said Jen Miller-Osborn, deputy director of threat intelligence at Palo Alto Networks’ Unit 42, in another session. She added that the average ransomware payout now exceeds $300,000, fueled by such tactics as the “double extortion” method of exfiltrating sensitive data from targeted systems and then threatening to post it. That method figured in recent ransomware attacks against Colonial Pipeline and Washington, D.C.’s Metropolitan Police Department.

“It’s such a lucrative business now for the criminals, it is going to take a full court press to change that business model,” agreed Michael Daniel, president and CEO of the Cyber Threat Alliance, in that panel. (Just five years ago, the $17,000 ransom reportedly paid by a compromised hospital was a newsworthy figure.)

Having this much money sloshing around has given rise to networks of affiliates and brokers. Samani’s colleague John Fokker, head of cyber investigations at McAfee, explained the rise of “ransomware as a service” (RaaS), in which you can buy or rent exploit kits or back doors into companies. He showed one ad from an “access broker” that listed a price of $7,500 for compromised Virtual Private Network accounts at an unspecified Canadian firm. The ad vaguely described this target company as a “Consumer Goods (manufacturing, retailing, food etc…)” enterprise with about 9,000 employees and $3 billion in revenue. “The commoditization of these capabilities for the criminals makes it so easy,” said Phil Reiner, CEO of the Institute for Security and Technology, during one of the RSA panels.

RSA speakers noted how often ransomware attacks start with exploitations of known, avoidable vulnerabilities. Samani called Microsoft’s Remote Desktop Protocol “the number-one most common entry vector for corporate networks related to ransomware attacks.” Fokker added that companies that use RDP often make this remote-access tool too easy to compromise, joking that RDP also means “really dumb passwords.” The pandemic has helped grease the skids further for ransomware attacks—both by requiring companies to rush into remote work and by making people a little more tempted to respond to COVID-themed phishing lures. As Samani put it, phishing is “still there, still works, people still click on links.”

Two other factors make ransomware especially resistant to any suppression attempts. One is cryptocurrency enabling hard-to-trace online funds transfers. Bitcoin and other digital currencies may not be too useful for everyday transactions, but they suit the business of ransomware well. Read More …

Amid worker and regulator complaints, Google is facing a turning point

By any measure, Google is a colossus of the tech industry, with a market capitalization of nearly $1.5 trillion, a massive army of lobbyists, and elite academics at its disposal. But lately, its reputation has been hurt by a highly publicized feud with well-respected ethical AI researchers, and revelations about its toxic workplace, previously hidden under NDAs, are cracking the tech giant’s PR-spun, Disneyland-like facade. Now, it’s facing a multitude of challenges, including talent attrition, resistance from an increasingly influential union, and increased public scrutiny. Privacy-centered competitors are nipping at its ankles, antitrust regulations loom on the horizon, and user interest in de-Googling their online activities is mounting. These headwinds are threatening the tech giant’s seemingly unassailable industry dominance and may bring us closer to a “de-Googled” world, where Google is no longer the default.

At war with its workers

In December 2020, the tech giant dismissed eminent scholar Timnit Gebru over a research paper that analyzed the bias inherent in large AI models that process human language—a type of AI that undergirds Google Search. Google’s whiplash-inducing reversal on ethics and diversity as soon as its core business was threatened was not entirely surprising. However, its decision to cover this up with a bizarre story claiming that Gebru resigned sparked widespread incredulity.

Since Gebru’s ouster, Google has fired her colleague Margaret Mitchell and restructured its “responsible AI” division under the leadership of another Black woman, now known to have deep links to surveillance technologies. These events sent shock waves through the research community beholden to Google for funding and triggered much-needed introspection about the insidious influence of Big Tech in this space. Last week, the organizers of the Black in AI, Queer in AI, and Widening NLP groups announced their decision to end their sponsorship relationship with Google in response.

While the prestige and lucrative compensation that come from working at Google are still a huge draw for many who don’t consider these issues a dealbreaker, some, such as Black in AI cofounder and scholar Rediet Abebe, were always wary. As Abebe explained in a tweet, her decision to back out of an internship at the tech giant was triggered by Google’s mistreatment of BIPOC, involvement with military warfare technologies, and ouster of Meredith Whittaker, another well-known AI researcher who played a lead role in the Google Walkout in 2018.

Abebe is not the only one who has decided to walk away from Google. In response to this latest AI ethics debacle, leading researcher Luke Stark turned down a significant monetary award, other talented engineers resigned, and Gebru’s much-respected manager Samy Bengio also left the company. A few years back, this level of pushback would have been unimaginable given Google’s formidable clout, but the tech giant seems to have met its match in Gebru and other workers who refuse to back down.

Even with its formidable PR machinery spinning out an announcement touting an expanded AI ethics team, the damage has been done, and Google’s misguided actions will hurt its ability to attract credible talent for the foreseeable future. More ex-employees are also coming out with details of their horrifying experiences, adding fuel to the rising calls for better employee protections.
These disclosures have renewed support for tech workers, hundreds of whom unionized at Google after many years of activism, despite union-busting efforts by their employer. Read More …

It’s time to take videos of Black Americans dying offline

Since 2013, when Black Lives Matter erupted on the scene to challenge the acquittal of Florida resident George Zimmerman for killing 17-year-old Trayvon Martin, images of Black Americans dying on-screen have become as constant as air. In the last week, videos pertaining to at least four instances of police violence against Black Americans have circulated online. At the same time, a Minnesota jury found former police officer Derek Chauvin guilty of the murder of George Floyd. The video of Chauvin kneeling on Floyd’s neck while Floyd gasped for breath sparked a movement for police accountability that led to Chauvin’s conviction on all charges. But that video, which has continued to circulate, is also deeply traumatizing. Now Allissa V. Richardson, an author and journalism professor at the University of Southern California, is calling for more guardrails around publishing visual accounts of violence against Black people. Read More …