A Trump social network could get sued out of existence

Donald Trump is “holding high-powered meetings” to start his own social network in the next two to three months, according to the ex-president’s adviser Jason Miller, who appeared on the Fox News show Media Buzz on Sunday. The former president was, of course, booted from Twitter and suspended from YouTube and Facebook (pending review) after spewing misinformation about the 2020 election and, arguably, inciting a riot at the Capitol on January 6.

On Sunday night, many on the right were joyous about the idea of a Trump social networking site. “BarYohai,” a commenter on FoxNews.com, summed up the sentiment nicely: “This is how the free market works. People ‘vote’ with their wallets. Trump’s social media platform will be widely successful and, additionally, it will create an incentive for people to close their Twitter (and perhaps even Facebook) accounts. Amazon and other self-appointed ‘speech police’ will also feel the economic pain as dissatisfied customers seek substitutes for, and then ‘cancel,’ the ‘cancel culture’ businesses.”

But running a social network is hard, as Trump may soon find out if Miller is right. People post untrue, defamatory, threatening, and conspiratorial things on social networks, requiring a major investment in content moderation staff and systems. It could get even harder this year if Congress decides to scale back or repeal Section 230 of the Communications Decency Act, which shields social networks from civil suits arising from hosting (or removing) user content.

In fact, repealing Section 230 was one of Trump’s go-to threats against the Big Tech companies that run social networks, especially Twitter. Days after Twitter began applying fact-check labels to his tweets, Trump signed an executive order aimed at stripping away the 230 protections.
#BREAKING: President Trump signs executive order to strip liability protection from companies that censor content: “Companies that engage in censoring or any political conduct will not be able to keep their liability shield.” https://t.co/D5ooUw1fNz pic.twitter.com/FHs7kUvJH1
— The Hill (@thehill) May 28, 2020

Many of Trump’s executive orders had little effect, but that one spurred some of his GOP devotees in Congress, such as Missouri Senator Josh Hawley, to introduce bills restricting the Section 230 protections. Hawley’s 2019 Ending Support for Internet Censorship Act would reserve Section 230 protections only for content removals the social network can prove were “politically neutral.” A House bill from Arizona Republican Paul Gosar proposed revoking Section 230’s legal exemptions for social networks that remove content they deem “objectionable.” Other bills would condition the legal protections on more transparent content monitoring and faster removal of toxic content. Reforming Section 230 is one of the few issues in Congress that has garnered support from both Democrats and Republicans, if for different reasons. Read More …

Google’s former ad chief is challenging its search engine monopoly

The government is getting its antitrust game on this year after leaving it mostly dormant for the better part of two decades, and its sights are set squarely on Big Tech. Democratic Senator Amy Klobuchar of Minnesota is leading the Senate Judiciary Committee’s powerful antitrust subcommittee. “We’ve got to look at everything when it comes to putting rules in for tech,” she says. Read More …

The CDC’s program to track vaccine effectiveness over time leaves out 60 million Americans

The digital divide can be deadly. That has been the stark lesson of the COVID-19 pandemic, which has revealed how decades of underinvestment in digital infrastructure have left millions of Americans cut off from help. This has prevented many from finding vaccine appointments, it has thwarted efforts to release contact-tracing apps, and now it’s undermining efforts to track the vaccines’ safety.

The COVID-19 vaccines have been widely heralded as incredibly safe and effective, far exceeding even the most optimistic hopes for how quickly and effectively we could develop the jab. But given the historic speed with which the vaccines were rolled out, more data is needed. This is why the CDC developed v-safe, a long-term vaccine surveillance program. Post-injection surveillance is crucial, not only to monitor for side effects (which are quite rare and mild), but also to remind users about their second dose and to track how long the vaccines remain effective.

The problem is that the CDC made a crucial error, one that could undermine v-safe and lead to blind spots in the data it collects. You see, v-safe requires a smartphone. That may not sound like a big hurdle, but the truth is that at least one in five Americans lacks access to a smartphone. Read More …

Here’s how human consciousness works—and how a machine might replicate it

I recently attended a panel discussion titled Being Human in the Age of Intelligent Machines. At one point during the evening, a philosophy professor from Yale said that if a machine ever became conscious, then we would probably be morally obligated to not turn it off. The implication was that if something is conscious, even a machine, then it has moral rights, so turning it off is equivalent to murder. Wow! Imagine being sent to prison for unplugging a computer. Should we be concerned about this?

Most neuroscientists don’t talk much about consciousness. They assume that the brain can be understood like every other physical system, and that consciousness, whatever it is, will be explained in the same way. Since there isn’t even agreement on what the word consciousness means, it is best to not worry about it.

Philosophers, on the other hand, love to talk (and write books) about consciousness. Some believe that consciousness is beyond physical description. That is, even if you had a full understanding of how the brain works, it would not explain consciousness. Philosopher David Chalmers famously claimed that consciousness is “the hard problem,” whereas understanding how the brain works is “the easy problem.” This phrase caught on, and now many people just assume that consciousness is an inherently unsolvable problem.

Personally, I see no reason to believe that consciousness is beyond explanation. I don’t want to get into debates with philosophers, nor do I want to try to define consciousness. However, the Thousand Brains Theory suggests physical explanations for several aspects of consciousness. Read More …

‘This is bigger than just Timnit’: How Google tried to silence a critic and ignited a movement

Timnit Gebru—a giant in the world of AI and then co-lead of Google’s AI ethics team—was pushed out of her job in December. Gebru had been fighting with the company over a research paper that she’d coauthored, which explored the risks of the AI models that the search giant uses to power its core products—the models are involved in almost every English query on Google, for instance. The paper called out the potential biases (racial, gender, Western, and more) of these language models, as well as the outsize carbon emissions required to compute them. Google wanted the paper retracted, or any Google-affiliated authors’ names taken off; Gebru said she would do so if Google would engage in a conversation about the decision. Instead, her team was told that she had resigned.

After the company abruptly announced Gebru’s departure, Google AI chief Jeff Dean insinuated that her work was not up to snuff—despite Gebru’s credentials and history of groundbreaking research. The backlash was immediate. Thousands of Googlers and outside researchers leaped to her defense and charged Google with attempting to marginalize its critics, particularly those from underrepresented backgrounds. A champion of diversity and equity in the AI field, Gebru is a Black woman and was one of the few in Google’s research organization. “It wasn’t enough that they created a hostile work environment for people like me [and are building] products that are explicitly harmful to people in our community. It’s not enough that they don’t listen when you say something,” Gebru says. “Then they try to silence your scientific voice.”

In the aftermath, Alphabet CEO Sundar Pichai pledged an investigation; the results were not publicly released, but a leaked email recently revealed that the company plans to change its research publishing process, tie executive compensation to diversity numbers, and institute a more stringent process for “sensitive employee exits.” In addition, the company appointed engineering VP Marian Croak to oversee the AI ethics team and report to Dean. Read More …