Facebook to world: Sorry. World to Facebook: Doesn’t cut it.
Facebook news has become a cottage industry of bad PR headlines since the revelation that the company’s leaky data-sharing policies also extended to user data it let an academic researcher pull from the platform. That researcher then reportedly passed the data to U.K. data analytics company Cambridge Analytica, and the haul could reportedly involve some 50 million users. Not much of this is clear, other than that it came from one former Cambridge Analytica employee.
Cambridge Analytica at one time worked with the Trump campaign, which used social media quite effectively during the 2016 campaign to communicate with supporters. All of this is routine in campaigns. Many in the media celebrated when the Obama campaign extracted Facebook data on millions of its users in 2012 to target voters. But this is the Trump era, so the opposition media fanned another bonfire.
Except one could argue that this one didn’t need much of a spark to light it up, given how many problems are stacking up over Big Tech and Facebook in particular. And it may have implications for Section 230 of the Communications Decency Act (CDA), the law that gave rise to Facebook and other social media platforms and shields them from being treated as publishers, even though they are major distributors of news these days.
After a week of headlines over the furor, during which Facebook’s investors shaved nearly $50 billion from the company’s $500 billion market valuation, the saga turned on Sunday to full-page apology ads in major newspapers’ print editions, including the Wall Street Journal, the New York Times and the U.K.’s Daily Mail.
According to TheHill.com, CEO Mark Zuckerberg took out the ads promising to “do better,” after the data harvesting uproar.
“You may have heard about a quiz app built by a university researcher that leaked Facebook data of millions of people in 2014. This was a breach of trust, and I’m sorry we didn’t do more at the time,” Zuckerberg wrote. “We’re now taking steps to make sure this doesn’t happen again.”
“Thank you for believing in this community. I promise to do better for you,” the message concludes.
He parsed his words. The issue may have been a breach of trust, but it wasn’t a data breach: Facebook let the researcher have the data. Cambridge Analytica has said it is reviewing its practices since the issue came up (a review that was followed by U.K. government officials staging a rather over-the-top raid on the company).
The ads cap a tumultuous week that saw Zuckerberg and Facebook buffeted by ill winds from all points along the political spectrum.
Democrats and Republicans are mad about how the Facebook platform sold ads to Russians during the 2016 elections, and that it doesn’t do enough to police the fake news that proliferates across its platform.
Republican lawmakers, like Democrats, are also requesting that Zuckerberg come to Washington and answer more questions about the company’s data-sharing practices.
And free-speech advocates, especially conservatives, are crying foul over clear evidence that Facebook’s recent algorithm changes have resulted in much lower traffic and engagement numbers for conservative sites such as Breitbart News, and a 45 percent drop in engagement with President Trump’s Facebook posts. The same algorithm changes have registered nary a nudge on left-leaning sites’ traffic.
It keeps going. The Federal Trade Commission is looking into whether Facebook lived up to a consent decree it signed in 2011 promising it would behave better with users’ data. According to the proposed settlement, Facebook is required to “take several steps to make sure it lives up to its promises in the future, including giving consumers clear and prominent notice and obtaining consumers’ express consent before their information is shared beyond the privacy settings they have established.”
And now this: Tech journalist Sean Gallagher, writing for ArsTechnica, has documented how “Facebook also had about two years’ worth of phone call metadata from [a user’s] Android phone, including names, phone numbers, and the length of each call made or received.” (Gallagher updated his story late Sunday to note Facebook’s response to how it collected info about Android phone calls and SMS data on some users.)
But as a WSJ editorial noted, all of this “despair over the Trump campaign and Facebook has had an incidental benefit: People are finally realizing that the sprawling social network isn’t merely a place to share cat photos. Facebook is the world’s biggest media conglomerate, depository of consumer data and communications network.”
The European Commission is looking at Google’s and Facebook’s data practices under rules that turn the whole opt-out model of how your data is used on its head. According to The New York Times:
In May, the European Union is instituting a comprehensive new privacy law, called the General Data Protection Regulation. The new rules treat personal data as proprietary, owned by an individual, and any use of that data must be accompanied by permission — opting in rather than opting out — after receiving a request written in clear language, not legalese.
Which brings us to the oft-cited Section 230 of the Communications Decency Act, part of the 1996 Telecommunications Act, which re-ordered many rules for media and mergers for years to come in an effort to promote new forms of communications for consumers.
The CDA was designed to shield tech start-ups from liability for user-generated content, such as reader comments on articles. The prevailing view holds that Google, Facebook, YouTube and other content-sharing and content-finding platforms wouldn’t have thrived if users could sue them over defamatory statements that other users post. Section 230 of the CDA makes the U.S. unique in the world in its hands-off approach to government regulation of online speech.
But in an era of increasing frustration with the Big Tech players who control how we find information and how much of it we see in our news feeds, Section 230 may be up for even more changes than ones passed this week. The WSJ notes:
“The Senate passed legislation this [past] week that exempts sex-trafficking from Section 230, and several Senators have threatened more changes if tech companies don’t clean up their act. The Federal Trade Commission is investigating Facebook’s privacy protections, and Mr. Zuckerberg has said he’s open to more regulation. But it would be far better for Facebook to take more responsibility for its content than for politicians and bureaucrats to do so.”
The changes proposed by the Senate, and headed for the President’s desk, are aimed at cracking down on websites that clearly promote the exploitation of child prostitution. After initially resisting the bill, the Internet Association, a lobbying group that includes Google and Facebook, supported the legislation.
As NPR notes in a comprehensive story looking at how the changes to Section 230 have already upended the rules for the big platform providers, the important words in the federal code are:
“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
But Section 230 is also tied to some of the worst stuff on the Internet, protecting sites when they host revenge porn, extremely gruesome videos or violent death threats. The broad leeway given to Internet companies represents “power without responsibility,” Georgetown University law professor Rebecca Tushnet wrote in an oft-cited paper.
“We’re at an inflection point, when the great wave of optimism about tech is giving way to growing alarm,” said Heather Grabbe, director of the Open Society European Policy Institute, in a NYT article about the EU’s data privacy plans with Facebook and Google in mind. “This is the moment when Europeans turn to the state for protection and answers, and are less likely than Americans to rely on the market to sort out imbalances.”
Section 230 of the CDA is unique because it protects online sharing platforms from being treated as publishers even if they police — edit, if you will — the content of their content sharing sites, such as removing defamatory material and enforcing their own online community standards for posting.
But that’s where the companies are creating new problems for speech in general, and online speech in particular.
As noted above, Facebook’s platform policies, such as the algorithm change in January, resulted in a significant drop in engagement for Trump pages and conservative outlets.
Google is facing lawsuits over its censorship of conservative points of view on its YouTube platform and for firing an engineer over a memo questioning its “ideological echo chamber” in its workplace culture. That culture finds its way into how search results are treated.
Google is a major benefactor of Democratic politicians, yet it has decided to start “fact checking” right-leaning publications such as The Daily Caller while virtually ignoring leftist sites.
So the same features of Section 230 that protect these companies from being treated like publishers as they police their own content are creating content policing of another sort: the suppression of a free exchange of ideas from political points of view the companies don’t like, as a matter of policy.
And in case you weren’t following Facebook’s behavior on other shores around the world, Casey Newton of TheVerge, which is sympathetic to Facebook’s lefty culture, rounded up a few examples, noting that, in March alone:
–Facebook, Instagram, and WhatsApp were forced to shut down temporarily in Sri Lanka after inflammatory messages posted to the service incited mob violence against the country’s Muslim minority.
–United Nations investigators blamed Facebook for spreading hate speech that incited violence against the Rohingya minority in Myanmar.
–Facebook’s search bar briefly auto-filled with suggestions for porn.
–A far-right Italian politician credited Facebook with his party’s surprising electoral victory, after reports that Russian state media used the platform to promote stories suggesting Italy faced an immigration crisis.
[Italy does have a crisis.]
–Facebook banned far-right group Britain First, which had more than 2 million followers, for inciting violence against minorities.
[Quick note: In the U.K., where a bad joke can get a citizen’s version of free speech curtailed, it’s hard to know how reactionary the platform was toward this group compared with its tolerance of far-left groups whose rhetoric is no less offensive to many.]
–Facebook’s chief security officer is quitting after reportedly arguing too forcefully that the company should investigate and disclose Russian activity on the platform.
Section 230 of the CDA has a problem, reports Wired.
And Facebook itself may have a Section 230 problem if regulators step in, which carries very big implications for how we communicate and share ideas in a 21st Century democracy built on open expression.
The problem is that much of it has been created by the very tech giants that have benefited from the CDA the most.
Tech entrepreneur Scott Galloway, writing in Esquire, sums up a solution that is gaining currency in Washington, the finance world, academia and media: It is time to break up the Big Four of tech (Amazon, Google, Apple and Facebook) so that new companies can innovate, and so a marketplace of ideas necessary to feed that innovation can breathe again.
(Updated Monday, March 26, 2018 to clarify that many of the details on Facebook’s third party data sharing practices are not clear and hard to verify, and to add links to Gallagher’s article on Facebook and Android phone data, and Galloway’s Esquire article. It’s pretty much a major update to the piece. –The Management)