Nov 19 2017

Originally posted on Techdirt November 15, 2017.

Well, I was wrong: last week I lamented that we might never know how the Ninth Circuit ruled on Glassdoor’s attempt to quash a federal grand jury subpoena served upon it demanding it identify users. Turns out, now we do know: two days after the post ran the court publicly released its decision refusing to quash the subpoena. It’s a decision that doubles down on everything wrong with the original district court decision that also refused to quash it, only now with handy-dandy Ninth Circuit precedential weight.

Like the original ruling, it clings to the Supreme Court’s decision in Branzburg v. Hayes, a case in which the Supreme Court explored whether anyone can resist a grand jury subpoena. But in doing so it manages to ignore other, more recent Supreme Court precedents that should have led to the opposite result.

Here is the fundamental problem with both the district court and Ninth Circuit decisions: anonymous speakers have the right to speak anonymously. (See, e.g., the post-Branzburg Supreme Court decision McIntyre v. Ohio Elections Commission). Speech rights also carry forth onto the Internet. (See, e.g., another post-Branzburg Supreme Court decision, Reno v. ACLU). But if the platforms hosting that speech can always be forced to unmask their users via grand jury subpoena, then there is no way for that right to ever meaningfully exist in the context of online speech.

Nov 19 2017

Cross-posted from Techdirt November 14, 2017.

Earlier this year I wrote about Yelp’s appeal in Montagna v. Nunis. This was a case where a plaintiff had subpoenaed Yelp to unmask one of its users and Yelp tried to resist the subpoena. In that case, not only had the lower court refused to quash the subpoena, but it sanctioned Yelp for having tried to quash it. Per the court, Yelp had no right to assert the First Amendment rights of its users as a basis for resisting a subpoena. As we said in the amicus brief I filed for the Copia Institute in Yelp’s appeal of the ruling, if the lower court were right it would be bad news for anonymous speakers, because if platforms could not resist unfounded subpoenas, users would lose an important line of defense against attempts to unmask them for no legitimate reason.

Fortunately, a California appeals court just agreed it would be problematic if platforms could not push back against these subpoenas. Not only has this decision avoided creating inconsistent law in California (earlier this year a different California appeals court had reached a similar conclusion), but now there is even more language on the books affirming that platforms are able to stand up for their users’ First Amendment rights, including their right to speak anonymously. As we noted, platforms can’t always push back against these discovery demands, but it is often in their interests to try to protect the user communities that provide the content that makes their platforms valuable. If they never could, it would seriously undermine those user communities and all the content these platforms enable.

The other bit of good news from the decision is that the appeals court overturned the sanction award against Yelp. It would have significantly chilled platforms if they had to think twice before standing up for their users because of how much it could cost them financially for trying to do so.

But any celebration of this decision needs to be tempered by the fact that the appeals court also decided to uphold the subpoena in question. While it didn’t fault Yelp for having tried to defend its users, and, importantly, found that Yelp had the legal ability to do so, it gave that defense short shrift.

The test that California uses to decide whether to uphold or quash a subpoena comes from a case called Krinsky, and it asks whether the plaintiff has made a “prima facie” case. In other words, we don’t know whether the plaintiff would necessarily win, but before stripping a speaker of anonymity we want to ensure that it is at least possible for the plaintiff to prevail on its claims. That’s all well and good, but the appeals court gave an extraordinarily generous read to the statements at issue in this case, going out of its way to infer the possibility of falsity in what were in essence statements of opinion (which are ordinarily protected by the First Amendment), and on that basis it decided the test had been satisfied.

This outcome is unfortunate not only for the user whose identity will now be revealed to the plaintiff but for all future speakers, now that there is an appellate decision on the books applying the “prima facie” test in a way that so casually dismisses the protections speech normally has. It would at least have been better if the question of whether the subpoena should be quashed had been remanded to the lower court, where, even if that court still reached a decision too easily puncturing the First Amendment protection for online speech, it would have posed less of a risk to other speech in the future.

Nov 12 2017

This post appeared on Techdirt on 11/10/17.  The anniversary it notes is today.

We have been talking a lot lately about how important Section 230 is for enabling innovation and fostering online speech, and, especially as Congress now flirts with erasing its benefits, how fortuitous it was that Congress ever put it on the books in the first place.

But passing the law was only the first step: for it to have meaningful benefit, courts needed to interpret it in a way that allowed it to have its protective effect on Internet platforms. Zeran v. America Online was one of the first cases to test the bounds of Section 230’s protection, and the first to find that protection robust. Had the court decided otherwise, we likely would not have seen the benefits the statute has since afforded.

This Sunday the decision in Zeran turns 20 years old, and to mark the occasion Eric Goldman and Jeff Kosseff have gathered together more than 20 essays from Internet lawyers and scholars reflecting on the case, the statute, and all of its effects. I have an essay there, “The First Hard Case: ‘Zeran v. AOL’ and What It Can Teach Us About Today’s Hard Cases,” as do many other advocates, including lawyers involved with the original case. Even people who are not fans of Section 230 and its legacy are represented. All of these pieces are worth reading and considering, especially by anyone interested in setting policy around these issues.

Nov 06 2017

This post is the second in a series that ran on Techdirt about the harm done to online speech by unfettered discovery demands on platforms, demands the platforms are then prevented from talking about.

In my last post, I discussed why it is so important for platforms to be able to speak about the discovery demands they receive seeking to unmask their anonymous users. That candor is crucial: without some sort of check against their abuse, unmasking demands can damage the key constitutional right to speak anonymously.

The earlier post rolled together several different types of discovery instruments (subpoenas, warrants, NSLs, etc.) because to a certain extent it doesn’t matter which one is used to unmask an anonymous user. The issue raised by all of them is that if their power to unmask an anonymous user is too unfettered, then it will chill all sorts of legitimate speech. And, as noted in the last post, the ability for a platform receiving an unmasking demand to tell others it has received it is a critical check against unworthy demands seeking to unmask the speakers behind lawful speech.

The details of each type of unmasking instrument do matter, though, because each one has different interests to balance and, accordingly, different rules governing how to balance them. Unfortunately, the rules that have evolved for any particular one are not always adequately protective of the important speech interests any unmasking demand necessarily affects. As is the case for the type of unmasking demand at issue in this post: a federal grand jury subpoena.

Grand jury subpoenas are very powerful discovery instruments, and with good reason: the government needs a powerful weapon to be able to investigate serious crimes. There are also important constitutional reasons to equip grand juries with strong investigatory power: if charges are to be brought against people, due process demands that they be brought by a grand jury rather than through a more arbitrary exercise of government power. Grand juries are, however, largely at the disposal of government prosecutors, and thus a grand jury subpoena essentially functions as a government unmasking demand. The ability to compel information via a grand jury subpoena is therefore not a power we can allow to exist unchecked.

Which brings us to the story of the grand jury subpoena served on Glassdoor, which Paul Levy and Ars Technica wrote about earlier this year. It’s a story that raises three interrelated issues: (1) a poor balancing of the relevant interests, (2) a poor structural model that prevented a better balancing, and (3) a gag that has made it extraordinarily difficult to create a better rule governing how grand jury subpoenas should be balanced against important online speech rights.

Nov 04 2017

The following post originally appeared on Techdirt on 11/3/17.

The news about the DOJ trying to subpoena Twitter calls to mind another egregious example of the government trying to unmask an anonymous speaker earlier this year. Remember when the federal government tried to compel Twitter to divulge the identity of a user who had been critical of the Trump administration? This incident was troubling enough on its face: there’s no place in a free society for a government to come after its critics. But largely overlooked in the worthy outrage over the bald-faced attempt to punish a dissenting voice was the government’s simultaneous attempt to prevent Twitter from telling anyone that the government was demanding this information. Because Twitter refused to comply with that demand, the affected user was able to get counsel and the world was able to learn how the government was abusing its authority. As the saying goes, sunlight is the best disinfectant, and shining a light on the government’s abusive behavior allowed it to be stopped.

That storm may have blown over, but the general issues raised by the incident continue to affect Internet platforms – and by extension their users and their speech. A significant problem we keep having to contend with is not only what happens when the government demands information about users from platforms, but what happens when it then compels the same platforms to keep those demands a secret. These secrecy demands are often called different things and are born from separate statutory mechanisms, but they all boil down to being some form of gag over the platform’s ability to speak, with the same equally troubling implications.

We’ve talked before about how important it is that platforms be able to protect their users’ right to speak anonymously. That right is part and parcel of the First Amendment, because there are many people who would not be able to speak if they were forced to reveal their identities in order to do so. Public discourse, and the benefit the public gets from it, would then suffer in the absence of their contributions. But it’s one thing to say that people have the right to speak anonymously; it’s another to make that right meaningful. If civil plaintiffs, or, worse, the government, can too easily force anonymous speakers to be unmasked, then the right to speak anonymously will be only illusory. For it to be something speakers can depend on to enable them to speak freely, there have to be effective barriers preventing that anonymity from being too casually stripped by unjust demands.

Nov 04 2017

The following post originally appeared on Techdirt on 10/27/17.

It isn’t unusual or unwarranted for Section 230 to show up as a defense in situations where some might not expect it. Its basic principles may apply to more situations than are readily apparent. But to see it appear as a defense in the Cockrum v. Campaign for Donald Trump case is pretty unexpected. From page 37 of the campaign’s motion to dismiss the case against it, the following two paragraphs are what the campaign slipped in on the subject:

Plaintiffs likewise cannot establish vicarious liability by alleging that the Campaign conspired with WikiLeaks. Under section 230 of the Communications Decency Act (47 U.S.C. § 230), a website that provides a forum where “third parties can post information” is not liable for the third party’s posted information. Klayman v. Zuckerberg, 753 F.3d 1354, 1358 (D.C. Cir. 2014). That is so even when the website performs “editorial functions” “such as deciding whether to publish.” Id. at 1359. Since WikiLeaks provided a forum for a third party (the unnamed “Russian actors”) to publish content developed by that third party (the hacked emails), it cannot be held liable for the publication.

That defeats the conspiracy claim. A conspiracy is an agreement to commit “an unlawful act.” Paul v. Howard University, 754 A.2d 297, 310 (D.C. 2000). Since WikiLeaks’ posting of emails was not an unlawful act, an alleged agreement that it should publish those emails could not have been a conspiracy.

This is the case brought against the campaign for allegedly colluding with Wikileaks and the Russians to disclose the plaintiffs’ private information as part of the DNC email trove that ended up on Wikileaks. Like Eric Goldman, who has an excellent post on the subject, I’m not going to go into the relative merits of the lawsuit itself, though they are worth considering. Even if it’s true that the Trump campaign and Wikileaks were somehow in cahoots to hack the DNC and publish the data taken from it, whether and how the consequences of that disclosure can be recognized by law is a serious issue, as is whether this particular lawsuit by these particular plaintiffs with these particular claims is one that the law can permit to go forward without causing collateral effects to other expressive endeavors, including whistleblower journalism generally. On these points there may or may not be issues with the campaign’s motion to dismiss overall. But the shoehorning of a Section 230 argument into its defensive strategy seems sufficiently weird and counterproductive to be worth commenting on in and of itself.

Nov 04 2017

The following post first appeared on Techdirt on 10/25/17.

The last two posts I wrote about SESTA discussed how, if it passes, it will result in collateral damage to the important speech interests Section 230 is intended to protect. This post discusses how it will also result in collateral damage to the important interests that SESTA itself is intended to protect: those of vulnerable sex workers.

Concerns about how SESTA would affect them are not new: several anti-trafficking advocacy groups and experts have already spoken out about how SESTA, far from ameliorating the risk of sexual exploitation, will only exacerbate it, in no small part because it disables one of the best tools for fighting it, the Internet platforms themselves:

[Using the vilified Backpage as an example, in as much as] Backpage acts as a channel for traffickers, it also acts as a point of connection between victims and law enforcement, family, good samaritans, and NGOs. Countless news reports and court documents bear out this connection. A quick perusal of news stories shows that last month, a mother found and recovered her daughter thanks to information in an ad on Backpage; a brother found his sister the same way; and a family alerted police to a missing girl on Backpage, leading to her recovery. As I have written elsewhere, NGOs routinely comb the website to find victims. Nicholas Kristof of the New York Times famously “pulled out [his] laptop, opened up Backpage and quickly found seminude advertisements for [a victim], who turned out to be in a hotel room with an armed pimp,” all from the victim’s family’s living room. He emailed the link to law enforcement, which staged a raid and recovered the victim.

And now there is yet more data confirming what these experts have been saying: when there have been platforms available to host content for erotic services, it has decreased the risk of harm to sex workers.

Oct 21 2017

The following is the second in a pair of posts on Techdirt about how SESTA’s attempt to carve out “trafficking” from Section 230’s platform protection threatens legitimate online speech having nothing to do with actual harm to trafficking victims.

Think we’re unduly worried about how “trafficking” charges will get used to punish legitimate online speech? We’re not.

A few weeks ago a Mississippi mom posted an obviously joking tweet offering to sell her three-year-old for $12.

I tweeted a funny conversation I had with him about using the potty, followed by an equally-as-funny offer to my followers: 3-year-old for sale. $12 or best offer.

The next thing she knew, Mississippi authorities decided to investigate her for child trafficking.

The saga began when a caseworker and supervisor from Child Protection Services dropped by my office with a Lafayette County sheriff’s deputy. You know, a typical Monday afternoon.

They told me an anonymous male tipster called Mississippi’s child abuse hotline days earlier to report me for attempting to sell my 3-year-old son, citing a history of mental illness that probably drove me to do it.

Beyond notifying me of the charges, they said I’d have to take my son out of school so they could see him and talk to him that day, presumably protocol to ensure children aren’t in immediate danger. So I went to his preschool, pulled my son out of a deep sleep during naptime, and did everything in my power not to cry in front of him on the drive back to my office.

All of this for a joke tweet.

This story is bad enough on its own. As it stands now, the Mississippi authorities’ actions will chill other Mississippi parents from blowing off steam with facetious remarks on social media. But at least that chilling harm is contained within Mississippi’s borders. If SESTA passes, the chill will spread throughout the country.

Oct 21 2017

The following is the first of a pair of posts on SESTA highlighting how carving out an exception to Section 230’s platform protection for sex trafficking rips a huge hole in the critical protection for online speech that Section 230 in its current form provides.

First, if you are someone who likes stepped-up ICE immigration enforcement and does not like “sanctuary cities,” you might cheer the implications of this post, but it isn’t otherwise directed at you. It is directed at the center of the political Venn diagram of people who feel the opposite about these immigration policies and yet are also championing SESTA. Because this news from Oakland raises the specter of a horrific implication for online speech championing immigrant rights if SESTA passes: the criminal prosecution of the platforms which host that discussion.

Much of the discussion surrounding SESTA is based on some truly horrific tales of sex abuse, crimes that more obviously fall under what the human trafficking statutes are clearly intended to address. But with news that ICE is engaging in a very broad reading of the type of behavior the human trafficking laws might cover and prosecuting anyone who happens to help an immigrant, it’s clear that the type of speech SESTA will carve out from Section 230’s protection will go far beyond the situations the bill originally contemplated.

Why Protecting The Free Press Requires Protecting Trump’s Tweets (cross-post)

Jul 06 2017

The following was originally posted on Techdirt.

Sunday morning I made the mistake of checking Twitter first thing upon waking up. As if just a quick check of Twitter would ever be possible during this administration… It definitely wasn’t this past weekend, because waiting for me in my Twitter stream was Trump’s tweet of the meme he found on Reddit showing him physically beating the crap out of a personified CNN.

But that’s not what waylaid me. What gave me pause were all the people demanding it be reported to Twitter for violating its terms of service. The fact that so many people thought that was a good idea worries me, because the expectation that when bad speech happens someone will make it go away is not a healthy one. My concern inspired a tweet storm, which has now been turned into this post. Continue reading »