Nov 19, 2017

Originally posted on Techdirt November 15, 2017.

Well, I was wrong: last week I lamented that we might never know how the Ninth Circuit ruled on Glassdoor’s attempt to quash a federal grand jury subpoena served upon it demanding that it identify its users. Turns out, now we do know: two days after the post ran, the court publicly released its decision refusing to quash the subpoena. It’s a decision that doubles down on everything wrong with the original district court decision that also refused to quash it, only now with handy-dandy Ninth Circuit precedential weight.

Like the original ruling, it clings to the Supreme Court’s decision in Branzburg v. Hayes, a case in which the Supreme Court explored whether anyone can resist a grand jury subpoena. But in doing so it manages to ignore other, more recent, Supreme Court precedents that should have led to the opposite result.

Here is the fundamental problem with both the district court and Ninth Circuit decisions: anonymous speakers have the right to speak anonymously. (See, e.g., the post-Branzburg Supreme Court decision McIntyre v. Ohio Elections Commission). Speech rights also extend to the Internet. (See, e.g., another post-Branzburg Supreme Court decision, Reno v. ACLU). But if the platforms hosting that speech can always be forced to unmask their users via grand jury subpoena, then there is no way for that right to ever meaningfully exist in the context of online speech.

Nov 19, 2017

Cross-posted from Techdirt November 14, 2017.

Earlier this year I wrote about Yelp’s appeal in Montagna v. Nunis. This was a case where a plaintiff had subpoenaed Yelp to unmask one of its users and Yelp tried to resist the subpoena. Not only had the lower court refused to quash the subpoena, but it sanctioned Yelp for having tried to quash it. Per the court, Yelp had no right to assert the First Amendment rights of its users as a basis for resisting a subpoena. As we said in the amicus brief I filed for the Copia Institute in Yelp’s appeal of the ruling, if the lower court were right it would be bad news for anonymous speakers: if platforms could not resist unfounded subpoenas, users would lose an important line of defense against attempts to unmask them for no legitimate reason.

Fortunately, a California appeals court just agreed it would be problematic if platforms could not push back against these subpoenas. Not only has this decision avoided creating inconsistent law in California (earlier this year a different California appeals court had reached a similar conclusion), but now there is even more language on the books affirming that platforms are able to stand up for their users’ First Amendment rights, including their right to speak anonymously. As we noted, platforms can’t always push back against these discovery demands, but it is often in their interests to try to protect the user communities that provide the content that makes their platforms valuable. If they never could, it would seriously undermine those user communities and all the content these platforms enable.

The other bit of good news from the decision is that the appeals court overturned the sanction award against Yelp. It would have significantly chilled platforms if they had to think twice before standing up for their users because of how much it could cost them financially for trying to do so.

But any celebration of this decision needs to be tempered by the fact that the appeals court also decided to uphold the subpoena in question. While it didn’t fault Yelp for having tried to defend its users, and, importantly, it found that Yelp had the legal ability to do so, it gave short shrift to that defense.

The test that California uses to decide whether to uphold or quash a subpoena comes from a case called Krinsky, which asks whether the plaintiff has made a “prima facie” case. In other words, we don’t know if the plaintiff necessarily would win, but we want to ensure that it’s at least possible for the plaintiff to prevail on the claims before we strip a speaker of anonymity for no good reason. That’s all well and good, but the appeals court read the statements at issue extraordinarily generously, going out of its way to infer the possibility of falsity in what were in essence statements of opinion (which is ordinarily protected by the First Amendment), and on that reading it decided the test had been satisfied.

This outcome is unfortunate not only for the user whose identity will now be revealed to the plaintiff but for all future speakers, now that there is an appellate decision on the books running through the “prima facie” test in a way that so casually dismisses the protections speech normally has. It would at least have been better if the question of whether the subpoena should be quashed had been remanded to the lower court, where, even if that court still reached a decision that too easily punctured the First Amendment protection for online speech, it would have posed less of a risk to other speech in the future.

Nov 6, 2017

This post is the second in a series that ran on Techdirt about the harm done to online speech when platforms face unfettered discovery demands that they are then prevented from talking about.

In my last post, I discussed why it is so important for platforms to be able to speak about the discovery demands they receive seeking to unmask their anonymous users. Without some sort of check against their abuse, unmasking demands can destroy the key constitutional right to speak anonymously, and that candor is a crucial check.

The earlier post rolled together several different types of discovery instruments (subpoenas, warrants, NSLs, etc.) because to a certain extent it doesn’t matter which one is used to unmask an anonymous user. The issue all of them raise is that if their power to unmask an anonymous user is too unfettered, it will chill all sorts of legitimate speech. And, as noted in the last post, the ability of a platform receiving an unmasking demand to tell others it has received it is a critical check against unworthy demands seeking to unmask the speakers behind lawful speech.

The details of each type of unmasking instrument do matter, though, because each one has different interests to balance and, accordingly, different rules governing how to balance them. Unfortunately, the rules that have evolved for any particular one are not always adequately protective of the important speech interests any unmasking demand necessarily affects. As is the case for the type of unmasking demand at issue in this post: a federal grand jury subpoena.

Grand jury subpoenas are very powerful discovery instruments, and with good reason: the government needs a powerful weapon to be able to investigate serious crimes. There are also important constitutional reasons why we equip grand juries with strong investigatory power: if charges are to be brought against people, due process demands that they be brought by the grand jury rather than through a more arbitrary exercise of government power. Grand juries are, however, largely at the disposal of government prosecutors, and thus a grand jury subpoena essentially functions as a government unmasking demand. The ability to compel information via a grand jury subpoena is therefore not a power we can allow to exist unchecked.

Which brings us to the story of the grand jury subpoena served on Glassdoor, which Paul Levy and Ars Technica wrote about earlier this year. It’s a story that raises three interrelated issues: (1) a poor balancing of the relevant interests, (2) a poor structural model that prevented a better balancing, and (3) a gag that has made it extraordinarily difficult to create a better rule governing how grand jury subpoenas should be balanced against important online speech rights.

Apr 7, 2016

The following is Section III.B of the comment I submitted in the Copyright Office’s study on the operation of Section 512 of the copyright statute.

Question #12 asks whether the notice-and-takedown process sufficiently protects against fraudulent, abusive, or unfounded notices and what should be done to address this concern.  Invalid takedown notices are most certainly a problem,[1] and the reason is that the system itself causes them to be a problem.  As discussed in Section II.B, the notice-and-takedown regime is inherently a censorship regime, and it can be a very successful one, because takedown notice senders can simply point to content they want removed and use the threat of liability as a gun to the service provider’s head, forcing it to remove that content lest it risk its safe harbor protection.

Thanks to courts under-enforcing subsection 512(f), they can do this without fear of judicial oversight.[2]  But it isn’t just the lax subsection 512(f) standard that allows abusive notices to be sent without fear of accountability.  Even though the DMCA includes put-back provisions at subsection 512(g), we see relatively few instances of them being used.[3]  The DMCA is a complicated statute, and the average non-lawyer may not know these provisions exist or know how to use them.  Furthermore, trying to use them puts users in the crosshairs of the party gunning for their content (and, potentially, them as people) by forcing them to give up their right to anonymous speech in order to keep that speech from being censored.  All of these complications are significant deterrents to users being able to effectively defend their own content, content that would have already been censored (these measures would only allow the content to be restored, after the censorship damage has already been done).[4]  Ultimately there are no real checks on abusive takedown notices apart from what the service provider is willing and able to risk reviewing and rejecting.[5]  Given the enormity of this risk, however, it cannot remain the sole stopgap measure to keep this illegitimate censorship from happening.

Continuing on, Question #13 asks whether subsection 512(d), addressing “information location tools,” has been a useful mechanism to address infringement “that occurs as a result of a service provider’s referring or linking to infringing content.”  Purely as a matter of logic the answer cannot possibly be yes: simply linking to content has absolutely no bearing on whether that content is or is not infringing.  The entire notion that a service provider could face liability simply for knowing where information resides stretches U.S. copyright law beyond recognition.  That sort of knowledge, and the sharing of that knowledge, should never be illegal, particularly in light of the Progress Clause, upon which the copyright law is predicated and authorized, and particularly when the mere act of sharing that knowledge in no way itself directly implicates any exclusive right held by a copyright holder in that content.[6]  Subsection 512(d) exists entirely as a means and mode of censorship, once again blackmailing service providers into the forced forgetting of information they once knew, irrespective of whether the content they are being forced to forget is ultimately infringing or not.  As discussed in Section II.B above, there is no way for the service provider to definitively know.

Jul 7, 2015

The following is cross-posted from Popehat.

There is no question that the right of free speech necessarily includes the right to speak anonymously. This is partly because sometimes the only way for certain speech to be possible at all is with the protection of anonymity.

And that’s why so much outrage is warranted when bullies try to strip speakers of their anonymity simply because they don’t like what these people have to say, and why it’s even more outrageous when these bullies succeed. If anonymity is so fragile that speakers can be so easily unmasked, fewer people will be willing to say the important things that need to be said, and we all will suffer for the silence.

We’ve seen on these blog pages examples of both government and private bullies making specious attacks on the free speech rights of their critics, often by using subpoenas, both civil and criminal, to try to unmask them. But we’ve also seen another kind of attempt to identify Internet speakers, and it’s one we’ll see a lot more of if the proposal ICANN is currently considering is put into place.
