Sep 01, 2018
 

This post originally appeared on Techdirt on 2/22/18.

Last week was a big week for dramatically bad copyright rulings from the New York federal courts: the one finding people liable for infringement if they embed others’ content in their own webpages, and this one about 5Pointz, where a court has found a building owner liable for substantial monetary damages for having painted his own building. While many have hailed this decision, including those who have mistakenly viewed it as a win for artists, this post explains why it is actually bad for everyone. Continue reading »

Sep 01, 2018
 

This post originally appeared on Techdirt on 2/3/18.

With the event at Santa Clara earlier this month, and the companion essays published here, we’ve been talking a lot lately about how platforms moderate content. It can be a challenge for a platform to balance dealing with the sometimes troubling content it finds itself intermediating on the one hand against free speech concerns on the other. But at least, thanks to Section 230, platforms have been free to do the best they can to manage these competing interests. However well or poorly you think they make these decisions now, those decisions would not come out any better without that statutory protection insulating platforms from legal consequences when they do not opt to remove absolutely everything that could invite trouble. If they had to contend with the specter of liability in making these decisions, platforms would inevitably play a much more censoring role, at the expense of legitimate user speech.

Fearing such a result is why the Copia Institute filed an amicus brief at the Ninth Circuit last year in Fields v. Twitter, one of the many “how dare you let terrorists use the Internet” cases that keep getting filed against Internet platforms. While it’s problematic that they keep getting filed, they have fortunately not tended to get very far. I say “fortunately,” because although it is terrible what has happened to the victims of these attacks, if platforms could be liable for what terrorists do it would end up chilling platforms’ ability to intermediate any non-terrorist speech. Thus we, along with the EFF and the Internet Association (representing many of the bigger Internet platforms), had all filed briefs urging the Ninth Circuit to find, as the lower courts have tended to, that Section 230 insulates platforms from these types of lawsuits.

A few weeks ago the Ninth Circuit issued its decision. The good news is that the decision brings this particular case to an end and hopefully will deter future ones. However, the court did not base its reasoning on Section 230. That was somewhat disappointing, because we saw this case as an important opportunity to buttress Section 230’s critical statutory protection, but by not speaking to the statute at all the court also didn’t undermine it, and the way it ruled isn’t actually bad. By focusing instead on the language of the Anti-Terrorism Act itself (the statute barring the material support of terrorists), the court was still able to lessen the specter of legal liability that would otherwise chill platforms and force them to censor more speech.

In fact, it may even be better that the court ruled this way. The result is not fundamentally different from what a decision based on Section 230 would have produced: just as the court found the ATA would require some direct furtherance of the terrorist act by the platform, Section 230 would have required the platform’s direct involvement in the creation of the user content furthering the act before the platform could potentially be liable for its consequences. But the more work Section 230 does to protect platforms legally, the more annoyed people seem to get at it politically. So by not being relevant to the adjudication of these sorts of tragic cases, it won’t throw more fuel on the political fire seeking to undermine the important speech-protective work Section 230 does, and it hopefully will remain safely on the books for the next time we need it.

Sep 01, 2018
 

This post originally appeared on Techdirt on 1/29/18.

Never mind all the other reasons Deputy Attorney General Rod Rosenstein’s name has been in the news lately… this post is about his comments at the State of the Net conference in DC on Monday. In particular: his comments on encryption backdoors.

As he and so many other government officials have before, he continued to press for encryption backdoors, as if it were possible to have a backdoor and a functioning encryption system. He allowed that the government would not itself need to have the backdoor key; it could simply be a company holding onto it, he said, as if this qualification would lay all concerns to rest.

But it does not, and so near the end of his talk I asked the question, “What is a company to do if it suffers a data breach and the only thing compromised is the encryption key it was holding onto?”

There were several concerns reflected in this question. One relates to what the poor company is to do. It’s bad enough when a company experiences a data breach and user information is compromised. Not only does a breach undermine the company’s relationship with its users, but, recognizing how serious this problem is, authorities are increasingly developing policies instructing companies how to respond to such a situation, and a company can face significant legal liability if it does not comply with these requirements.

But if an encryption key is taken, it is much more than basic user information, financial details, or even the pool of potentially rich and varied data related to the user’s interactions with the company that is at risk. Rather, it is every single bit of information the user has ever depended on the encryption system to secure that stands to be compromised. What is the appropriate response of a company whose data breach has now stripped its users of all the protection they depended on for all this data? How can it even begin to mitigate the resulting harm? Just what would the government officials who required the company to keep this backdoor key now propose it do? Particularly if the government is going to force companies into the position of holding onto these keys, companies are going to need answers to these questions if they are to be able to afford to be in the encryption business at all.
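To illustrate why a single escrowed key puts everything at risk, here is a minimal, purely hypothetical sketch in Python (using the third-party `cryptography` library). The escrow arrangement, key names, and messages are invented for illustration and do not describe any real provider’s system; the point is simply that whoever obtains the one escrowed key can unwrap every user’s key and read every message the provider has ever retained.

```python
# Hypothetical sketch of a key-escrow ("backdoor key") design, for illustration only.
from cryptography.fernet import Fernet  # pip install cryptography

# The single escrowed key the provider is required to hold onto.
escrow_key = Fernet.generate_key()
escrow = Fernet(escrow_key)

# Each user gets their own key, but it is wrapped (encrypted) under the
# escrow key so that messages can later be recovered on demand.
stored_records = {}
for user in ("alice", "bob"):
    user_key = Fernet.generate_key()
    stored_records[user] = {
        "wrapped_key": escrow.encrypt(user_key),
        "ciphertexts": [Fernet(user_key).encrypt(f"{user}'s private message".encode())],
    }

# Breach scenario: the ONLY thing stolen is escrow_key. That alone unwraps
# every user's key and decrypts every message ever retained.
attacker = Fernet(escrow_key)
for user, record in stored_records.items():
    recovered_key = attacker.decrypt(record["wrapped_key"])
    for ciphertext in record["ciphertexts"]:
        print(user, Fernet(recovered_key).decrypt(ciphertext).decode())
```

Unlike a breach of a password database or a table of credit card numbers, there is nothing partial about this failure mode: the compromise reaches backward to everything already encrypted, which is exactly the harm the question was trying to get Rosenstein to grapple with.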

Which leads to the other idea I was hoping the question would capture: that encryption policy and cybersecurity policy are not two distinct subjects. They interrelate. So when government officials worry about what bad actors do, as Rosenstein’s comments reflected, it can’t lead to the reflexive demand that encryption be weakened simply because, as they reason, bad actors use encryption. Not when the same officials are also worried about bad actors breaching systems, because this sort of weakened encryption so significantly raises the cost of those breaches (as well as potentially making them easier).

Unfortunately, Rosenstein had no good answer. There was a lot of equivocation, punctuated with the assertion that experts had assured him it was feasible to create backdoors and keep them safe. Time ran out before anyone could ask the follow-up question of exactly who these mysterious experts were, especially in light of so many other experts agreeing that such a solution is not possible, but perhaps that is something Senator Wyden can find out.

Sep 01, 2018
 

This post originally appeared on Techdirt on 1/24/18.

A few weeks ago we posted an update on Montagna v. Nunis. This was a case where a plaintiff subpoenaed Yelp for the identity of a user. The trial court originally denied Yelp’s attempt to quash the subpoena – and sanctioned it for trying – on the grounds that platforms had no right to stand in for their users to assert their First Amendment rights. We filed an amicus brief in support of Yelp’s appeal of that decision, which fortunately the Court of Appeal reversed, joining another Court of Appeal that earlier in the year had also decided that of course it was ok for platforms to try to quash subpoenas seeking to unmask their users.

Unfortunately, that was only part of what this Court of Appeal decided. Even though it agreed that Yelp could TRY to quash a subpoena, it decided that Yelp couldn’t quash this particular one. That’s unfortunate for the user, who was just unmasked. But what makes it unfortunate for everyone is that this decision was fully published, which means it can be cited as precedent by other plaintiffs who want to unmask users. While having the first part of the decision affirming Yelp’s right to quash the subpoena is a good thing, the logic the Court used in the second part makes it a lot easier for plaintiffs to unmask users – even when they really shouldn’t be entitled to do so.

So Yelp asked the California Supreme Court to partially depublish the ruling – or, in other words, to make the bad parts of it stop being precedent that subsequent litigants can cite in their unmasking attempts (there are rules that prevent California lawyers from citing unpublished cases in their arguments, except under extremely limited circumstances). And this week we filed our own brief at the California Supreme Court in support of Yelp’s request, arguing that the Court of Appeal’s analysis was inconsistent with other California policy and precedent protecting speech, and that unless the ruling is depublished it will lead to protected speech being chilled.

None of this will change the outcome of the earlier decision – the user will remain unmasked. But hopefully it will limit the effect of that Court of Appeal’s decision with respect to the unmasking to the facts of that particular case.

Sep 01, 2018
 

This post originally appeared on Techdirt on 1/22/18.

Shortly after Trump was elected I wrote a post predicting how things might unfold on the tech policy front with the incoming administration. It seems worth taking stock, now almost a year into it, to see how those predictions may have played out. Continue reading »

Sep 01, 2018
 

This post originally appeared on Techdirt on 12/12/17.

Last week, Mike and I were at a conference celebrating the 20th anniversary of the Supreme Court decision in Reno v. ACLU, a seminal case that declared that the First Amendment applies online. What makes the case so worth a conference celebrating it is not just what it meant as a legal matter – it was a significant step forward in First Amendment jurisprudence – but also what it meant as a practical matter. This decision was hugely important in allowing the internet to develop into what it is today, and that evolution may not be something we adequately appreciate. It’s easy to forget and pretend the internet we know today was always a ubiquitous presence, but it wasn’t, and it certainly wasn’t back then. Indeed, it’s quite striking just how much has changed in just two decades.

So this seemed like a good occasion to look back at how things were then. The attached paper is a re-publication of the honors thesis I wrote in 1996 as a senior at the University of California at Berkeley. As the title indicates, it was designed to study internet adoption among my fellow students, who had not yet all started using it. Even those who had were largely dependent on the University to provide their access, and that access had only recently started to be offered on any significant campus-wide basis. And not all of the people who had started using the internet found it to be something their lives necessarily needed. (For instance, when asked if they would continue to use the internet after the University no longer provided their access, a notable number of people said no.) This study tried to examine the influences and reasons upon which the decision to use, or not use, the internet pivoted.

I do of course have some hesitation, now a couple of decades further into my career, about calling attention to work I did as a stressed-out undergraduate. However, I still decided to dig it up and publish it, because there aren’t many snapshots documenting internet usage from that time. And that’s a problem, because it’s important to understand how the internet transitioned from being an esoteric technology used only by some into a much more pervasive one seemingly used by nearly everyone, and why that change happened, especially if we want to understand how it will continue to change, and how we might want to shape that change. All too often it seems tech policy is made with too little serious consideration of the sociology behind how people use the internet – the human decisions internet usage represents – and that sociology really needs to be part of the conversation more. Hopefully studies like this one can help with that.

Nov 19, 2017
 

Originally posted on Techdirt November 15, 2017.

Well, I was wrong: last week I lamented that we might never know how the Ninth Circuit ruled on Glassdoor’s attempt to quash a federal grand jury subpoena served upon it demanding it identify users. Turns out, now we do know: two days after the post ran, the court publicly released its decision refusing to quash the subpoena. It’s a decision that doubles down on everything wrong with the original district court decision that also refused to quash it, only now with handy-dandy Ninth Circuit precedential weight.

Like the original ruling, it clings to the Supreme Court’s decision in Branzburg v. Hayes, a case where the Supreme Court explored the ability of anyone to resist a grand jury subpoena. But in doing so it manages to ignore other, more recent, Supreme Court precedents that should have led to the opposite result.

Here is the fundamental problem with both the district court and Ninth Circuit decisions: anonymous speakers have the right to speak anonymously. (See, e.g., the post-Branzburg Supreme Court decision McIntyre v. Ohio Elections Commission). Speech rights also carry forth onto the Internet. (See, e.g., another post-Branzburg Supreme Court decision, Reno v. ACLU). But if the platforms hosting that speech can always be forced to unmask their users via grand jury subpoena, then there is no way for that right to ever meaningfully exist in the context of online speech. Continue reading »

Nov 19, 2017
 

Cross-posted from Techdirt November 14, 2017.

Earlier this year I wrote about Yelp’s appeal in Montagna v. Nunis. This was a case where a plaintiff had subpoenaed Yelp to unmask one of its users and Yelp tried to resist the subpoena. Not only had the lower court refused to quash the subpoena, but it sanctioned Yelp for having tried to quash it. Per the court, Yelp had no right to try to assert the First Amendment rights of its users as a basis for resisting a subpoena. As we said in the amicus brief I filed for the Copia Institute in Yelp’s appeal of the ruling, if the lower court were right it would be bad news for anonymous speakers, because if platforms could not resist unfounded subpoenas, users would lose an important line of defense against attempts to unmask them for no legitimate reason.

Fortunately, a California appeals court just agreed it would be problematic if platforms could not push back against these subpoenas. Not only has this decision avoided creating inconsistent law in California (earlier this year a different California appeals court had reached a similar conclusion), but now there is even more language on the books affirming that platforms are able to try to stand up for their users’ First Amendment rights, including their right to speak anonymously. As we noted, platforms can’t always push back against these discovery demands, but it is often in their interest to try to protect the user communities that provide the content that makes their platforms valuable. If they never could, it would seriously undermine those user communities and all the content these platforms enable.

The other bit of good news from the decision is that the appeals court overturned the sanction award against Yelp. It would have significantly chilled platforms if they had to think twice before standing up for their users because of how much it could cost them financially for trying to do so.

But any celebration of this decision needs to be tempered by the fact that the appeals court also decided to uphold the subpoena in question. While it didn’t fault Yelp for having tried to defend its users, and, importantly, it found that Yelp had the legal ability to do so, it gave short shrift to that defense.

The test that California uses to decide whether to uphold or quash a subpoena comes from a case called Krinsky, and it asks whether the plaintiff has made a “prima facie” case. In other words, we don’t know if the plaintiff necessarily would win, but we want to ensure that it’s at least possible for plaintiffs to prevail on their claims before we strip speakers of their anonymity for no good reason. That’s all well and good, but thanks to the appeals court’s extraordinarily generous reading of the statements at issue in this case, one that went out of its way to infer the possibility of falsity in what were in essence statements of opinion (which are ordinarily protected by the First Amendment), the appeals court decided that the test had been satisfied.

This outcome is unfortunate not only for the user whose identity will now be revealed to the plaintiff but for all future speakers, now that there is an appellate decision on the books running through the “prima facie” balancing test in a way that so casually dismisses the protections speech normally has. It would at least have been better if the question of whether the subpoena should be quashed had been remanded to the lower court, where, even if that court still reached a decision that too easily punctured the First Amendment protection for online speech, it would have posed less of a risk to other speech in the future.

Nov 12, 2017
 

This post appeared on Techdirt on 11/10/17.  The anniversary it notes is today.

We have been talking a lot lately about how important Section 230 is for enabling innovation and fostering online speech, and, especially as Congress now flirts with erasing its benefits, how fortuitous it was that Congress ever put it on the books in the first place.

But passing the law was only the first step: for it to have meaningful benefit, courts needed to interpret it in a way that allowed it to have its protective effect on Internet platforms. Zeran v. America Online was one of the first cases to test the bounds of Section 230’s protection, and the first to find that protection robust. Had the court decided otherwise, we likely would not have seen the benefits the statute has since afforded.

This Sunday the decision in Zeran turns 20 years old, and to mark the occasion Eric Goldman and Jeff Kosseff have gathered together more than 20 essays from Internet lawyers and scholars reflecting on the case, the statute, and all of its effects. I have an essay there, “The First Hard Case: ‘Zeran v. AOL’ and What It Can Teach Us About Today’s Hard Cases,” as do many other advocates, including lawyers involved with the original case. Even people who are not fans of Section 230 and its legacy are represented. All of these pieces are worth reading and considering, especially by anyone interested in setting policy around these issues.

Nov 06, 2017
 

This post is the second in a series that ran on Techdirt about the harm done to online speech by unfettered discovery demands on platforms, demands the platforms are then prevented from talking about.

In my last post, I discussed why it is so important for platforms to be able to speak about the discovery demands they receive seeking to unmask their anonymous users. That candor is crucially important because it provides a check against abusive unmasking demands that would otherwise damage the key constitutional right to speak anonymously.

The earlier post rolled together several different types of discovery instruments (subpoenas, warrants, NSLs, etc.) because to a certain extent it doesn’t matter which one is used to unmask an anonymous user. The issue raised by all of them is that if their power to unmask an anonymous user is too unfettered, then it will chill all sorts of legitimate speech. And, as noted in the last post, the ability for a platform receiving an unmasking demand to tell others it has received it is a critical check against unworthy demands seeking to unmask the speakers behind lawful speech.

The details of each type of unmasking instrument do matter, though, because each one has different interests to balance and, accordingly, different rules governing how to balance them. Unfortunately, the rules that have evolved for any particular one are not always adequately protective of the important speech interests any unmasking demand necessarily affects. As is the case for the type of unmasking demand at issue in this post: a federal grand jury subpoena.

Grand jury subpoenas are very powerful discovery instruments, and with good reason: the government needs a powerful weapon to be able to investigate serious crimes. There are also important constitutional reasons why we equip grand juries with strong investigatory power: if charges are to be brought against people, due process requires that they be brought by a grand jury rather than through a more arbitrary exercise of government power. Grand juries are, however, largely at the disposal of government prosecutors, and thus a grand jury subpoena essentially functions as a government unmasking demand. The ability to compel information via a grand jury subpoena is therefore not a power we can allow to exist unchecked.

Which brings us to the story of the grand jury subpoena served on Glassdoor, which Paul Levy and Ars Technica wrote about earlier this year. It’s a story that raises three interrelated issues: (1) a poor balancing of the relevant interests, (2) a poor structural model that prevented a better balancing, and (3) a gag that has made it extraordinarily difficult to create a better rule governing how grand jury subpoenas should be balanced against important online speech rights. Continue reading »