Jul 06, 2017

The following was originally posted on Techdirt.

Sunday morning I made the mistake of checking Twitter first thing upon waking up. As if just a quick check of Twitter would ever be possible during this administration… It definitely wasn’t this past weekend, because waiting for me in my Twitter stream was Trump’s tweet of the meme he found on Reddit showing him physically beating the crap out of a personified CNN.

But that’s not what waylaid me. What gave me pause were all the people demanding it be reported to Twitter for violating its terms of service. The fact that so many people thought that was a good idea worries me, because the expectation that when bad speech happens someone will make it go away is not a healthy one. My concern inspired a tweet storm, which has now been turned into this post. Continue reading »

May 26, 2017

The following was cross-posted on Techdirt.

We often talk about how protecting online speech requires protecting platforms, such as with Section 230 immunity and the safe harbors of the DMCA. But these statutory shields are not the only way the law needs to protect platforms in order to make sure the speech they carry is also protected.

Earlier this month, I helped Techdirt’s think tank arm, the Copia Institute, file an amicus brief in support of Yelp in a case called Montagna v. Nunis. Like many platforms, Yelp lets people post content anonymously. People are often only willing to speak when they can do so without revealing who they are (note how many people participate in the comments here without revealing their real names), which is why the right to speak anonymously has been found to be part and parcel of the First Amendment right of free speech. It’s also why sites like Yelp allow anonymous posting: often that’s the only way users will feel comfortable writing reviews candid enough to be useful to the people who depend on those reviews to make informed decisions.

But as we also see, people who don’t like the things said about them often try to attack their critics, and one way they do this is by trying to strip those speakers of their anonymity. True, sometimes online speech can cross the line and actually be defamatory, in which case being able to discover the identity of the speaker is important. This case in no way prevents legitimately aggrieved plaintiffs from using subpoenas to discover the identities of those whose unlawful speech has injured them so they can sue for relief. Unfortunately, however, it is not just people with legitimate claims who are sending subpoenas; in many instances they are being sent by people objecting to speech that is perfectly legal, and that’s a problem. Unmasking the speakers behind protected speech not only violates their First Amendment right to speak anonymously; it also chills speech generally, by rendering illusory the anonymity protection that plenty of entirely legal speech depends on.

There is a lot that can and should be done to close off this vector of attack on free speech. One important measure is to make sure platforms are able to resist the subpoenas they get demanding they turn over whatever identifying information they have. There are practical reasons why they can’t always fight them — for instance, like DMCA takedown notices, they may simply get too many — but it is generally in their interest to try to resist illegitimate subpoenas targeting the protected speech posted anonymously on their platforms so that their users will not be scared away from speaking on their sites.

But when Yelp tried to resist the subpoena connected with this case, the court refused to let them stand in to defend the user’s speech interest. Worse, it sanctioned(!) Yelp for even trying, thus making platforms’ efforts to stand up for their users even more risky and expensive than they already are.

So Yelp appealed, and we filed an amicus brief supporting their effort. Fortunately, earlier this year Glassdoor won an important ruling from a California appellate court validating attempts by platforms to quash subpoenas on behalf of their users. That decision discussed why the First Amendment and the California Constitution require platforms to have this ability to quash subpoenas targeting protected speech, and hopefully this particular appeals court will agree with its sister court and make clear that platforms are allowed to fight off subpoenas like these. As we pointed out in our brief, both state and federal law and policy require online speech to be protected, and preventing platforms from resisting subpoenas is out of step with those stated policy goals and constitutional requirements.

More on the First Amendment problems with DMCA Section 512

Feb 23, 2017

Over at Techdirt there’s a write-up of the latest comment I submitted on behalf of the Copia Institute as part of the Copyright Office’s study on the operation of Section 512 of the Digital Millennium Copyright Act. As we’ve told the Copyright Office before, that operation has had a huge impact on online free speech. (Those comments have also been cross-posted here.)

In some ways this impact is good: providing platforms with protection from liability for their users’ content means they can be available to facilitate that content and speech. But all too often, and in all too many ways, the practical impact on free speech has been negative, with speech far more vulnerable to censorship via takedown notice than it ever would have been if the person objecting to it (even for copyright-related reasons) had to go to court to get an injunction to take it down. Not only is the speech itself more vulnerable than it should be, but the protection platforms depend on ends up more vulnerable as well, because platforms must put it at risk every time they refuse to act on a takedown notice, no matter how invalid that notice may be.

Our earlier comment pointed out in some detail how the current operation of the DMCA has been running afoul of the protections the First Amendment is supposed to afford speech, and in this second round of comments we highlighted some further deficiencies. In particular, we reminded the Copyright Office of the problem of “prior restraint,” which the First Amendment also prohibits. Prior restraint is what happens when speech is punished before there has been any adjudication establishing that it deserves to be punished. The reason the First Amendment prohibits prior restraint is that it does no good to learn only after speech has been removed that it was protected: once it has been removed, the damage has already been done.

Making sure that legitimate speech cannot be removed is why we normally require the courts to carefully adjudicate whether removal can be ordered before that removal is allowed. But with the DMCA there is no such judicial check: people can send demands for all sorts of content to be removed, even content that isn’t actually infringing, because there is little to deter them so long as Section 512(f) continues to have no teeth. Platforms are instead forced to treat every takedown notice as a legitimate demand, regardless of whether it is. Not only does this mean they need to delete the content but, in the wake of some recent cases, it seems they must also potentially hold each allegation against their user and then cut that user off from their services once the user has accrued too many such accusations, again regardless of whether any of those accusations were valid.

As we did before, we counseled the Copyright Office to return to first principles: the DMCA was supposed to enhance online free speech, and it’s important to make sure that all of its provisions work together to do just that. To the extent it may be appropriate for the Copyright Office to make recommendations on this front, one is to remind all concerned that the penalty articulated in Section 512(f) to sanction bad takedown notices can and should be applied according to a flexible standard, rather than the rigid one courts have lately adopted. In any case, however, the Copyright Office certainly should not be advocating for changes to any provisions, or their interpretations, that would make the DMCA any less compatible with the First Amendment than it has already tended to be.

Dec 17, 2016

The following was recently published on Techdirt, although with a different title.

Regardless of what one thinks about the apparent result of the 2016 election, it will inevitably present a number of challenges for America and the world. As Mike wrote about last week, those challenges will touch on many of the tech policy issues often discussed here. The following is a closer look at some of the implications (and opportunities) with respect to several of them, given the unique hallmarks of Trump and his proposed administration. Continue reading »

Apr 08, 2016

The following is Section III.C of the comment I submitted in the Copyright Office’s study on the operation of Section 512 of the copyright statute.

Questions #16 and #17 more specifically contemplate the effectiveness of the put-back process articulated at subsection 512(g).  As explained in Section III.B, this mechanism is not effective for restoring wrongfully removed content and is little used.  But it is worth taking a moment here to further explore the First Amendment harms wrought by the DMCA on both Internet users and service providers.[1]

It is part and parcel of First Amendment doctrine that people are permitted to speak, and to speak anonymously.[2]  Although that anonymity can be stripped in certain circumstances, there is nothing about an allegation of copyright infringement that should cause it to be stripped automatically.  Particularly in light of copyright law incorporating free speech principles,[3] this anonymity cannot be more fragile than it would be in any other circumstance where speech is subject to legal challenge.  The temptation to characterize all alleged infringers as malevolent pirates who get what they deserve must be resisted; the DMCA targets all speakers and all speech, no matter how fair or necessary to public discourse that speech is.

And yet, with the DMCA, not only is speech itself more vulnerable to censorship via copyright infringement claim than it would be for other types of allegations,[4] but so are the necessary protections speakers depend on to be able to speak.[5]  Between the self-identification requirements of subsection 512(g) put-back notices and the ease of demanding user information with subsection 512(h) subpoenas, which also need not be predicated on actual lawsuits,[6] Internet speakers on the whole must fear the loss of their privacy whenever anyone raises an infringement claim, no matter how illegitimate or untested that claim may be.  Given the ease of concocting an invalid infringement claim,[7] and the lack of any incentive not to,[8] the DMCA gives all-too-ready access to the identities of Internet users to the people least deserving of it, at the expense of those who most need it.[9]

Furthermore, the DMCA also compromises service providers’ own First Amendment interests in developing the forums and communities they would choose.  The very design of the DMCA puts service providers at odds with their users, forcing them to be antagonistic to their own customers and their own business interests as a condition of protecting those interests.  Attempts to protect their forums or their users can expose them to tremendous costs and potentially incalculable risk, and all of this harm flows from mere allegation that never need be tested in a court of law.  The DMCA forces service providers to enforce censorship compelled by a mere takedown notice, compromise user privacy in response to subsection 512(h) subpoenas (or devote significant resources to trying to quash them), and, vis-à-vis Questions #22 and #23, disconnect users according to termination policies whose sufficiency cannot be known until a court decides they were insufficient.[10]

The repeat infringer policy requirement of subsection 512(i)(A) exemplifies the statutory problem with many of the DMCA’s safe harbor requirements.  A repeat infringer policy might only barely begin to be legitimate if it required the disconnection of a user after a certain number of judicial findings of liability for acts of infringement the user had used the service provider to commit.  But at least one service provider has lost its safe harbor for not permanently disconnecting users after only a certain number of allegations, even though those allegations had never been tested in a court consistent with the principles of due process or the prohibition on prior restraint.[11]

In no other context would we find these sorts of government incursions against the rights of speakers constitutional, robbing them of their speech, anonymity, and the opportunity to further speak, without adequate due process.  These incursions do not suddenly become constitutionally sound just because the DMCA coerces service providers to be the agent committing these acts instead.
Continue reading »

Apr 07, 2016

The following is Section III.B of the comment I submitted in the Copyright Office’s study on the operation of Section 512 of the copyright statute.

Question #12 asks if the notice-and-takedown process sufficiently protects against fraudulent, abusive, or unfounded notices and what should be done to address this concern.  Invalid takedown notices are most certainly a problem,[1] and the reason is that the system itself invites them.  As discussed in Section II.B, the notice-and-takedown regime is inherently a censorship regime, and it can be a very successful one because takedown notice senders can simply point to content they want removed and use the threat of liability as a gun to the service provider’s head to force its removal, lest the service provider risk its safe harbor protection.

Thanks to courts under-enforcing subsection 512(f), they can do this without fear of judicial oversight.[2]  But it isn’t just the lax subsection 512(f) standard that allows abusive notices to be sent without fear of accountability.  Even though the DMCA includes put-back provisions at subsection 512(g), we see relatively few instances of them being used.[3]  The DMCA is a complicated statute, and the average non-lawyer may not know these provisions exist or how to use them.  Furthermore, trying to use them puts users in the crosshairs of the party gunning for their content (and, potentially, for them as people) by forcing them to give up their right to anonymous speech in order to keep that speech from being censored.  All of these complications are significant deterrents to users effectively defending their own content, content that will have already been censored (these measures only allow content to be restored after the censorship damage has been done).[4]  Ultimately there are no real checks on abusive takedown notices apart from what the service provider is willing and able to risk by reviewing and rejecting them.[5]  Given the magnitude of that risk, however, it cannot remain the sole stopgap measure to keep this illegitimate censorship from happening.

Continuing on, Question #13 asks whether subsection 512(d), addressing “information location tools,” has been a useful mechanism to address infringement “that occurs as a result of a service provider’s referring or linking to infringing content.”  Purely as a matter of logic the answer cannot possibly be yes: simply linking to content has no bearing on whether that content is or is not infringing.  The entire notion that a service provider could face liability simply for knowing where information resides stretches U.S. copyright law beyond recognition.  That sort of knowledge, and the sharing of that knowledge, should never be illegal, particularly in light of the Progress Clause, upon which copyright law is predicated and authorized, and particularly when the mere act of sharing that knowledge in no way directly implicates any exclusive right held by a copyright holder in that content.[6]  Subsection 512(d) exists entirely as a means and mode of censorship, once again blackmailing service providers into the forced forgetting of information they once knew, irrespective of whether the content they are being forced to forget is ultimately infringing or not.  As discussed in Section II.B above, there is no way for the service provider to definitively know.
Continue reading »

Jun 16, 2013

While I originally intended this blog to focus only on issues where cyberlaw collided with criminal law, I’ve come to realize that this sort of analysis is advanced by discussing the underlying issues separately, even when they don’t implicate criminal law or even technology. For example, discussions about how copyright infringement is being criminally prosecuted are aided by discussion of copyright policy generally. Similarly, discussions about shield laws for bloggers are advanced by discussions of shield laws generally, so I’ve decided to import a post on that subject I recently wrote on my personal blog for readers of this one:

Both Ken @ Popehat and “Gideon” at his blog have posts on the position reporter Jana Winter finds herself in. To briefly summarize, the contents of the diary of the alleged Aurora, CO, shooter ended up in her possession, ostensibly given to her by a law enforcement officer with access to it and in violation of judicial orders forbidding its disclosure. She then reported on those contents. She is not in trouble for having done the reporting; the problem is, the investigation into who broke the law by providing the information to her in the first place has reached an apparent dead end, and thus the judge in the case wants to compel her, under penalty of contempt that might include jailing, to disclose the source who provided it, despite her having promised to protect the source’s identity.

In his post Gideon makes a compelling case for the due process issues at stake here. What’s especially notable about this situation is that the investigation isn’t just an investigation into some general wrongdoing; it’s an investigation into wrongdoing by police that threatens to compromise the accused’s right to a fair trial. However you might feel about him and the crimes with which he’s charged, the very fact that you might have such strong feelings is exactly why the court imposed a gag order preventing the disclosure of such sensitive information: to try to preserve an unbiased jury that could judge him fairly. That is a right the Constitution entitles him to, irrespective of his ultimate innocence or guilt, and one the police have no business trying to undermine.

Ken goes even further, noting the incredible danger to everyone when police and journalists become too chummy, as perhaps happened here. Police power is power, and left unchecked it can become tyrannically abusive. Journalists are supposed to help be that check, and when they are not, when they become little more than the PR arm of the police, we are all less safe from the inherent danger that police power poses.

But that is why, as Ken and Gideon wrestle with the values of the First Amendment versus those of the Fifth and Sixth, the answer MUST resolve in favor of the First. There is no way to split the baby such that we vindicate the latter interests here without inadvertently jeopardizing these and other important interests further in the future. Continue reading »

Newsman’s privilege and blogging

Apr 09, 2013

I found myself blogging about journalist shield law at my personal blog today. As explained in that post, an experience as the editor of my high school paper has made newsman’s privilege a topic near and dear to my heart. So, as food for thought here, I thought I would resurrect a post about how newsman’s privilege interacts with blogging that I wrote a few years ago on the now-defunct blog I kept as a law student. It was originally written and edited in 2006/2007, with a few more edits for clarity now.

At a blogging colloquium at Harvard Law School [note: in April 2006] Eugene Volokh gave a presentation on the free speech protections that might be available for blogging, with the important (and, in my opinion, eminently reasonable) suggestion that free speech protections should not be medium-specific. In other words, if these protections would be available to you if you’d put your thoughts on paper, they should be available if you’d put them on a blog. Continue reading »