Jul 12 2017
 

The following was also posted on Techdirt.

It’s always hard to write about the policy implications of tragedies – the last thing their victims need is the politicization of what they suffered. At the same time, it’s important to learn what lessons we can from these events in order to avoid future ones. Earlier Mike wrote about the chilling effects on Grenfell residents’ ability to express their concerns about the safety of the building – chilling effects that may have been deadly – because they lived in a jurisdiction that allowed critical speech to be easily threatened. The policy concern I want to focus on now is how copyright law also interferes with safety and accountability both in the US and elsewhere.

I’m thinking in particular about the litigation Carl Malamud has found himself faced with because he dared to post legally-enforceable standards on his website as a resource for people who wanted ready access to the law that governed them. (Disclosure: I helped file amicus briefs supporting his defense in this litigation.) A lot of the discussion about the litigation has focused on the need for people to know the details of the law that governs them: while ignorance of the law is no excuse, as a practical matter people need a way to actually know what the law is if they are going to be expected to comply with it. Locking it away in a few distant libraries or behind paywalls is not an effective way of disseminating that knowledge.

But there is another reason why the general public needs to have access to this knowledge: not just because it governs them, but because others’ compliance with it obviously affects them. Think, for instance, about the tenants in these buildings, or any buildings anywhere: how can they be equipped to know whether the buildings they live in meet applicable safety standards if they can never see what those standards are? They instead are forced to trust that those with privileged access to that knowledge will have acted on it accordingly. But as the Grenfell tragedy has shown, that trust may be misplaced. “Trust, but verify,” it has been famously said. But without access to the knowledge necessary to verify that everything has been done properly, no one can make sure that it has. That makes the people who depend on this compliance vulnerable. And as long as copyright law is what prevents them from knowing whether there has been compliance, it is copyright law that makes them so.

Jul 06 2017
 

The following was originally posted on Techdirt.

Sunday morning I made the mistake of checking Twitter first thing upon waking up. As if just a quick check of Twitter would ever be possible during this administration… It definitely wasn’t this past weekend, because waiting for me in my Twitter stream was Trump’s tweet of the meme he found on Reddit showing him physically beating the crap out of a personified CNN.

But that’s not what waylaid me. What gave me pause were all the people demanding it be reported to Twitter for violating its terms of service. The fact that so many people thought that was a good idea worries me, because the expectation that when bad speech happens someone will make it go away is not a healthy one. My concern inspired a tweet storm, which has now been turned into this post.

Jun 13 2017
 

Cross-posted on Techdirt.

The Copia Institute filed another amicus brief this week, this time in Fields v. Twitter. Fields v. Twitter is one of a flurry of cases being brought against Internet platforms alleging that they are liable for the harms caused by the terrorists using their sites. The facts in these cases are invariably awful: often people have been brutally killed and their loved ones are seeking redress for their loss. There is a natural, and perfectly reasonable, temptation to give them some sort of remedy from someone, but as we argued in our brief, that someone cannot be an Internet platform.

There are several reasons for this, including some that have nothing to do with Section 230. For instance, even if Section 230 did not exist and platforms could be liable for the harms resulting from their users’ use of their services, for them to be liable there would have to be a clear connection between the use of the platform and the harm. Otherwise, based on the general rules of tort law, there could be no liability. In this particular case, for instance, there is a fairly weak connection between ISIS members using Twitter and the specific terrorist act that killed the plaintiffs’ family members.

But we left that point to Twitter to ably argue. Our brief focused exclusively on the fact that Section 230 should prevent a court from ever even reaching the tort law analysis. With Section 230, a platform should never find itself having to defend against liability for harm that may have resulted from how people used it. Our concern is that in several recent cases with their own terrible facts, the Ninth Circuit in particular has found itself willing to make exceptions to that rule. As much as we were supporting Twitter in this case, trying to help ensure the Ninth Circuit does not overturn the very good District Court decision that had correctly applied Section 230 to dismiss the case, we also had an eye to the long view of reversing this trend.

May 26 2017
 

The following was cross-posted on Techdirt.

We often talk about how protecting online speech requires protecting platforms, like with Section 230 immunity and the safe harbors of the DMCA. But these statutory shields are not the only way law needs to protect platforms in order to make sure the speech they carry is also protected.

Earlier this month, I helped Techdirt’s think tank arm, the Copia Institute, file an amicus brief in support of Yelp in a case called Montagna v. Nunis. Like many platforms, Yelp lets people post content anonymously. Often people are only willing to speak when they can do so without revealing who they are (note how many people participate in the comments here without revealing their real names), which is why the right to speak anonymously has been found to be part and parcel of the First Amendment right of free speech. It’s also why sites like Yelp let users post anonymously: often that’s the only way they will feel comfortable posting reviews candid enough to be useful to the people who depend on those sites to make informed decisions.

But as we also see, people who don’t like the things said about them often try to attack their critics, and one way they do this is by trying to strip those speakers of their anonymity. True, sometimes online speech can cross the line and actually be defamatory, in which case being able to discover the identity of the speaker is important. Nothing in this case prevents legitimately aggrieved plaintiffs from using subpoenas to discover the identity of those whose unlawful speech has injured them so they can sue for relief. Unfortunately, however, it is not just people with legitimate claims who are sending subpoenas; in many instances they are being sent by people objecting to speech that is perfectly legal, and that’s a problem. Unmasking the speakers behind protected speech not only violates their First Amendment right to speak anonymously; it also chills the speech the First Amendment is designed to foster by making the anonymity protection that plenty of legal speech depends on suddenly illusory.

There is a lot that can and should be done to close off this vector of attack on free speech. One important measure is to make sure platforms are able to resist the subpoenas they get demanding they turn over whatever identifying information they have. There are practical reasons why they can’t always fight them — for instance, like DMCA takedown notices, they may simply get too many — but it is generally in their interest to try to resist illegitimate subpoenas targeting the protected speech posted anonymously on their platforms so that their users will not be scared away from speaking on their sites.

But when Yelp tried to resist the subpoena connected with this case, the court refused to let them stand in to defend the user’s speech interest. Worse, it sanctioned(!) Yelp for even trying, thus making platforms’ efforts to stand up for their users even more risky and expensive than they already are.

So Yelp appealed, and we filed an amicus brief supporting their effort. Fortunately, earlier this year Glassdoor won an important California state appellate ruling that validated attempts by platforms to quash subpoenas on behalf of their users. That decision discussed why the First Amendment and the California Constitution required platforms to have this ability to quash subpoenas targeting protected speech, and hopefully this particular appeals court will agree with its sister court and make clear that platforms are allowed to fight off subpoenas like this. As we pointed out in our brief, both state and federal law and policy require online speech to be protected, and preventing platforms from resisting subpoenas is out of step with those stated policy goals and constitutional requirements.

Dec 17 2016
 

The following was recently published on Techdirt, although with a different title.

Regardless of what one thinks about the apparent result of the 2016 election, it will inevitably present a number of challenges for America and the world. As Mike wrote about last week, these challenges will touch on many of the tech policy issues often discussed here. The following is a closer look at some of the implications (and opportunities) with respect to several of them, given the unique hallmarks of Trump and his proposed administration.

Apr 08 2016
 

The following is Section III.C of the comment I submitted in the Copyright Office’s study on the operation of Section 512 of the copyright statute.

Questions #16 and #17 more specifically contemplate the effectiveness of the put-back process articulated at subsection 512(g).  As explained in Section III.B, this mechanism is not effective for restoring wrongfully removed content and is little used.  But it is worth taking a moment here to further explore the First Amendment harms wrought on both Internet users and service providers by the DMCA.[1]

It is part and parcel of First Amendment doctrine that people are permitted to speak, and to speak anonymously.[2]  Although that anonymity can be stripped in certain circumstances, there is nothing about the allegation of copyright infringement that should cause it to be stripped automatically.  Particularly in light of copyright law incorporating free speech principles,[3] this anonymity cannot be more fragile than it would be in any other circumstance where speech was subject to legal challenge.  The temptation to characterize all alleged infringers as malevolent pirates who get what they deserve must be resisted; the DMCA targets all speakers and all speech, no matter how fair or necessary to public discourse that speech is.

And yet, with the DMCA, not only is speech itself more vulnerable to censorship via copyright infringement claim than it would be for other types of allegations,[4] but so are the necessary protections speakers depend on to be able to speak.[5]  Between the self-identification requirements of subsection 512(g) put-back notices and the ease of demanding user information with subsection 512(h) subpoenas, which also do not need to be predicated on actual lawsuits,[6] Internet speakers on the whole must fear the loss of their privacy if anyone dares to assert an infringement claim, no matter how illegitimate or untested that claim may be.  Given the ease of concocting an invalid infringement claim,[7] and the lack of any incentive not to,[8] the DMCA gives all-too-ready access to the identities of Internet users to the people least deserving of it and at the expense of those who most need it.[9]

Furthermore, the DMCA also compromises service providers’ own First Amendment interests in developing the forums and communities they would choose.  The very design of the DMCA puts service providers at odds with their users, forcing them to be antagonistic to their own customers and their own business interests as a condition for protecting those interests.  Attempts to protect their forums or their users can expose them to tremendous costs and potentially incalculable risk, and all of this harm flows from mere allegations that never need be tested in a court of law.  The DMCA forces service providers to enforce censorship compelled by a mere takedown notice, compromise user privacy in response to subsection 512(h) subpoenas (or devote significant resources to trying to quash them), and, vis-à-vis Questions #22 and #23, disconnect users according to termination policies whose sufficiency cannot be known until a court decides they are insufficient.[10]

The repeat infringer policy requirement of subsection 512(i)(A) exemplifies the statutory problem with many of the DMCA’s safe harbor requirements.  A repeat infringer policy might only barely begin to be legitimate if it applied to the disconnection of a user after a certain number of judicial findings of liability for acts of infringement the user had used the service provider to commit.  But at least one service provider has lost its safe harbor for not permanently disconnecting users after only a certain number of allegations, even though those allegations had never been tested in a court consistent with the principles of due process or the prohibition on prior restraint.[11]

In no other context would we find these sorts of government incursions against the rights of speakers constitutional, robbing them of their speech, anonymity, and the opportunity to further speak, without adequate due process.  These incursions do not suddenly become constitutionally sound just because the DMCA coerces service providers to be the agent committing these acts instead.

Apr 07 2016
 

The following is Section III.B of the comment I submitted in the Copyright Office’s study on the operation of Section 512 of the copyright statute.

Question #12 asks if the notice-and-takedown process sufficiently protects against fraudulent, abusive, or unfounded notices and what should be done to address this concern.  Invalid takedown notices are most certainly a problem,[1] and the reason is that the system itself makes them one.  As discussed in Section II.B, the notice-and-takedown regime is inherently a censorship regime, and it can be a very successful censorship regime because takedown notice senders can simply point to content they want removed and use the threat of liability as a gun to the service provider’s head to force its removal, lest the service provider risk its safe harbor protection.

Thanks to courts under-enforcing subsection 512(f), they can do this without fear of judicial oversight.[2]  But it isn’t just the lax subsection 512(f) standard that allows abusive notices to be sent without fear of accountability.  Even though the DMCA includes put-back provisions at subsection 512(g), we see relatively few instances of them being used.[3]  The DMCA is a complicated statute, and the average non-lawyer may not know these provisions exist or know how to use them.  Furthermore, trying to use them puts users in the crosshairs of the party gunning for their content (and, potentially, them as people) by forcing them to give up their right to anonymous speech in order to keep that speech from being censored.  All of these complications are significant deterrents to users effectively defending their own content, content that will already have been censored (these measures only allow content to be restored after the censorship damage has been done).[4]  Ultimately there are no real checks on abusive takedown notices apart from what the service provider is willing and able to risk reviewing and rejecting.[5]  Given the enormity of this risk, however, that willingness cannot remain the sole stopgap measure to keep this illegitimate censorship from happening.

Continuing on, Question #13 asks whether subsection 512(d), addressing “information location tools,” has been a useful mechanism to address infringement “that occurs as a result of a service provider’s referring or linking to infringing content.”  Purely as a matter of logic the answer cannot possibly be yes: simply linking to content has absolutely no bearing on whether that content is or is not infringing.  The entire notion that there could be liability on a service provider for simply knowing where information resides stretches U.S. copyright law beyond recognition.  That sort of knowledge, and the sharing of that knowledge, should never be illegal, particularly in light of the Progress Clause, upon which the copyright law is predicated and authorized, and particularly when the mere act of sharing that knowledge in no way itself directly implicates any exclusive right held by a copyright holder in that content.[6]  Subsection 512(d) exists entirely as a means and mode of censorship, once again blackmailing service providers into the forced forgetting of information they once knew, irrespective of whether the content they are being forced to forget is ultimately infringing or not.  As discussed in Section II.B above, there is no way for the service provider to definitively know.

Comments on DMCA Section 512: The DMCA functions as a system of extra-judicial censorship

Apr 04 2016
 

The following is Section II.B of the comment I submitted in the Copyright Office’s study on the operation of Section 512 of the copyright statute.

Despite all the good that Section 230 and the DMCA have done to foster a robust online marketplace of ideas, the DMCA’s potential to deliver that good has been tempered by the particular structure of the statute.  Whereas Section 230 provides service providers a firm immunity from potential liability for user-supplied content,[1] the DMCA conditions its protection.[2]  And that condition is censorship.  The irony is that while the DMCA makes it possible for service providers to exist to facilitate online speech, through its notice-and-takedown system it does so at the expense of the very speech they exist to facilitate.

In a world without the DMCA, if someone wanted to enjoin content they would need to demonstrate to a court that they indeed owned a valid copyright and that the use of the content in question infringed that copyright before a court would compel its removal.  Thanks to the DMCA, however, they are spared both the procedural and the pleading burdens.  In order to cause content to be disappeared from the Internet, all anyone needs to do is send a takedown notice that merely points to the content and claims it as theirs.

Although some courts are now requiring takedown notice senders to consider whether the use of the content in question was fair,[3] there is no real penalty for the sender if they get it wrong or don’t bother.[4]  Instead, service providers are forced to become judge and jury, even though (a) they lack the information needed to properly evaluate copyright infringement claims,[5] (b) the sheer volume of takedown notices often makes case-by-case evaluation of them impossible, and (c) it can be a bet-the-company decision if the service provider gets it wrong, because their “error” may deny them the safe harbor and put them on the hook for infringement liability.[6]  Although there is both judicial and statutory recognition that service providers are not in the position to police user-supplied content for infringement,[7] there must also be recognition that they are similarly not in the position to police for invalid takedowns.  Yet they must, lest there be no effective check on these censorship demands.

Ordinarily the First Amendment and due process would not permit this sort of censorship, the censorship of an Internet user’s speech predicated on mere allegation.  Mandatory injunctions are disfavored generally,[8] and particularly so when they target speech and may represent impermissible prior restraint on speech that has not yet been determined to be wrongful.[9]  To the extent that the DMCA causes these critical speech protections to be circumvented it is consequently only questionably constitutional.  For the DMCA to be statutorily valid it must retain, in its drafting and interpretation, ample protection to see that these important constitutional speech protections are not ignored.

Comments on DMCA Section 512: Overview and conclusion

Apr 02 2016
 

At Congress’s request the Copyright Office recently initiated several studies looking into how parts of copyright law have been working.  In addition to commenting in the studies about Section 1201 of the copyright act and “software-enabled consumer electronics,” I also commented in the study looking into Section 512 — the portion of the copyright statute that creates safe harbors for service providers intermediating others’ content — on behalf of Floor64/The Copia Institute, the parent company of Techdirt.com, which both advises and educates on intermediary issues and, with Techdirt, is an intermediary itself. There is a post on Techdirt about the comment generally, and the entire comment (all 3600+ words…) is downloadable there, but I decided to cross-post each main section of it here as a series of discrete essays, one per day over the coming week.

To get started, here is an edited compilation of the sections that provide an overview of the argument.  Sections discussing each aspect of that argument will follow.

We file this comment to drive home the point that for the Internet to be the marketplace of ideas Congress anticipated and sought for it to be in 1998, it is integral that service providers retain durable and reliable protection from liability arising from user-generated content.  Furthermore, as long as Congress is taking the opportunity to study how the existing safe harbor has been functioning, we would flag several areas where it could be made to function better in light of these policy goals, as well as areas where it should be changed to make it as protective of speech as the Constitution requires.

With respect to this study [which invited comment via responses to 30 questions], just as history is written by the victors, records are written by those asking the questions.  The hazard is that questions tend to presume answers, even when the answers that they elicit may not necessarily be the answers that are most illuminating.

While there is specific input that can be proffered with respect to various parts of the statute, it would not do the inquiry justice to remain focused on statutory minutiae.  The DMCA is ostensibly designed to confront a specific policy problem.  It is fair, reasonable, and indeed necessary to ensure that this problem is well-defined and well-understood before determining whether, and to what extent, the DMCA is an appropriate or appropriately calibrated solution to it. 

Ultimately, however, it is not possible to have a valid copyright law that in any part is inconsistent with the Progress Clause or the First Amendment.  To the extent that the DMCA protects intermediaries, and with them the speech they foster, it is consistent with both of these constitutional precepts and limitations.  To the extent, however, that the DMCA subverts due process or otherwise compromises the First Amendment rights of either Internet users or service providers themselves to use and develop forums for information exchange on the Internet, it is not.  The statutory infirmities that have been leading to the latter outcome should therefore be corrected to make the DMCA’s protections for intermediaries and the speech they foster as durable as this important policy interest requires.

New Decision In Dancing Baby DMCA Takedown Case — And Everything Is Still A Mess (cross-post)

Mar 20 2016
 

The following originally appeared on Techdirt.

I got very excited yesterday when I saw a court system alert that there was a new decision out in the appeal of Lenz v. Universal. This was the Dancing Baby case where a toddler rocking out to a Prince song was seen as such an affront to Prince’s exclusive rights in his songs that his agent Universal Music felt it necessary to send a DMCA takedown notice to YouTube to have the video removed. Heaven forbid people share videos of their babies dancing to unlicensed music.

Of course, they shouldn’t need licenses, because videos like this one clearly make fair use of the music at issue. So Stephanie Lenz, whose video this was, through her lawyers at the EFF, sued Universal under Section 512(f) of the DMCA for having wrongfully caused her video to be taken down.

Last year, the Ninth Circuit heard the case on appeal and then in September issued a decision that generally pleased no one. Both Universal and Lenz petitioned for the Ninth Circuit to reconsider the decision en banc. En banc review was particularly important because the decision suggested that the panel felt hamstrung by the Ninth Circuit’s earlier decision in Rossi v. MPAA, a decision which had the effect of making it functionally impossible for people whose content had been wrongfully taken down to ever successfully sue the parties who had caused that to happen.

Although the updated language exorcises some unhelpful, under-litigated ideas that suggested automated takedown systems could be a “valid and good faith” way of processing takedowns while considering fair use, the new, amended decision does little to remediate any of the more serious underlying problems from the last version. The one bright spot from before fortunately remains: the Ninth Circuit has now made clear that fair use is something takedown notice senders must consider before sending their notices. But as for what happens when they don’t, or what happens when they get it wrong, that part is still a confusing mess. The reissued decision doubles down on the contention from Rossi that a takedown notice sender must have just a subjectively reasonable belief – not an objectively reasonable one – that the content in question is infringing. And, according to the majority of the three-judge panel (there was a dissent), it is for a jury to decide whether that belief was reasonable.

The fear from September remains that there is no real deterrent to people sending wrongful takedown notices that cause legitimate, non-infringing speech to be removed from the Internet. It is expensive and impractical to sue to be compensated for the harm this censorship causes, and having to do it before a jury, with an extremely high subjective standard, makes doing so even more unrealistic.