Earlier this year the Ninth Circuit Court of Appeals issued a troubling ruling in Multi Time Machine v. Amazon.com. If allowed to stand, this ruling will raise the risk for ecommerce and other online (and even potentially offline) businesses that their normal and reasonable operation might be found to infringe trademarks, even though such a finding would seem to be far beyond what the Lanham Act, which governs trademarks, actually allows. In light of this unexpectedly and untenably heightened legal risk, I filed an amicus brief at the Ninth Circuit on behalf of Rebecca Tushnet and other intellectual property professors in support of Amazon’s petition that this earlier ruling be reconsidered.
Cross-posted from Techdirt.
Earlier this week the Ninth Circuit heard oral arguments in the appeal of Lenz v. Universal. This was the case where Stephanie Lenz sued Universal because Universal had sent YouTube a takedown notice demanding it delete the home movie she had posted of her toddler dancing, simply because music by Prince was audible in the background. It’s a case whose resolution has been pending since 2007, despite the fact that it involves the interpretation of a fundamental part of the DMCA’s operation.
The portion of the DMCA at issue in this case is Section 512 of the copyright statute, which the DMCA added in 1998 along with Section 1201. As with Section 1201, Section 512 reflects a certain naivete by Congress in thinking any part of the DMCA was a good idea, rather than the innovation-choking and speech-chilling mess it has turned out to be. But looking at the statutory language, you can kind of see how Congress thought it was all going to work, what with the internal checks and balances they put into the DMCA to prevent it from being abused. Unfortunately, while even as intended there are some severe shortcomings to how this balance was conceptualized, what’s worse is how it has not even been working as designed.
The following is cross-posted from Popehat.
There is no question that the right of free speech necessarily includes the right to speak anonymously. This is partly because sometimes the only way for certain speech to be possible at all is with the protection of anonymity.
And that’s why so much outrage is warranted when bullies try to strip speakers of their anonymity simply because they don’t like what these people have to say, and why it’s even more outrageous when these bullies succeed. If anonymity is so fragile that speakers can be so easily unmasked, fewer people will be willing to say the important things that need to be said, and we all will suffer for the silence.
We’ve seen on these blog pages examples of both government and private bullies making specious attacks on the free speech rights of their critics, often by using subpoenas, both civil and criminal, to try to unmask them. But we’ve also seen another kind of attempt to identify Internet speakers, and it’s one we’ll see a lot more of if the proposal ICANN is currently considering is put into place.
Back in December I traveled to Pasadena to observe the oral argument in the en banc appeal of Google v. Garcia, a case I filed an amicus brief in on behalf of Techdirt and the Organization for Transformative Works. (Actually, I ultimately wrote two briefs, one in support of the en banc appeal being granted and one as part of the appeal once it was.) After the hearing I wrote a synopsis of the arguments raised during the appeal on Techdirt (originally titled, “Celine Dion And Human Cannonballs”), which I’m now cross-posting here:
On Monday I filed an amicus brief in a case sometimes referred to as “Garcia v. Google.” The case is really Garcia v. Nakoula, with Garcia being an actress who was duped by the defendant into appearing in a film he was making – a film that, unbeknownst to her, turned out to be an anti-Islam screed that led to her life being threatened by many who were not happy with its message and who sought to hold her accountable for it.
There’s little question that Nakoula wronged her, and likely in a way that the law would recognize. Holding him accountable is therefore uncontroversial. But Garcia didn’t just want to hold him accountable; Garcia wanted all evidence of this film removed from the world, and so she sued Google/YouTube too in an attempt to make it comply with her wish.
Garcia is obviously a sympathetic victim, but no law exists to allow her the remedy she sought. In fact, there are laws actively preventing it, such as 47 USC Section 230 and the Digital Millennium Copyright Act (DMCA), and, believe it or not, that’s actually a good thing! Even though it may, in cases like these, seem like a bad thing because it means bad content can linger online if the intermediary hosting it can’t be forced to delete it, such a rule helps preserve the Internet as a healthy, robust forum for online discourse. It’s really an all-or-nothing proposition: you can’t make case-by-case incursions on intermediaries’ statutory protection against having to take down “bad” content without chilling their ability to host good content too.
And yet that is what happened in this case when Garcia sought a preliminary injunction to force Google to delete all copies of the film from YouTube (and prevent any new copies from being uploaded). Not at the district court, which denied her request, but at the Ninth Circuit Court of Appeals earlier this year, when two out of three judges on the appellate panel chose to ignore the statutes precluding such an order and granted it against Google anyway.
Google has now petitioned for the Ninth Circuit to review this decision, and a few days ago nearly a dozen third parties weighed in with amicus briefs to persuade the court to revisit it. Most focused on the method by which the court reached its decision (i.e., by finding for Garcia a copyright interest in the film unsupported by the copyright statute). I, however, filed one on behalf of two intermediaries, Floor64 Inc. (a/k/a Techdirt.com) and the Organization for Transformative Works, both of which depend on the statutory protection that should have prevented the court’s order. My brief argues that by granting the injunction in contravention of the laws preventing it, the court has undermined these and other intermediaries’ future ability to host any user-generated content. As the saying goes, bad facts make bad law, and however tempted the court may have been by the facts of this case, if its order is allowed to stand the court will have made very bad law indeed.
For more detailed analysis read the brief and the TechDirt article about it. Additional amicus briefs and relevant case filings are also archived here, and Eric Goldman has a nice summary of the briefs as well.
The following was posted on Project-Disco.org earlier this week:
What would the Internet be without its intermediaries? Nothing, that’s what. Intermediaries are what carry, store, and serve every speck of information that makes up the Internet. Every cat picture, every YouTube comment, every Wikipedia article. Every streamed video, every customer review, every online archive. Every blog post, every tweet, every Facebook status. Every e-business, every search engine, every cloud service. No part of what we have come to know as the Internet exists without some site, server, or system intermediating that content so that we all can access it.
And yet, if we’re not careful, we can easily lose all the benefits these intermediaries bring us. Thankfully, in the United States we have some laws that help ensure they can exist, chief among them 47 U.S.C. Section 230. As my recent paper on the state of the law regarding intermediary liability explains, this law stands for the proposition that intermediaries are only responsible for what they themselves communicate through their systems – not what others use them to say. For example, newspapers that post articles online are only responsible for the content of the articles they publish, not the comments readers then post to them. Similarly consumer review sites are only responsible for the information they supply to their sites, not the user reviews themselves. This same principle also means that people who link to content (as search engines do) are not legally responsible for that content, even if that content should happen to be illegal in some way (like by being potentially defamatory).
The reason Section 230 has been so helpful in allowing the Internet to thrive and become this increasingly rich resource is that, by relieving intermediaries of liability for the content passing through their systems, it has allowed much more, and much more diverse, content to take root on them than there would have been otherwise. Without it, intermediaries would have felt it necessary to police every byte that passed through their systems out of the fear that if they didn’t, and the wrong bit got through, an expensive lawsuit could be just around the corner. Because of that fear, even if those bits and bytes did not actually comprise anything illegal, intermediaries would still be tempted to over-censor or even outright prohibit scads of content, no matter how valuable that content might actually be.
This past week California passed a law requiring website owners to allow minors (who are also residents of California) to delete any postings they may have made on the website. There is plenty to criticize about this law, including that it is yet another example of a legislative commandment cavalierly imposing liability on website owners with no contemplation of the technical feasibility of how they are supposed to comply with it.
But such discussion should be moot. This law is precluded by federal law, in this case 47 U.S.C. Section 230. By its provisions, Section 230 prevents intermediaries (such as websites) from being held liable for content others have posted on them. (See Section 230(c)(1)). Moreover, states are not permitted to undermine that immunity. (See Section 230(e)(3)). So, for instance, even if someone were to post some content to a website that might be illegal in some way under state law, that state law can’t make the website hosting that content itself be liable for it (nor can that state law make the website delete it). But that’s what this law proposes to do at its essence: make websites liable for content others have posted to them.
As such, even aside from the other Constitutional infirmities of this law, such as those involving compelled speech for forcing website owners to either host or delete content at someone else’s behest (see a discussion from Eric Goldman about this and other Constitutional problems here), it’s also constitutionally preempted by a prior act of Congress.
Some might argue that the intent of the law is important and noble enough to forgive it these problems. Unlike in generations past, kids today truly do have something akin to a “permanent record” thanks to the ease of the Internet to collect and indefinitely store the digital evidence of everyone’s lives. But such a concern requires thoughtful consideration for how to best ameliorate those consequences, if it’s even possible to, without injuring important free speech principles and values the Internet also supports. This law offers no such solution.
I was asked to write the “Posts of the Week” for Techdirt this past weekend and used it as an opportunity to convey some of the ideas I explore here to that audience. The post was slightly constrained by the contours of the project — for instance, I could only punctuate my greater points with actual posts that appeared on Techdirt last week — but I think they held together coherently, and I appreciated the chance to reframe some of the issues Techdirt was already exploring in this way.
In any case, I’ve decided to cross-post my summary here, partly because I always like to host a copy of my guest blog posts on one of my sites, and partly because it gives me a chance to update and annotate those ideas further. Please do go visit Techdirt though, which was kind enough to ask me to do this, to read more about the items described below.
At an event on CFAA reform last night I heard Brewster Kahle say what to my ears sounded like, “Law that follows technology tends to be ok. Law that tries to lead it is not.”
His comment came after an earlier tweet I’d made:
I think we need a per se rule that any law governing technology that was enacted more than 10 years ago is inherently invalid.
In posting that tweet I was thinking about two horrible laws in particular, the Computer Fraud and Abuse Act (CFAA) and the Electronic Communications Privacy Act (ECPA). The former attempts to forbid “hacking,” and the latter ostensibly tried to update 1968’s Wiretap Act to cover information technology. In both instances the laws as drafted generally incorporated the attitude that technology as understood then would be the technology the world would have forever hence, a prediction that has obviously been false. But we are nonetheless left with laws like these on the books, laws that hobble further innovation by enshrining in our legal code what is right and wrong when it comes to our computer code, as we understood it in 1986, regardless of whether, if considered afresh and applied to today’s technology, we would still think so.
To my tweet a friend responded with a challenge, however: “What about Section 230? (47 U.S.C. § 230).” This is a law from 1996, and he has a point. Section 230 is a piece of legislation that largely immunizes Internet service providers from liability for content posted on their systems by their users – and let’s face it: the very operational essence of the Internet is all about people posting content on other people’s systems. However, unlike the CFAA and ECPA, Section 230 has enabled technology to flourish, mostly by purposefully getting the law itself out of the way of the technology.
The above are just a few examples of some laws that have either served technology well – or served to hamper it. There are certainly more, and some laws might ultimately do a bit of both. But the general point is sound: law that is too specific is often too stifling. Innovation needs to be able to happen however it needs to, without undue hindrance caused by legislators who could not even begin to imagine what that innovation might look like so many years before. After all, if they could imagine it then, it would not be so innovative now.
The following case, Twentieth Century Fox v. Harris, is not a criminal matter. But I want to include it here nonetheless, partly because it’s important to talk about copyright policy generally, particularly given the increasing trend for it to be criminalized, and partly because, in this case, hardly two weeks after I asserted that copyright infringement analogized more to trespass than to theft, a court independently reached the same conclusion.