Back in December I traveled to Pasadena to observe the oral argument in the en banc appeal of Google v. Garcia, a case in which I filed an amicus brief on behalf of Techdirt and the Organization for Transformative Works. (Actually, I ultimately wrote two briefs: one in support of the en banc appeal being granted, and one as part of the appeal once it was.) After the hearing I wrote a synopsis of the arguments raised during the appeal on Techdirt (originally titled “Celine Dion And Human Cannonballs“), which I’m now cross-posting here:
On Monday I filed an amicus brief in a case sometimes referred to as “Garcia v. Google.” The case is really Garcia v. Nakoula, with Garcia being an actress who was duped by the defendant into appearing in a film he was making – a film that, unbeknownst to her, turned out to be an anti-Islam screed, which led to her life being threatened by many who were angered by its message and who sought to hold her accountable for it.
There’s little question that Nakoula wronged her, and likely in a way that the law would recognize. Holding him accountable is therefore uncontroversial. But Garcia didn’t just want to hold him accountable; Garcia wanted all evidence of this film removed from the world, and so she sued Google/YouTube too in an attempt to make it comply with her wish.
Garcia is obviously a sympathetic victim, but no law exists to afford her the remedy she sought. In fact, there are laws actively preventing it, such as 47 USC Section 230 and the Digital Millennium Copyright Act (DMCA), and, believe it or not, that’s actually a good thing. Even though it may, in cases like these, seem like a bad thing, because it means bad content can linger online if the intermediary hosting it can’t be forced to delete it, such a rule helps preserve the Internet as a healthy, robust forum for online discourse. It’s really an all-or-nothing proposition: you can’t make case-by-case incursions on intermediaries’ statutory protection against having to take down “bad” content without also chilling their ability to host good content.
And yet that is what happened in this case when Garcia sought a preliminary injunction to force Google to delete all copies of the film from YouTube (and to prevent any new copies from being uploaded). Not at the district court, which denied her request, but at the Ninth Circuit Court of Appeals earlier this year, when two of the three judges on the appeals panel chose to ignore the statutes precluding such an order and granted it against Google anyway.
Google has now petitioned for the Ninth Circuit to review this decision, and a few days ago nearly a dozen third parties weighed in with amicus briefs urging the court to revisit it. Most focused on the method by which the court reached its decision (i.e., by finding for Garcia a copyright interest in the film that the copyright statute does not support). I, however, filed one on behalf of two intermediaries, Floor64 Inc. (a/k/a Techdirt.com) and the Organization for Transformative Works, both of which depend on the statutory protection that should have prevented the court’s order. I argued that by granting the injunction in contravention of these laws, the court has undermined these and other intermediaries’ future ability to host any user-generated content. As the saying goes, bad facts make bad law, and tempted though the court may have been by the facts of this case, if its order is allowed to stand the court will have made very bad law indeed.
For more detailed analysis, read the brief and the Techdirt article about it. Additional amicus briefs and relevant case filings are also archived here, and Eric Goldman has a nice summary of the briefs as well.
The following was posted on Project-Disco.org earlier this week:
What would the Internet be without its intermediaries? Nothing, that’s what. Intermediaries are what carry, store, and serve every speck of information that makes up the Internet. Every cat picture, every YouTube comment, every Wikipedia article. Every streamed video, every customer review, every online archive. Every blog post, every tweet, every Facebook status. Every e-business, every search engine, every cloud service. No part of what we have come to know as the Internet exists without some site, server, or system intermediating that content so that we all can access it.
And yet, if we’re not careful, we can easily lose all the benefits these intermediaries bring us. Thankfully, in the United States we have some laws that help ensure they can exist, chief among them 47 U.S.C. Section 230. As my recent paper on the state of the law regarding intermediary liability explains, this law stands for the proposition that intermediaries are only responsible for what they themselves communicate through their systems – not what others use them to say. For example, newspapers that post articles online are only responsible for the content of the articles they publish, not the comments readers then post to them. Similarly, consumer review sites are only responsible for the information they themselves supply to their sites, not the user reviews. This same principle also means that people who link to content (as search engines do) are not legally responsible for that content, even if that content should happen to be illegal in some way (such as by being defamatory).
The reason Section 230 has been so helpful in allowing the Internet to thrive and become this increasingly rich resource is that, by relieving intermediaries of liability for the content passing through their systems, it has allowed much more, and much more diverse, content to take root on them than there would have been otherwise. Had intermediaries felt it necessary to police every byte that passed through their systems, out of fear that if the wrong bit got through an expensive lawsuit could be just around the corner, they would have been tempted to over-censor or even outright prohibit scads of content, even content that was not actually illegal, no matter how valuable that content might actually be.
This past week California passed a law requiring website owners to allow minors (who are also residents of California) to delete any postings they may have made on the website. There is plenty to criticize about this law, including that it is yet another example of a legislative commandment cavalierly imposing liability on website owners with no contemplation of the technical feasibility of how they are supposed to comply with it.
But such discussion should be moot. This law is precluded by federal law, in this case 47 U.S.C. Section 230. By its provisions, Section 230 prevents intermediaries (such as websites) from being held liable for content others have posted on them. (See Section 230(c)(1)). Moreover, states are not permitted to undermine that immunity. (See Section 230(e)(3)). So, for instance, even if someone were to post some content to a website that might be illegal in some way under state law, that state law can’t make the website hosting that content itself be liable for it (nor can that state law make the website delete it). But that’s what this law proposes to do at its essence: make websites liable for content others have posted to them.
As such, even aside from the other Constitutional infirmities of this law, such as the compelled speech problem created by forcing website owners to either host or delete content at someone else’s behest (see a discussion from Eric Goldman about this and other Constitutional problems here), it’s also constitutionally pre-empted by a prior act of Congress.
Some might argue that the intent of the law is important and noble enough to forgive it these problems. Unlike in generations past, kids today truly do have something akin to a “permanent record,” thanks to the Internet’s capacity to collect and indefinitely store the digital evidence of everyone’s lives. But such a concern requires thoughtful consideration of how best to ameliorate those consequences, if it’s even possible to do so, without injuring the important free speech principles and values the Internet also supports. This law offers no such solution.
I was asked to write the “Posts of the Week” for Techdirt this past weekend and used it as an opportunity to convey some of the ideas I explore here to that audience. The post was slightly constrained by the contours of the project — for instance, I could only punctuate my greater points with actual posts that appeared on Techdirt last week — but I think it held together coherently, and I appreciated the chance to reframe some of the issues Techdirt was already exploring in this way.
In any case, I’ve decided to cross-post my summary here, partly because I always like to host a copy of my guest blog posts on one of my sites, and partly because it gives me a chance to update and annotate those ideas further. Please do go visit Techdirt though, which was kind enough to ask me to do this, to read more about the items described below.
At an event on CFAA reform last night I heard Brewster Kahle say what to my ears sounded like, “Law that follows technology tends to be ok. Law that tries to lead it is not.”
His comment came after an earlier tweet I’d made:
I think we need a per se rule that any law governing technology that was enacted more than 10 years ago is inherently invalid.
In posting that tweet I was thinking about two horrible laws in particular, the Computer Fraud and Abuse Act (CFAA) and the Electronic Communications Privacy Act (ECPA). The former attempts to forbid “hacking,” and the latter ostensibly tried to update 1968’s Wiretap Act to cover information technology. In both instances the laws as drafted assumed that technology as understood then would be the technology the world would have forever hence, an assumption that has obviously proven false. But we are nonetheless left with laws like these on the books, laws that hobble further innovation by enshrining in our legal code what is right and wrong when it comes to our computer code as we understood it in 1986, regardless of whether, considered afresh and applied to today’s technology, we would still think so.
A friend challenged my tweet, however: “What about Section 230? (47 U.S.C. § 230).” This is a law from 1996, and he has a point. Section 230 is a piece of legislation that largely immunizes Internet service providers from liability for content posted on their systems by their users – and let’s face it: the very operational essence of the Internet is all about people posting content on other people’s systems. However, unlike the CFAA and ECPA, Section 230 has enabled technology to flourish, mostly by purposefully getting the law itself out of the way of the technology.
The above are just a few examples of some laws that have either served technology well – or served to hamper it. There are certainly more, and some laws might ultimately do a bit of both. But the general point is sound: law that is too specific is often too stifling. Innovation needs to be able to happen however it needs to, without undue hindrance caused by legislators who could not even begin to imagine what that innovation might look like so many years before. After all, if they could imagine it then, it would not be so innovative now.
The following case, Twentieth Century Fox v. Harris, is not a criminal matter. But I want to include it here nonetheless in part because it’s important to talk about copyright policy generally, particularly given the increasing trend for it to be criminalized. And partly because, in this case, hardly two weeks after I asserted that copyright infringement analogized more to trespass than to theft, a court independently reached the same conclusion.
PayPal recently made news for implementing a policy denying its payment processing services to publications containing obscene content. There are several things objectionable about this policy, including the lack of any clear way to delineate what content would qualify as “obscene,” and its overall censorious impact.
But I’m not entirely sure that PayPal is necessarily the appropriate target for criticism of this policy. It may be, to the extent that it is a truly discretionary policy PayPal has voluntarily chosen to pursue; if it could just as easily have chosen not to pursue it, it can fairly be criticized for the choice it did make. For this policy is not as simple as banning certain objectively horrible content that 100% of all people would agree should be stricken from the face of the earth. After all, there *is* no content 100% of all people would agree is objectionable. Instead this policy has the effect of denying market opportunities to all sorts of writers producing all sorts of valid content, even if some people may not happen to like it. And it does this not just by denying particular publications access to its services but by forcing electronic publishers to over-censor all the works they publish, lest PayPal’s services be shut off to their entire businesses.
While I was working on this post Eric Goldman beat me to the punch and posted something similar. But great minds and all that… Intermediary liability is also such a crucial issue related to the criminalization of online content I want to make sure plenty of discussion on it takes place here.
In addition to the First Amendment, in the US free speech on the Internet is also advanced by 47 U.S.C. Section 230, an important law that generally serves to immunize web hosts from liability for user-generated content. (See Section 230(c)(1): “No provider . . . of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”). Note that this law doesn’t absolve the content of any liability it might carry; it just means that the host can’t be held liable for it. Only the user who posted it can be.
This small rule has a big impact: if hosts could be held liable for everything their users posted, they would be forced to police and censor it. True, the effect of this immunity means that sometimes some vile content can make its way online and linger there, potentially harmfully. But it also means that hosts, not being forced to act as censorious middlemen, are not tempted to kill innocuous, or even abjectly good, content. As a result all sorts of vibrant communities and useful information have been able to take root on the Web.
But for this immunity to really be meaningful, it’s not enough that it protect the host from a final award of damages. It’s extremely expensive to be dragged into court at all. If hosts legitimately fear having to go through the judicial process to answer for users’ content, they may find it more worth their while to become censorious middlemen with respect to that content, in order to ensure they never need go down this road.
Which brings us to Fair Housing Council of San Fernando Valley v. Roommates.com, both its seminal piece of Section 230 jurisprudence and its more recent epilogue, each part of the attempted civil prosecution of a web host for Fair Housing Act violations.
This article in the Korea Times reports that several large online presences in Korea have stopped asking for users’ resident registration numbers when they subscribe to their sites. They began asking in 2007 as a means of ensuring compliance with the government’s requirement that users provide their real names. However, the government had no means to enforce that rule against foreign websites, and the collection of these numbers has led to instances of identity theft.
Nexon recently had the private data of 13 million users stolen by hackers. Nate and Cyworld, its sister social networking service, had 35 million users’ details compromised after being hacked. After this series of private information leaks at large businesses like Nate, Nexon, Auction, and Hyundai Capital, virtually all the resident registration numbers of Koreans are now available.
Because these numbers hold the key to entering Internet sites, criminals can assemble almost anyone’s details by collecting information from two or three websites, acquiring names, phone numbers, email addresses, home addresses, office addresses, shopping records, bank account numbers, and even blood types.
Some victims submitted a petition to the court last month, requesting they be allowed to change their registration number. “We are on the verge of suffering from more damage as we are forced to continuously use our leaked registration numbers with no countermeasures being taken so far,” the complainants said in their suit.
The Korea Communications Commission is now planning regulations to prevent resident registration numbers from being stored online.