Nov 04 2017

The following post first appeared on Techdirt on 10/25/17.

The last two posts I wrote about SESTA discussed how, if it passes, it will result in collateral damage to the important speech interests Section 230 is intended to protect. This post discusses how it will also result in collateral damage to the important interests that SESTA itself is intended to protect: those of vulnerable sex workers.

Concerns about how SESTA would affect them are not new: several anti-trafficking advocacy groups and experts have already spoken out about how SESTA, far from ameliorating the risk of sexual exploitation, will only exacerbate it, in no small part because it disables one of the best tools for fighting it: the Internet platforms themselves:

[Using the vilified Backpage as an example, inasmuch as] Backpage acts as a channel for traffickers, it also acts as a point of connection between victims and law enforcement, family, good samaritans, and NGOs. Countless news reports and court documents bear out this connection. A quick perusal of news stories shows that last month, a mother found and recovered her daughter thanks to information in an ad on Backpage; a brother found his sister the same way; and a family alerted police to a missing girl on Backpage, leading to her recovery. As I have written elsewhere, NGOs routinely comb the website to find victims. Nicholas Kristof of the New York Times famously “pulled out [his] laptop, opened up Backpage and quickly found seminude advertisements for [a victim], who turned out to be in a hotel room with an armed pimp,” all from the victim’s family’s living room. He emailed the link to law enforcement, which staged a raid and recovered the victim.

And now there is yet more data confirming what these experts have been saying: when platforms have been available to host content for erotic services, the risk of harm to sex workers has decreased. Continue reading »

Oct 21 2017

The following is the second in a pair of posts on Techdirt about how SESTA’s attempt to carve “trafficking” out of Section 230’s platform protection threatens legitimate online speech having nothing to do with actual harm to trafficking victims.

Think we’re unduly worried about how “trafficking” charges will get used to punish legitimate online speech? We’re not.

A few weeks ago a Mississippi mom posted an obviously joking tweet offering to sell her three-year-old for $12.

I tweeted a funny conversation I had with him about using the potty, followed by an equally-as-funny offer to my followers: 3-year-old for sale. $12 or best offer.

The next thing she knew, Mississippi authorities decided to investigate her for child trafficking.

The saga began when a caseworker and supervisor from Child Protection Services dropped by my office with a Lafayette County sheriff’s deputy. You know, a typical Monday afternoon.

They told me an anonymous male tipster called Mississippi’s child abuse hotline days earlier to report me for attempting to sell my 3-year-old son, citing a history of mental illness that probably drove me to do it.

Beyond notifying me of the charges, they said I’d have to take my son out of school so they could see him and talk to him that day, presumably protocol to ensure children aren’t in immediate danger. So I went to his preschool, pulled my son out of a deep sleep during naptime, and did everything in my power not to cry in front of him on the drive back to my office.

All of this for a joke tweet.

This story is bad enough on its own. As it stands now, actions by the Mississippi authorities will chill other Mississippi parents from blowing off steam with facetious remarks on social media. But at least the chilling harm is contained within Mississippi’s borders. If SESTA passes, that chill will spread throughout the country. Continue reading »

Oct 21 2017

The following is the first of a pair of posts on SESTA highlighting how carving out an exception to Section 230’s platform protection for sex trafficking rips a huge hole in the critical protection for online speech that Section 230 in its current form provides.

First, if you are someone who likes stepped-up ICE immigration enforcement and does not like “sanctuary cities,” you might cheer the implications of this post, but it isn’t otherwise directed at you. It is directed at the center of the political Venn diagram: people who feel the opposite about these immigration policies and yet who are also championing SESTA. Because this news from Oakland raises the specter of a horrific implication for online speech championing immigrant rights if SESTA passes: the criminal prosecution of the platforms that host that discussion.

Much of the discussion surrounding SESTA is based on some truly horrific tales of sex abuse, crimes that more obviously fall under what the human trafficking statutes are clearly intended to address. But with news that ICE is engaging in a very broad reading of the type of behavior the human trafficking laws might cover and prosecuting anyone who happens to help an immigrant, it’s clear that the type of speech that SESTA will carve out from Section 230’s protection will go far beyond the situations the bill originally contemplated. Continue reading »

The Importance Of Defending Section 230 Even When It’s Hard (cross-post)

Analysis/commentary, Intermediary liability, Regulating speech
Jun 13 2017

Cross-posted on Techdirt.

The Copia Institute filed another amicus brief this week, this time in Fields v. Twitter. Fields v. Twitter is one of a flurry of cases being brought against Internet platforms alleging that they are liable for the harms caused by terrorists using their sites. The facts in these cases are invariably awful: often people have been brutally killed and their loved ones are seeking redress for their loss. There is a natural, and perfectly reasonable, temptation to give them some sort of remedy from someone, but as we argued in our brief, that someone cannot be an Internet platform.

There are several reasons for this, including some that have nothing to do with Section 230. For instance, even if Section 230 did not exist and platforms could be liable for the harms resulting from their users’ use of their services, for them to be liable there would have to be a clear connection between the use of the platform and the harm. Otherwise, based on the general rules of tort law, there could be no liability. In this particular case, for instance, there is a fairly weak connection between ISIS members using Twitter and the specific terrorist act that killed the plaintiffs’ family members.

But we left that point to Twitter to ably argue. Our brief focused exclusively on the fact that Section 230 should prevent a court from ever even reaching the tort law analysis. With Section 230, a platform should never find itself having to defend against liability for harm that may have resulted from how people used it. Our concern is that in several recent cases with their own terrible facts, the Ninth Circuit in particular has shown itself willing to make exceptions to that rule. As much as we were supporting Twitter in this case, trying to help ensure the Ninth Circuit does not overturn the very good District Court decision that had correctly applied Section 230 to dismiss the case, we also had an eye to the longer-term goal of reversing this trend. Continue reading »

Comments on DMCA Section 512: On the general effectiveness of the Safe Harbors

Analysis/commentary, Criminal IP Enforcement, Intermediary liability
Apr 06 2016

The following is Section III.A of the comment I submitted in the Copyright Office’s study on the operation of Section 512 of the copyright statute.

Question #1 asks whether Section 512 safe harbors are working as intended, and Question #5 asks the related question of whether the right balance has been struck between copyright owners and online service providers.  To the extent that service providers have been insulated from the costs associated with liability for their users’ content, the DMCA, with its safe harbors, has been a good thing.  But the protection is all too often too complicated to achieve, too expensive to assert, or otherwise too illusory for service providers to be adequately protected.

Relatedly, Question #2 asks whether courts have properly construed the entities and activities covered by the safe harbor, and the answer is not always.  But the problem here is not just that they have sometimes gotten it wrong but that there is too often the possibility for them to get it wrong.  Whereas under Section 230 questions of intermediary liability for illegality in user-supplied content are relatively straightforward – was the intermediary the party that produced the content? If not, then it is not liable – when the alleged illegality in others’ content relates to potential copyright infringement, the test becomes a labyrinthine minefield that the service provider may need to endure costly litigation to navigate.  Not only is ultimate liability expensive, but even the process of ensuring that it won’t face that liability can be crippling.[1]  Service providers, and investors in service providers, need a way to minimize and manage the legal risk and associated costs arising from their provision of online services, but given the current complexity[2] of the requirements for the safe harbors they can rarely be so confidently assured.
Continue reading »

Comments on DMCA Section 512: The DMCA functions as a system of extra-judicial censorship

Analysis/commentary, Intermediary liability, Regulating speech
Apr 04 2016

The following is Section II.B of the comment I submitted in the Copyright Office’s study on the operation of Section 512 of the copyright statute.

Despite all the good that Section 230 and the DMCA have done to foster a robust online marketplace of ideas, the DMCA’s potential to deliver that good has been tempered by the particular structure of the statute.  Whereas Section 230 provides a firm immunity to service providers for potential liability in user-supplied content,[1] the DMCA conditions its protection.[2]  And that condition is censorship.  The irony is that while the DMCA makes it possible for service providers to exist to facilitate online speech, its notice and takedown system does so at the expense of the very speech those providers exist to facilitate.

In a world without the DMCA, if someone wanted to enjoin content they would need to demonstrate to a court that they indeed owned a valid copyright and that the use of the content in question infringed that copyright before the court would compel its removal.  Thanks to the DMCA, however, they are spared both their procedural and their pleading burdens.  In order to cause content to be disappeared from the Internet, all anyone needs to do is send a takedown notice that merely points to content and claims it as theirs.

Although some courts are now requiring takedown notice senders to consider whether the use of the content in question was fair,[3] there is no real penalty for the sender if they get it wrong or don’t bother.[4]  Instead, service providers are forced to become judge and jury, even though (a) they lack the information needed to properly evaluate copyright infringement claims,[5] (b) the sheer volume of takedown notices often makes case-by-case evaluation of them impossible, and (c) it can be a bet-the-company decision if the service provider gets it wrong, because their “error” may deny them the safe harbor and put them on the hook for infringement liability.[6]  Although there is both judicial and statutory recognition that service providers are not in a position to police user-supplied content for infringement,[7] there must also be recognition that they are similarly not in a position to police for invalid takedowns.  Yet they must, lest there be no effective check on these censorship demands.

Ordinarily the First Amendment and due process would not permit this sort of censorship: the censorship of an Internet user’s speech predicated on mere allegation.  Mandatory injunctions are disfavored generally,[8] and particularly so when they target speech and may represent an impermissible prior restraint on speech that has not yet been determined to be wrongful.[9]  To the extent that the DMCA causes these critical speech protections to be circumvented, it is consequently only questionably constitutional.  For the DMCA to remain constitutionally valid it must retain, in its drafting and interpretation, ample protection to ensure that these important constitutional speech protections are not ignored.
Continue reading »

Sep 29 2013

This past week California passed a law requiring website owners to allow minors (who are also residents of California) to delete any postings they may have made on the website. There is plenty to criticize about this law, including that it is yet another example of a legislative commandment cavalierly imposing liability on website owners with no contemplation of the technical feasibility of complying with it.

But such discussion should be moot. This law is precluded by federal law, in this case 47 U.S.C. Section 230. By its provisions, Section 230 prevents intermediaries (such as websites) from being held liable for content others have posted on them. (See Section 230(c)(1)). Moreover, states are not permitted to undermine that immunity. (See Section 230(e)(3)). So, for instance, even if someone were to post content to a website that might be illegal in some way under state law, that state law can’t make the website hosting the content itself liable for it (nor can that state law make the website delete it). But that’s what this law proposes to do at its essence: make websites liable for content others have posted to them.

As such, even aside from the other constitutional infirmities of this law, such as those involving compelled speech for forcing website owners to either host or delete content at someone else’s behest (see a discussion from Eric Goldman about this and other constitutional problems here), it’s also constitutionally preempted by a prior act of Congress.

Some might argue that the intent of the law is important and noble enough to forgive it these problems. Unlike in generations past, kids today truly do have something akin to a “permanent record,” thanks to the ease with which the Internet collects and indefinitely stores the digital evidence of everyone’s lives. But such a concern requires thoughtful consideration of how best to ameliorate those consequences, if it is even possible to, without injuring the important free speech principles and values the Internet also supports. This law offers no such solution.

Jul 28 2013

I was asked to write the “Posts of the Week” for Techdirt this past weekend and used it as an opportunity to convey to that audience some of the ideas I explore here. The post was slightly constrained by the contours of the project — for instance, I could only punctuate my greater points with actual posts that appeared on Techdirt last week — but I think it held together coherently, and I appreciated the chance to reframe some of the issues Techdirt was already exploring in this way.

In any case, I’ve decided to cross-post my summary here, partly because I always like to host a copy of my guest blog posts on one of my sites, and partly because it gives me a chance to update and annotate those ideas further. Please do go visit Techdirt though, which was kind enough to ask me to do this, to read more about the items described below.
Continue reading »

Roommates.com redux

Analysis/commentary, Intermediary liability
Feb 06 2012

While I was working on this post Eric Goldman beat me to the punch and posted something similar. But great minds and all that… Intermediary liability is also such a crucial issue related to the criminalization of online content that I want to make sure plenty of discussion of it takes place here.

In addition to the First Amendment, in the US free speech on the Internet is also advanced by 47 U.S.C. Section 230, an important law that generally serves to immunize web hosts from liability for user-generated content. (See Section 230(c)(1): “No provider . . . of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”). Note that this law doesn’t absolve the content of any intrinsic liability; it just means that the host can’t be held liable for it. Only the user who posted it can be.

This small rule has a big impact: if hosts could be held liable for everything their users posted, they would be forced to police and censor it. True, the effect of this immunity means that sometimes some vile content can make its way online and linger there, potentially harmfully. But it also means that hosts, not being forced to act as censorious middlemen, are not tempted to kill innocuous, or even abjectly good, content. As a result all sorts of vibrant communities and useful information have been able to take root on the Web.

But for this immunity to really be meaningful, it’s not enough that it protect the host from a final award of damages. It’s extremely expensive to be dragged into court at all. If hosts legitimately fear needing to go through the judicial process to answer for users’ content, they may find it more worth their while to become censorious middlemen with respect to that content, in order to ensure they never need to go down that road.

Which brings us to Fair Housing Council of San Fernando Valley v. Roommates.com, both a seminal piece of Section 230 jurisprudence and its more recent epilogue, each part of the attempted civil prosecution of a web host for Fair Housing Act violations. Continue reading »