Nov 04, 2017
 

The following post originally appeared on Techdirt on 11/3/17.

The news about the DOJ trying to subpoena Twitter calls to mind another egregious example of the government trying to unmask an anonymous speaker earlier this year. Remember when the federal government tried to compel Twitter to divulge the identity of a user who had been critical of the Trump administration? This incident was troubling enough on its face: there’s no place in a free society for a government to go after its critics. But largely overlooked in the worthy outrage over the bald-faced attempt to punish a dissenting voice was the government’s simultaneous attempt to prevent Twitter from telling anyone that the government was demanding this information. Because Twitter refused to comply with that demand, the affected user was able to get counsel and the world was able to learn how the government was abusing its authority. As the saying goes, sunlight is the best disinfectant, and shining a light on the government’s abusive behavior is what made it possible to stop it.

That storm may have blown over, but the general issues raised by the incident continue to affect Internet platforms – and by extension their users and their speech. A significant problem we keep having to contend with is not only what happens when the government demands information about users from platforms, but what happens when it then compels those same platforms to keep the demands a secret. These secrecy demands go by different names and are born from separate statutory mechanisms, but they all boil down to some form of gag on the platform’s ability to speak, with the same troubling implications.

We’ve talked before about how important it is that platforms be able to protect their users’ right to speak anonymously. That right is part and parcel of the First Amendment because there are many people who would not be able to speak if they were forced to reveal their identities in order to do so. Public discourse, and the benefit the public gets from it, would then suffer in the absence of their contributions. But it’s one thing to say that people have the right to speak anonymously; it’s another to make that right meaningful. If civil plaintiffs, or, worse, the government, can too easily force anonymous speakers to be unmasked, then the right to speak anonymously is only illusory. For it to be something speakers can depend on to let them speak freely, there have to be effective barriers preventing that anonymity from being too casually stripped away by unjust demands. Continue reading »

Nov 04, 2017
 

The following post originally appeared on Techdirt on 10/27/17.

It isn’t unusual or unwarranted for Section 230 to show up as a defense in situations where some might not expect it; its basic principles apply to more situations than may be readily apparent. But to appear as a defense in the Cockrum v. Campaign for Donald Trump case is pretty unexpected. From page 37 of the campaign’s motion to dismiss the case against it, the following two paragraphs are what the campaign slipped in on the subject:

Plaintiffs likewise cannot establish vicarious liability by alleging that the Campaign conspired with WikiLeaks. Under section 230 of the Communications Decency Act (47 U.S.C. § 230), a website that provides a forum where “third parties can post information” is not liable for the third party’s posted information. Klayman v. Zuckerberg, 753 F.3d 1354, 1358 (D.C. Cir. 2014). That is so even when the website performs “editorial functions” “such as deciding whether to publish.” Id. at 1359. Since WikiLeaks provided a forum for a third party (the unnamed “Russian actors”) to publish content developed by that third party (the hacked emails), it cannot be held liable for the publication.

That defeats the conspiracy claim. A conspiracy is an agreement to commit “an unlawful act.” Paul v. Howard University, 754 A.2d 297, 310 (D.C. 2000). Since WikiLeaks’ posting of emails was not an unlawful act, an alleged agreement that it should publish those emails could not have been a conspiracy.

This is the case brought against the campaign for allegedly colluding with Wikileaks and the Russians to disclose the plaintiffs’ private information as part of the DNC email trove that ended up on Wikileaks. Like Eric Goldman, who has an excellent post on the subject, I’m not going to go into the relative merits of the lawsuit itself, though they are worth consideration. Even if it’s true that the Trump campaign and Wikileaks were somehow in cahoots to hack the DNC and publish the data taken from it, whether and how the consequences of that disclosure can be recognized by law is a serious question, as is whether this particular lawsuit by these particular plaintiffs with these particular claims is one the law can permit to go forward without causing collateral effects to other expressive endeavors, including whistleblower journalism generally. On these points there may or may not be issues with the campaign’s motion to dismiss overall. But the shoehorning of a Section 230 argument into its defensive strategy seems sufficiently weird and counterproductive to be worth commenting on in and of itself. Continue reading »

Nov 04, 2017
 

The following post first appeared on Techdirt on 10/25/17.

The last two posts I wrote about SESTA discussed how, if it passes, it will result in collateral damage to the important speech interests Section 230 is intended to protect. This post discusses how it will also result in collateral damage to the important interests that SESTA itself is intended to protect: those of vulnerable sex workers.

Concerns about how SESTA would affect them are not new: several anti-trafficking advocacy groups and experts have already spoken out about how SESTA, far from ameliorating the risk of sexual exploitation, will only exacerbate that risk, in no small part because it disables one of the best tools for fighting it: the Internet platforms themselves:

[Using the vilified Backpage as an example, in as much as] Backpage acts as a channel for traffickers, it also acts as a point of connection between victims and law enforcement, family, good samaritans, and NGOs. Countless news reports and court documents bear out this connection. A quick perusal of news stories shows that last month, a mother found and recovered her daughter thanks to information in an ad on Backpage; a brother found his sister the same way; and a family alerted police to a missing girl on Backpage, leading to her recovery. As I have written elsewhere, NGOs routinely comb the website to find victims. Nicholas Kristof of the New York Times famously “pulled out [his] laptop, opened up Backpage and quickly found seminude advertisements for [a victim], who turned out to be in a hotel room with an armed pimp,” all from the victim’s family’s living room. He emailed the link to law enforcement, which staged a raid and recovered the victim.

And now there is yet more data confirming what these experts have been saying: when platforms have been available to host content for erotic services, the risk of harm to sex workers has gone down. Continue reading »

Oct 21, 2017
 

The following is the second in a pair of posts on Techdirt about how SESTA’s attempt to carve “trafficking” out of Section 230’s platform protection threatens legitimate online speech that has nothing to do with actual harm to trafficking victims.

Think we’re unduly worried about how “trafficking” charges will get used to punish legitimate online speech? We’re not.

A few weeks ago, a Mississippi mom posted an obviously joking tweet offering to sell her three-year-old for $12.

I tweeted a funny conversation I had with him about using the potty, followed by an equally-as-funny offer to my followers: 3-year-old for sale. $12 or best offer.

The next thing she knew, Mississippi authorities decided to investigate her for child trafficking.

The saga began when a caseworker and supervisor from Child Protection Services dropped by my office with a Lafayette County sheriff’s deputy. You know, a typical Monday afternoon.

They told me an anonymous male tipster called Mississippi’s child abuse hotline days earlier to report me for attempting to sell my 3-year-old son, citing a history of mental illness that probably drove me to do it.

Beyond notifying me of the charges, they said I’d have to take my son out of school so they could see him and talk to him that day, presumably protocol to ensure children aren’t in immediate danger. So I went to his preschool, pulled my son out of a deep sleep during naptime, and did everything in my power not to cry in front of him on the drive back to my office.

All of this for a joke tweet.

This story is bad enough on its own. As it stands now, the Mississippi authorities’ actions will chill other Mississippi parents’ willingness to blow off steam with facetious remarks on social media. But at least the chilling harm is contained within Mississippi’s borders. If SESTA passes, that chill will spread throughout the country. Continue reading »

Oct 21, 2017
 

The following is the first of a pair of posts on SESTA highlighting how carving out an exception to Section 230’s platform protection for sex trafficking rips a huge hole in the critical protection for online speech that Section 230 in its current form provides.

First, if you are someone who likes stepped-up ICE immigration enforcement and does not like “sanctuary cities,” you might cheer the implications of this post, but it isn’t otherwise directed at you. It is directed at the center of the political Venn diagram of people who both feel the opposite about these immigration policies and yet are also championing SESTA. Because this news from Oakland raises the specter of a horrific implication for online speech championing immigrant rights if SESTA passes: the criminal prosecution of the platforms that host that discussion.

Much of the discussion surrounding SESTA is based on some truly horrific tales of sex abuse, crimes that more obviously fall under what the human trafficking statutes are clearly intended to address. But with news that ICE is taking a very broad reading of the kind of behavior the human trafficking laws might cover and prosecuting anyone who happens to help an immigrant, it’s clear that the speech SESTA will carve out from Section 230’s protection will go far beyond the situations the bill originally contemplated. Continue reading »

Jul 06, 2017
 

The following was originally posted on Techdirt.

Sunday morning I made the mistake of checking Twitter first thing upon waking up. As if just a quick check of Twitter would ever be possible during this administration… It definitely wasn’t this past weekend, because waiting for me in my Twitter stream was Trump’s tweet of the meme he found on Reddit showing him physically beating the crap out of a personified CNN.

But that’s not what waylaid me. What gave me pause were all the people demanding it be reported to Twitter for violating its terms of service. The fact that so many people thought that was a good idea worries me, because the expectation that when bad speech happens someone will make it go away is not a healthy one. My concern inspired a tweet storm, which has now been turned into this post. Continue reading »

Jun 13, 2017
 

Cross-posted on Techdirt.

The Copia Institute filed another amicus brief this week, this time in Fields v. Twitter. Fields v. Twitter is one of a flurry of cases being brought against Internet platforms alleging that they are liable for the harms caused by terrorists using their sites. The facts in these cases are invariably awful: often people have been brutally killed, and their loved ones are seeking redress for their loss. There is a natural, and perfectly reasonable, temptation to give them some sort of remedy from someone, but as we argued in our brief, that someone cannot be an Internet platform.

There are several reasons for this, including some that have nothing to do with Section 230. For instance, even if Section 230 did not exist and platforms could be liable for the harms resulting from their users’ use of their services, for them to be liable there would have to be a clear connection between the use of the platform and the harm. Otherwise, based on the general rules of tort law, there could be no liability. In this particular case, for instance, there is a fairly weak connection between ISIS members using Twitter and the specific terrorist act that killed the plaintiffs’ family members.

But we left that point to Twitter to ably argue. Our brief focused exclusively on the fact that Section 230 should prevent a court from ever even reaching the tort law analysis. With Section 230, a platform should never find itself having to defend against liability for harm that may have resulted from how people used it. Our concern is that in several recent cases with their own terrible facts, the Ninth Circuit in particular has found itself willing to make exceptions to that rule. As much as we were supporting Twitter in this case, trying to help ensure the Ninth Circuit does not overturn the very good District Court decision that had correctly applied Section 230 to dismiss the case, we also had an eye to the long view of reversing this trend. Continue reading »

May 26, 2017
 

The following was cross-posted on Techdirt.

We often talk about how protecting online speech requires protecting platforms, like with Section 230 immunity and the safe harbors of the DMCA. But these statutory shields are not the only way the law needs to protect platforms in order to make sure the speech they carry is also protected.

Earlier this month, I helped Techdirt’s think tank arm, the Copia Institute, file an amicus brief in support of Yelp in a case called Montagna v. Nunis. Like many platforms, Yelp lets people post content anonymously. Often people are only willing to speak when they can do so without revealing who they are (note how many people participate in the comments here without revealing their real names), which is why the right to speak anonymously has been found to be part and parcel of the First Amendment right of free speech. It’s also why sites like Yelp let users post anonymously: often that’s the only way they will feel comfortable posting reviews candid enough to be useful to the people who depend on sites like Yelp to help them make informed decisions.

But as we also see, people who don’t like the things said about them often try to attack their critics, and one way they do this is by trying to strip these speakers of their anonymity. True, sometimes online speech can cross the line and actually be defamatory, in which case being able to discover the identity of the speaker is important. Nothing in this case prevents legitimately aggrieved plaintiffs from using subpoenas to discover the identity of those whose unlawful speech has injured them so that they can sue them for relief. Unfortunately, however, it is not just people with legitimate claims who are sending subpoenas; in many instances they are being sent by people objecting to speech that is perfectly legal, and that’s a problem. Unmasking the speakers behind protected speech not only violates their First Amendment right to speak anonymously; it also chills the speech the First Amendment is designed to foster generally, by rendering illusory the anonymity protection that plenty of legal speech depends on.

There is a lot that can and should be done to close off this vector of attack on free speech. One important measure is to make sure platforms are able to resist the subpoenas they get demanding they turn over whatever identifying information they have. There are practical reasons why they can’t always fight them — for instance, like DMCA takedown notices, they may simply get too many — but it is generally in their interest to try to resist illegitimate subpoenas targeting the protected speech posted anonymously on their platforms so that their users will not be scared away from speaking on their sites.

But when Yelp tried to resist the subpoena connected with this case, the court refused to let them stand in to defend the user’s speech interest. Worse, it sanctioned(!) Yelp for even trying, thus making platforms’ efforts to stand up for their users even more risky and expensive than they already are.

So Yelp appealed, and we filed an amicus brief supporting their effort. Fortunately, earlier this year Glassdoor won an important California appellate ruling that validated attempts by platforms to quash subpoenas on behalf of their users. That decision discussed why the First Amendment and the California Constitution require platforms to have this ability to quash subpoenas targeting protected speech, and hopefully this particular appeals court will agree with its sister court and make clear that platforms are allowed to fight off subpoenas like this. As we pointed out in our brief, both state and federal law and policy require online speech to be protected, and preventing platforms from resisting subpoenas is out of step with those stated policy goals and constitutional requirements.

Feb 23, 2017
 

Over at Techdirt there’s a write-up of the latest comment I submitted on behalf of the Copia Institute as part of the Copyright Office’s study on the operation of Section 512 of the Digital Millennium Copyright Act. As we’ve told the Copyright Office before, that operation has had a huge impact on online free speech. (Those comments have also been cross-posted here.)

In some ways this impact is good: providing platforms with protection from liability for their users’ content means that they can be available to facilitate that content and speech. But all too often, and in all too many ways, the practical impact on free speech has been a negative one, with speech being much more vulnerable to censorship via takedown notice than it ever would have been if the person objecting to it (even for copyright-related reasons) had to go to court to get an injunction to take it down. Not only is the speech itself more vulnerable than it should be, but the protection the platforms depend on ends up being more vulnerable as well, because platforms must risk it every time they refuse to act on a takedown notice, no matter how invalid that notice may be.

Our earlier comment pointed out in some detail how the current operation of the DMCA has been running afoul of the protections the First Amendment is supposed to afford speech, and in this second round of comments we’ve highlighted some further deficiencies. In particular, we reminded the Copyright Office of the problems with “prior restraint,” which the First Amendment also prohibits. Prior restraint is what happens when speech is punished before there has been any adjudication establishing that it deserves to be punished. The reason the First Amendment prohibits prior restraint is that it does no good to punish speech, such as by removing it, if the First Amendment would otherwise protect it – once it has been removed, the damage will have already been done.

Making sure that legitimate speech cannot be removed is why we normally require the courts to carefully adjudicate whether removal can be ordered before it is allowed. But with the DMCA there is no such judicial check: people can send demands for all sorts of content to be removed, even if it isn’t actually infringing, because there is little to deter them so long as Section 512(f) continues to have no teeth. Instead platforms are forced to treat every takedown notice as a legitimate demand, regardless of whether it is. Not only does this mean they need to delete the content but, in the wake of some recent cases, it seems they also must hold each allegation against their user and then cut that user off from their services when the user has accrued too many such accusations, again regardless of whether any of them were valid.

As we did before, we counseled the Copyright Office to return to first principles: the DMCA was supposed to enhance online free speech, and it’s important to make sure that all of its provisions work together to do just that. To the extent that it may be appropriate for the Copyright Office to make recommendations on this front, one is to remind all concerned that the penalty articulated in Section 512(f) to sanction bad takedown notices can and should be applied according to a flexible standard, rather than the rigid one courts have lately adopted. In any case, however, the Copyright Office certainly should not be advocating for changes to any provisions, or their interpretations, that would make the DMCA any less compatible with the First Amendment than it has already tended to be.

Dec 17, 2016
 

The following was recently published on Techdirt, although with a different title.

Regardless of what one thinks about the apparent result of the 2016 election, it will inevitably present a number of challenges for America and the world. As Mike wrote last week, those challenges will touch on many of the tech policy issues often discussed here. The following is a closer look at some of the implications (and opportunities) with respect to several of them, given the unique hallmarks of Trump and his proposed administration. Continue reading »