Nov 12 2017

This post appeared on Techdirt on 11/10/17.  The anniversary it notes is today.

We have been talking a lot lately about how important Section 230 is for enabling innovation and fostering online speech, and, especially as Congress now flirts with erasing its benefits, how fortuitous it was that Congress ever put it on the books in the first place.

But passing the law was only the first step: for it to have meaningful benefit, courts needed to interpret it in a way that allowed for it to have its protective effect on Internet platforms. Zeran v. America Online was one of the first cases to test the bounds of Section 230’s protection, and the first to find that protection robust. Had the court decided otherwise, we likely would not have seen the benefits the statute has since then afforded.

This Sunday the decision in Zeran turns 20 years old, and to mark the occasion Eric Goldman and Jeff Kosseff have gathered together more than 20 essays from Internet lawyers and scholars reflecting on the case, the statute, and all of its effects. I have an essay there, “The First Hard Case: ‘Zeran v. AOL’ and What It Can Teach Us About Today’s Hard Cases,” as do many other advocates, including lawyers involved with the original case. Even people who are not fans of Section 230 and its legacy are represented. All of these pieces are worth reading and considering, especially by anyone interested in setting policy around these issues.

Nov 06 2017

This post is the second in a series that ran on Techdirt about the harm done to online speech when platforms face unfettered discovery demands that they are then prevented from talking about.

In my last post, I discussed why it is so important for platforms to be able to speak about the discovery demands they receive seeking to unmask their anonymous users. That candor is crucial: without it as a check against abuse, unmasking demands can damage the key constitutional right to speak anonymously.

The earlier post rolled together several different types of discovery instruments (subpoenas, warrants, NSLs, etc.) because to a certain extent it doesn’t matter which one is used to unmask an anonymous user. The issue raised by all of them is that if their power to unmask an anonymous user is too unfettered, then it will chill all sorts of legitimate speech. And, as noted in the last post, the ability for a platform receiving an unmasking demand to tell others it has received it is a critical check against unworthy demands seeking to unmask the speakers behind lawful speech.

The details of each type of unmasking instrument do matter, though, because each one has different interests to balance and, accordingly, different rules governing how to balance them. Unfortunately, the rules that have evolved for any particular one are not always adequately protective of the important speech interests any unmasking demand necessarily affects. As is the case for the type of unmasking demand at issue in this post: a federal grand jury subpoena.

Grand jury subpoenas are very powerful discovery instruments, and with good reason: the government needs a powerful weapon to be able to investigate serious crimes. There are also important constitutional reasons for why we equip grand juries with strong investigatory power, because if charges are to be brought against people, it’s important for due process reasons that they have been brought by the grand jury, as opposed to a more arbitrary exercise of government power. Grand juries are, however, largely at the disposal of government prosecutors, and thus a grand jury subpoena essentially functions as a government unmasking demand. The ability to compel information via a grand jury subpoena is therefore not a power we can allow to exist unchecked.

Which brings us to the story of the grand jury subpoena served on Glassdoor, which Paul Levy and Ars Technica wrote about earlier this year. It’s a story that raises three interrelated issues: (1) a poor balancing of the relevant interests, (2) a poor structural model that prevented a better balancing, and (3) a gag that has made it extraordinarily difficult to create a better rule governing how grand jury subpoenas should be balanced against important online speech rights. Continue reading »

Nov 04 2017

The following post originally appeared on Techdirt on 11/3/17.

The news about the DOJ trying to subpoena Twitter calls to mind another egregious example of the government trying to unmask an anonymous speaker earlier this year. Remember when the federal government tried to compel Twitter to divulge the identity of a user who had been critical of the Trump administration? This incident was troubling enough on its face: there’s no place in a free society for a government to come after its critics. But largely overlooked in the worthy outrage over the bald-faced attempt to punish a dissenting voice was the government’s simultaneous attempt to prevent Twitter from telling anyone that the government was demanding this information. Because Twitter refused to comply with that demand, the affected user was able to get counsel and the world was able to know how the government was abusing its authority. As the saying goes, sunlight is the best disinfectant, and shining a light on the government’s abusive behavior is what stopped it.

That storm may have blown over, but the general issues raised by the incident continue to affect Internet platforms – and by extension their users and their speech. A significant problem we keep having to contend with is not only what happens when the government demands information about users from platforms, but what happens when it then compels the same platforms to keep those demands a secret. These secrecy demands are often called different things and are born from separate statutory mechanisms, but they all boil down to being some form of gag over the platform’s ability to speak, with the same equally troubling implications.

We’ve talked before about how important it is that platforms be able to protect their users’ right to speak anonymously. That right is part and parcel of the First Amendment because there are many people who would not be able to speak if they were forced to reveal their identities in order to do so. Public discourse, and the benefit the public gets from it, would then suffer in the absence of their contributions.

But it’s one thing to say that people have the right to speak anonymously; it’s another to make that right meaningful. If civil plaintiffs, or, worse, the government, can too easily force anonymous speakers to be unmasked then the right to speak anonymously will only be illusory. For it to be something speakers can depend on to enable them to speak freely there have to be effective barriers preventing that anonymity from too casually being stripped by unjust demands. Continue reading »

Nov 04 2017

The following post originally appeared on Techdirt on 10/27/17.

It isn’t unusual or unwarranted for Section 230 to show up as a defense in situations where some might not expect it. Its basic principles may apply to more situations than may be readily apparent. But to appear as a defense in the Cockrum v. Campaign for Donald Trump case is pretty unexpected. From page 37 of the campaign’s motion to dismiss the case against it, the following two paragraphs are what the campaign slipped in on the subject:

Plaintiffs likewise cannot establish vicarious liability by alleging that the Campaign conspired with WikiLeaks. Under section 230 of the Communications Decency Act (47 U.S.C. § 230), a website that provides a forum where “third parties can post information” is not liable for the third party’s posted information. Klayman v. Zuckerberg, 753 F.3d 1354, 1358 (D.C. Cir. 2014). That is so even when the website performs “editorial functions” “such as deciding whether to publish.” Id. at 1359. Since WikiLeaks provided a forum for a third party (the unnamed “Russian actors”) to publish content developed by that third party (the hacked emails), it cannot be held liable for the publication.

That defeats the conspiracy claim. A conspiracy is an agreement to commit “an unlawful act.” Paul v. Howard University, 754 A.2d 297, 310 (D.C. 2000). Since WikiLeaks’ posting of emails was not an unlawful act, an alleged agreement that it should publish those emails could not have been a conspiracy.

This is the case brought against the campaign for allegedly colluding with WikiLeaks and the Russians to disclose the plaintiffs’ private information as part of the DNC email trove that ended up on WikiLeaks. Like Eric Goldman, who has an excellent post on the subject, I’m not going to go into the relative merits of the lawsuit itself, but I would note that it is worth consideration. Even if it’s true that the Trump campaign and WikiLeaks were somehow in cahoots to hack the DNC and publish the data taken from it, whether and how the consequences of that disclosure can be recognized by law is a serious issue, as is whether this particular lawsuit by these particular plaintiffs with these particular claims is one that the law can permit to go forward without causing collateral effects to other expressive endeavors, including whistleblower journalism generally. On these points there may or may not be issues with the campaign’s motion to dismiss overall. But the shoehorning of a Section 230 argument into its defensive strategy seems sufficiently weird and counterproductive to be worth commenting on in and of itself. Continue reading »

Nov 04 2017

The following post first appeared on Techdirt on 10/25/17.

The last two posts I wrote about SESTA discussed how, if it passes, it will result in collateral damage to the important speech interests Section 230 is intended to protect. This post discusses how it will also result in collateral damage to the important interests that SESTA itself is intended to protect: those of vulnerable sex workers.

Concerns about how SESTA would affect them are not new: several anti-trafficking advocacy groups and experts have already spoken out about how SESTA, far from ameliorating the risk of sexual exploitation, will only exacerbate it, in no small part because the bill disables one of the best tools for fighting it: the Internet platforms themselves:

[Using the vilified Backpage as an example, inasmuch as] Backpage acts as a channel for traffickers, it also acts as a point of connection between victims and law enforcement, family, good samaritans, and NGOs. Countless news reports and court documents bear out this connection. A quick perusal of news stories shows that last month, a mother found and recovered her daughter thanks to information in an ad on Backpage; a brother found his sister the same way; and a family alerted police to a missing girl on Backpage, leading to her recovery. As I have written elsewhere, NGOs routinely comb the website to find victims. Nicholas Kristof of the New York Times famously “pulled out [his] laptop, opened up Backpage and quickly found seminude advertisements for [a victim], who turned out to be in a hotel room with an armed pimp,” all from the victim’s family’s living room. He emailed the link to law enforcement, which staged a raid and recovered the victim.

And now there is yet more data confirming what these experts have been saying: when there have been platforms available to host content for erotic services, it has decreased the risk of harm to sex workers. Continue reading »

Oct 21 2017

The following is the second in a pair of posts on Techdirt about how SESTA’s attempt to carve out “trafficking” from Section 230’s platform protection threatens legitimate online speech having nothing to do with actual harm to trafficking victims.

Think we’re unduly worried about how “trafficking” charges will get used to punish legitimate online speech? We’re not.

A few weeks ago a Mississippi mom posted an obviously joking tweet offering to sell her three-year-old for $12.

I tweeted a funny conversation I had with him about using the potty, followed by an equally-as-funny offer to my followers: 3-year-old for sale. $12 or best offer.

The next thing she knew, Mississippi authorities decided to investigate her for child trafficking.

The saga began when a caseworker and supervisor from Child Protection Services dropped by my office with a Lafayette County sheriff’s deputy. You know, a typical Monday afternoon.

They told me an anonymous male tipster called Mississippi’s child abuse hotline days earlier to report me for attempting to sell my 3-year-old son, citing a history of mental illness that probably drove me to do it.

Beyond notifying me of the charges, they said I’d have to take my son out of school so they could see him and talk to him that day, presumably protocol to ensure children aren’t in immediate danger. So I went to his preschool, pulled my son out of a deep sleep during naptime, and did everything in my power not to cry in front of him on the drive back to my office.

All of this for a joke tweet.

This story is bad enough on its own. As it stands now, actions by the Mississippi authorities will chill other Mississippi parents from blowing off steam with facetious remarks on social media. But at least the chilling harm is contained within Mississippi’s borders. If SESTA passes, that chill will spread throughout the country. Continue reading »

Oct 21 2017

The following is the first of a pair of posts on SESTA highlighting how carving out an exception to Section 230’s platform protection for sex trafficking rips a huge hole in the critical protection for online speech that Section 230 in its current form provides.

First, if you are someone who likes stepped-up ICE immigration enforcement and does not like “sanctuary cities,” you might cheer the implications of this post, but it isn’t otherwise directed at you. It is directed at the center of the political Venn diagram of people who both feel the opposite about these immigration policies and yet are also championing SESTA. Because this news from Oakland raises the specter of a horrific implication for online speech championing immigrant rights if SESTA passes: the criminal prosecution of the platforms which host that discussion.

Much of the discussion surrounding SESTA is based on some truly horrific tales of sex abuse, crimes that more obviously fall under what the human trafficking statutes are clearly intended to address. But with news that ICE is engaging in a very broad reading of the type of behavior the human trafficking laws might cover and prosecuting anyone who happens to help an immigrant, it’s clear that the type of speech that SESTA will carve out from Section 230’s protection will go far beyond the situations the bill originally contemplated. Continue reading »

Oct 21 2017

The following was posted on Techdirt on 10/16/17.

In the wake of the news about Harvey Weinstein’s apparently serial abuse of women, and the news that several of his victims were unable to tell anyone about it due to a non-disclosure agreement, the New York legislature is considering a bill to prevent such NDAs from being enforceable in New York state. According to the BuzzFeed article, the bill as currently proposed still allows a settlement agreement to demand that the recipient of a settlement not disclose how much they settled for, but it can’t put the recipient of a settlement in jeopardy of needing to compensate their abuser if they choose to talk about what happened to them.

It’s not the first time a state has imposed limits on the things that people can contract for. California, for example, has a law that generally makes non-compete agreements invalid. Even Congress has now passed a law banning contracts that limit consumers’ ability to complain about merchants. Although, as we learn in law school, there are some Constitutional disputes about how unfettered the freedom to contract should be in the United States, there has also always been the notion that some contractual demands are inherently “void as against public policy.” In other words, go ahead and write whatever contractual clause you want, but they aren’t all going to be enforceable against the people you want to force to comply with them.

As with the federal Consumer Review Fairness Act mentioned above, the proposed New York bill recognizes that there is a harm to the public interest when people cannot speak freely. When bad things happen, people need to know about them if they are to protect themselves. And it definitely isn’t consistent with the public interest if the people doing the bad things can stop people from knowing that they’ve been doing them. These NDAs have essentially had the effect of letting bad actors pay money for the ability to continue the bad acts, and this proposed law is intended to take away that power.

As with any law the devil will be in the details (for instance, this proposed bill appears to apply only to non-disclosure clauses in the employment context, not more broadly), and it isn’t clear whether this one, as written, might cause some unintended consequences. For instance, there might theoretically be the concern that without a gag clause in a settlement agreement it might be harder for victims to reach agreements that would compensate them for their injury. But as long as victims of other people’s bad acts can be silenced as a condition of being compensated for those bad acts, and that silence enables there to be yet more victims, then there are already some unfortunate consequences for a law to try to address.

Oct 03 2017

I always knew, even before I applied to college, that I wanted to be a mass communications major.  At UC Berkeley (where I went) the major required choosing from several prerequisites.  On a lark, I decided to take Sociology 1.

As a major portion of our grade, we needed to do some sort of social research project.  I was new to the Bay Area and surprised to see how many panhandlers congregated near the BART stations in San Francisco.  So I decided to research commuters’ attitudes towards giving money to them.

My classmate and I put together a one-page survey that collected some broad demographic data (age, sex, general income level, etc.) and then asked several questions about donation habits.  Then we set out for a BART station to distribute our survey to evening commuters.

Our goal was to give a survey to everyone we could, but we also had some sense of not wanting to skew the data we collected by accidentally giving the survey to, say, more men than women.  So we tried to passively make sure we were giving it out in relatively equal numbers to both.  And from 5pm to 6pm that was easy.  But once 6pm rolled around, all of a sudden we noticed that we couldn’t find many women to give it to.  Male commuters vastly outnumbered them.  We administered the survey on two evenings, and both times made the same observation.

Nonetheless we persevered, and managed to collect 100 usable surveys, of which 50 ultimately turned out to be from men and 50 from women.  But then we noticed another gender difference:

Of those 50 men, 27 reported earning more than $50,000 a year.

Of those 50 women: 6.
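(As an aside for the statistically minded: that split is far too lopsided to be sampling noise. Here is a quick chi-square sketch over those same counts. This is my illustration for this post, not a calculation from the original paper.)

```python
# Pearson chi-square test of independence for the 2x2 gender-by-income
# split reported above: 27 of 50 men vs. 6 of 50 women earning > $50k.

def chi_square_2x2(a, b, c, d):
    """Chi-square statistic for the 2x2 contingency table [[a, b], [c, d]]."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    # Expected count for each cell is (row total * column total) / n.
    cells = [
        (a, row1 * col1 / n), (b, row1 * col2 / n),
        (c, row2 * col1 / n), (d, row2 * col2 / n),
    ]
    return sum((obs - exp) ** 2 / exp for obs, exp in cells)

# Men: 27 above $50k, 23 below; women: 6 above, 44 below.
stat = chi_square_2x2(27, 23, 6, 44)
print(round(stat, 2))  # prints 19.95, far above the 3.84 cutoff for p < 0.05 at df = 1
```

In other words, even in a sample of only 100 commuters, a gap this wide would essentially never appear by chance if income and gender were unrelated.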

And this is why I became a sociologist.  Because while I firmly believe that people are all individuals capable of free will, it is clear that there are unseen forces that affect their decisions.  Sociology is about revealing what those forces are.

The paper we wrote is now lost to history (or lost in an inaccessible attic somewhere, which is essentially the same thing), but my recollection is that the data revealed yet another gender difference: as men grew more wealthy they tended to give less, whereas for women, the trend was the opposite.  Based on the written comments we got back we surmised that poorer men had a greater sense of empathy for those needing handouts, and wealthier women a greater sense of freedom to be able to afford to help.

But whatever the result and whatever the reason, the takeaway from the project I still carry with me was that we need to pay attention to those invisible forces, particularly in policy discussions.  We can’t simply demand that people act differently than they do: we need to understand why they act as they do and what needs to change for them to be able to choose to act differently.

Aug 22 2017

The following is a cross-post of something I wrote on Techdirt last week.  Some people have taken issue with the fact that I did not fully analyze exactly how VARA (see below) would specifically apply to the Confederate monuments, but that wasn’t the point.  The point was that we added something to copyright law that could easily interact with public art controversies, and in a way that is not going to make them any easier to sort out.

There’s no issue of public interest that copyright law cannot make worse. So let me ruin your day by pointing out there’s a copyright angle to the monument controversy: the Visual Artists Rights Act (VARA), a 1990 addition to the copyright statute that allows certain artists to control what happens to their art long after they’ve created it and no longer own it. Techdirt has written about it a few times, and it was thrust into the spotlight this year during the controversy over the Fearless Girl statue.

Now, VARA may not be specifically applicable to the current controversy. For instance, it’s possible that at least some of the Confederacy monuments in question are too old to be subject to VARA’s reach, or, if not, that all the i’s were dotted on the paperwork necessary to avoid it. (It’s also possible that neither is the case — VARA may still apply, and artists behind some of the monuments might try to block their removal.) But it would be naïve to believe that we’ll never ever have monument controversies again. The one thing VARA gets right is an acknowledgement of the power of public art to be reflective and provocative. But how things are reflective and provocative to a society can change over time as the society evolves. As we see now, figuring out how to handle these changes can be difficult, but at least people in the community can make the choice, hard though it may sometimes be, about what art they want in their midst. VARA, however, takes away that discretion by giving it to someone else who can trump it (so to speak).

Of course, as with any law, the details matter: what art was it, whose art was it, where was it, who paid for it, when was it created, who created it, and is whoever created it dead yet… all these questions matter in any situation dealing with the removal of a public art installation because they affect whether and how VARA actually applies. But to some extent the details don’t matter. While in some respects VARA is currently relatively limited, we know from experience that limited monopolies in the copyright space rarely stay so limited. What matters is that we created a law that is expressly designed in its effect to undermine the ability of a community with art in its midst to decide whether it wants to continue to have that art in its midst, and thought that was a good idea. Given the power of art to be a vehicle of expression, even political expression or outright propaganda, allowing any law to etch that expression in stone (as it were) is something we should really rethink.