Jul 06, 2017

The following was originally posted on Techdirt.

Sunday morning I made the mistake of checking Twitter first thing upon waking up. As if just a quick check of Twitter would ever be possible during this administration… It definitely wasn’t this past weekend, because waiting for me in my Twitter stream was Trump’s tweet of the meme he found on Reddit showing him physically beating the crap out of a personified CNN.

But that’s not what waylaid me. What gave me pause were all the people demanding it be reported to Twitter for violating its terms of service. The fact that so many people thought that was a good idea worries me, because the expectation that when bad speech happens someone will make it go away is not a healthy one. My concern inspired a tweet storm, which has now been turned into this post.

May 26, 2017

The following was cross-posted on Techdirt.

We often talk about how protecting online speech requires protecting platforms, such as with Section 230 immunity and the safe harbors of the DMCA. But these statutory shields are not the only way the law needs to protect platforms in order to make sure the speech they carry is also protected.

Earlier this month, I helped Techdirt’s think tank arm, the Copia Institute, file an amicus brief in support of Yelp in a case called Montagna v. Nunis. Like many platforms, Yelp lets people post content anonymously. Often people are only willing to speak when they can do so without revealing who they are (note how many people participate in the comments here without revealing their real names), which is why the right to speak anonymously has been found to be part and parcel of the First Amendment right of free speech. It’s also why sites like Yelp let users post anonymously: often that’s the only way they will feel comfortable posting reviews candid enough to be useful to those who depend on sites like Yelp to help them make informed decisions.

But as we also see, people who don’t like the things said about them often try to attack their critics, and one way they do this is by trying to strip these speakers of their anonymity. True, sometimes online speech can cross the line and actually be defamatory, in which case being able to discover the identity of the speaker is important. Nothing in this case prevents legitimately aggrieved plaintiffs from using subpoenas to discover the identity of those whose unlawful speech has injured them so that they can sue for relief. Unfortunately, however, it is not just people with legitimate claims who are sending subpoenas; in many instances they are being sent by people objecting to speech that is perfectly legal, and that’s a problem. Unmasking the speakers behind protected speech not only violates their First Amendment right to speak anonymously; it also chills speech generally, by making the critical anonymity protection that plenty of legal speech depends on suddenly illusory.

There is a lot that can and should be done to close off this vector of attack on free speech. One important measure is to make sure platforms are able to resist the subpoenas they get demanding they turn over whatever identifying information they have. There are practical reasons why they can’t always fight them — for instance, like DMCA takedown notices, they may simply get too many — but it is generally in their interest to try to resist illegitimate subpoenas targeting the protected speech posted anonymously on their platforms so that their users will not be scared away from speaking on their sites.

But when Yelp tried to resist the subpoena connected with this case, the court refused to let them stand in to defend the user’s speech interest. Worse, it sanctioned(!) Yelp for even trying, thus making platforms’ efforts to stand up for their users even more risky and expensive than they already are.

So Yelp appealed, and we filed an amicus brief supporting their effort. Fortunately, earlier this year Glassdoor won an important California State appellate ruling that validated attempts by platforms to quash subpoenas on behalf of their users. That decision discussed why the First Amendment and California State Constitution required platforms to have this ability to quash subpoenas targeting protected speech, and hopefully this particular appeals court will agree with its sister court and make clear that platforms are allowed to fight off subpoenas like this. As we pointed out in our brief, both state and federal law and policy require online speech to be protected, and preventing platforms from resisting subpoenas is out of step with those stated policy goals and constitutional requirements.

Dec 17, 2016

The following was recently published on Techdirt, although with a different title.

Regardless of what one thinks about the apparent result of the 2016 election, it will inevitably present a number of challenges for America and the world. As Mike wrote about last week, those challenges will touch on many of the tech policy issues often discussed here. The following is a closer look at some of the implications (and opportunities) with respect to several of them, given the unique hallmarks of Trump and his proposed administration.

Comments on DMCA Section 512: The DMCA functions as a system of extra-judicial censorship

Apr 04, 2016

The following is Section II.B of the comment I submitted in the Copyright Office’s study on the operation of Section 512 of the copyright statute.

Despite all the good that Section 230 and the DMCA have done to foster a robust online marketplace of ideas, the DMCA’s potential to deliver that good has been tempered by the particular structure of the statute. Whereas Section 230 provides service providers a firm immunity from potential liability in user-supplied content,[1] the DMCA conditions its protection.[2] And that condition is censorship. The irony is that while the DMCA makes it possible for service providers to exist to facilitate online speech, its notice and takedown system makes them do so at the expense of the very speech they exist to facilitate.

In a world without the DMCA, someone who wanted to enjoin content would need to demonstrate to a court that they indeed owned a valid copyright and that the use of the content in question infringed that copyright before a court would compel its removal. Thanks to the DMCA, however, would-be censors are spared both the procedural and the pleading burdens. In order to cause content to be disappeared from the Internet all anyone needs to do is send a takedown notice that merely points to content and claims it as theirs.

Although some courts are now requiring takedown notice senders to consider whether the use of the content in question was fair,[3] there is no real penalty for the sender if they get it wrong or don’t bother.[4] Instead, service providers are forced to become judge and jury, even though (a) they lack the information needed to properly evaluate copyright infringement claims,[5] (b) the sheer volume of takedown notices often makes case-by-case evaluation of them impossible, and (c) it can be a bet-the-company decision if the service provider gets it wrong, because their “error” may deny them the safe harbor and put them on the hook for infringement liability.[6] Although there is both judicial and statutory recognition that service providers are not in a position to police user-supplied content for infringement,[7] there must also be recognition that they are similarly not in a position to police for invalid takedowns. Yet they must, lest there be no effective check on these censorship demands.

Ordinarily the First Amendment and due process would not permit this sort of censorship, the censorship of an Internet user’s speech predicated on mere allegation. Mandatory injunctions are disfavored generally,[8] and particularly so when they target speech and may represent impermissible prior restraint on speech that has not yet been determined to be wrongful.[9] To the extent that the DMCA causes these critical speech protections to be circumvented, it is consequently only questionably constitutional. For the DMCA to remain a valid statute it must retain, in its drafting and interpretation, ample protection to ensure that these important constitutional speech protections are not ignored.

Mar 24, 2015

A few months ago an advisory committee for the California State Bar promulgated an interim ethics opinion addressing when lawyers’ blogs should be subject to applicable bar rules governing lawyer advertising.

The impetus behind having bar rules addressing lawyer advertising is generally a reasonable one. The nature of the lawyer-client relationship, the relative imbalance in their respective expertise, and the stress inherent in the sort of situation that would require a lawyer’s assistance all make it important to ensure that lawyers are not misleading or overly aggressive in their solicitation of business. The applicable bar rule regarding lawyer advertising in California is also not especially onerous (although the same may not necessarily be said about similar rules in other jurisdictions).

But a blog is speech, and applying regulation to speech is something that constitutionally can only be done in very limited ways and in very limited circumstances. Yet there is nothing limited about this recommendation. It promulgates a standard that would ultimately catch many, if not most, legal blogs in the California bar’s regulatory net, despite such regulation being unnecessary and chilling to speech that should be beyond the government’s reach.

It’s also simply not a good idea, nor one that serves the public interest.

Copyright’s Not Getting its Job Done (cross-post)

Jan 18, 2014

I wrote the following for the Electronic Frontier Foundation’s blog as part of “Copyright Week” – a push to raise awareness of the key principles that should guide a healthy, constructive, and effective copyright policy.

People sometimes treat copyright law as though it’s a fixed constant in the universe, like gravity. First the Earth cooled, then the dinosaurs came, and then we got copyright. But that’s not the case at all, and it’s important to remember this when we think about what’s gone wrong with the law and how to make it right. Copyright is a relatively recent invention, born out of a particular cultural background and designed to solve a specific problem at a particular point in history. While we might continue to value what it purports to do, we aren’t slaves to its precepts: when copyright law no longer ably solves the original problem, or, indeed, when it creates new ones, we need not wring our hands in frustrated woe. If a law has turned into something that no longer works for us then we should feel free to come up with something else that does.

In order to figure out how to move forward it helps to look to the past. Copyright as we know it is only a few hundred years old. It traces its roots to the “Statute of Anne,” a law passed in early 18th century England to replace an earlier law that gave the government complete control over everything that was published. Naturally this earlier law led to a great deal of censorship, and the push for democratic reform near the end of the 17th century led to demands that it be changed to something less stifling to the marketplace of ideas.

The result was the Statute of Anne, a law described as “[a]n Act for the Encouragement of Learning.” While the law it replaced had been designed to limit what knowledge was available to the public by giving permission to publish to just a few publishers approved by the king, this new law was designed to stimulate the dissemination of knowledge by giving authors themselves the ability to control how what they wrote was published.

The statute did this by granting authors a “copy right” so that they could have first crack at exploiting the market for the works they created and not be at the mercy of publishers who might otherwise help themselves to these works and keep all the profit for themselves. A common rationale for copyright is that people won’t create if it will never be worth their while, so if we want to make sure we do get a lot of creative output we need a system that makes it at least theoretically economically viable to create.

But as we look at our modern copyright law, the distant progeny of the Statute of Anne, it is worth questioning the assumptions wrapped up in it. For one thing, it’s worth asking whether and to what extent people create only when there is a profit motive. The reality is people create all the time, even when there’s no guarantee or expectation of ever being paid for it, and often these works can be just as good as, if not better than, the ones created by people who are being paid. Furthermore, despite what some advocates for stricter copyright law suggest, copyright has never been a promise of financial success. In fact it’s sometimes been a barrier to it, and there are many authors and artists whose influence and commercial appeal took off only after the copyrights on their works had expired and the public could finally get affordable access to them.

It’s also important to recognize that the Statute of Anne sought to achieve its stated goal of encouraging learning in a way that very much reflected the Western European tradition of disseminating knowledge through the written word, and in response to the monopolistic power publishers had at the time to be gatekeepers over that knowledge. But it’s not the only way to skin this particular cat: in other parts of the world oral traditions and norms that encourage copying have allowed cultures to flourish in their own local idiom, without the need for copyright. So when we think about this law we need to recognize how much it reflects the unique time and place from where it arose and not deprive ourselves of the lessons of openness these other approaches teach us. Copyright is not the only solution to promoting the progress of arts and sciences, and it should not be treated as sacrosanct and immune to reform of its increasingly rigid rules, particularly when its current form is no longer reliably achieving its desired end.

The idea behind the Statute of Anne, which was echoed in the US Constitution several decades later, is that society is better off when it has access to as many works of authorship as possible. But as EFF and many others have described this week, the monopolies copyright law grants have gotten broader in their scope and application, longer in their duration, and ultimately less effective, if not completely counter-productive, in encouraging more creativity and enabling the public’s access to the fruits of that creation. The irony is that as a result, as in the period before the Statute of Anne, we find ourselves in a time when government regulation is actually constricting the dissemination of knowledge rather than enhancing it.

But law is not immutable; indeed, the very existence of the Statute of Anne shows how much it can change when it needs to. When, as now, a law no longer fulfills its objectives, it’s time to reshape it into something that does. It’s time to fix copyright law so that it can finally get the job done that it was always intended to do.

Feb 29, 2012

PayPal recently made news for implementing a policy denying its payment processing services to publications that include obscene content. There are several objectionable things about this policy, including the lack of any clear way of delineating what content would qualify as “obscene,” and its overall censorious impact.

But I’m not entirely sure that PayPal is necessarily the appropriate target for criticism of this policy. It may be, to the extent that it is a truly discretionary policy PayPal has voluntarily chosen to pursue. If it could just as easily have chosen not to pursue it, then it can be fairly criticized for the choice it did make. For this policy is not as simple as banning certain objectively horrible content that 100% of all people would agree should be stricken from the face of the earth. After all, there *is* no objectively horrible content 100% of all people would agree is objectionable. Instead this policy has the effect of denying market opportunities to all sorts of writers producing all sorts of valid content, even if some people may not happen to like it. And it does this not just by denying particular publications access to its services but by forcing electronic publishers to overcensor all the works they publish lest PayPal services be shut off to their entire businesses.

Bangladeshi faces sedition charge over Facebook post

Jan 23, 2012

Agence France-Presse is reporting that a Bangladeshi high court has ordered police to prosecute Jahangirnagar University teacher Ruhul Khandakar for sedition as a result of a comment made on Facebook. The comment, since deleted, was “[Famous Bangladeshi filmmaker] Tareq Masud died as a result of government giving licence to unqualified drivers. Many die, why does not [Prime Minister] Sheikh Hasina die?”

He was also sentenced to six months in jail for contempt of court after he failed to respond to repeated summonses to explain the Facebook posting. The article reports that Khandakar has been studying in Australia and that these proceedings took place in his absence.

It also cites a local lawyer saying that this is the first time a Bangladeshi has been ordered to be jailed and tried for sedition over comments made on a social networking site.

Quicklinks 1/21/2012

Jan 21, 2012

Various recent news:

India continues to flirt with web censorship

Jan 13, 2012

Last month Kapil Sibal, acting telecommunications minister for India, floated the proposition that social networks actively filter all content appearing on their systems.  Now comes news that a judge in New Delhi also thinks web censorship appropriate.  From the New York Times:

The comments of the judge, Suresh Kait, came in response to a lawsuit, filed by a private citizen in the capital, New Delhi. The suit demands that Internet companies screen content before it is posted on sites like Facebook, Google or Yahoo, that might offend the religious sentiments of Indians. A related criminal case accuses the companies — 21 in all — of violating an Indian law that applies to books, pamphlets and other material that is deemed to “deprave or corrupt.”

A trial court in New Delhi on Friday ordered that summons be served in the criminal case to officials at all 21 companies at their foreign headquarters’ addresses.

Google and Facebook refused to comment on the case, except to say they had filed a motion in the New Delhi High Court to dismiss the criminal case.

Their motion will be considered on Monday.