Sep 01, 2018

This post originally appeared on Techdirt on 3/16/18.

It’s become quite fashionable these days to gripe about the Internet. Even some of its staunchest allies in Congress have been getting cranky. Naturally there are going to be growing pains as humanity adapts to the unprecedented ability for billions of people to communicate with each other easily, cheaply, and immediately for the first time in world history. But this communications revolution has also brought some extraordinary benefits, which we glibly put at risk when we forget about them and focus only on the challenges. This glass is way more than half full but, if we’re not careful to protect it, soon it will be empty.

As we’ve been discussing a lot recently, a bill now working its way through Congress, SESTA/FOSTA, is so fixated on perceived problems with the Internet (even though there’s no evidence the Internet itself caused these problems) that it threatens the Internet’s ability to deliver its benefits, including the very tools that could help address some of those perceived problems. If anything, it may make those same problems worse by taking away the Internet’s ability to help. And it won’t be the last such bill, as long as the regulatory pile-on bent on disabling the Internet is allowed to proceed unchecked.

As the saying too often goes, you don’t know what you’ve got till it’s gone. But this time let’s not wait to lose it; let’s take the opportunity to appreciate all the good the Internet has given us, so we can hold on tight to it and resist efforts to take it away.

Towards that end, we want to encourage the sharing and collection of examples of how the Internet has made the world better: how it made it better for everyone, and how it even just made it better for you, and whether it made things better for good, or for even just one moment in one day when the Internet enabled some connection, discovery, or opportunity that could not have happened without it. It is unlikely that this list could be exhaustive: the Internet delivers its benefits too frequently and often too seamlessly to easily recognize them all. But that’s why it’s all the more important to go through the exercise of reflecting on as many as we can, because once they become less frequent and less seamless they will be much easier to miss and much harder to get back.

Sep 01, 2018

This post originally appeared on Techdirt on 2/26/18.

These days a lot of people are upset with Facebook, along with many other of its fellow big Internet companies. Being upset with these companies can make it tempting to try to punish them with regulation that might hurt them. But it does no good to punish them with regulation that will end up hurting everyone – including you.

Yet that’s what the bill Congress is about to vote on will do. SESTA (or sometimes SESTA-FOSTA) would make changes that reduce the effectiveness of Section 230 of the Communications Decency Act. While a change to this law would certainly hurt the Facebooks of the world, it is not just the Facebooks that should care. You should too, and here’s why.

Section 230 is a federal statute that says that people who use the Internet are responsible for how they use it—but only those people are, and not those who provide the services that make it possible for people to use the Internet in the first place. The reason it’s important to have this law is that so many people – hundreds, thousands, millions, if not billions of people – use these services to say or do so many things on the Internet. Of course, the reality is, sometimes people use these Internet services to say or do dumb, awful, or even criminal things, and naturally we have lots of laws to punish these dumb, awful, or criminal things. But think about what it would mean for Internet service providers if all those laws that punish bad ways people use the Internet could be directed at them. Even for big companies like Facebook it would be impossibly expensive to have to defend themselves every time someone used their services in these unfortunate ways. Section 230 means that they don’t have to, and that they can remain focused on providing Internet services for all the hundreds, thousands, millions, if not billions of people – including people like you – who use their services in good ways.

If, however, Section 230 stops effectively protecting these service providers, then they will have to start limiting how people can use their services because it will be too expensive to risk letting anyone use their services in potentially wrongful ways. And because it’s not possible for Internet service providers to correctly and accurately filter the sheer volume of content they intermediate, they will end up having to limit too much good content in order to make sure they don’t end up in trouble for having limited too little of the bad.

This inevitable censorship should matter to you even if you are not a Facebook user, because it won’t just be Facebook that will be forced to censor how you use the Internet. Ever bought or sold something online? Rented an apartment? Posted or watched a video? Found anything useful through a search engine? Your ability to speak, learn, buy, sell, complain, organize, or do anything else online depends on Internet services being able to depend on Section 230 to let you. It isn’t just the big commercial services like Facebook who need Section 230, but Internet service providers of all shapes and sizes, including broadband ISPs, email providers, online marketplaces, consumer review sites, fan forums, online publications that host user comments… Section 230 even enables non-commercial sites like Wikipedia. Because Wikipedia is a giant collection of information other people have provided, if Section 230’s protection evaporates, then so will Wikipedia’s ability to provide this valuable resource.

Diminishing Section 230’s protection affects not only your ability to use existing Internet services but your ability to use future ones too. There’s a reason so many Internet companies are based in the United States, where Section 230 has made it safe for start-ups to develop innovative services without fear of crippling liability, and then grow into successful businesses employing thousands. Particularly if you dislike Facebook you should fear a future without Section 230: big companies can afford to take some lumps, but without Section 230’s protection, good luck ever getting a new service that’s any better.

And that’s not all: weakening Section 230 not only hurts you by hurting Internet service providers; it also hurts you directly. Think about emails you forward. Comment threads you allow on Facebook posts. Tweets you retweet. These are all activities Section 230 can protect. After all, you’re not the person who wrote the original emails, comments, or tweets, so why should you get in trouble if the original author said or did something dumb, awful, or even criminal in those emails, comments, or tweets? Section 230 makes many of the ordinary ways you use the Internet possible, but without it all bets are off.

Sep 01, 2018

This post originally appeared on Techdirt on 2/3/18.

With the event at Santa Clara earlier this month, and the companion essays published here, we’ve been talking a lot lately about how platforms moderate content. It can be a challenging task for a platform to figure out how to balance dealing with the sometimes troubling content it can find itself intermediating on the one hand and free speech concerns on the other. But at least, thanks to Section 230, platforms have been free to do the best they could to manage these competing interests. However you may feel about how they make these decisions now, those decisions would not come out any better without that statutory protection insulating platforms from legal consequence when they do not opt to remove absolutely everything that could invite trouble. If platforms had to contend with the specter of liability in making these decisions, they would inevitably play a much more censorious role, at the expense of legitimate user speech.

Fearing such a result is why the Copia Institute filed an amicus brief at the Ninth Circuit last year in Fields v. Twitter, one of the many “how dare you let terrorists use the Internet” cases that keep getting filed against Internet platforms. While it’s problematic that they keep getting filed, they have fortunately not tended to get very far. I say “fortunately,” because although it is terrible what has happened to the victims of these attacks, if platforms could be liable for what terrorists do it would end up chilling platforms’ ability to intermediate any non-terrorist speech. Thus we, along with the EFF and the Internet Association (representing many of the bigger Internet platforms), had all filed briefs urging the Ninth Circuit to find, as the lower courts have tended to, that Section 230 insulates platforms from these types of lawsuits.

A few weeks ago the Ninth Circuit issued its decision. The good news is that this decision affirms that the end has been reached in this particular case and hopefully will deter future ones. However, the court did not base its reasoning on Section 230. That was somewhat disappointing, because we saw this case as an important opportunity to buttress Section 230’s critical statutory protection; but by not speaking to Section 230 at all, the court also didn’t undermine it, and the way it ruled isn’t actually bad. By focusing instead on the language of the Anti-Terrorism Act itself (the statute barring the material support of terrorists), the court was still able to lessen the specter of legal liability that would otherwise chill platforms and force them to censor more speech.

In fact, it may even be better that the court ruled this way. The result is not fundamentally different than what a decision based on Section 230 would have led to: just as the court found the ATA would have required some direct furtherance of the terrorist act by the platform, Section 230 would have required the platform’s direct involvement in the creation of user content furthering the act in order for the platform to potentially be liable for its consequences. But the more work Section 230 does to protect platforms legally, the more annoyed people seem to get at it politically. So by not being relevant to the adjudication of these sorts of tragic cases it won’t throw more fuel on the political fire seeking to undermine the important speech-protective work Section 230 does, and it hopefully will remain safely on the books for the next time we need it.

Sep 01, 2018

This post originally appeared on Techdirt on 1/22/18.

Shortly after Trump was elected I wrote a post predicting how things might unfold on the tech policy front with the incoming administration. It seems worth taking stock, now almost a year into it, to see how those predictions may have played out.

Jul 06, 2017

The following was originally posted on Techdirt.

Sunday morning I made the mistake of checking Twitter first thing upon waking up. As if just a quick check of Twitter would ever be possible during this administration… It definitely wasn’t this past weekend, because waiting for me in my Twitter stream was Trump’s tweet of the meme he found on Reddit showing him physically beating the crap out of a personified CNN.

But that’s not what waylaid me. What gave me pause were all the people demanding it be reported to Twitter for violating its terms of service. The fact that so many people thought that was a good idea worries me, because the expectation that when bad speech happens someone will make it go away is not a healthy one. My concern inspired a tweet storm, which has now been turned into this post.

Dec 17, 2016

The following was recently published on Techdirt, although with a different title.

Regardless of what one thinks about the apparent result of the 2016 election, it will inevitably present a number of challenges for America and the world. As Mike wrote about last week, these challenges will touch on many of the tech policy issues often discussed here. The following is a closer look at some of the implications (and opportunities) with respect to several of them, given the unique hallmarks of Trump and his proposed administration.

Apr 08, 2016

The following is Section III.C of the comment I submitted in the Copyright Office’s study on the operation of Section 512 of the copyright statute.

Questions #16 and #17 more specifically contemplate the effectiveness of the put-back process articulated at subsection 512(g).  As explained in Section III.B, this mechanism is not effective for restoring wrongfully removed content and is little used.  But it is worth taking a moment here to further explore the First Amendment harms wrought on both Internet users and service providers by the DMCA.[1]

It is part and parcel of First Amendment doctrine that people are permitted to speak, and to speak anonymously.[2]  Although that anonymity can be stripped in certain circumstances, there is nothing about the allegation of copyright infringement that should cause it to be stripped automatically.  Particularly in light of copyright law incorporating free speech principles,[3] this anonymity cannot be more fragile than it would be in any other circumstance where speech was subject to legal challenge.  The temptation to characterize all alleged infringers as malevolent pirates who get what they deserve must be resisted; the DMCA targets all speakers and all speech, no matter how fair or necessary to public discourse this speech is.

And yet, with the DMCA, not only is speech itself more vulnerable to censorship via copyright infringement claim than it would be for other types of allegations,[4] but so are the necessary protections speakers depend on to be able to speak.[5]  Between the self-identification requirements of subsection 512(g) put-back notices and the ease of demanding user information with subsection 512(h) subpoenas, which also do not need to be predicated on actual lawsuits,[6] Internet speakers on the whole must fear the loss of their privacy if anyone dares to raise an infringement claim, no matter how illegitimate or untested that claim may be.  Given the ease of concocting an invalid infringement claim,[7] and the lack of any incentive not to,[8] the DMCA gives all-too-ready access to the identities of Internet users to the people least deserving of it, and at the expense of those who most need it.[9]

Furthermore, the DMCA also compromises service providers’ own First Amendment interests in developing the forums and communities they would so choose.  The very design of the DMCA puts service providers at odds with their users, forcing them to be antagonistic to their own customers and their own business interests as a condition for protecting those interests.  Attempts to protect their forums or their users can expose them to tremendous costs and potentially incalculable risk, and all of this harm flows from mere allegation that never need be tested in a court of law.  The DMCA forces service providers to enforce censorship compelled by a mere takedown notice, compromise user privacy in response to subsection 512(h) subpoenas (or devote significant resources to trying to quash them), and, vis-à-vis Questions #22 and #23, disconnect users according to termination policies whose sufficiency cannot be known until a court decides they were insufficient.[10]

The repeat infringer policy requirement of subsection 512(i)(A) exemplifies the statutory problem with many of the DMCA’s safe harbor requirements.  A repeat infringer policy might only barely begin to be legitimate if it applied to the disconnection of a user after a certain number of judicial findings of liability for acts of infringement that the user had used the service provider to commit.  But at least one service provider has lost its safe harbor for not permanently disconnecting users after only a certain number of allegations, even though those allegations had never been tested in a court consistent with the principles of due process or the prohibition on prior restraint.[11]

In no other context would we find these sorts of government incursions against the rights of speakers constitutional, robbing them of their speech, anonymity, and the opportunity to further speak, without adequate due process.  These incursions do not suddenly become constitutionally sound just because the DMCA coerces service providers to be the agent committing these acts instead.

Apr 06, 2016

The following is Section III.A of the comment I submitted in the Copyright Office’s study on the operation of Section 512 of the copyright statute.

Question #1 asks whether Section 512 safe harbors are working as intended, and Question #5 asks the related question of whether the right balance has been struck between copyright owners and online service providers.  To the extent that service providers have been insulated from the costs associated with liability for their users’ content, the DMCA, with its safe harbors, has been a good thing.  But the protection is all too often too complicated to achieve, too expensive to assert, or otherwise too illusory for service providers to be adequately protected.

Relatedly, Question #2 asks whether courts have properly construed the entities and activities covered by the safe harbor, and the answer is not always.  But the problem here is not just that they have sometimes gotten it wrong but that there is too often the possibility for them to get it wrong.  Whereas under Section 230 questions of intermediary liability for illegality in user-supplied content are relatively straightforward – was the intermediary the party that produced the content? if not, then it is not liable – when the alleged illegality in others’ content relates to potential copyright infringement, the test becomes a labyrinthine minefield that the service provider may need to endure costly litigation to navigate.  Not only is ultimate liability expensive, but even the process of ensuring that a service provider won’t face that liability can be crippling.[1]  Service providers, and investors in service providers, need a way to minimize and manage the legal risk and associated costs arising from their provision of online services, but given the current complexity[2] of the safe harbor requirements they can rarely be so confidently assured.

Apr 05, 2016

The following is Section II.C of the comment I submitted in the Copyright Office’s study on the operation of Section 512 of the copyright statute.

Veoh was a video hosting service akin to YouTube that was found to be eligible for the DMCA safe harbor.[1]  Unfortunately this finding was reached after years of litigation had already driven the company into bankruptcy and forced it to lay off its staff.[2]  Meanwhile SeeqPod was a search engine that helped people (including potential consumers) find multimedia content out on the Internet, but it, too, was driven into bankruptcy by litigation, taking with it an important tool to help people discover creative works.[3]

History is littered with examples like the ones above of innovative new businesses being driven out of existence, their innovation and investment chilled, by litigation completely untethered from the principles underpinning copyright law.  Copyright law exists solely to “promote the progress of science and the useful arts.”  Yet all too frequently it has had the exact opposite effect.

The DMCA has the potential to be a crucial equalizer, but it can only be one when the economic value of what these service providers deliver is considered by policymakers with at least as much weight as that given to the incumbent interests who complain that their previous business models may have become unworkable in light of digital technology.  Service providers are economic engines employing innumerable people, directly and indirectly, and driving innovation forward while they deliver a world of information to each and every Internet user.  We know that when service providers are not protected, economic harm is done to them and to everyone – creators and consumers alike – who would have benefited from their services.

But what needs careful scrutiny and testing – with reviewable data and auditable methodology – are economic arguments predicated on the assumption that every digital copy of every copyrighted work transmitted online without the explicit permission of a copyright holder represents a financial loss.  It is quite a leap to assume that every instance (or even most instances) of people consuming “pirated” copyrighted works is an instance where they would otherwise have paid the creator.  For example, it tends to presume that people have unlimited amounts of money to spend on unlimited numbers of copyrighted works, and it also ignores the fact that some works may only be consumable at a price point of $0, which is something that institutions like libraries and over-the-air radio have long enabled, to the betterment of creators and the public beneficiaries of creative works alike.  Furthermore, even in instances when people would be willing to pay for access to a work, copyright owners may not be offering it at any price, nor are they necessarily equitably sharing the revenues derived from creative works with the actual creators whose efforts require the remuneration.[4]

The DMCA does not adjust to reflect situations like these, nor does it incentivize copyright holders to correct their own self-induced market failures.  On the contrary: it allows them to deprive the public of access to their works and to threaten the service providers enabling that access with extinction if they do not assist in disabling it. None of these outcomes is consistent with the goals and purpose of copyright in general, and care must be taken not to let the DMCA be a law that ensures them.

Apr 04, 2016

The following is Section II.B of the comment I submitted in the Copyright Office’s study on the operation of Section 512 of the copyright statute.

Despite all the good that Section 230 and the DMCA have done to foster a robust online marketplace of ideas, the DMCA’s potential to deliver that good has been tempered by the particular structure of the statute.  Whereas Section 230 provides a firm immunity to service providers for potential liability in user-supplied content,[1] the DMCA conditions its protection.[2]  And that condition is censorship.  The irony is that while the DMCA makes it possible for service providers to exist to facilitate online speech, it does so at the expense of the very speech they exist to facilitate due to the notice and takedown system.

In a world without the DMCA, anyone who wanted to enjoin content would need to demonstrate to a court that they indeed owned a valid copyright and that the use of the content in question infringed that copyright before a court would compel its removal.  Thanks to the DMCA, however, they are spared both the procedural and the pleading burdens.  To cause content to be disappeared from the Internet, all anyone needs to do is send a takedown notice that merely points to content and claims it as theirs.

Although some courts are now requiring takedown notice senders to consider whether the use of the content in question was fair,[3] there is no real penalty for the sender if they get it wrong or don’t bother.[4]  Instead, service providers are forced to become judge and jury, even though (a) they lack the information needed to properly evaluate copyright infringement claims,[5] (b) the sheer volume of takedown notices often makes case-by-case evaluation of them impossible, and (c) it can be a bet-the-company decision if the service provider gets it wrong, because their “error” may deny them the safe harbor and put them on the hook for infringement liability.[6]  Although there is both judicial and statutory recognition that service providers are not in the position to police user-supplied content for infringement,[7] there must also be recognition that they are similarly not in the position to police for invalid takedowns.  Yet they must, lest there be no effective check on these censorship demands.

Ordinarily the First Amendment and due process would not permit this sort of censorship, the censorship of an Internet user’s speech predicated on mere allegation.  Mandatory injunctions are disfavored generally,[8] and particularly so when they target speech and may represent impermissible prior restraint on speech that has not yet been determined to be wrongful.[9]  To the extent that the DMCA causes these critical speech protections to be circumvented, it is consequently only questionably constitutional.  For the DMCA to remain valid it must retain, in its drafting and interpretation, ample protection to see that these important constitutional speech protections are not ignored.