Jun 13, 2017

Cross-posted on Techdirt.

The Copia Institute filed another amicus brief this week, this time in Fields v. Twitter. Fields v. Twitter is one of a flurry of cases being brought against Internet platforms alleging that they are liable for the harms caused by terrorists using their sites. The facts in these cases are invariably awful: often people have been brutally killed, and their loved ones are seeking redress for their loss. There is a natural, and perfectly reasonable, temptation to give them some sort of remedy from someone, but as we argued in our brief, that someone cannot be an Internet platform.

There are several reasons for this, including some that have nothing to do with Section 230. For instance, even if Section 230 did not exist and platforms could be liable for the harms resulting from their users’ use of their services, for them to be liable there would have to be a clear connection between the use of the platform and the harm. Otherwise, based on the general rules of tort law, there could be no liability. In this particular case, for instance, there is a fairly weak connection between ISIS members using Twitter and the specific terrorist act that killed the plaintiffs’ family members.

But we left that point to Twitter to ably argue. Our brief focused exclusively on the fact that Section 230 should prevent a court from ever even reaching the tort law analysis. With Section 230, a platform should never find itself having to defend against liability for harm that may have resulted from how people used it. Our concern is that in several recent cases, each with its own terrible facts, the Ninth Circuit in particular has shown itself willing to make exceptions to that rule. So while we were supporting Twitter in this case, trying to help ensure the Ninth Circuit does not overturn the very good District Court decision that correctly applied Section 230 to dismiss the case, we also had an eye toward the longer-term goal of reversing this trend.

The problem is that, like the First Amendment itself, speech protections only work as speech protections when they always work. When exceptions can be found here and there, suddenly none of these protections are effective, and that chills the speech of those who were counting on them, because no one can be sure whether any given speech will ultimately be protected. In the case of Section 230, that chilling arises because if platforms cannot be sure they will be protected from liability for their users’ speech, they will have to assume they are not. Suddenly they will have to make all the censoring choices about their users’ content that Section 230 was designed to prevent, just to avoid the specter of potentially crippling liability.

One of the points we emphasized in our brief was how such an outcome flouts what Congress intended when it passed Section 230. As we said then, and will say again as many times as we need to, the point of Section 230 is to encourage the most beneficial online speech while also minimizing the worst. To see how this dual-purposed intent plays out we need to look at the statute as a whole, beyond the part of it that usually gets the most attention, Subsection (c)(1), which immunizes platforms from liability arising from their users’ speech. There is another, equally important part of the statute, Subsection (c)(2), that immunizes platforms from liability when they take steps to minimize harmful content on their systems. This subsection rarely gets attention, but it’s important not to overlook, especially as people look at the effect of the first subsection and worry that it might encourage too much “bad” speech. Congress anticipated this problem and built in a remedy as part of a balanced approach to encourage the most good speech and the least bad speech. The problem with now holding online services liable for bad uses of their platforms is that it distorts this balance, and in distorting this balance undermines both of these goals.

We used the cases Barnes v. Yahoo and Doe 14 v. Internet Brands to illustrate this point. Both are cases where the Ninth Circuit did make exceptions and found Section 230 not to apply to certain negative uses of Internet platforms. In Barnes, for instance, Section 230 was actually found to apply to the part of the claim directly relating to the speech in question, which was a good result, but the lawsuit also included a promissory estoppel claim, and the Court decided that because that claim was not directly related to liability arising from content it could go forward. The problem was that Yahoo had separately promised to take down certain content, and so the Court found it potentially liable for not having lived up to its promise. But as we pointed out, the effect of Barnes is that platforms now know better than to ever promise to take content down. Congress intended for Section 230 to help Internet platforms perform a hygiene function, keeping the Internet free of the worst content, but by discouraging platforms from going the extra mile the decision has instead had the opposite effect from the one Congress intended. That’s why courts should not continue to find reasons to limit Section 230’s applicability. Even if they think they have good reason to find one, the policy behind that very justification will be better advanced when Section 230’s protection is at its most robust.

We also pointed out that, in terms of the other policy goal behind Section 230, encouraging more online speech, carving exceptions out of Section 230’s coverage would undermine that goal as well. In this case the plaintiffs want providers to have to deny terrorists the use of their platforms. As a separate amicus brief by the Internet Association explained, platforms actually want to keep terrorists off and go to great lengths to try to do so. But as the saying goes, “One man’s terrorist is another man’s freedom fighter.” In other words, deciding whom to label a terrorist can often be a difficult thing to do, as well as an extremely political decision to make. It’s certainly beyond the ken of an “intermediary” to determine, especially a smaller, less capitalized, or potentially even individual one. (Have you ever had people comment on one of your Facebook posts? Congratulations! You are an intermediary, and Section 230 applies to you too.)

Even if the rule were that a platform had to check prospective users’ names against a government list, significant constitutional concerns, particularly regarding the right to speak anonymously and the prohibition against prior restraint, arise from having to make these sorts of registration denial decisions this way. There are also often significant constitutional problems with how these lists are made at all. As the amicus brief by EFF and CDT also argued, we can’t create a system where the statutory protection platforms depend on to be able to foster online free speech is conditioned on coercing platforms to undermine that very speech.

May 26, 2017

The following was cross-posted on Techdirt.

We often talk about how protecting online speech requires protecting platforms, such as with Section 230 immunity and the safe harbors of the DMCA. But these statutory shields are not the only way the law needs to protect platforms in order to make sure the speech they carry is also protected.

Earlier this month, I helped Techdirt’s think tank arm, the Copia Institute, file an amicus brief in support of Yelp in a case called Montagna v. Nunis. Like many platforms, Yelp lets people post content anonymously. Often people are only willing to speak when they can do so without revealing who they are (note how many people participate in the comments here without revealing their real names), which is why the right to speak anonymously has been found to be part and parcel of the First Amendment right of free speech. It’s also why sites like Yelp let users post anonymously: often that’s the only way users will feel comfortable posting reviews candid enough to be useful to the people who depend on those reviews to make informed decisions.

But as we also see, people who don’t like the things said about them often try to attack their critics, and one way they do this is by trying to strip these speakers of their anonymity. True, sometimes online speech can cross the line and actually be defamatory, in which case being able to discover the identity of the speaker is important. Nothing in this case prevents legitimately aggrieved plaintiffs from using subpoenas to discover the identity of those whose unlawful speech has injured them so they can sue for relief. Unfortunately, however, it is not just people with legitimate claims who are sending subpoenas; in many instances they are being sent by people objecting to speech that is perfectly legal, and that’s a problem. Unmasking the speakers behind protected speech not only violates their First Amendment right to speak anonymously; it also chills the speech the First Amendment is designed to foster generally, by making the critical anonymity protection that plenty of legal speech depends on suddenly illusory.

There is a lot that can and should be done to close off this vector of attack on free speech. One important measure is to make sure platforms are able to resist the subpoenas they get demanding they turn over whatever identifying information they have about their users. There are practical reasons why they can’t always fight them (as with DMCA takedown notices, they may simply get too many), but it is generally in their interest to resist illegitimate subpoenas targeting the protected speech posted anonymously on their platforms, so that their users will not be scared away from speaking on their sites.

But when Yelp tried to resist the subpoena connected with this case, the court refused to let them stand in to defend the user’s speech interest. Worse, it sanctioned(!) Yelp for even trying, thus making platforms’ efforts to stand up for their users even more risky and expensive than they already are.

So Yelp appealed, and we filed an amicus brief supporting its effort. Fortunately, earlier this year Glassdoor won an important California state appellate ruling validating attempts by platforms to quash subpoenas on behalf of their users. That decision discussed why the First Amendment and the California Constitution require platforms to have this ability to quash subpoenas targeting protected speech, and hopefully this particular appeals court will agree with its sister court and make clear that platforms are allowed to fight off subpoenas like this one. As we pointed out in our brief, both state and federal law and policy require online speech to be protected, and preventing platforms from resisting subpoenas is out of step with those stated policy goals and constitutional requirements.

More on the First Amendment problems with DMCA Section 512

Feb 23, 2017

Over at Techdirt there’s a write-up of the latest comment I submitted on behalf of the Copia Institute as part of the Copyright Office’s study on the operation of Section 512 of the Digital Millennium Copyright Act. As we’ve told the Copyright Office before, that operation has had a huge impact on online free speech. (Those comments have also been cross-posted here.)

In some ways this impact is good: providing platforms with protection from liability for their users’ content means that they can be available to facilitate that content and speech. But all too often, and in all too many ways, the practical impact on free speech has been a negative one, with speech being much more vulnerable to censorship via takedown notice than it ever would have been if the person objecting to it (even for copyright-related reasons) had to go to court to get an injunction to take it down. Not only is the speech itself more vulnerable than it should be, but the protection platforms depend on ends up being more vulnerable as well, because platforms must risk it every time they refuse to act on a takedown notice, no matter how invalid that notice may be.

Our earlier comment pointed out in some detail how the current operation of the DMCA has been running afoul of the protections the First Amendment is supposed to afford speech, and in this second round of comments we highlighted some further deficiencies. In particular, we reminded the Copyright Office of the problems with “prior restraint,” which the First Amendment also prohibits. Prior restraint is what happens when speech is punished before there has been any adjudication establishing that it deserves to be punished. The reason the First Amendment prohibits prior restraint is that it does no good to punish speech, such as by removing it, if the First Amendment would otherwise protect it; once the speech has been removed, the damage will already have been done.

Making sure that legitimate speech cannot be removed is why we normally require courts to carefully adjudicate whether removal can be ordered before it is allowed. But with the DMCA there is no such judicial check: people can send demands for all sorts of content to be removed, even content that is not actually infringing, because there is little to deter them so long as Section 512(f) continues to have no teeth. Instead platforms are forced to treat every takedown notice as a legitimate demand, regardless of whether it is or not. Not only does this mean they need to delete the content but, in the wake of some recent cases, it seems they also must potentially hold each allegation against their user, regardless of whether it was valid, and then cut that user off from their services when the user has accrued too many such accusations, again regardless of whether those accusations were valid.

As we did before, we counseled the Copyright Office to return to first principles: the DMCA was supposed to enhance online free speech, and it’s important to make sure that all of its provisions work together to do just that. To the extent that it may be appropriate for the Copyright Office to make recommendations on this front, one is to remind all concerned that the penalty articulated in Section 512(f) to sanction bad takedown notices can and should be applied according to a flexible standard, rather than the rigid one courts have lately adopted. In any case, however, the Copyright Office certainly should not advocate for changes to any provisions, or their interpretations, that would make the DMCA any less compatible with the First Amendment than it has already tended to be.

Dec 17, 2016

The following was recently published on Techdirt, although with a different title.

Regardless of what one thinks about the apparent result of the 2016 election, it will inevitably present a number of challenges for America and the world. As Mike wrote about last week, these challenges will touch on many of the tech policy issues often discussed here. The following is a closer look at some of the implications (and opportunities) with respect to several of them, given the unique hallmarks of Trump and his proposed administration.

Apr 8, 2016

The following is Section III.C of the comment I submitted in the Copyright Office’s study on the operation of Section 512 of the copyright statute.

Questions #16 and #17 more specifically contemplate the effectiveness of the put-back process articulated at subsection 512(g). As explained in Section III.B, this mechanism is not effective for restoring wrongfully removed content and is little used. But it is worth taking a moment here to further explore the First Amendment harms the DMCA inflicts on both Internet users and service providers.[1]

It is part and parcel of First Amendment doctrine that people are permitted to speak, and to speak anonymously.[2] Although that anonymity can be stripped in certain circumstances, there is nothing about an allegation of copyright infringement that should cause it to be stripped automatically. Particularly in light of copyright law incorporating free speech principles,[3] this anonymity cannot be more fragile than it would be in any other circumstance where speech is subject to legal challenge. The temptation to characterize all alleged infringers as malevolent pirates who get what they deserve must be resisted; the DMCA targets all speakers and all speech, no matter how fair or necessary to public discourse that speech is.

And yet, with the DMCA, not only is speech itself more vulnerable to censorship via copyright infringement claim than it would be for other types of allegations,[4] but so are the necessary protections speakers depend on to be able to speak.[5] Between the self-identification requirements of subsection 512(g) put-back notices and the ease of demanding user information with subsection 512(h) subpoenas, which also need not be predicated on actual lawsuits,[6] Internet speakers on the whole must fear the loss of their privacy if anyone dares to concoct an infringement claim, no matter how illegitimate or untested that claim may be. Given the ease of concocting an invalid infringement claim,[7] and the lack of any incentive not to,[8] the DMCA gives all-too-ready access to the identities of Internet users to the people least deserving of it, at the expense of those who most need it.[9]

Furthermore, the DMCA also compromises service providers’ own First Amendment interests in developing the forums and communities they would choose. The very design of the DMCA puts service providers at odds with their users, forcing them to be antagonistic to their own customers and their own business interests as a condition of protecting those interests. Attempts to protect their forums or their users can expose them to tremendous costs and potentially incalculable risk, and all of this harm flows from mere allegations that need never be tested in a court of law. The DMCA forces service providers to enforce censorship compelled by a mere takedown notice, compromise user privacy in response to subsection 512(h) subpoenas (or devote significant resources to trying to quash them), and, vis-à-vis Questions #22 and #23, disconnect users according to termination policies whose sufficiency cannot be known until a court decides they were insufficient.[10]

The repeat infringer policy requirement of subsection 512(i)(A) exemplifies the statutory problem with many of the DMCA’s safe harbor requirements. A repeat infringer policy might only barely begin to be legitimate if it required the disconnection of a user after a certain number of judicial findings of liability for acts of infringement the user had used the service provider to commit. But at least one service provider has lost its safe harbor for not permanently disconnecting users after only a certain number of allegations, allegations that had never been tested in a court consistent with the principles of due process or the prohibition on prior restraint.[11]

In no other context would we find these sorts of government incursions against the rights of speakers constitutional, robbing them of their speech, their anonymity, and the opportunity to speak further, all without adequate due process. These incursions do not suddenly become constitutionally sound just because the DMCA coerces service providers into being the agents committing these acts instead.

Apr 7, 2016

The following is Section III.B of the comment I submitted in the Copyright Office’s study on the operation of Section 512 of the copyright statute.

Question #12 asks if the notice-and-takedown process sufficiently protects against fraudulent, abusive, or unfounded notices and what should be done to address this concern. Invalid takedown notices are most certainly a problem,[1] and the reason is that the system itself invites them. As discussed in Section II.B, the notice-and-takedown regime is inherently a censorship regime, and it can be a very successful one, because takedown notice senders can simply point to content they want removed and use the threat of liability as a gun to the service provider’s head to force its removal, lest the service provider risk its safe harbor protection.

Thanks to courts under-enforcing subsection 512(f), they can do this without fear of judicial oversight.[2] But it isn’t just the lax subsection 512(f) standard that allows abusive notices to be sent without fear of accountability. Even though the DMCA includes put-back provisions at subsection 512(g), we see relatively few instances of them being used.[3] The DMCA is a complicated statute, and the average non-lawyer may not know these provisions exist or how to use them. Furthermore, trying to use them puts users in the crosshairs of the party gunning for their content (and, potentially, for them as people) by forcing them to give up their right to anonymous speech in order to keep that speech from being censored. All of these complications are significant deterrents to users effectively defending their own content, content that will already have been censored (these measures only allow content to be restored after the censorship damage has been done).[4] Ultimately there are no real checks on abusive takedown notices apart from what the service provider is willing and able to risk by reviewing and rejecting them.[5] Given the magnitude of that risk, however, this cannot remain the sole stopgap measure against this illegitimate censorship.

Continuing on, Question #13 asks whether subsection 512(d), addressing “information location tools,” has been a useful mechanism to address infringement “that occurs as a result of a service provider’s referring or linking to infringing content.” Purely as a matter of logic the answer cannot possibly be yes: simply linking to content has absolutely no bearing on whether that content is or is not infringing. The entire notion that a service provider could be liable simply for knowing where information resides stretches U.S. copyright law beyond recognition. That sort of knowledge, and the sharing of that knowledge, should never be illegal, particularly in light of the Progress Clause, upon which copyright law is predicated and authorized, and particularly when the mere act of sharing that knowledge in no way itself directly implicates any exclusive right held by a copyright holder in that content.[6] Subsection 512(d) exists entirely as a means and mode of censorship, once again blackmailing service providers into the forced forgetting of information they once knew, irrespective of whether the content they are being forced to forget is ultimately infringing or not. As discussed in Section II.B above, there is no way for the service provider to definitively know.

Comments on DMCA Section 512: On the general effectiveness of the Safe Harbors

Apr 6, 2016

The following is Section III.A of the comment I submitted in the Copyright Office’s study on the operation of Section 512 of the copyright statute.

Question #1 asks whether the Section 512 safe harbors are working as intended, and Question #5 relatedly asks whether the right balance has been struck between copyright owners and online service providers. To the extent that the safe harbors have insulated service providers from the costs associated with liability for their users’ content, the DMCA has been a good thing. But the protection is all too often too complicated to achieve, too expensive to assert, or otherwise too illusory for service providers to be adequately protected.

Relatedly, Question #2 asks whether courts have properly construed the entities and activities covered by the safe harbors, and the answer is: not always. But the problem here is not just that they have sometimes gotten it wrong but that there is too often the possibility for them to get it wrong. Whereas under Section 230 the question of intermediary liability for illegality in user-supplied content is relatively straightforward (was the intermediary the party that produced the content? if not, it is not liable), when the alleged illegality in others’ content relates to potential copyright infringement, the test becomes a labyrinthine minefield that the service provider may need to endure costly litigation to navigate. Not only is ultimate liability expensive, but even the process of ensuring a service provider won’t face that liability can be crippling.[1] Service providers, and investors in service providers, need a way to minimize and manage the legal risk and associated costs arising from their provision of online services, but given the current complexity[2] of the requirements for the safe harbors they can rarely be so confidently assured.

Comments on DMCA Section 512: The assumptions of economic harm underpinning the DMCA must be carefully examined

Apr 5, 2016

The following is Section II.C of the comment I submitted in the Copyright Office’s study on the operation of Section 512 of the copyright statute.

Veoh was a video hosting service akin to YouTube that was found to be eligible for the DMCA safe harbor.[1] Unfortunately this finding was reached only after years of litigation had already driven the company into bankruptcy and forced it to lay off its staff.[2] Meanwhile, SeeqPod was a search engine that helped people (including potential consumers) find multimedia content out on the Internet, but it, too, was driven into bankruptcy by litigation, taking with it an important tool for helping people discover creative works.[3]

History is littered with examples like these of innovative new businesses being driven out of existence, their innovation and investment chilled, by litigation completely untethered from the principles underpinning copyright law. Copyright law exists solely to “promote the Progress of Science and useful Arts.” Yet all too frequently it has had the exact opposite effect.

The DMCA has the potential to be a crucial equalizer, but it can only fulfill that potential when the economic value of what these service providers deliver is given at least as much weight by policymakers as the complaints of incumbent interests whose previous business models may have become unworkable in light of digital technology. Service providers are economic engines, employing innumerable people directly and indirectly and driving innovation forward while they deliver a world of information to each and every Internet user. We know economic harm is done to them, and to anyone, creators and consumers alike, who would have benefited from their services, when they are not protected.

But what needs careful scrutiny and testing, with reviewable data and auditable methodology, are economic arguments predicated on the assumption that every digital copy of every copyrighted work transmitted online without the explicit permission of a copyright holder represents a financial loss. It is quite a leap to assume that every instance (or even most instances) of people consuming “pirated” copyrighted works is an instance in which they would otherwise have paid the creator. Such an assumption presumes that people have unlimited amounts of money to spend on unlimited numbers of copyrighted works, and it ignores the fact that some works may only be consumable at a price point of $0, something institutions like libraries and over-the-air radio have long enabled, to the betterment of creators and the public beneficiaries of creative works alike. Furthermore, even when people would be willing to pay for access to a work, copyright owners may not be offering it at any price, nor are they necessarily equitably sharing the revenues derived from creative works with the actual creators whose efforts merit the remuneration.[4]

The DMCA does not adjust to reflect situations like these, nor does it incentivize copyright holders to correct their own self-induced market failures. On the contrary: it allows them to deprive the public of access to their works and to threaten the service providers enabling that access with extinction if they do not assist in disabling it. None of these outcomes is consistent with the goals and purpose of copyright in general, and care must be taken not to let the DMCA be a law that ensures them.

Comments on DMCA Section 512: The DMCA functions as a system of extra-judicial censorship

Apr 4, 2016

The following is Section II.B of the comment I submitted in the Copyright Office’s study on the operation of Section 512 of the copyright statute.

Despite all the good that Section 230 and the DMCA have done to foster a robust online marketplace of ideas, the DMCA’s potential to deliver that good has been tempered by the particular structure of the statute. Whereas Section 230 provides a firm immunity to service providers for potential liability in user-supplied content,[1] the DMCA conditions its protection.[2] And that condition is censorship. The irony is that while the DMCA makes it possible for service providers to exist to facilitate online speech, through its notice-and-takedown system it does so at the expense of the very speech they exist to facilitate.

In a world without the DMCA, someone who wanted to enjoin content would need to demonstrate to a court that they indeed owned a valid copyright and that the use of the content in question infringed that copyright before the court would compel its removal. Thanks to the DMCA, however, they are spared both the procedural and the pleading burdens. In order to disappear content from the Internet, all anyone needs to do is send a takedown notice that merely points to the content and claims it as theirs.

Although some courts are now requiring takedown notice senders to consider whether the use of the content in question was fair,[3] there is no real penalty for a sender who gets it wrong or doesn’t bother.[4] Instead, service providers are forced to become judge and jury, even though (a) they lack the information needed to properly evaluate copyright infringement claims,[5] (b) the sheer volume of takedown notices often makes case-by-case evaluation impossible, and (c) it can be a bet-the-company decision if the service provider gets it wrong, because its “error” may deny it the safe harbor and put it on the hook for infringement liability.[6] Although there is both judicial and statutory recognition that service providers are not in a position to police user-supplied content for infringement,[7] there must also be recognition that they are similarly not in a position to police for invalid takedowns. Yet they must, lest there be no effective check on these censorship demands.

Ordinarily the First Amendment and due process would not permit this sort of censorship, the censorship of an Internet user’s speech predicated on mere allegation. Mandatory injunctions are disfavored generally,[8] and particularly so when they target speech and may represent an impermissible prior restraint on speech that has not yet been determined to be wrongful.[9] To the extent that the DMCA causes these critical speech protections to be circumvented, it is consequently only questionably constitutional. For the DMCA to remain valid, it must retain, in both its drafting and its interpretation, ample protection to ensure that these important constitutional speech protections are not ignored.

Comments on DMCA Section 512: Congress protected intermediaries for a reason

Apr 3, 2016

The following is Section II.A of the comment I submitted in the Copyright Office’s study on the operation of Section 512 of the copyright statute.

Congress in the 1990s may not have been able to predict the growth of the Internet, but it could see the direction the Internet was taking and the value it had the potential to deliver. We see this recognition first baked into the statutory language of 47 U.S.C. Section 230 (“Section 230”), a 1996 statute that provides unequivocal immunity to service providers that intermediate content from other users:

Congress finds the following: [that t]he rapidly developing array of Internet and other interactive computer services available to individual Americans represent an extraordinary advance in the availability of educational and informational resources to our citizens[;[1] that t]hese services offer users a great degree of control over the information that they receive, as well as the potential for even greater control in the future as technology develops[;[2] that t]he Internet and other interactive computer services offer a forum for a true diversity of political discourse, unique opportunities for cultural development, and myriad avenues for intellectual activity[;[3] that t]he Internet and other interactive computer services have flourished, to the benefit of all Americans, with a minimum of government regulation[;[4] and that i]ncreasingly Americans are relying on interactive media for a variety of political, educational, cultural, and entertainment services.[5]

It was therefore the policy of the United States to, among other things, “promote the continued development of the Internet and other interactive computer services and other interactive media”[6] and “to preserve the vibrant and competitive free market that presently exists for the Internet and other interactive computer services, unfettered by Federal or State regulation.”[7]

As the Notice of Inquiry soliciting comment for this study noted,[8] Congress was still of the same view about the importance of the Internet two years later when it passed the DMCA explicitly to help “foster the continued development of electronic commerce and the growth of the Internet.”[9]  As per an accompanying Senate Report, “The ‘Digital Millennium Copyright Act of 1998’ is designed to facilitate the robust development and world-wide expansion of electronic commerce, communications, research, development, and education in the digital age.”[10]  As the Report continued, Congress was going to achieve this end by protecting intermediaries, observing that, “[B]y limiting the liability of service providers, the DMCA ensures that the efficiency of the Internet will continue to improve and that the variety and quality of services on the Internet will continue to expand.”[11]

At no time since then has Congress fundamentally changed its view on the value of the Internet. Nor should it. In these nearly twenty years we have seen countless businesses and jobs added to the economy, innumerable examples of pioneering technology developed, myriad previously unimaginable new markets created (including many for those in the arts and sciences to economically exploit), and enormous value returned to the economy. By protecting online service providers we have changed the world and brought the democratic promise of information and knowledge sharing to bear. It is therefore absolutely critical that we not create law that interferes with this promise. If anything, we should take this opportunity to reduce the costly friction that the more inapt portions of the existing law have been imposing.