Why Protecting The Free Press Requires Protecting Trump’s Tweets (cross-post)

Jul 6, 2017

The following was originally posted on Techdirt.

Sunday morning I made the mistake of checking Twitter first thing upon waking up. As if just a quick check of Twitter would ever be possible during this administration… It definitely wasn’t this past weekend, because waiting for me in my Twitter stream was Trump’s tweet of the meme he found on Reddit showing him physically beating the crap out of a personified CNN.

But that’s not what waylaid me. What gave me pause were all the people demanding it be reported to Twitter for violating its terms of service. The fact that so many people thought that was a good idea worries me, because the expectation that when bad speech happens someone will make it go away is not a healthy one. My concern inspired a tweet storm, which has now been turned into this post. Continue reading »

Apr 7, 2016

The following is Section III.B of the comment I submitted in the Copyright Office’s study on the operation of Section 512 of the copyright statute.

Question #12 asks whether the notice-and-takedown process sufficiently protects against fraudulent, abusive, or unfounded notices and what should be done to address this concern. Invalid takedown notices are most certainly a problem,[1] and they are a problem because the system itself invites them. As discussed in Section II.B, the notice-and-takedown regime is inherently a censorship regime, and it can be a very effective one, because takedown notice senders can simply point to content they want removed and use the threat of liability as the gun to the service provider’s head to force its removal, lest the service provider risk its safe harbor protection.

Thanks to courts under-enforcing subsection 512(f), they can do this without fear of judicial oversight.[2] But it isn’t just the lax subsection 512(f) standard that allows abusive notices to be sent without fear of accountability. Even though the DMCA includes put-back provisions at subsection 512(g), we see relatively few instances of them being used.[3] The DMCA is a complicated statute, and the average non-lawyer may not know these provisions exist or know how to use them. Furthermore, trying to use them puts users in the crosshairs of the party gunning for their content (and, potentially, for them as people) by forcing them to give up their right to anonymous speech in order to keep that speech from being censored. All of these complications significantly deter users from effectively defending their own content, content that by then will have already been censored (these measures only allow the content to be restored after the censorship damage has been done).[4] Ultimately there are no real checks on abusive takedown notices apart from whatever review and rejection the service provider is willing and able to risk.[5] Given the magnitude of that risk, however, such review cannot remain the sole stopgap against this illegitimate censorship.
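To make the asymmetry concrete, here is a minimal sketch (in Python, purely illustrative; the class names and the reduction of the statutory elements to simple fields are mine, and it is not a complete restatement of the statute) of what subsection 512(c)(3) demands of a notice sender versus what subsection 512(g)(3) demands of the user who wants their content restored:

```python
from dataclasses import dataclass

# Illustrative model only; simplified, and not legal advice.

@dataclass
class TakedownNotice:              # 17 U.S.C. § 512(c)(3)
    signature: str                 # physical or electronic signature
    work_identified: str           # the copyrighted work claimed infringed
    material_identified: str       # the material to remove (a mere pointer)
    contact_info: str              # how to reach the complaining party
    good_faith_belief: bool        # belief the use is unauthorized
    accuracy_statement: bool       # under penalty of perjury only as to the
                                   # sender's authority to act for the owner

@dataclass
class CounterNotice:               # 17 U.S.C. § 512(g)(3)
    signature: str
    material_identified: str       # the removed material and where it appeared
    mistake_statement: bool        # under penalty of perjury: the removal was
                                   # a mistake or misidentification
    name_and_address: str          # the user must surrender anonymity...
    consent_to_jurisdiction: bool  # ...and agree to be sued in federal court

def putback_window() -> range:
    """Business days after a valid counter-notice within which the provider
    restores the material, unless the notice sender files suit first
    (subsection 512(g)(2)(C))."""
    return range(10, 15)  # "not less than 10, nor more than 14"
```

Even in this stripped-down form the imbalance is visible: the sender swears only to its authority to act, while the user must swear to the merits, unmask themselves, and consent to suit just to be heard, and even then only after the content has already spent days offline.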

Continuing on, Question #13 asks whether subsection 512(d), addressing “information location tools,” has been a useful mechanism to address infringement “that occurs as a result of a service provider’s referring or linking to infringing content.” Purely as a matter of logic the answer cannot possibly be yes: simply linking to content has no bearing on whether that content is or is not infringing. The entire notion that a service provider could face liability simply for knowing where information resides stretches U.S. copyright law beyond recognition. That sort of knowledge, and the sharing of that knowledge, should never be illegal, particularly in light of the Progress Clause, upon which copyright law is predicated and authorized, and particularly when the mere act of sharing that knowledge in no way directly implicates any exclusive right a copyright holder has in that content.[6] Subsection 512(d) exists entirely as a means and mode of censorship, once again blackmailing service providers into the forced forgetting of information they once knew, irrespective of whether the content they are being forced to forget is ultimately infringing. As discussed in Section II.B above, there is no way for the service provider to definitively know.
Continue reading »

Comments on DMCA Section 512: The DMCA functions as a system of extra-judicial censorship

Apr 4, 2016

The following is Section II.B of the comment I submitted in the Copyright Office’s study on the operation of Section 512 of the copyright statute.

Despite all the good that Section 230 and the DMCA have done to foster a robust online marketplace of ideas, the DMCA’s potential to deliver that good has been tempered by the particular structure of the statute. Whereas Section 230 provides service providers a firm immunity from potential liability in user-supplied content,[1] the DMCA conditions its protection.[2] And that condition is censorship. The irony is that while the DMCA makes it possible for service providers to exist to facilitate online speech, thanks to the notice-and-takedown system it does so at the expense of the very speech those providers exist to facilitate.

In a world without the DMCA, someone who wanted to enjoin content would need to demonstrate to a court that they indeed owned a valid copyright and that the use of the content in question infringed it before the court would compel its removal. Thanks to the DMCA, however, they are spared both the procedural and the pleading burdens. To cause content to be disappeared from the Internet, all anyone needs to do is send a takedown notice that merely points to the content and claims it as theirs.

Although some courts are now requiring takedown notice senders to consider whether the use of the content in question was fair,[3] there is no real penalty for a sender who gets it wrong or doesn’t bother.[4] Instead, service providers are forced to become judge and jury, even though (a) they lack the information needed to properly evaluate copyright infringement claims,[5] (b) the sheer volume of takedown notices often makes case-by-case evaluation impossible, and (c) getting it wrong can be a bet-the-company decision, because an “error” may cost the provider the safe harbor and put it on the hook for infringement liability itself.[6] Although there is both judicial and statutory recognition that service providers are not in a position to police user-supplied content for infringement,[7] there must also be recognition that they are similarly not in a position to police for invalid takedowns. Yet they must, lest there be no effective check on these censorship demands.
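To see why providers comply with nearly every notice, valid or not, consider a back-of-the-envelope model of the provider’s choice. The sketch below is entirely my own stylized illustration: the probability and dollar figures are hypothetical (though $150,000 is the statutory maximum for willful infringement of a single work), and any plausible numbers tell the same story:

```python
# Stylized model of a provider's decision on a single takedown notice.
# All figures are hypothetical; this is an illustration, not data.

def cost_of_rejecting(p_infringing: float, damages: float, review: float) -> float:
    """Rejecting a notice requires review and risks the safe harbor: if the
    material turns out to be infringing, the provider may itself be liable."""
    return review + p_infringing * damages

def cost_of_complying(goodwill_loss: float) -> float:
    """Complying is nearly free for the provider; the cost falls on the
    user whose speech disappears."""
    return goodwill_loss

reject = cost_of_rejecting(p_infringing=0.05,   # even a small chance of error...
                           damages=150_000,     # ...times statutory-scale damages
                           review=50)           # plus the cost of human review
comply = cost_of_complying(goodwill_loss=1)

print(f"expected cost to reject: ${reject:,.0f}")  # $7,550
print(f"expected cost to comply: ${comply:,.0f}")  # $1
```

Under any such numbers the rational provider takes everything down, which is exactly why the check on invalid notices cannot be left to providers alone.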

Ordinarily the First Amendment and due process would not permit this sort of censorship: the censorship of an Internet user’s speech predicated on mere allegation. Mandatory injunctions are disfavored generally,[8] and particularly so when they target speech, where they may represent an impermissible prior restraint on speech that has not yet been determined to be wrongful.[9] To the extent that the DMCA causes these critical speech protections to be circumvented, its constitutionality is consequently questionable. For the DMCA to remain valid it must retain, in both its drafting and its interpretation, ample protection to ensure that these important constitutional speech protections are not ignored.
Continue reading »

Dancing Babies, The DMCA, Fair Use And Whether Companies Should Pay For Bogus Takedowns (cross-post)

Jul 16, 2015

Cross-posted from Techdirt.

Earlier this week the Ninth Circuit heard oral arguments in the appeal of Lenz v. Universal. This was the case where Stephanie Lenz sued Universal because Universal had sent YouTube a takedown notice demanding it delete the home movie she had posted of her toddler dancing, simply because music by Prince was audible in the background. It’s a case whose resolution has been pending since 2007, despite the fact that it involves the interpretation of a fundamental part of the DMCA’s operation.

The portion of the DMCA at issue in this case is Section 512 of the copyright statute, which the DMCA added in 1998 along with Section 1201. As with Section 1201, Section 512 reflects a certain naivete on Congress’s part in thinking any part of the DMCA was a good idea, rather than the innovation-choking and speech-chilling mess it has turned out to be. But looking at the statutory language you can kind of see how Congress thought it was all going to work, what with the internal checks and balances it put into the DMCA to prevent the law from being abused. Unfortunately, even as intended there are some severe shortcomings in how this balance was conceptualized, and what’s worse is that it has not even been working as designed.
Continue reading »

Feb 29, 2012

PayPal recently made news for implementing a policy denying its payment processing services to publications that include obscene content.  There are several objectionable things about this policy, including the lack of any clear way to delineate what content would qualify as “obscene,” and its overall censorious impact.

But I’m not entirely sure that PayPal is necessarily the appropriate target for criticism of this policy.  It may be, to the extent that this is a truly discretionary policy PayPal has voluntarily chosen to pursue: if it could just as easily have chosen not to pursue it, then it can fairly be criticized for the choice it did make.  For this policy is not as simple as banning certain objectively horrible content that 100% of all people would agree should be stricken from the face of the earth.  After all, there *is* no objectively horrible content 100% of all people would agree is objectionable.  Instead, this policy has the effect of denying market opportunities to all sorts of writers producing all sorts of valid content, even if some people may not happen to like it.  And it does this not just by denying particular publications access to its services but by forcing electronic publishers to overcensor all the works they publish, lest PayPal shut off service to their entire businesses. Continue reading »

EMI sues Ireland for not passing law to its liking

Feb 28, 2012

Catching up from January is news about EMI suing the Irish government for not having passed a copyright law that would toughen sanctions for filesharing.  From the Irish Times:

The Irish arm of multinational music group EMI has launched a High Court action against the State as part of its bid to stop the illegal downloading of music.

The Government recently pledged to issue an order to allow copyright holders to compel internet service providers (ISPs) to block access to websites that they consider are engaged in piracy.

However, EMI Records (Ireland) remains unhappy with what it perceives to be foot-dragging on the part of the Government in tackling this issue.

It is concerned that the matter could be delayed again, and that even if a statutory instrument is issued, its contents may not be satisfactory. Continue reading »

Quicklinks 2/25/12

Feb 25, 2012

Another dose of quicklinks:

Quicklinks 2/4/2012

Feb 4, 2012

Brief bits from the last week:

Quicklinks 1/28/2012

Jan 28, 2012

Some items from the past week:

India continues to flirt with web censorship

Jan 13, 2012

Last month Kapil Sibal, acting telecommunications minister for India, floated the proposition that social networks actively filter all content appearing on their systems.  Now comes news that a judge in New Delhi also thinks web censorship appropriate.  From the New York Times:

The comments of the judge, Suresh Kait, came in response to a lawsuit, filed by a private citizen in the capital, New Delhi. The suit demands that Internet companies screen content before it is posted on sites like Facebook, Google or Yahoo, that might offend the religious sentiments of Indians. A related criminal case accuses the companies — 21 in all — of violating an Indian law that applies to books, pamphlets and other material that is deemed to “deprave or corrupt.”

A trial court in New Delhi on Friday ordered that summons be served in the criminal case to officials at all 21 companies at their foreign headquarters’ addresses.

Google and Facebook refused to comment on the case, except to say they had filed a motion in the New Delhi High Court to dismiss the criminal case.

Their motion will be considered on Monday. Continue reading »