Apr 08 2016

The following is Section III.C of the comment I submitted in the Copyright Office’s study on the operation of Section 512 of the copyright statute.

Questions #16 and #17 more specifically contemplate the effectiveness of the put-back process articulated at subsection 512(g).  As explained in Section III.B, this mechanism is not effective for restoring wrongfully removed content and is little used.  But it is worth taking a moment here to further explore the First Amendment harms wrought upon both Internet users and service providers by the DMCA.[1]

It is part and parcel of First Amendment doctrine that people are permitted to speak, and to speak anonymously.[2]  Although that anonymity can be stripped in certain circumstances, there is nothing about an allegation of copyright infringement that should cause it to be stripped automatically.  Particularly in light of copyright law incorporating free speech principles,[3] this anonymity can be no more fragile than it would be in any other circumstance where speech is subject to legal challenge.  The temptation to characterize all alleged infringers as malevolent pirates who get what they deserve must be resisted; the DMCA targets all speakers and all speech, no matter how fair or necessary to public discourse that speech is.

And yet, with the DMCA, not only is speech itself more vulnerable to censorship via copyright infringement claim than it would be for other types of allegations,[4] but so are the necessary protections speakers depend on to be able to speak.[5]  Between the self-identification requirements of subsection 512(g) put-back notices and the ease of demanding user information with subsection 512(h) subpoenas, which also need not be predicated on actual lawsuits,[6] Internet speakers on the whole must fear the loss of their privacy if anyone dares to raise an infringement claim, no matter how illegitimate or untested that claim may be.  Given the ease of concocting an invalid infringement claim,[7] and the lack of any incentive not to,[8] the DMCA gives all-too-ready access to the identities of Internet users to the people least deserving of it, and at the expense of those who most need it.[9]

Furthermore, the DMCA also compromises service providers’ own First Amendment interests in developing the forums and communities they would so choose.  The very design of the DMCA puts service providers at odds with their users, forcing them to be antagonistic to their own customers and their own business interests as a condition of protecting those interests.  Attempts to protect their forums or their users can expose them to tremendous costs and potentially incalculable risk, and all of this harm flows from mere allegation that never need be tested in a court of law.  The DMCA forces service providers to enforce censorship compelled by a mere takedown notice, compromise user privacy in response to subsection 512(h) subpoenas (or devote significant resources to trying to quash them), and, vis-à-vis Questions #22 and #23, disconnect users according to termination policies whose sufficiency cannot be known until a court decides they were insufficient.[10]

The repeat infringer policy requirement of subsection 512(i)(A) exemplifies the statutory problem with many of the DMCA’s safe harbor requirements.  A repeat infringer policy might only barely begin to be legitimate if it applied to the disconnection of a user after a certain number of judicial findings of liability for acts of infringement the user had used the service provider to commit.  But at least one service provider has lost its safe harbor for not permanently disconnecting users after only a certain number of allegations, even though those allegations had never been tested in a court in a manner consistent with the principles of due process or the prohibition on prior restraint.[11]

In no other context would we find these sorts of government incursions against the rights of speakers constitutional, robbing them of their speech, anonymity, and the opportunity to further speak, without adequate due process.  These incursions do not suddenly become constitutionally sound just because the DMCA coerces service providers to be the agent committing these acts instead.

Apr 07 2016

The following is Section III.B of the comment I submitted in the Copyright Office’s study on the operation of Section 512 of the copyright statute.

Question #12 asks if the notice-and-takedown process sufficiently protects against fraudulent, abusive, or unfounded notices and what should be done to address this concern.  Invalid takedown notices are most certainly a problem,[1] and they are a problem because the system itself invites them.  As discussed in Section II.B, the notice-and-takedown regime is inherently a censorship regime, and it can be a very successful one: takedown notice senders can simply point to content they want removed and use the threat of liability as the gun to the service provider’s head to force its removal, lest the service provider risk its safe harbor protection.

Thanks to courts under-enforcing subsection 512(f), they can do this without fear of judicial oversight.[2]  But it isn’t just the lax subsection 512(f) standard that allows abusive notices to be sent without fear of accountability.  Even though the DMCA includes put-back provisions at subsection 512(g), we see relatively few instances of their being used.[3]  The DMCA is a complicated statute, and the average non-lawyer may not know these provisions exist or know how to use them.  Furthermore, trying to use them puts users in the crosshairs of the party gunning for their content (and, potentially, for them as people) by forcing them to give up their right to anonymous speech in order to keep that speech from being censored.  All of these complications are significant deterrents to users effectively defending their own content, content that will have already been censored (these measures only allow content to be restored after the censorship damage has been done).[4]  Ultimately there are no real checks on abusive takedown notices apart from what the service provider is willing and able to risk in reviewing and rejecting them.[5]  Given the enormity of this risk, however, that review cannot remain the sole stopgap measure keeping this illegitimate censorship from happening.

Continuing on, Question #13 asks whether subsection 512(d), addressing “information location tools,” has been a useful mechanism to address infringement “that occurs as a result of a service provider’s referring or linking to infringing content.”  Purely as a matter of logic the answer cannot possibly be yes: simply linking to content has absolutely no bearing on whether that content is or is not infringing.  The entire notion that a service provider could be liable simply for knowing where information resides stretches U.S. copyright law beyond recognition.  That sort of knowledge, and the sharing of that knowledge, should never be illegal, particularly in light of the Progress Clause, upon which copyright law is predicated and authorized, and particularly when the mere act of sharing that knowledge in no way itself directly implicates any exclusive right held by a copyright holder in that content.[6]  Subsection 512(d) exists entirely as a means and mode of censorship, once again blackmailing service providers into the forced forgetting of information they once knew, irrespective of whether the content they are being forced to forget is ultimately infringing or not.  As discussed in Section II.B above, there is no way for the service provider to definitively know.

Jul 07 2015

The following is cross-posted from Popehat.

There is no question that the right of free speech necessarily includes the right to speak anonymously. This is partly because sometimes the only way for certain speech to be possible at all is with the protection of anonymity.

And that’s why so much outrage is warranted when bullies try to strip speakers of their anonymity simply because they don’t like what these people have to say, and why it’s even more outrageous when these bullies succeed. If anonymity is so fragile that speakers can be so easily unmasked, fewer people will be willing to say the important things that need to be said, and we all will suffer for the silence.

We’ve seen on these blog pages examples of both government and private bullies making specious attacks on the free speech rights of their critics, often by using subpoenas, both civil and criminal, to try to unmask them. But we’ve also seen another kind of attempt to identify Internet speakers, and it’s one we’ll see a lot more of if the proposal ICANN is currently considering is put into place.


Sep 29 2013

This past week California passed a law requiring website owners to allow minors (who are also residents of California) to delete any postings they may have made on the website. There is plenty to criticize about this law, including that it is yet another example of a legislative commandment cavalierly imposing liability on website owners with no contemplation of the technical feasibility of how they are supposed to comply with it.

But such discussion should be moot. This law is preempted by federal law, in this case 47 U.S.C. Section 230. By its provisions, Section 230 prevents intermediaries (such as websites) from being held liable for content others have posted on them. (See Section 230(c)(1)). Moreover, states are not permitted to undermine that immunity. (See Section 230(e)(3)). So, for instance, even if someone were to post some content to a website that might be illegal in some way under state law, that state law can’t make the website hosting that content itself liable for it (nor can that state law make the website delete it). But that is, at its essence, what this law proposes to do: make websites liable for content others have posted to them.

As such, even aside from the other Constitutional infirmities of this law, such as those involving compelled speech in forcing website owners to either host or delete content at someone else’s behest (see a discussion from Eric Goldman about this and other Constitutional problems here), it’s also constitutionally preempted by a prior act of Congress.

Some might argue that the intent of the law is important and noble enough to forgive it these problems. Unlike in generations past, kids today truly do have something akin to a “permanent record,” thanks to the ease with which the Internet collects and indefinitely stores the digital evidence of everyone’s lives. But such a concern requires thoughtful consideration of how best to ameliorate those consequences, if it’s even possible to, without injuring the important free speech principles and values the Internet also supports. This law offers no such solution.

May 13 2013

One of the cases I came across when I was writing an article about Internet surveillance was Deal v. Spears, 980 F. 2d 1153 (8th Cir. 1992), a case involving the interception of phone calls that was arguably prohibited by the Wiretap Act (18 U.S.C. § 2511 et seq.). The Wiretap Act, for some context, is a 1968 statute that applied Fourth Amendment privacy values to telephones, and in a way that prohibited both the government and private parties from intercepting the contents of conversations taking place through the telephone network. That prohibition is fairly strong: while there are certain types of interceptions that are exempted from it, these exemptions have not necessarily been interpreted generously, and Deal v. Spears was one of those cases where the interception was found to have run afoul of the prohibition.

It’s an interesting case for several reasons, one being that it upheld the privacy rights of an apparent bad actor (of course, so does the Fourth Amendment generally). In this case the defendants owned a store that employed the plaintiff, whom the defendants strongly suspected – potentially correctly – of stealing from them. In order to catch the plaintiff in the act, the defendants availed themselves of the phone extension in their adjacent house to intercept the calls the plaintiff made on the store’s business line to further her crimes. Ostensibly such an interception could be exempted by the Wiretap Act: the business extension exemption generally allows business proprietors to listen in on calls made in the ordinary course of business. (See 18 U.S.C. § 2510(5)(a)(i)). But here the defendants didn’t just listen in on business calls; they recorded *all* calls the plaintiff made, regardless of whether they related to the business or not, and, because the recording was automatic, without the telltale “click” one hears when an actual phone extension is picked up, which would have put the callers on notice that someone was listening in. This silent, pervasive monitoring of the contents of all communications put the monitoring well beyond the statutory exemption that might otherwise have permitted a more limited interception.

[T]he [defendants] recorded twenty-two hours of calls, and […] listened to all of them without regard to their relation to his business interests. Granted, [plaintiff] might have mentioned the burglary at any time during the conversations, but we do not believe that the [defendants’] suspicions justified the extent of the intrusion.

For a similar view, see US v. Jones, 542 F. 2d 661 (6th Cir. 1976):

[T]here is a vast difference between overhearing someone on an extension and installing an electronic listening device to monitor all incoming and outgoing telephone calls.

And so the defendants, hapless victims though they seemed to have been in their own right, were found to have violated the Wiretap Act.

But Deal v. Spears is a telephone case, and telephone cases are fairly straightforward. The statutory language clearly reaches the contents of communications made with that technology, and all that’s really been left for courts to decide is how broadly to construe the few exemptions the statute articulates. What has been much harder is figuring out how to extend the Wiretap Act’s prohibitions against surveillance to communications made via other technologies (i.e., the Internet), or to aspects of those communications that seem to pertain more to how they should be routed than to their underlying message. However, privacy interests are privacy interests, and no amount of legal hairsplitting alleviates the harm that can result when any identifiable aspect of someone’s communications can be surveilled. There is a lot that the Wiretap Act, both in terms of its statutory history and its subsequent case law, can teach us about surveillance policy, and we would be foolish not to heed those lessons.

More on them later.

Mar 27 2013

I’ve written before about the balance privacy laws need to strike with respect to the data aggregation made possible by the digital age. When it comes to data aggregated or accessed by the government, law and policy should provide some firm checks to ensure that such aggregation or access does not violate people’s Fourth Amendment right “to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures.” Such limitations don’t forever hobble legitimate investigations of wrongdoing; they simply require adequate probable cause before the digital records of people’s lives are exposed to police scrutiny. You do not need to have something to hide in order not to want that.

But all too often when we demand that government better protect privacy, it’s not because we want the government to change its own behavior; on the contrary, we want it to force private parties to change theirs. Which isn’t to say that there is no room for concern when private parties aggregate personal data. Such aggregations can easily be abused, either by the private parties themselves or by the government (which tends to have all too easy access to them). But as this recent article in the New York Times suggests, a better way to construct the regulation might be to focus less on how private parties collect the data and more on the subsequent access to and use of the data once collected, since that is generally the source from which any possible harm would flow. The problem with privacy regulation that is too heavy-handed about how it allows technology to interact with data is that such regulation can choke further innovation, often undesirably. As a potential example, although mere speculation, this article suggests that Google discontinued support for its popular Google Reader product due to the burdens of complying with myriad privacy regulations. Assuming this suspicion is true — but even if it’s not — while perhaps some of this regulation vindicates important policy values, it is fair to question whether it does so in a sufficiently nuanced way, one that doesn’t provide a disincentive for innovators to develop and support new products and technologies. If such regulation is having that chilling effect, we may reasonably want to question whether these enforcement mechanisms have gone too far.

Meanwhile, public outcry has largely ignored much more obvious and dangerous incursions into people’s privacy rights by government actors, a notable example of which will be discussed in the following post.

Feb 20 2013

At an event on CFAA reform last night I heard Brewster Kahle say what to my ears sounded like, “Law that follows technology tends to be ok. Law that tries to lead it is not.”

His comment came after an earlier tweet I’d made:

I think we need a per se rule that any law governing technology that was enacted more than 10 years ago is inherently invalid.

In posting that tweet I was thinking about two horrible laws in particular, the Computer Fraud and Abuse Act (CFAA) and the Electronic Communications Privacy Act (ECPA). The former attempts to forbid “hacking,” and the latter ostensibly tried to update 1968’s Wiretap Act to cover information technology. In both instances the laws as drafted generally incorporated the assumption that technology as understood then would be the technology the world would have forever hence, a prediction that has obviously proven false. But we are nonetheless left with laws like these on the books, laws that hobble further innovation by enshrining in our legal code what is right and wrong when it comes to our computer code as we understood it in 1986, regardless of whether, if considered afresh and applied to today’s technology, we would still think so.

A friend did challenge my tweet, however: “What about Section 230? (47 U.S.C. § 230).” This is a law from 1996, and he has a point. Section 230 is a piece of legislation that largely immunizes Internet service providers from liability for content posted on their systems by their users – and let’s face it: the very operational essence of the Internet is all about people posting content on other people’s systems. However, unlike the CFAA and ECPA, Section 230 has enabled technology to flourish, mostly by purposefully getting the law itself out of the way of the technology.

The above are just a few examples of laws that have either served technology well – or served to hamper it. There are certainly more, and some laws might ultimately do a bit of both. But the general point is sound: law that is too specific is often too stifling. Innovation needs to be able to happen however it needs to, without undue hindrance from legislators who could not even begin to imagine what that innovation might look like so many years before. After all, if they could have imagined it then, it would not be so innovative now.

May 19 2012

There’s no discussing technology law without discussing how it implicates privacy.  But privacy is such a broad concept; to discuss it in any meaningful way requires a definition with more detail.

I see there being (at least for purposes of the sort of discussion on this site) two main types of privacy: privacy from the government, and privacy from other individuals.  And when it comes to regulating the intersection of privacy and technology, these two types of privacy require very different treatment.

Feb 02 2012

I’ve written before about Netflix petitioning Congress to modify the Video Privacy Protection Act (VPPA) to allow users to easily share what they are watching to social networks. Right now users can easily share what books they are reading and what music they are listening to, but because the videos they stream may be covered by this videotape-era law, Netflix is concerned it could run afoul of it if it allowed similarly easy sharing.

But as Susan Crawford notes in this article, Netflix’s attempt to harmonize privacy law vis-à-vis the sharing of what people are streaming with what they are reading or listening to may be backfiring: harmonization may well occur, not by making it easier to share video but rather by making it harder to share those other media too.

Webkinz privacy complaint filed with FTC

Dec 14 2011

From the Los Angeles Times, the Campaign for a Commercial-Free Childhood has filed a complaint with the FTC alleging deceptive and unfair trade practices by the Webkinz website.  The organization accuses the children’s site and its corporate parent Ganz of violating facets of the Children’s Online Privacy Protection Act, which governs the collection and maintenance of children’s personal information, by failing to link to its privacy policy from its home page and by writing that policy in “vague, confusing and contradictory” language.

According to the complaint, Webkinz asks children to provide their first name, date of birth, gender and state of residence during registration, urging the users “it is important to use real information.” As the child navigates the animated website, dubbed Webkinz World, Webkinz monitors the child’s activity by depositing software to track his or her movements through the site, the complaint said.

As the children play in Webkinz World — which is aimed at children ages 6 to 13 and enables users to play games and interact with other members — Ganz allows third parties to track their activities for behavioral advertising purposes, the advocacy group alleges.

Ganz says parents can “easily opt out” of having their children view ads, noting it is “committed to being highly responsible in our approach to advertising.” But ads continue to appear on the site, even after parents have opted out, according to the complaint. In fact, the complaint said, ads are incorporated into Webkinz games such as “Wheel of Wow,” which attracts some 4 million players a month.

The Campaign for a Commercial-Free Childhood alleges that Ganz’s privacy policy is deceptive because it states that the information it gathers from children during the registration process could not be used to identify the child offline. It further alleges that the practice of using software — “cookies” and “web beacons” — to track children’s activities and serve them targeted ads without parental consent “contravenes FTC guidance on behavioral advertising” and amounts to an unfair trade practice.
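For readers curious about the mechanics the complaint invokes, here is a minimal, purely illustrative sketch of how a “web beacon” can work: a tiny invisible image is embedded in each page, and when the browser loads it, the server pairs a cookie-based visitor ID with the page being viewed. The sketch is written in Python using only the standard library; every name in it is hypothetical, and none of it is drawn from the complaint or from any actual Webkinz or Ganz code.

import uuid
from http.server import BaseHTTPRequestHandler, HTTPServer

# A transparent 1x1 GIF: the classic "web beacon" payload.
PIXEL = (b"GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\xff\xff\xff"
         b"!\xf9\x04\x01\x00\x00\x00\x00,\x00\x00\x00\x00\x01\x00\x01"
         b"\x00\x00\x02\x02D\x01\x00;")

class BeaconHandler(BaseHTTPRequestHandler):
    # Hypothetical handler, illustrative only: not actual Webkinz code.
    def do_GET(self):
        # Reuse the visitor's tracking ID if the browser sent one back;
        # otherwise mint a new ID and set it as a cookie.
        visitor_id = None
        for part in self.headers.get("Cookie", "").split(";"):
            name, _, value = part.strip().partition("=")
            if name == "visitor_id":
                visitor_id = value
        is_new = visitor_id is None
        if is_new:
            visitor_id = uuid.uuid4().hex
        # The "tracking" itself: every page that embeds this pixel, e.g.
        # <img src="http://tracker.example/beacon.gif?page=/games/wheel">,
        # produces a logged (visitor, page) pair when the image loads.
        print(f"visitor {visitor_id} viewed {self.path}")
        self.send_response(200)
        self.send_header("Content-Type", "image/gif")
        if is_new:
            self.send_header("Set-Cookie", "visitor_id=" + visitor_id)
        self.end_headers()
        self.wfile.write(PIXEL)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), BeaconHandler).serve_forever()

Note that nothing in this mechanism requires the visitor’s consent, or even awareness, which is precisely why COPPA conditions this kind of collection from children on notice and verifiable parental consent.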