Oct 03 2017

I always knew, even before I applied to college, that I wanted to be a mass communications major.  At UC Berkeley (where I went) the major required choosing from among several prerequisites.  On a lark, I decided to take Sociology 1.

As a major portion of our grade, we needed to do some sort of social research project.  I was new to the Bay Area and surprised to see how many panhandlers congregated near the BART stations in San Francisco.  So I decided to research commuters’ attitudes towards giving money to them.

My classmate and I put together a one-page survey that collected some broad demographic data (age, sex, general income level, etc.) and then asked several questions about donation habits.  Then we set out for a BART station to distribute our survey to evening commuters.

Our goal was to give a survey to everyone we could, but we also had some sense of not wanting to skew the data we collected by accidentally giving the survey to, say, more men than women.  So we tried to passively make sure we were giving it out in relatively equal numbers to both.  And from 5pm to 6pm that was easy.  But once 6pm rolled around, all of a sudden we noticed that we couldn’t find many women to give it to.  Male commuters vastly outnumbered them.  We administered the survey on two evenings, and both times made the same observation.

Nonetheless we persevered, and managed to collect 100 usable surveys, of which 50 ultimately turned out to be from men and 50 from women.  But then we noticed another gender difference:

Of those 50 men, 27 reported earning more than $50,000 a year.

Of those 50 women: 6.
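For the statistically inclined: a split that lopsided is very unlikely to be chance, and a standard chi-square test confirms the intuition. Here is a minimal sketch in Python (assuming scipy is available; it was no part of the original project, and the counts are just the ones reported above arranged as a 2×2 contingency table):

```python
# Back-of-the-envelope check of the survey's income split.
# Not part of the original project; a sketch assuming scipy is installed.
from scipy.stats import chi2_contingency

#              >$50k  <=$50k
table = [[27, 23],   # the 50 men
         [ 6, 44]]   # the 50 women

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.2g}")
# p lands far below 0.05, so the difference is unlikely to be an
# artifact of having sampled only 50 commuters of each sex.
```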

And this is why I became a sociologist.  Because while I firmly believe that people are all individuals capable of free will, it is clear that there are unseen forces that affect their decisions.  Sociology is about revealing what those forces are.

The paper we wrote is now lost to history (or lost in an inaccessible attic somewhere, which is essentially the same thing), but my recollection is that the data revealed yet another gender difference: as men grew more wealthy they tended to give less, whereas for women, the trend was the opposite.  Based on the written comments we got back we surmised that poorer men had a greater sense of empathy for those needing handouts, and wealthier women a greater sense of freedom to be able to afford to help.

But whatever the result and whatever the reason, the takeaway from the project I still carry with me was that we need to pay attention to those invisible forces, particularly in policy discussions.  We can’t simply demand that people act differently than they do: we need to understand why they act as they do and what needs to change for them to be able to choose to act differently.

Aug 22 2017

The following is a cross-post of something I wrote on Techdirt last week.  Some people have taken issue with the fact that I did not fully analyze exactly how VARA (see below) would specifically apply to the Confederate monuments, but that wasn't the point.  The point was that we added something to copyright law that could easily interact with public art controversies, in a way that is not going to make them any easier to sort out.

There’s no issue of public interest that copyright law cannot make worse. So let me ruin your day by pointing out there’s a copyright angle to the monument controversy: the Visual Artists Rights Act (VARA), a 1990 addition to the copyright statute that allows certain artists to control what happens to their art long after they’ve created it and no longer own it. Techdirt has written about it a few times, and it was thrust into the spotlight this year during the controversy over the Fearless Girl statue.

Now, VARA may not be specifically applicable to the current controversy. For instance, it's possible that at least some of the Confederacy monuments in question are too old to be subject to VARA's reach, or, if not, that all the i's were dotted on the paperwork necessary to avoid it. (It's also possible that neither is the case — VARA may still apply, and artists behind some of the monuments might try to block their removal.) But it would be naïve to believe that we'll never ever have monument controversies again. The one thing VARA gets right is an acknowledgement of the power of public art to be reflective and provocative. But what a society finds reflective and provocative can change over time as the society evolves. As we see now, figuring out how to handle these changes can be difficult, but at least people in the community can make the choice, hard though it may sometimes be, about what art they want in their midst. VARA, however, takes away that discretion by giving it to someone else who can trump it (so to speak).

Of course, as with any law, the details matter: what art was it, whose art was it, where was it, who paid for it, when was it created, who created it, and is whoever created it dead yet… all these questions matter in any situation dealing with the removal of a public art installation because they affect whether and how VARA actually applies. But to some extent the details don't matter. While in some respects VARA is currently relatively limited, we know from experience that limited monopolies in the copyright space rarely stay so limited. What matters is that we created a law whose express effect is to undermine the ability of a community with art in its midst to decide whether it wants to continue to have that art in its midst, and we thought that was a good idea. Given the power of art to be a vehicle of expression, even political expression or outright propaganda, allowing any law to etch that expression in stone (as it were) is something we should really rethink.

Copyright Law And The Grenfell Fire – Why We Cannot Let Legal Standards Be Locked Up By Copyright (cross-post)

Jul 12 2017

The following was also posted on Techdirt.

It’s always hard to write about the policy implications of tragedies – the last thing their victims need is the politicization of what they suffered. At the same time, it’s important to learn what lessons we can from these events in order to avoid future ones. Earlier Mike wrote about the chilling effects on Grenfell residents’ ability to express their concerns about the safety of the building – chilling effects that may have been deadly – because they lived in a jurisdiction that allowed critical speech to be easily threatened. The policy concern I want to focus on now is how copyright law also interferes with safety and accountability both in the US and elsewhere.

I’m thinking in particular about the litigation Carl Malamud has found himself faced with because he dared to post legally-enforceable standards on his website as a resource for people who wanted ready access to the law that governed them. (Disclosure: I helped file amicus briefs supporting his defense in this litigation.) A lot of the discussion about the litigation has focused on the need for people to know the details of the law that governs them: while ignorance of the law is no excuse, as a practical matter people need a way to actually know what the law is if they are going to be expected to comply with it. Locking it away in a few distant libraries or behind paywalls is not an effective way of disseminating that knowledge.

But there is another reason why the general public needs to have access to this knowledge. Not just because it governs them, but because others' compliance with it obviously affects them. Think for instance about the tenants in these buildings, or any buildings anywhere: how can they be equipped to know if the buildings they live in meet applicable safety standards if they never can see what those standards are? They instead are forced to trust that those with privileged access to that knowledge will have acted on it accordingly. But as the Grenfell tragedy has shown, that trust may be misplaced. "Trust, but verify," it has been famously said. But without access to the knowledge necessary to verify that everything has been done properly, no one can make sure that it has. That makes the people who depend on this compliance vulnerable. And as long as copyright law is what prevents them from knowing if there has been compliance, then it is copyright law that makes them so.

Why Protecting The Free Press Requires Protecting Trump’s Tweets (cross-post)

Jul 06 2017

The following was originally posted on Techdirt.

Sunday morning I made the mistake of checking Twitter first thing upon waking up. As if just a quick check of Twitter would ever be possible during this administration… It definitely wasn’t this past weekend, because waiting for me in my Twitter stream was Trump’s tweet of the meme he found on Reddit showing him physically beating the crap out of a personified CNN.

But that's not what waylaid me. What gave me pause were all the people demanding it be reported to Twitter for violating its terms of service. The fact that so many people thought that was a good idea worries me, because the expectation that when bad speech happens someone will make it go away is not a healthy one. My concern inspired a tweet storm, which has now been turned into this post.

Diffusion of the Internet Among University Undergraduates

Jul 05 2017

The attached paper is a re-publication of the honors thesis I wrote in 1996 as a senior at the University of California at Berkeley.  As the title indicates, it was designed to study Internet adoption among my fellow students.

The Importance Of Defending Section 230 Even When It’s Hard (cross-post)

Jun 13 2017

Cross-posted on Techdirt.

The Copia Institute filed another amicus brief this week, this time in Fields v. Twitter. Fields v. Twitter is one of a flurry of cases being brought against Internet platforms alleging that they are liable for the harms caused by terrorists using their sites. The facts in these cases are invariably awful: often people have been brutally killed and their loved ones are seeking redress for their loss. There is a natural, and perfectly reasonable, temptation to give them some sort of remedy from someone, but as we argued in our brief, that someone cannot be an Internet platform.

There are several reasons for this, including some that have nothing to do with Section 230. For instance, even if Section 230 did not exist and platforms could be liable for the harms resulting from their users’ use of their services, for them to be liable there would have to be a clear connection between the use of the platform and the harm. Otherwise, based on the general rules of tort law, there could be no liability. In this particular case, for instance, there is a fairly weak connection between ISIS members using Twitter and the specific terrorist act that killed the plaintiffs’ family members.

But we left that point to Twitter to ably argue. Our brief focused exclusively on the fact that Section 230 should prevent a court from ever even reaching the tort law analysis. With Section 230, a platform should never find itself having to defend against liability for harm that may have resulted from how people used it. Our concern is that in several recent cases with their own terrible facts, the Ninth Circuit in particular has found itself willing to make exceptions to that rule. As much as we were supporting Twitter in this case, trying to help ensure the Ninth Circuit does not overturn the very good District Court decision that had correctly applied Section 230 to dismiss the case, we also had an eye to the long view of reversing this trend.

Helping Platforms Protect Speech By Avoiding Bogus Subpoenas (cross-post)

May 26 2017

The following was cross-posted on Techdirt.

We often talk about how protecting online speech requires protecting platforms, like with Section 230 immunity and the safe harbors of the DMCA. But these statutory shields are not the only way law needs to protect platforms in order to make sure the speech they carry is also protected.

Earlier this month, I helped Techdirt's think tank arm, the Copia Institute, file an amicus brief in support of Yelp in a case called Montagna v. Nunis. Like many platforms, Yelp lets people post content anonymously. Often people are only willing to speak when they can do so without revealing who they are (note how many people participate in the comments here without revealing their real names), which is why the right to speak anonymously has been found to be part and parcel of the First Amendment right of free speech. It's also why sites like Yelp allow it: often that's the only way users will feel comfortable posting reviews candid enough to be useful to those who depend on such sites to help them make informed decisions.

But as we also see, people who don't like the things said about them often try to attack their critics, and one way they do this is by trying to strip these speakers of their anonymity. True, sometimes online speech can cross the line and actually be defamatory, in which case being able to discover the identity of the speaker is important. Nothing in this case prevents legitimately aggrieved plaintiffs from using subpoenas to discover the identity of those whose unlawful speech has injured them so they can sue for relief. Unfortunately, however, it is not just people with legitimate claims who are sending subpoenas; in many instances they are being sent by people objecting to speech that is perfectly legal, and that's a problem. Unmasking the speakers behind protected speech not only violates their First Amendment right to speak anonymously; it also chills speech generally, by making the anonymity protection that plenty of legal speech depends on suddenly illusory.

There is a lot that can and should be done to close off this vector of attack on free speech. One important measure is to make sure platforms are able to resist the subpoenas they get demanding they turn over whatever identifying information they have. There are practical reasons why they can’t always fight them — for instance, like DMCA takedown notices, they may simply get too many — but it is generally in their interest to try to resist illegitimate subpoenas targeting the protected speech posted anonymously on their platforms so that their users will not be scared away from speaking on their sites.

But when Yelp tried to resist the subpoena connected with this case, the court refused to let them stand in to defend the user’s speech interest. Worse, it sanctioned(!) Yelp for even trying, thus making platforms’ efforts to stand up for their users even more risky and expensive than they already are.

So Yelp appealed, and we filed an amicus brief supporting their effort. Fortunately, earlier this year Glassdoor won an important California State appellate ruling that validated attempts by platforms to quash subpoenas on behalf of their users. That decision discussed why the First Amendment and California State Constitution required platforms to have this ability to quash subpoenas targeting protected speech, and hopefully this particular appeals court will agree with its sister court and make clear that platforms are allowed to fight off subpoenas like this. As we pointed out in our brief, both state and federal law and policy require online speech to be protected, and preventing platforms from resisting subpoenas is out of step with those stated policy goals and constitutional requirements.

More on the First Amendment problems with DMCA Section 512

Feb 23 2017

Over at Techdirt there's a write-up of the latest comment I submitted on behalf of the Copia Institute as part of the Copyright Office's study on the operation of Section 512 of the Digital Millennium Copyright Act. As we've told the Copyright Office before, that operation has had a huge impact on online free speech. (Those comments have also been cross-posted here.)

In some ways this impact is good: providing platforms with protection from liability for their users' content means that they can be available to facilitate that content and speech. But all too often and in all too many ways the practical impact on free speech has been a negative one, with speech being much more vulnerable to censorship via takedown notice than it ever would have been if the person objecting to it (even for copyright-related reasons) had to go to court to get an injunction to take it down. Not only is the speech itself more vulnerable than it should be, but the protection the platforms depend on ends up being more vulnerable as well, because platforms must risk it every time they refuse to act on a takedown notice, no matter how invalid that notice may be.

Our earlier comment pointed out in some detail how the current operation of the DMCA has been running afoul of the protections the First Amendment is supposed to afford speech, and in this second round of comments we've highlighted some further deficiencies. In particular, we reminded the Copyright Office of the problems with "prior restraint," which the First Amendment also prohibits. Prior restraint is what happens when speech is punished, such as by being removed, before there has been any adjudication establishing that it deserves to be punished. The First Amendment prohibits prior restraint because if the speech would otherwise have been protected, by the time that is determined the damage of its removal will already have been done.

Making sure that legitimate speech cannot be removed is why we normally require the courts to carefully adjudicate whether its removal can be ordered before its removal will be allowed. But with the DMCA there is no such judicial check: people can send demands for all sorts of content to be removed, even content that isn't actually infringing, because so long as Section 512(f) continues to have no teeth there is little to deter them. Instead platforms are forced to treat every takedown notice as a legitimate demand, regardless of whether it is. Not only does this mean they need to delete the content but, in the wake of some recent cases, it seems they also must hold each allegation against their user and then cut that user off from their services once too many such accusations have accrued, regardless of whether any of them were ever valid.

As we did before, we counseled the Copyright Office to return to first principles: the DMCA was supposed to enhance online free speech, and it's important to make sure that all of its provisions work together to do just that. To the extent that it may be appropriate for the Copyright Office to make recommendations on this front, one is to remind all concerned that the penalty articulated in Section 512(f) to sanction bad takedown notices can and should be applied according to a flexible standard, rather than the rigid one courts have lately adopted. In any case, however, the Copyright Office certainly should not be advocating changes to any provisions or their interpretations that would make the DMCA any less compatible with the First Amendment than it has already tended to be.

Dec 17 2016

The following was recently published on Techdirt, although with a different title.

Regardless of what one thinks about the apparent result of the 2016 election, it will inevitably present a number of challenges for America and the world. As Mike wrote about last week, they will touch on many of the tech policy issues often discussed here. The following is a closer look at some of the implications (and opportunities) with respect to several of them, given the unique hallmarks of Trump and his proposed administration.

Apr 08 2016

The following is Section III.C of the comment I submitted in the Copyright Office’s study on the operation of Section 512 of the copyright statute.

Questions #16 and #17 more specifically contemplate the effectiveness of the put-back process articulated in subsection 512(g).  As explained in Section III.B, this mechanism is not effective for restoring wrongfully removed content and is little used.  But it is worth taking a moment here to further explore the First Amendment harms wrought on both Internet users and service providers by the DMCA.[1]

It is part and parcel of First Amendment doctrine that people are permitted to speak, and to speak anonymously.[2]  Although that anonymity can be stripped in certain circumstances, there is nothing about an allegation of copyright infringement that should cause it to be stripped automatically.  Particularly in light of copyright law incorporating free speech principles,[3] this anonymity cannot be more fragile than it would be in any other circumstance where speech is subject to legal challenge.  The temptation to characterize all alleged infringers as malevolent pirates who get what they deserve must be resisted; the DMCA targets all speakers and all speech, no matter how fair or necessary to public discourse that speech is.

And yet, with the DMCA, not only is speech itself more vulnerable to censorship via copyright infringement claim than it would be for other types of allegations,[4] but so are the necessary protections speakers depend on to be able to speak.[5]  Between the self-identification requirements of subsection 512(g) put-back notices and the ease of demanding user information with subsection 512(h) subpoenas, which also need not be predicated on actual lawsuits,[6] Internet speakers on the whole must fear the loss of their privacy if anyone dares to assert an infringement claim, no matter how illegitimate or untested that claim may be.  Given the ease of concocting an invalid infringement claim,[7] and the lack of any incentive not to,[8] the DMCA gives all-too-ready access to the identities of Internet users to the people least deserving of it, at the expense of those who most need it.[9]

Furthermore, the DMCA also compromises service providers' own First Amendment interests in developing the forums and communities they would choose.  The very design of the DMCA puts service providers at odds with their users, forcing them to be antagonistic to their own customers and their own business interests as a condition of protecting those interests.  Attempts to protect their forums or their users can expose them to tremendous costs and potentially incalculable risk, and all of this harm flows from mere allegations that need never be tested in a court of law.  The DMCA forces service providers to enforce censorship compelled by a mere takedown notice, compromise user privacy in response to subsection 512(h) subpoenas (or devote significant resources to trying to quash them), and, vis-à-vis Questions #22 and #23, disconnect users according to termination policies whose sufficiency cannot be known until a court decides they are insufficient.[10]

The repeat infringer policy requirement of subsection 512(i)(A) exemplifies the statutory problem with many of the DMCA's safe harbor requirements.  A repeat infringer policy might only barely begin to be legitimate if it required disconnecting a user only after a certain number of judicial findings of liability for acts of infringement the user had used the service provider to commit.  But at least one service provider has lost its safe harbor for not permanently disconnecting users after only a certain number of allegations, even though those allegations had never been tested in a court consistent with the principles of due process or the prohibition on prior restraint.[11]

In no other context would we find these sorts of government incursions against the rights of speakers constitutional, robbing them of their speech, anonymity, and the opportunity to further speak, without adequate due process.  These incursions do not suddenly become constitutionally sound just because the DMCA coerces service providers to be the agent committing these acts instead.