May 14, 2013

This specific blog post has been prompted by news that the Department of Justice had subpoenaed the phone records of the Associated Press. Many are concerned about this news for many reasons, not the least of which being that this revelation suggests that, at minimum, the Department of Justice violated many of its own rules in how it did so (i.e., it should have reported the existence of the subpoena within 45 days, maybe 90 on the outside, but here it seems to have delayed a year). The subpoena of the phone records of a news organization also threatens to chill newsgathering generally, for what sources would want to speak to a reporter if the government could be presumed to know that these communications had been taking place? For reasons discussed in the context of shield laws, reporters can’t do their information-gathering-and-sharing job if the people they get their information from are too frightened to share it. Even if one were to think that in some situations loose lips do indeed sink ships and it’s sometimes bad for people to share information, there’s no way the law can presumptively or prospectively differentiate the bad situations from the good. In order for the good situations to happen – for journalists to help serve as a check on power – the law needs to give them a free hand to discover the information they need to do that.

But the above discussion is largely tangential to the point of this post. The biggest problem with the story of the subpoena is not *that* it happened but that, for all intents and purposes, it *could* happen, and not just because of how it affected the targeted journalists but because of how it would affect anyone subject to a similar subpoena for any reason. Subpoenas are not search warrants, where a neutral arbiter ensures that the government has a proper reason to access the information it seeks. Subpoenas are simply the form by which the government demands the information it wants, and as long as the government only has to face what amounts to a clerical hurdle to get these sorts of communications records there are simply not enough legal barriers to protect the privacy of the people who made them.

May 13, 2013

One of the cases I came across when I was writing an article about Internet surveillance was Deal v. Spears, 980 F.2d 1153 (8th Cir. 1992), a case involving the interception of phone calls that was arguably prohibited by the Wiretap Act (18 U.S.C. § 2510 et seq.). The Wiretap Act, for some context, is a 1968 statute that applied Fourth Amendment privacy values to telephones, prohibiting both the government and private parties from intercepting the contents of conversations taking place through the telephone network. That prohibition is fairly strong: while there are certain types of interceptions that are exempted from it, these exemptions have not necessarily been interpreted generously, and Deal v. Spears was one of those cases where the interception was found to have run afoul of the prohibition.

It’s an interesting case for several reasons, one being that it upheld the privacy rights of an apparent bad actor (of course, so does the Fourth Amendment generally). In this case the defendants owned a store that employed the plaintiff, whom the defendants strongly suspected – potentially correctly – of stealing from them. In order to catch the plaintiff in the act, the defendants availed themselves of the phone extension in their adjacent house to intercept the calls the plaintiff made on the store’s business line to further her crimes. Ostensibly such an interception could have been exempted under the Wiretap Act: the business extension exemption generally allows business proprietors to listen in to calls made in the ordinary course of business. (See 18 U.S.C. § 2510(5)(a)(i).) But here the defendants didn’t just listen in to business calls; they recorded *all* calls that the plaintiff made, regardless of whether they related to the business or not, and, by virtue of recording them automatically, without the telltale “click” one hears when an actual phone extension is picked up – the click that would otherwise put callers on notice that someone is listening in. This silent, pervasive monitoring of the contents of all communications put the monitoring well beyond the statutory exemption that might otherwise have permitted a more limited interception.

[T]he [defendants] recorded twenty-two hours of calls, and […] listened to all of them without regard to their relation to his business interests. Granted, [plaintiff] might have mentioned the burglary at any time during the conversations, but we do not believe that the [defendants'] suspicions justified the extent of the intrusion.

For a similar view, see United States v. Jones, 542 F.2d 661 (6th Cir. 1976):

[T]here is a vast difference between overhearing someone on an extension and installing an electronic listening device to monitor all incoming and outgoing telephone calls.

And so the defendants, hapless victims though they seemed to have been in their own right, were found to have violated the Wiretap Act.

But Deal v. Spears is a telephone case, and telephone cases are fairly straightforward. The statutory language clearly reaches the contents of those communications made with that technology, and all that’s really been left for courts to decide is how broadly to construe the few exemptions the statute articulates. What has been much harder is figuring out how to extend the Wiretap Act’s prohibitions against surveillance to those communications made via other technologies (i.e., the Internet), or to aspects of those communications that seem to apply more to how they should be routed than their underlying message. However, privacy interests are privacy interests, and no amount of legal hairsplitting alleviates the harm that can result when any identifiable aspect of someone’s communications can be surveilled. There is a lot that the Wiretap Act, both in terms of its statutory history and subsequent case law, can teach us about surveillance policy, and we would be foolish not to heed those lessons.

More on them later.

Apr 11, 2013

The Computer Fraud and Abuse Act is no stranger to these pages.  The tragic suicide of Aaron Swartz at the beginning of the year, following the Department of Justice’s relentless pursuit of him for his downloading of the JSTOR archive, has galvanized a reform movement to overhaul – or at least ameliorate – some of the most troublesome provisions of the CFAA.

One such provision can be found at 18 U.S.C. § 1030(g), which creates a civil cause of action for a party claiming to be aggrieved by the purported wrongdoings described in subsection (a).  While civil causes of action are generally beyond the scope of this blog, having a civil cause of action buried in a statute designed to enable criminal prosecutions can be problematic for defendants facing the latter because the civil litigation, as it explores the contours of the statute and its internal definitions, tends to leave in its wake precedent that prosecutors can later use.  Which is unfortunate, because how the statute may be interpreted in a civil context – which inherently can only reflect the particular dynamics of the particular civil dispute between these particular private parties – reshapes how the statute will be interpreted in a criminal context.  Especially with a law like the CFAA, whose language always tempts excessive application, these civil precedents can vastly expand the government’s prosecutorial power over people’s technology use, and easily in a way Congress never intended.  One should also never presume that the outcome of a civil dispute correlates to a result that is truly fair and just; miscarriages of justice happen all the time, often simply because it is so difficult and expensive to properly defend against a lawsuit, especially one asserting a claim from so imprecisely drafted and overly broad a statute as the CFAA.

The reality is that plaintiffs often abuse the judicial process to bully defendants, and that brings us to the second subject of this post, Prenda Law, which is currently being exposed, judicially and publicly, as one of the biggest bullies on the block.  But why should we care here?  Because although Prenda has most notoriously exploited the Copyright Act for its legal attacks, it has also shown itself ready, willing, and able to abuse the easily-abusable CFAA in order to enrich itself as well.

Apr 09, 2013

I found myself blogging about journalist shield law at my personal blog today. As explained in that post, my experience as editor of my high school paper made newsman’s privilege a topic near and dear to my heart. So, as food for thought here, I thought I would resurrect a post about how newsman’s privilege interacts with blogging, which I wrote a few years ago on the now-defunct blog I kept as a law student. Originally written and edited in 2006/2007, with a few more edits for clarity now.

At a blogging colloquium at Harvard Law School, Eugene Volokh gave a presentation on the free speech protections that might be available for blogging, with the important (and, in my opinion, eminently reasonable) suggestion that free speech protections should not be medium-specific. In other words, if these protections would be available to you if you’d put your thoughts on paper, they should be available if you’d put them on a blog.

Mar 27, 2013

I was interviewed yesterday about my concerns with the new Golden Gate Bridge toll system. Like an increasing number of other roadways, as of this morning the bridge has gone to all-electronic tolling and done away with its human toll-takers, ostensibly as a cost-cutting move. But while it may save the Bridge District some money on salaries, at what cost to the public does it do so?

With the toll-takers, bridge users could pay cash, anonymously, whenever they wanted to use the bridge. FasTrak, the previous electronic toll system, has also been an option for the past several years, offering a discount to bridge users who didn’t mind having their travel information collected, stored, and potentially accessed by others in exchange for some potential expediency. But now bridge users will either have to use FasTrak or agree to have their license plates photographed (and thereby have their travel information collected, stored, and potentially accessed by others) and then compared to DMV records so that they can be invoiced for their travels.

Mar 27, 2013

I’ve written before about the balance privacy laws need to strike with respect to the data aggregation made possible by the digital age. When it comes to data aggregated or accessed by the government, law and policy should provide some firm checks to ensure that such aggregation or access does not violate people’s Fourth Amendment right “to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures.” Such limitations don’t forever hobble legitimate investigations of wrongdoing; they simply require adequate probable cause before the digital records of people’s lives can be exposed to police scrutiny. You do not need to have something to hide in order not to want that.

But all too often when we demand that government better protect privacy it’s not because we want the government to; on the contrary, we want it to force private parties to. Which isn’t to say that there is no room for concern when private parties aggregate personal data. Such aggregations can easily be abused, either by private parties or by the government itself (which tends to have all too easy access to them). But as this recent article in the New York Times suggests, a better way to construct the regulation might be to focus less on how private parties collect the data and more on the subsequent access to and use of the data once collected, since that is generally where any harm would flow from. The problem with privacy regulation that is too heavy-handed in how it allows technology to interact with data is that it can choke further innovation, often undesirably. As a potential example, although mere speculation, this article suggests that Google discontinued its support for its popular Google Reader product due to the burdens of compliance with myriad privacy regulations. Assuming this suspicion is true — but even if it’s not — while perhaps some of this regulation vindicates important policy values, it is fair to question whether it does so in a way nuanced enough not to deter innovators from developing and supporting new products and technologies. If such regulation is having that chilling effect, we may reasonably want to question whether these enforcement mechanisms have gone too far.

Meanwhile the public has largely been ignoring much more obvious and dangerous incursions into its privacy rights by government actors, a notable example of which will be discussed in the following post.

Feb 20, 2013

At an event on CFAA reform last night I heard Brewster Kahle say what to my ears sounded like, “Law that follows technology tends to be ok. Law that tries to lead it is not.”

His comment came after an earlier tweet I’d made:

I think we need a per se rule that any law governing technology that was enacted more than 10 years ago is inherently invalid.

In posting that tweet I was thinking about two horrible laws in particular, the Computer Fraud and Abuse Act (CFAA) and the Electronic Communications Privacy Act (ECPA). The former attempts to forbid “hacking,” and the latter ostensibly tried to update 1968’s Wiretap Act to cover information technology. In both instances the laws as drafted generally incorporated the attitude that technology as understood then would be the technology the world would have forever hence, a prediction that has obviously proven false. But we are nonetheless left with laws like these on the books, laws that hobble further innovation by how they’ve enshrined in our legal code what is right and wrong when it comes to our computer code, as we understood it in 1986, regardless of whether, if considered afresh and applied to today’s technology, we would still think so.

In response to my tweet, however, a friend challenged me: “What about Section 230?” (47 U.S.C. § 230). This is a law from 1996, and he has a point. Section 230 is a piece of legislation that largely immunizes Internet service providers from liability for content posted on their systems by their users – and let’s face it: the very operational essence of the Internet is all about people posting content on other people’s systems. However, unlike the CFAA and ECPA, Section 230 has enabled technology to flourish, mostly by purposefully getting the law itself out of the way of the technology.

The above are just a few examples of some laws that have either served technology well – or served to hamper it. There are certainly more, and some laws might ultimately do a bit of both. But the general point is sound: law that is too specific is often too stifling. Innovation needs to be able to happen however it needs to, without undue hindrance caused by legislators who could not even begin to imagine what that innovation might look like so many years before. After all, if they could imagine it then, it would not be so innovative now.

Feb 18, 2013

It’s become clear that I will need to talk more about copyright policy in general on these pages, even if in a not-particularly-criminal-law context.  As we evaluate criminalizing acts involving technology that cause “harm,” and since some of that notion of harm is predicated on our notion of copyright, it’s important that we truly understand where the concept of copyright comes from and what policy objectives it is supposed to achieve.  Particularly because it’s a fair question whether modern copyright law still achieves those objectives, or instead potentially represents its own harm.

Feb 13, 2013

Last week the BBC contributed its thoughts to the W3C committee contemplating the Encrypted Media Extensions proposal to the HTML standard, which would allow for more standardized video viewing across multiple platforms.  After establishing its bona fides as a source of Internet video broadcasting, it got to the point.  The proposal, it said, was overall a helpful one as far as standardization was concerned.  Technological fragmentation is a problem for anyone who wants to make sure their video is viewable by a wide audience. Despite that enormous benefit, however, the BBC could only support the proposal if it incorporated a DRM standard such that the BBC could pointedly control the retail market for its programming.

It’s worth questioning whether manipulating markets ultimately enlarges them — or, instead, potentially reduces them — but that’s not a subject for these pages right now.  The problem was how the BBC required the proposal to be changed in order to ostensibly enable such manipulation:

The proposed Encrypted Media Proposal looks to be a useful starting point. However, the BBC is unlikely to be able to use any such mechanism unless we feel that it is sufficiently secure that there would be the possibility of legal action in the event of bypassing it.

This is not an easy qualification: the W3C is an international body, and laws on bypassing technical protection measures vary significantly from country to country. In this instance the BBC would be looking for such a mechanism to be secure enough in the UK that it would be a “effective technical protection mechanism” under section 296zb of the Copyright, Designs and Patents Act 1988 (as modified by the Copyright and Related Rights Regulations 2003). We expect that other providers will look for similar assurances in their own territories, such as the anti-circumvention provisions in the Digital Millennium Copyright Act in the United States. (emphasis added)

To summarize, the BBC, “the world’s leading public service broadcaster,” “established by a Royal Charter” and “primarily funded by the licence fee paid by UK households,” with a “mission [...] to enrich people’s lives with programmes that inform, educate and entertain,” has just lobbied an international technical standards organization – one charged with “lead[ing] the World Wide Web to its full potential by developing protocols and guidelines that ensure the long-term growth of the Web” in a way that “involves participation, sharing knowledge, and thereby building trust on a global scale” – to make its standards such that people could be imprisoned for using that very technology in a way the BBC did not like.

True, perhaps the BBC was only contemplating there being civil penalties, which is problematic as well. But both the DMCA and section 296ZB of the Copyright, Designs and Patents Act 1988 allow for state criminal enforcement when people circumvent technologies designed to control access to content, regardless of how legitimate that access would be.

Feb 09, 2013

The following case, Twentieth Century Fox v. Harris, is not a criminal matter.  But I want to include it here nonetheless, in part because it’s important to talk about copyright policy generally, particularly given the increasing trend for it to be criminalized, and in part because, in this case, hardly two weeks after I asserted that copyright infringement analogized more to trespass than to theft, a court independently reached the same conclusion.