Sep 01 2018
 

This post originally appeared on Techdirt on 4/17/18.

Over the weekend Trump tweeted:

Attorney Client privilege is now a thing of the past. I have many (too many!) lawyers and they are probably wondering when their offices, and even homes, are going to be raided with everything, including their phones and computers, taken. All lawyers are deflated and concerned!

 

Attorney-client privilege is indeed a serious thing. It is inherently woven into the Sixth Amendment’s right to counsel. That right to counsel is a right to effective counsel. Effective counsel depends on candor by the client. That candor in turn depends on clients being confident that their communications seeking counsel will be confidential. If, however, a client has to fear the government obtaining those communications then their ability to speak openly with their lawyer will be chilled. But without that openness, their lawyers will not be able to effectively advocate for them. Thus the Sixth Amendment requires that attorney-client communications – those communications made in the furtherance of seeking legal counsel – be privileged from government (or other third party) view.

Sep 01 2018
 

This post originally appeared on Techdirt on 1/29/18.

Never mind all the other reasons Deputy Attorney General Rod Rosenstein’s name has been in the news lately… this post is about his comments at the State of the Net conference in DC on Monday. In particular: his comments on encryption backdoors.

As he and so many other government officials have before, he continued to press for encryption backdoors, as if it were possible to have a backdoor and a functioning encryption system. He allowed that the government would not itself need to have the backdoor key; it could simply be a company holding onto it, he said, as if this qualification would lay all concerns to rest.

But it does not, and so near the end of his talk I asked the question, “What is a company to do if it suffers a data breach and the only thing compromised is the encryption key it was holding onto?”

There were several concerns reflected in this question. One relates to what the poor company is to do. It’s bad enough when they experience a data breach and user information is compromised. Not only does a data breach undermine a company’s relationship with its users, but, recognizing how serious this problem is, authorities are increasingly developing policy instructing companies on how they are to respond to such a situation, and it can expose the company to significant legal liability if it does not comport with these requirements.

But if an encryption key is taken, what is at risk is so much more than basic user information, financial details, or even the pool of potentially rich and varied data related to the user’s interactions with the company. Rather, it is every single bit of information the user has ever depended on the encryption system to secure that stands to be compromised. What is the appropriate response of a company whose data breach has now stripped its users of all the protection they depended on for all this data? How can it even begin to try to mitigate the resulting harm? Just what would the government officials who required the company to keep this backdoor key now propose it do? Particularly if the government is going to force companies to be in this position of holding onto these keys, these are answers companies are going to need to know if they are going to be able to afford to be in the encryption business at all.
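The concentration of risk described above can be sketched in a few lines of code. This is a toy illustration only, not real cryptography: the hash-based stream cipher, the message contents, and the key sizes are all invented for the example. What it shows is the structural problem with key escrow: if every user's session keys are recoverable with one escrowed master key, then a breach that leaks only that one key retroactively exposes everything ever encrypted under the system.

```python
# Toy illustration (NOT real cryptography) of why an escrowed "backdoor"
# key concentrates risk: one leaked escrow key unlocks the entire archive.
import hashlib
import secrets

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Derive a pseudorandom keystream from key+nonce (toy stream cipher).
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, message: bytes) -> tuple:
    nonce = secrets.token_bytes(16)
    ct = bytes(m ^ k for m, k in zip(message, keystream(key, nonce, len(message))))
    return nonce, ct

def decrypt(key: bytes, nonce: bytes, ct: bytes) -> bytes:
    return bytes(c ^ k for c, k in zip(ct, keystream(key, nonce, len(ct))))

# Escrow design: each per-message session key is stored encrypted
# ("wrapped") under one master escrow key the company must hold.
escrow_key = secrets.token_bytes(32)

archive = []  # what an eavesdropper could have recorded over the years
for msg in [b"tax records", b"medical history", b"draft legal filing"]:
    session_key = secrets.token_bytes(32)
    wrapped = encrypt(escrow_key, session_key)  # escrow copy of the key
    sealed = encrypt(session_key, msg)          # the actual message
    archive.append((wrapped, sealed))

# A breach that leaks ONLY the escrow key unlocks everything ever sent:
stolen = escrow_key
recovered = []
for (wnonce, wct), (mnonce, mct) in archive:
    session_key = decrypt(stolen, wnonce, wct)
    recovered.append(decrypt(session_key, mnonce, mct))

print(recovered)  # every archived message, in the clear
```

Without escrow, each session key could be discarded after use, so a later breach would expose nothing already sent; it is the mandated retention of the master key that converts one intrusion into a compromise of the whole history.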

Which leads to the other idea I was hoping the question would capture: that encryption policy and cybersecurity policy are not two distinct subjects. They interrelate. So when government officials worry about what bad actors do, as Rosenstein’s comments reflected, it can’t lead to the reflexive demand that encryption be weakened simply because, as they reason, bad actors use encryption. Not when the same officials are also worried about bad actors breaching systems, because this sort of weakened encryption so significantly raises the cost of these breaches (as well as potentially makes them easier).

Unfortunately Rosenstein had no good answer. There was lots of equivocation, punctuated with the assertion that experts had assured him that it was feasible to create backdoors and keep them safe. Time ran out before anyone could ask the follow-up question of exactly who these mysterious experts giving him this assurance were, especially in light of so many other experts agreeing that such a solution is not possible, but perhaps this answer is something Senator Wyden can find out.

Sep 01 2018
 

This post originally appeared on Techdirt on 1/22/18.

Shortly after Trump was elected I wrote a post predicting how things might unfold on the tech policy front with the incoming administration. It seems worth taking stock, now almost a year into it, to see how those predictions may have played out.

Nov 19 2017
 

Originally posted on Techdirt November 15, 2017.

Well, I was wrong: last week I lamented that we might never know how the Ninth Circuit ruled on Glassdoor’s attempt to quash a federal grand jury subpoena served upon it demanding it identify users. Turns out, now we do know: two days after the post ran the court publicly released its decision refusing to quash the subpoena. It’s a decision that doubles down on everything wrong with the original district court decision that also refused to quash it, only now with handy-dandy Ninth Circuit precedential weight.

Like the original ruling, it clings to the Supreme Court’s decision in Branzburg v. Hayes, a case where the Supreme Court explored the ability of anyone to resist a grand jury subpoena. But in doing so it manages to ignore other, more recent, Supreme Court precedents that should have led to the opposite result.

Here is the fundamental problem with both the district court and Ninth Circuit decisions: anonymous speakers have the right to speak anonymously. (See, e.g., the post-Branzburg Supreme Court decision McIntyre v. Ohio Elections Commission). Speech rights also carry forth onto the Internet. (See, e.g., another post-Branzburg Supreme Court decision, Reno v. ACLU). But if the platforms hosting that speech can always be forced to unmask their users via grand jury subpoena, then there is no way for that right to ever meaningfully exist in the context of online speech.

Nov 06 2017
 

This post is the second in a series that ran on Techdirt about the harm done to online speech when platforms face unfettered discovery demands that they are then prevented from talking about.

In my last post, I discussed why it is so important for platforms to be able to speak about the discovery demands they receive seeking to unmask their anonymous users. That candor is crucial: without some sort of check against their abuse, unmasking demands can damage the key constitutional right to speak anonymously.

The earlier post rolled together several different types of discovery instruments (subpoenas, warrants, NSLs, etc.) because to a certain extent it doesn’t matter which one is used to unmask an anonymous user. The issue raised by all of them is that if their power to unmask an anonymous user is too unfettered, then it will chill all sorts of legitimate speech. And, as noted in the last post, the ability for a platform receiving an unmasking demand to tell others it has received it is a critical check against unworthy demands seeking to unmask the speakers behind lawful speech.

The details of each type of unmasking instrument do matter, though, because each one has different interests to balance and, accordingly, different rules governing how to balance them. Unfortunately, the rules that have evolved for any particular one are not always adequately protective of the important speech interests any unmasking demand necessarily affects. As is the case for the type of unmasking demand at issue in this post: a federal grand jury subpoena.

Grand jury subpoenas are very powerful discovery instruments, and with good reason: the government needs a powerful weapon to be able to investigate serious crimes. There are also important constitutional reasons for why we equip grand juries with strong investigatory power, because if charges are to be brought against people, it’s important for due process reasons that they have been brought by the grand jury, as opposed to a more arbitrary exercise of government power. Grand juries are, however, largely at the disposal of government prosecutors, and thus a grand jury subpoena essentially functions as a government unmasking demand. The ability to compel information via a grand jury subpoena is therefore not a power we can allow to exist unchecked.

Which brings us to the story of the grand jury subpoena served on Glassdoor, which Paul Levy and Ars Technica wrote about earlier this year. It’s a story that raises three interrelated issues: (1) a poor balancing of the relevant interests, (2) a poor structural model that prevented a better balancing, and (3) a gag that has made it extraordinarily difficult to create a better rule governing how grand jury subpoenas should be balanced against important online speech rights.

Nov 04 2017
 

The following post originally appeared on Techdirt on 11/3/17.

The news about the DOJ trying to subpoena Twitter calls to mind another egregious example of the government trying to unmask an anonymous speaker earlier this year. Remember when the federal government tried to compel Twitter to divulge the identity of a user who had been critical of the Trump administration? This incident was troubling enough on its face: there’s no place in a free society for a government to go after its critics. But largely overlooked in the worthy outrage over the bald-faced attempt to punish a dissenting voice was the government’s simultaneous attempt to prevent Twitter from telling anyone that the government was demanding this information. Because Twitter refused to comply with that demand, the affected user was able to get counsel and the world was able to know how the government was abusing its authority. As the saying goes, sunlight is the best disinfectant, and by shining a light on the government’s abusive behavior it was able to be stopped.

That storm may have blown over, but the general issues raised by the incident continue to affect Internet platforms – and by extension their users and their speech. A significant problem we keep having to contend with is not only what happens when the government demands information about users from platforms, but what happens when it then compels the same platforms to keep those demands a secret. These secrecy demands are often called different things and are born from separate statutory mechanisms, but they all boil down to being some form of gag over the platform’s ability to speak, with the same equally troubling implications. We’ve talked before about how important it is that platforms be able to protect their users’ right to speak anonymously. That right is part and parcel of the First Amendment because there are many people who would not be able to speak if they were forced to reveal their identities in order to do so. Public discourse, and the benefit the public gets from it, would then suffer in the absence of their contributions. But it’s one thing to say that people have the right to speak anonymously; it’s another to make that right meaningful. If civil plaintiffs, or, worse, the government, can too easily force anonymous speakers to be unmasked then the right to speak anonymously will only be illusory. For it to be something speakers can depend on to enable them to speak freely there have to be effective barriers preventing that anonymity from too casually being stripped by unjust demands.

Dec 17 2016
 

The following was recently published on Techdirt, although with a different title.

Regardless of what one thinks about the apparent result of the 2016 election, it will inevitably present a number of challenges for America and the world. As Mike wrote about last week, those challenges will touch on many of the tech policy issues often discussed here. The following is a closer look at some of the implications (and opportunities) with respect to several of them, given the unique hallmarks of Trump and his proposed administration.

Apr 08 2016
 

The following is Section III.C of the comment I submitted in the Copyright Office’s study on the operation of Section 512 of the copyright statute.

Questions #16 and #17 more specifically contemplate the effectiveness of the put-back process articulated at subsection 512(g).  As explained in Section III.B this mechanism is not effective for restoring wrongfully removed content and is little used.  But it is worth taking a moment here to further explore the First Amendment harms wrought to both Internet users and service providers by the DMCA.[1]

It is part and parcel of First Amendment doctrine that people are permitted to speak, and to speak anonymously.[2]  Although that anonymity can be stripped in certain circumstances, there is nothing about the allegation of copyright infringement that should cause it to be stripped automatically.  Particularly in light of copyright law incorporating free speech principles[3] this anonymity cannot be more fragile than it would be in any other circumstance where speech was subject to legal challenge.  The temptation to characterize all alleged infringers as malevolent pirates who get what they deserve must be resisted; the DMCA targets all speakers and all speech, no matter how fair or necessary to public discourse this speech is.

And yet, with the DMCA, not only is speech itself more vulnerable to censorship via copyright infringement claim than it would be for other types of allegations[4] but so are the necessary protections speakers depend on to be able to speak.[5]  Between the self-identification requirements of subsection 512(g) put-back notices and the ease of demanding user information with subsection 512(h) subpoenas that also do not need to be predicated on actual lawsuits,[6] Internet speakers on the whole must fear the loss of their privacy if anyone dares to raise an infringement claim, no matter how illegitimate or untested that claim may be.  Given the ease of concocting an invalid infringement claim,[7] and the lack of any incentive not to,[8] the DMCA gives all-too-ready access to the identities of Internet users to the people least deserving of it and at the expense of those who most need it.[9]

Furthermore, the DMCA also compromises service providers’ own First Amendment interests in developing the forums and communities they would so choose.  The very design of the DMCA puts service providers at odds with their users, forcing them to be antagonistic to their own customers and their own business interests as a condition of protecting those interests.  Attempts to protect their forums or their users can expose them to tremendous costs and potentially incalculable risk, and all of this harm flows from mere allegation that never need be tested in a court of law.  The DMCA forces service providers to enforce censorship compelled by a mere takedown notice, compromise user privacy in response to subsection 512(h) subpoenas (or devote significant resources to trying to quash them), and, vis-à-vis Questions #22 and 23, disconnect users according to termination policies whose sufficiency cannot be known until a court decides they are not.[10]

The repeat infringer policy requirement of subsection 512(i)(A) exemplifies the statutory problem with many of the DMCA’s safe harbor requirements.  A repeat infringer policy might only barely begin to be legitimate if it applied to the disconnection of a user after a certain number of judicial findings of liability for acts of infringement that users had used the service provider to commit.  But at least one service provider lost its safe harbor for not permanently disconnecting users after only a certain number of allegations, even though those allegations had never been tested in a court consistent with the principles of due process or the prohibition on prior restraint.[11]

In no other context would we find these sorts of government incursions against the rights of speakers constitutional, robbing them of their speech, anonymity, and the opportunity to further speak, without adequate due process.  These incursions do not suddenly become constitutionally sound just because the DMCA coerces service providers to be the agent committing these acts instead.

Sep 16 2014
 

In addition to the amicus brief in Smith v. Obama, a few weeks earlier I had filed another one on behalf of the National Association of Criminal Defense Lawyers in Jewel v. NSA, another case challenging the NSA’s telecommunications surveillance.

Unlike Smith v. Obama and other similar cases, which argued that even collecting “just” telephonic metadata violated the Fourth Amendment, in Jewel the surveillance involved the collection of communications in their entirety. It didn’t just catch the identifying characteristics of these communications; it captured their entire substance.

The Electronic Frontier Foundation originally filed this case in 2008 following the revelations of whistleblower Mark Klein, a former tech at AT&T, that a switch installed in a secret room at AT&T’s facilities was diverting copies of all the Internet traffic passing through its systems to the government. This, the EFF argued in a motion for summary judgment, amounted to the kind of “search and seizure” barred by the Fourth Amendment without a warrant.

Like in Smith v. Obama, this surveillance necessarily implicates the Sixth Amendment in how it violates the privacy of communications between lawyers and their clients. But because the surveillance involves the collection of the content of these communications, it also inherently violates the Fifth Amendment right against self-incrimination.

Sep 15 2014
 

Last week Durie Tangri and I filed an amicus brief on behalf of the National Association of Criminal Defense Lawyers in the appeal of Smith v. Obama. Smith v. Obama is one of the many lawsuits being brought against the government following revelations of how the NSA has been spying on Americans’ communications. Like several of the others, including First Unitarian Church of Los Angeles v. NSA and Klayman v. Obama, this case is about the government’s wholesale collection of telephonic metadata – or, in other words, information reflecting whom people called, when, and for how long (among other details).

In the Klayman case, which is now on appeal at the D.C. Circuit Court of Appeals, the district court judge found that this wholesale, warrantless, collection of people’s call records indeed violated the Fourth Amendment. In Smith v. Obama, however, the district court had reached the opposite conclusion. Despite finding the reasoning in Klayman persuasive, the district court judge here felt bound to follow the precedent set forth in a 1979 Supreme Court case called Smith v. Maryland.

In that case the Supreme Court held that it did not violate the Fourth Amendment for the government to acquire records of people’s calls. The government only violates the Fourth Amendment when it invades, without a warrant, an expectation of privacy that society recognizes as reasonable. But how could there be an expectation of privacy in the phone number a person dialed, the Supreme Court wondered. How could anyone claim the information was private if it had been voluntarily shared with the phone company? Deciding that it could not be considered private, the court found that no expectation of privacy was being invaded by the government’s collection of this information, which therefore meant that the collection could not violate the Fourth Amendment.

The problem is, in the Smith v. Maryland case the Supreme Court was contemplating the effect on the Fourth Amendment raised by the government acquiring only (1) specific call information (2) from a specific time period (3) belonging only to a specific individual (4) already suspected of a crime. It was not considering how the sort of surveillance at issue in this case implicated the Fourth Amendment, where the government is engaging in the bulk capturing of (1) all information relating to all calls (2) made during an open-ended time period (3) for all people, including (4) those who may not have been suspected of any wrongdoing prior to the collection of these call records. What Smith is arguing on appeal is that the circumstances here are sufficiently different from those in Smith v. Maryland such that the older case should not serve as a barrier to finding the government’s warrantless bulk collection of these phone records violates the Fourth Amendment.

In particular, unlike in Smith v. Maryland, in this case we are dealing with aggregated metadata, and as even the current incarnation of the Supreme Court has noted, the consequences of the government capturing aggregated metadata are much more harmful to the civil liberties of the people whose data is captured than the Supreme Court contemplated back in 1979. In U.S. v. Jones, a Fourth Amendment decision issued in 2012, Justice Sotomayor observed that aggregated metadata “generates a precise, comprehensive record” of people’s habits, which in turn “reflects a wealth of detail about [their] familial, political, professional, religious, and sexual associations.” One of the reasons we have the Fourth Amendment is to ensure that these associations are not chilled by the government being able to freely spy on people’s private affairs. But when this form of warrantless surveillance is allowed to take place, they necessarily will be.
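The aggregation point is easy to see concretely. The following is a minimal sketch with entirely invented phone numbers, labels, and call records: each individual record is just a number, a timestamp, and a duration, yet tallying them reveals exactly the kind of “familial, political, professional, religious, and sexual associations” the Jones concurrence warned about.

```python
# A toy sketch (all data hypothetical) of why aggregated call metadata is
# so revealing even though no call content is ever collected.
from collections import Counter

# Bulk collection yields everyone's records over an open-ended period,
# like this toy call log: (number called, timestamp, duration in seconds).
call_log = [
    ("555-0101", "2014-03-02 09:15", 32),
    ("555-0101", "2014-03-09 09:20", 41),
    ("555-0199", "2014-03-03 22:40", 180),
    ("555-0199", "2014-03-10 23:05", 240),
    ("555-0150", "2014-03-05 19:00", 15),
]

# Phone numbers are trivially mapped to who answers them.
labels = {
    "555-0101": "oncology clinic",
    "555-0199": "newspaper tip line",
    "555-0150": "political organization",
}

# Aggregation: who does this person repeatedly associate with?
profile = Counter(labels[number] for number, _, _ in call_log)
print(profile.most_common())
```

No single record here says much; the profile built from all of them does, which is why the brief argues that the limited, one-suspect collection blessed in Smith v. Maryland tells us little about bulk collection.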

While it’s bad enough that any associations are chilled, in certain instances that chilling implicates other Constitutional rights. The amicus brief by the Reporters Committee for Freedom of the Press addressed how the First Amendment is undermined when journalists can no longer be approached by anonymous sources because, if the government can easily discover evidence of their conversations, the sources effectively have no anonymity and will be too afraid to reach out. Similarly, the brief I wrote discusses the impact on the Sixth Amendment right to counsel when another type of relationship is undermined by this surveillance: that between lawyers and their clients. Continue reading »