RIP Doug Engelbart

Analysis/commentary, Other regulation
Jul 03 2013

I was saddened to hear the news about Doug Engelbart’s passing. Although most famous for having invented the mouse (I once had the privilege of holding the original – in some ways it was even better than its successors, as its two beveled wheels allowed the mouse to easily be drawn in a straight line), his contributions to the digital world we now take for granted run much deeper than that specific innovation.

I had the privilege of meeting Mr. Engelbart on a few occasions, and in the wake of this news I’m prompted to repost something I wrote a few years ago following one of those encounters, something that contemplated how law and innovation so often seem to collide in a way deleterious to the latter. As we take this moment to recognize the rich legacy Mr. Engelbart leaves the world, it should remind us never to allow law to deprive the world of other such gifts in the future.

In early December I attended the “Program for the Future,” celebrating the 40th anniversary of a seminal event in technological history: Doug Engelbart’s “mother of all demos.” While today the technologies he showed off in his 1968 presentation must seem ordinary and quaint, back then they were revolutionary and laid the foundation for what we now take for granted.

While perhaps most widely known for being the world debut of the mouse, which he invented, Engelbart’s presentation is most notable for how it advanced collective intelligence. What made the presentation so important weren’t the technologies themselves but the human problems they stood to solve.

So in celebration of Engelbart’s important contribution to the world, a group of futurists and technologists gathered together at The Tech museum in San Jose to contemplate the future innovations yet to come. For me, the event was a bit nostalgic. Before law school, as a technologist in Silicon Valley, I often attended such events. Sometimes they got a bit silly, as there’d be so much “blue skying” and thinking about what could be done that nothing would actually get done. But these kinds of events were still important because they fostered an environment where the bolts of inspiration could be seized upon and fanned into exciting innovations.

I still gravitate towards technology-related events, only today they are invariably legally related. At these events technology is always considered in the context of regulatory frameworks, and the people doing the thinking are always lawyers and policy makers. Notably, however, at this event I was one of maybe only a handful of attendees who were lawyers. And therein lies the disconnect.

Feb 20 2013

At an event on CFAA reform last night I heard Brewster Kahle say what to my ears sounded like, “Law that follows technology tends to be ok. Law that tries to lead it is not.”

His comment came after an earlier tweet I’d made:

I think we need a per se rule that any law governing technology that was enacted more than 10 years ago is inherently invalid.

In posting that tweet I was thinking about two horrible laws in particular, the Computer Fraud and Abuse Act (CFAA) and the Electronic Communications Privacy Act (ECPA). The former attempts to forbid “hacking,” and the latter ostensibly tried to update 1968’s Wiretap Act to cover information technology. In both instances the laws as drafted generally incorporated the attitude that technology as understood then would be the technology the world would have forever hence, a prediction that has obviously proved false. But we are nonetheless left with laws like these on the books, laws that hobble further innovation by enshrining in our legal code what is right and wrong when it comes to our computer code as we understood it in 1986, regardless of whether, if considered afresh and applied to today’s technology, we would still think so.

A friend did challenge my tweet, however: “What about Section 230 (47 U.S.C. § 230)?” This is a law from 1996, and he has a point. Section 230 is a piece of legislation that largely immunizes Internet service providers from liability for content posted on their systems by their users – and let’s face it: the very operational essence of the Internet is all about people posting content on other people’s systems. However, unlike the CFAA and ECPA, Section 230 has enabled technology to flourish, mostly by purposefully getting the law itself out of the way of the technology.

The above are just a few examples of some laws that have either served technology well – or served to hamper it. There are certainly more, and some laws might ultimately do a bit of both. But the general point is sound: law that is too specific is often too stifling. Innovation needs to be able to happen however it needs to, without undue hindrance caused by legislators who could not even begin to imagine what that innovation might look like so many years before. After all, if they could imagine it then, it would not be so innovative now.

The Knowledge Gap

Analysis/commentary, Other regulation
Feb 08 2013

This article on TechDirt summarizes a brouhaha that recently broke out in a corner of the Internet I tend to haunt with other lawyers and cyberlaw professionals and has started to percolate into the mainstream.  The upshot is that someone is upset that other people have reposted her tweets without her permission and control, and she is convinced this is legally wrongful.  So convinced is she, in fact, that she keeps threatening to sue a number of them who have used these tweets to comment on her erroneous legal theory, which only stokes further interest in criticizing her as even more observers come to note that the law is not, in fact, on her side.  (TechDirt’s analysis does a decent job explaining why.)

It is easy to be tempted to join in the mocking of this person’s very public tantrums, and to be sure, threatening litigation is not to be taken lightly.  Doing so, particularly when cloaked in legal ignorance, is ripe for justifiable criticism.

But while the exhibition of personal arrogance may invite the schadenfreude of public censure, the underlying problem it can reveal does not.  The reality is that my cyberlaw peers and I are so inured to how this area of law “works” (to the extent that it does) that we tend to forget how foreign it is to most laypeople (and even many other lawyers), for whom its mystical machinations can be really terrifying.  This sort of knowledge gap isn’t good for anyone.  That’s how we end up with bad law.

The answer naturally cannot be to modify the law to fit its common misperceptions.  Sometimes the law is what it is for very good reasons, or at least reasons that cannot simply be discounted, even if those reasons aren’t intuitively obvious to a layperson.  We can’t use common misapprehensions as the pillars upon which law should be based.  In fact, when we have done so in recent years, often in response to technology (another complex system that can be scary to those who don’t understand it), the end result has been law that so overreacts that it creates more problems while failing to properly solve any.

At the same time, however, rather than mocking those who don’t understand the law, those who do understand it should be endeavoring to explain it.  Let’s get everyone on the same page to understand how law works and why, so we can all work together to fix it when it doesn’t.  After all, in a democracy law should belong to everyone, not just the rarified few specially trained to understand it.

Of course, the above sympathetic sentiment is directed at those who would be willing to learn.  It’s not a moral failing not to know everything about the law, but it is one not to care whether one does before proceeding with bumptious legal threats or dangerously inapt policy advocacy.  Those who would seek to use the law as a weapon without bothering to learn how it operates are justly entitled to whatever chastisement they get.

Jan 14 2013

This weekend’s news about the death of Aaron Swartz is a cogent reminder of what this project is about. Aaron was a gifted contributor to the tools and values that make the Internet the extraordinary medium it is, impacting everything from the RSS standard to the Creative Commons licensing system and more. From all accounts he was on a constant quest to free humanity’s knowledge and make it accessible to anyone who wanted or needed it.

These actions challenged the status quo, however, and the status quo fought back. For those who treat knowledge as a currency that can be hoarded, acts to free it are seen as a threat. Unfortunately for Aaron, those people have power, and they wielded it against him. Furthermore, and most saliently for this project, it happened not through private actions but by leveraging the power of the state to pursue and criminally prosecute him for his efforts.

Fortunately for Aaron he had competent counsel able to help defend him against the charges laid at his door. For all too many in positions similar to Aaron’s, such counsel isn’t always available, which is a big reason why this project exists. It’s important that there be counsel ready and able to understand both the technological nature of the criminal act alleged and the nature of the crimes charged in order to properly defend those accused. It is very easy, as we see with this case, for a prosecutor to throw the book at a defendant for having done anything with technology outside of the norm, regardless of whether that technology use really deserves such a sanction, or even any sanction at all.

But having counsel isn’t enough. These prosecutions are backbreaking and bankrupting, and even if the defendant is ultimately acquitted, the mere persecution will have already extracted a punitive toll. In Aaron’s case he was looking at defense costs and fines in the millions of dollars, and the specter of years if not decades of imprisonment. Who among us could bear such a fate looming over them without their lives being fundamentally altered?

Thus the parallel purpose of this project is to help advocate for better legal policy, so that we don’t empower the state to punish our innovators for innovating. The disruption they spawn, though perhaps painful for incumbents who liked things as they were, is necessary in order to have a future that benefits everyone.

Welcome (again)

Project news
Feb 06 2012

Welcome, new readers, to whom I’ve now announced this blog.  It seems to be chugging along nicely, although like any new project it is still subject to modifications and tweaks.  But the core of it won’t change: this blog, a piece of a larger envisioned project, is dedicated to covering the intersection of criminal law and technology, noting and commenting on situations where state sanctions are applied to technology use and development.

Cracking v. hacking

Project news, Unauthorized access
Jan 11 2012

A word about “hacking.” Hacking is a word often colloquially misused to describe the unauthorized access of a computer system. Among self-described hackers, however, the correct term to describe such behavior is “cracking,” as in “safe cracking.” “Hacking” instead describes a far more neutral, or even beneficial activity: the creative problem solving involved in engineering a solution. (Links point to Eric Raymond’s Jargon File.)

It would greatly assist policy discussion to keep these terms clear, particularly given the interest in criminalizing the unauthorized access of computer systems. Associating the activities of hacking with the more pejorative definition loses nuance and tends to lead to the criminalization of more benign, even objectively good, technology uses.

Thus this site will endeavor to use the correct term as much as possible. But when citing other media it may necessarily parrot whatever word was used, however incorrectly.

Edit 2/20/13: I’ve realized I’m shouting into the wind on this issue. “Hacking” is too colloquially accepted as a description for all sorts of innovative applications of technology, good and bad, to ever be completely avoided. But I will remind others that the term does indeed describe both good uses and bad uses and should not be presumed to be a pejorative.

Death by chocolate cupcakes

Analysis/commentary, Privacy from government
Jan 10 2012

Yes, I do have relevant things to blog about other than more TSA antics.  This isn’t supposed to be a TSA-only blog.  But (a) some recent news is too outrageous/tempting to skip, and (b) there are relevant lessons to be extrapolated.

First, the news.  Remember the cupcake the TSA seized because its frosting was too “gel-like”?  Well, the TSA claims it has been unfairly criticized.  It was not that the cupcake had “gel-like” frosting; it was that the cupcake was in a jar.  As it happens, the woman whose cupcake this was denies the TSA’s description of the cupcake seizure.  But, really, does it matter?  Because even if the cupcake was in a jar, it was still deemed a threat and seized.  The TSA is very, very good at deeming things threats and seizing them.  But actually assessing whether something is truly a threat is another story.

Which brings us to the lessons relevant to this blog:

People in authority are very good at deeming things threats.  They are very good at using their police power to exert control over what they deem to be threats.  They are less good at meting out their authority in a way commensurate with the actual problem, and as a consequence it’s very easy for innocent people to have their rights unduly affected.

These observations hold for many contexts, and technology regulation is no exception.  Exercises of governmental power can easily be heavy-handed, imprecise, and ill-suited for the problems they pretend to solve.  The identification and definition of the underlying problems can also be equally ham-fisted and oftentimes ignorant of actual risk.  Which is not to say that all government regulation is illegitimate.  On the contrary, these examples illustrate why it’s important to question and discuss exactly when and how governments should be involved in technology use and development.  They may well have important roles to play.  But only if they are played with care.

Edit 1/11/2012: Updated to provide a direct link to Bruce Schneier’s commentary about the TSA’s admission of its own irrelevance.

Globalchokepoints.org

Project news
Dec 29 2011

The Electronic Frontier Foundation has launched a new project, Global Censorship Chokepoints, whose mission is to track instances of censorship caused by allegations of copyright infringement.

Global Chokepoints is an online resource created to document and monitor global proposals to turn Internet intermediaries into copyright police. These proposals harm Internet users’ rights of privacy, due process and freedom of expression, and endanger the future of the free and open Internet. Our goal is to provide accurate empirical information to digital activists and policy makers, and help coordinate international opposition to attempts to cut off free expression through misguided copyright laws, policies, agreements and court cases.

There is some overlap between that project and this one, especially insofar as the state allows itself to be the enforcement arm of copyright infringement complaints.  But there is plenty of work to go around when it comes to protecting free speech around the world.  (Digital Age Defense also looks at state imposition of intermediary liability for non-IP-related reasons.  See, e.g., attempts by the Indian government to demand content filters in order not to cause social unrest.)

Lessons from an early example of technology hacking

Analysis/commentary, Unauthorized access
Dec 29 2011

Paul Marks has a fascinating article at The New Scientist about an old example of hacking.

LATE one June afternoon in 1903 a hush fell across an expectant audience in the Royal Institution’s celebrated lecture theatre in London. Before the crowd, the physicist John Ambrose Fleming was adjusting arcane apparatus as he prepared to demonstrate an emerging technological wonder: a long-range wireless communication system developed by his boss, the Italian radio pioneer Guglielmo Marconi. The aim was to showcase publicly for the first time that Morse code messages could be sent wirelessly over long distances. Around 300 miles away, Marconi was preparing to send a signal to London from a clifftop station in Poldhu, Cornwall, UK.

Yet before the demonstration could begin, the apparatus in the lecture theatre began to tap out a message. At first, it spelled out just one word repeated over and over. Then it changed into a facetious poem accusing Marconi of “diddling the public”. Their demonstration had been hacked – and this was more than 100 years before the mischief playing out on the internet today. Who was the Royal Institution hacker? How did the cheeky messages get there? And why?

There are a lot of lessons in this tale that are of use to us today.

Water hack wasn’t

Analysis/commentary, Unauthorized access
Dec 05 2011

Recently it appeared that the fear of a foreign hacker penetrating the online systems of American infrastructure had been realized, with news that a Russian hacker had attacked and disabled a pump in an Illinois water system.  These fears have now been shown to be misplaced: the supposed “hack” was a login by an engineer who happened to be traveling in Russia when he was asked to perform some work on the system, and the pump broke down on its own, unrelatedly, months later.

Vulnerabilities of public infrastructure are not an idle concern.  The Stuxnet worm, which specifically targeted nuclear facilities in Iran, illustrates that infrastructure can be a compelling target and quite feasible to compromise if those systems are not properly protected.

But the water system “hack” shows that proper protection of infrastructure — and, accordingly, any law intended to advance this — needs to be done carefully, with clear understanding of the actual threat and competent engineering not prone to panicked histrionics.  From the BBC article about it:

“Nobody checked with anybody. Lots of people assumed things they shouldn’t have assumed, and now it’s somebody else’s fault and we’re into a finger-pointing marathon,” wrote Nancy Bartels.

“If the public can be distracted from the issue of how DHS and ISTIC fumbled notification so badly, then nobody will be to blame, which is what’s really important after all.

“Meanwhile, one of these days, there’s going to be a really serious infrastructure attack, and nobody’s going to pay attention because everyone is going to assume that it’s another DHS screw-up.”