Protecting students from intrusive school social surveillance

Thanks to my State Senator Cynthia Stone Creem for pushing legislation in Massachusetts to protect elementary, secondary, and tertiary public school students from intrusive social media surveillance by school administrators — and for being proactive on this before it becomes a big problem, as it inevitably would without legislation.

No student should have to turn over their passwords and login info to their school just to be permitted to get an education. We cannot develop a healthy, independent, and democratic civil society if students face omnipresent surveillance that discourages them from branching out in their views during a formative period.

I also believe such online monitoring could have a chilling effect on young people being able to examine and test their self-identity, particularly in less welcoming communities.

While students and children do not always have full and unlimited rights, they must retain a reasonable right to privacy. That principle doesn’t change just because technology does.

Non-state surveillance

In an op-ed in the NY Times Sunday Review, Jeffrey Rosen discusses James Madison’s views on privacy and surveillance. In particular, Rosen argues that Madison drew a somewhat odd distinction between government invasions of privacy (which he wanted restricted) and the same invasions by businesses or other private parties (which didn’t much concern him). Rosen then asks whether that distinction was ever valid, and whether it still holds up today.

In practice, the neo-Madisonian distinction between surveillance by the government and surveillance by Google makes little sense. It is true that, as Judge Pauley concluded, “People voluntarily surrender personal and seemingly private information to trans-national corporations which exploit that data for profit. Few think twice about it.”

But why? Why is it O.K. for AT&T to know about our political, religious and sexual associations, but not the government?


That distinction is unconvincing. Once data is collected by private parties, the government will inevitably demand access.

More fundamentally, continuously tracking my location, whether by the government or AT&T, is an affront to my dignity. When every step I take on- and off-line is recorded, so an algorithm can predict if I am a potential terrorist or a potential customer, I am being objectified and stereotyped, rather than treated as an individual, worthy of equal concern and respect.

Justice Louis Brandeis, the greatest defender of privacy in the 20th century, recognized this when he equated “the right to be let alone” with offenses against honor and dignity.

But he also lamented that American law, unlike European law, was not historically concerned with offenses against what the Romans called honor and what in more modern terms we call dignity. European laws constrain private companies from sharing and collecting personal data far more than American laws do, largely because of the legacy of Madisonian ideas of individual freedom, which focus on liberty rather than dignity.

What Americans may now need is a constitutional amendment to prohibit unreasonable searches and seizures of our persons and electronic effects, whether by the government or by private corporations like Google and AT&T.


Europe is way more aggressive about trying to curb private amassing of data. Meanwhile, both the U.S. government and private mega corporations — aided by the gushing of the American media — are pitching the concept of “big data” as a godsend and cure-all, thus necessitating mass collection and indefinite storage of data. Can’t throw all the data points in the data stew if you haven’t held on to all of them, the logic goes.

And it’s a fair question raised in this article. Phone companies and internet businesses are freely allowed to know all our private information, movements, and habits. Yet the government is supposed to follow various restrictions under the Bill of Rights — but why? Why don’t those protections extend to private corporations? We’ve seen time and again that they willingly turn over their data for “national security” and “public safety” reasons, sometimes without even being asked via a court order.

Our government need not construct a surveillance state unconstitutionally when corporate America will do it for them.

Addendum: On a partially related note, I highly recommend this article by Virginia Eubanks in The American Prospect: “Want to Predict the Future of Surveillance? Ask Poor Communities.”

Marginalized groups are often governments’ test subjects. Here are a few lessons we can learn from their experiences.

Will “smarter” cars start snitching on us?

After a Ford executive on a panel raised a (supposedly) hypothetical scenario in which the company could collect live driving data, such as speed and relative location, from embedded GPS and other services linked back to headquarters by satellite — and noted that this data would capture illegal and dangerous driving behavior — Eugene Volokh of The Volokh Conspiracy started thinking through the legal implications of such a world. Here’s an excerpt from the piece:

Ford could technically gather this information, and could use it to prevent injuries. For instance, if GPS data shows that someone is speeding — or the car’s internal data shows that the driver is speeding, or driving in a way suggestive of drunk driving or extreme sleepiness, and the data can then be communicated to some central location — then Ford could notify the police, so the dangerous driver can be stopped. And the possibility of such reports could deter the dangerous driving in the first place.

Ford, then, is putting extremely dangerous devices on the road. It’s clearly foreseeable that those devices will be misused (since they often are misused). Car accidents cause tens of thousands of deaths and many more injuries each year. And Ford has a means of making those dangerous devices that it distributes less dangerous; yet it’s not using them.

Sounds like a lawsuit, no? Manufacturer liability for designs that unreasonably facilitate foreseeable misuse is well-established. And the fact that the misuse may stem from negligence (or even intentional wrongdoing) on the user’s part doesn’t necessarily block liability, so long as the user misconduct is foreseeable. I should note that I’m not wild about these aspects of our tort law system, and think they should likely be trimmed back in various ways; but there is certainly ample legal doctrine out there — whether one likes it or not — potentially supporting liability in such a situation.


The full piece is pretty long and gradually shifts toward technical legal analysis aimed at legal professionals, but the opening sections are written for a more general audience. It’s certainly thought-provoking — the idea that companies might be able to use increasingly computerized, data-rich vehicles to monitor driving behavior and report it to the police (or to insurers, or… who knows where it ends).

Might they even make the data signals go two ways, so they could slow someone’s vehicle when it’s speeding out of control (or aid the police in controlling it, to stop a high-speed chase, for example)? After all, some of these cars already have automated lane-keeping and swerve-correction systems, among numerous other safety features. But those are run locally by the onboard systems. Does the company have an obligation to control or report its vehicles when they’re being used dangerously or illegally? If it doesn’t, will it be sued? Interesting (and troubling) questions from a brave new world…
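To make the scenario concrete, here is a toy sketch of the kind of flagging logic Volokh’s hypothetical implies — sustained speeding detected from telemetry samples, which could then trigger a report to a central server. Every name and threshold here is invented for illustration; nothing in the article describes any actual manufacturer implementation.

```python
from dataclasses import dataclass

@dataclass
class TelemetrySample:
    speed_mph: float        # vehicle speed from onboard sensors (hypothetical)
    speed_limit_mph: float  # posted limit inferred from GPS/map data (hypothetical)

def flag_dangerous_driving(samples, margin_mph=15, min_consecutive=3):
    """Return True if the driver exceeds the limit by a wide margin for
    several consecutive samples -- the sort of sustained, clearly dangerous
    pattern that might trigger an automated report."""
    streak = 0
    for s in samples:
        if s.speed_mph > s.speed_limit_mph + margin_mph:
            streak += 1
            if streak >= min_consecutive:
                return True
        else:
            streak = 0
    return False
```

The design question the article raises isn’t the detection itself — which, as the sketch shows, is trivial — but what the manufacturer does (or is legally obliged to do) once the flag trips.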

NSA: The Global Warcraft on Terror

The latest absurdity of national security theatrics has been revealed by the Snowden leaks:

Not limiting their activities to the earthly realm, American and British spies have infiltrated the fantasy worlds of World of Warcraft and Second Life, conducting surveillance and scooping up data in the online games played by millions of people across the globe, according to newly disclosed classified documents.

I think someone at the NSA conned their bosses into letting them play World of Warcraft at work…

But for all their enthusiasm — so many C.I.A., F.B.I. and Pentagon spies were hunting around in Second Life, the document noted, that a “deconfliction” group was needed to avoid collisions — the intelligence agencies may have inflated the threat.

It’s just one more example of how the U.S. government has ramped up an expensive and invasive façade of protection that provides no real safety. It’s pure theater, much like most of the arcane airport security rules and carry-on restrictions.

Former American intelligence officials, current and former gaming company employees and outside experts said in interviews that they knew of little evidence that terrorist groups viewed the games as havens to communicate and plot operations.

Games “are built and operated by companies looking to make money, so the players’ identity and activity is tracked,” said Peter W. Singer of the Brookings Institution, an author of “Cybersecurity and Cyberwar: What Everyone Needs to Know.” “For terror groups looking to keep their communications secret, there are far more effective and easier ways to do so than putting on a troll avatar.”