Thanks to my State Senator Cynthia Stone Creem for pushing legislation in Massachusetts to protect elementary, secondary, and tertiary public school students from intrusive social media surveillance by school administrators — and for being proactive on this before it becomes a big problem, as it inevitably would without legislation.
No student should have to turn over their passwords and login info to their school just to be permitted to get an education. We cannot develop a healthy, independent, and democratic civil society if students face omnipresent surveillance that discourages them from branching out in their views during a formative period.
I also believe such online monitoring could have a chilling effect on young people being able to examine and test their self-identity, particularly in less welcoming communities.
While students and children do not always have full and unlimited rights, they must retain a reasonable right to privacy. That principle doesn’t change just because technology does.
Promoting the rights of children, youth, and students is vitally important for keeping them safe. We’ve seen the footage this week from Spring Valley High School of a girl being body-slammed and seriously injured by a police officer in her school — an all-too-common occurrence. While this is itself a grave abuse, and clearly one escalated by racism and misogynoir, one additional element we need to be aware of is how many schools (including in Massachusetts) have adopted policies that may limit our ability to find out about these incidents in the future.
Such measures include monitoring students’ internet communications on campus and restricting or confiscating cell phones. While some of this is ostensibly to reduce distractions, its secondary (and I hope unintended!) effect is to reduce the ability of students to record authority figures or otherwise get the word out about abuses or inappropriate behaviors by adults who are supposed to be keeping them safe.
This doesn’t just apply to inappropriate uses of physical force to contain situations, but also to other types of abuse. There have been more than enough institutional sex abuse scandals erupting in recent years to learn from. These often occurred in eras where children and youth were neither respected nor readily empowered to document illegal actions (of any kind) by adults in positions of power. We now know that young people are endangered when they are unable to advocate for themselves against powerful adults or institutions and are unable to prove what is happening.
It would be a serious mistake to move toward policies that prioritize omnipresent surveillance and policing while deprioritizing student rights and student privacy. Such an approach doesn’t foster a culture of being willing to constructively stand up to authority or institutions when there are abuses or illegal activity. (And reportedly, a student who tried to intervene physically in this case to protect his or her classmate from abuse was also disciplined by the school, which should raise some similar questions too.)
In immediate terms, while we always hope these things won’t happen in our schools, if they do happen, it’s much better that we know about them quickly so we can stop them and act against those responsible. For that to happen, students must feel comfortable about coming forward and be empowered to do so. Part of a safe learning environment is not just taking a “public safety in schools” approach but also ensuring students can advocate for themselves when something isn’t right.
In the bigger picture, I believe that the latter approach – respecting the rights of young people and protecting their ability to blow the whistle on abuses of power without fearing recrimination – also helps promote a generally more engaged and empowered civic attitude for a lifetime. Part of our education system should be to encourage people to defend each other and themselves from abuses of power wherever it may occur. It should never be to teach our children that they are powerless to stop injustice, illegal activity, or abuse.
From “AT&T Helped N.S.A. Spy on an Array of Internet Traffic” in The New York Times:
Fairview is one of its oldest programs. It began in 1985, the year after antitrust regulators broke up the Ma Bell telephone monopoly and its long-distance division became AT&T Communications. An analysis of the Fairview documents by The Times and ProPublica reveals a constellation of evidence that points to AT&T as that program’s partner. Several former intelligence officials confirmed that finding.
In September 2003, according to the previously undisclosed N.S.A. documents, AT&T was the first partner to turn on a new collection capability that the N.S.A. said amounted to a “ ‘live’ presence on the global net.” In one of its first months of operation, the Fairview program forwarded to the agency 400 billion Internet metadata records — which include who contacted whom and other details, but not what they said — and was “forwarding more than one million emails a day to the keyword selection system” at the agency’s headquarters in Fort Meade, Md. Stormbrew [another program] was still gearing up to use the new technology, which appeared to process foreign-to-foreign traffic separate from the post-9/11 program.
In 2011, AT&T began handing over 1.1 billion domestic cellphone calling records a day to the N.S.A. after “a push to get this flow operational prior to the 10th anniversary of 9/11,” according to an internal agency newsletter. This revelation is striking because after Mr. Snowden disclosed the program of collecting the records of Americans’ phone calls, intelligence officials told reporters that, for technical reasons, it consisted mostly of landline phone records.
One of the other issues discussed is that the NSA has been scooping up many emails (or fragments of emails) from people unrelated to its intended searches. Emails are transmitted as packets of data that are only reassembled at the destination, so the targets’ emails are mixed in with everyone else’s at the interception points.
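To make the mixing problem concrete, here is a minimal, purely illustrative sketch (not a model of any real NSA system) of why tapping a link at the packet level inevitably sweeps in bystanders’ traffic: messages are split into packets that interleave on shared links, so an interceptor must buffer everyone’s fragments to reassemble any one target’s message. All names and message contents below are invented for illustration.

```python
import random

def packetize(sender, message, size=8):
    # Split a message into fixed-size chunks tagged with the sender
    # and a sequence number, mimicking how email is broken into packets.
    return [(sender, i, message[i:i + size])
            for i in range(0, len(message), size)]

# Traffic from a surveillance target and two unrelated users
# all travels over the same link.
link = (packetize("target", "meet at the usual place") +
        packetize("alice", "grandma's cookie recipe attached") +
        packetize("bob", "quarterly sales numbers look fine"))
random.shuffle(link)  # packets from all senders interleave on the wire

# A tap that wants to reassemble the target's messages has to buffer
# every packet on the link, so bystander traffic is captured too.
captured_senders = {sender for sender, _, _ in link}
# captured_senders contains "target", "alice", and "bob" alike.
```

The point of the sketch is only that filtering “by target” cannot happen cleanly at the wire: the selection step necessarily operates on a stream that already contains everyone’s fragments.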
There was also a lot of discussion in the article about how AT&T and Verizon were making infrastructural repairs for the NSA overseas.
Topics: NSA foreign political surveillance, students challenge voter IDs under 26th Amendment, Hobby Lobby ruling. People: Bill, Persephone, Sasha. Produced: July 6, 2014.
– Is it wrong for the NSA to spy on non-allied governments and foreign politicians?
– Are state voter ID laws unconstitutional under the 26th Amendment (the right to vote at age 18)?
– What are the implications of the Hobby Lobby ruling?
Part 1 – NSA – AFD 91
Part 2 – 26th Amendment – AFD 91
Part 3 – Hobby Lobby – AFD 91
To get one file for the whole episode, we recommend using one of the subscribe links at the bottom of the post.
– NDTV: US Hopes National Security Agency Surveillance on BJP Not to Impact Bilateral Ties
– AFD: Australia was spying on Indonesia
– NYT: College Students Claim Voter ID Laws Discriminate Based on Age
– Politico: Supreme Court rulings 2014: SCOTUS sides with Hobby Lobby on birth control
– Mother Jones: Are You There God? It’s Me, Hobby Lobby
– FAIR: The Deeply Held Religious Principle Hobby Lobby Suddenly Remembered It Had
– NBC: Female justices issue searing dissent over new contraceptive case
RSS Feed: Arsenal for Democracy Feedburner
iTunes Store Link: “Arsenal for Democracy by Bill Humphrey”
And don’t forget to check out The Digitized Ramblings of an 8-Bit Animal, the video blog of our announcer, Justin.
After a Ford executive on a panel raised a (supposedly) hypothetical scenario in which the company could collect live driving data, such as speed and relative location, from embedded GPS and other services linked back to headquarters by satellite, and noted that this data would capture illegal or dangerous driving behaviors, Eugene Volokh of The Volokh Conspiracy started thinking through the legal implications of such a world. Here’s an excerpt from the piece:
Ford could technically gather this information, and could use it to prevent injuries. For instance, if GPS data shows that someone is speeding — or the car’s internal data shows that the driver is speeding, or driving in a way suggestive of drunk driving or extreme sleepiness, and the data can then be communicated to some central location — then Ford could notify the police, so the dangerous driver can be stopped. And the possibility of such reports could deter the dangerous driving in the first place.
Ford, then, is putting extremely dangerous devices on the road. It’s clearly foreseeable that those devices will be misused (since they often are misused). Car accidents cause tens of thousands of deaths and many more injuries each year. And Ford has a means of making those dangerous devices that it distributes less dangerous; yet it’s not using them.
Sounds like a lawsuit, no? Manufacturer liability for designs that unreasonably facilitate foreseeable misuse is well-established. And the fact that the misuse may stem from negligence (or even intentional wrongdoing) on the user’s part doesn’t necessarily block liability, so long as the user misconduct is foreseeable. I should note that I’m not wild about these aspects of our tort law system, and think they should likely be trimmed back in various ways; but there is certainly ample legal doctrine out there — whether one likes it or not — potentially supporting liability in such a situation.
The full piece is pretty long and shifts gradually toward the technical side, for legal professionals, as it progresses, but the opening sections are written for a general audience. It’s certainly thought-provoking to consider that companies might be able to use increasingly computerized and data-rich vehicles to monitor driving behavior and report it to the police (or to insurers, or… who knows where it ends).
Might they even make the data signals go two ways, so they could control someone’s vehicle to slow it down if it’s speeding out of control (or aid the police in controlling it, to stop a high-speed chase, for example)? After all, some of these cars already have automated lane-finders and swerve-correctors as well as numerous other safety features. But those are run locally by the onboard systems. Does the company have an obligation to control or report its vehicles when they’re being used dangerously or illegally? If they don’t, will they be sued? Interesting (and troubling) questions from a brave new world…
New NSA/Five Eyes-related revelations in the Guardian: “Australia tried to monitor Indonesian president’s phone”
I’m not really surprised to find out the Australian government was spying on senior leadership in Indonesia. I think I’d be more surprised to learn they were definitely NOT.
That being said, the new jackass Prime Minister had a pretty weak response, beyond incorrectly (see below) brushing it off as the work of the other party, from whom he had just taken power. The government’s primary excuse? Australia’s activities were not so much “spying” as “research” [and] “We use the information that we gather for good, including to build a stronger relationship with Indonesia.”
Oh ok then. So you just needed to hack phones to find out their pets’ names or … what?
I really enjoy this “research” for better relations excuse. I mean, it’s seriously like saying Australia was just trying to Facebook-creep on Indonesia to see if Indonesia was “in a relationship” with anybody.
And about the claim that this is all the other party’s fault? The documents show the spying on Indonesia actually started the very day the previous Prime Minister from his own party left office, meaning his party would have been in power when it was planned, even if Labor had technically taken office by the time it began.