Archive for the ‘Proposals’ Category

When to share the raw data & when not to

Monday, September 7th, 2009

Russ Nelson sends this nomination:

This website encourages people to publish the raw data directly from their hydrologic sensors. Seems to me like they’re the poster child for open source sensing.

http://his.cuahsi.org/index.html

For scientific purposes, sharing the raw data (in addition to any interesting conclusions) is the way to go. In sensing situations where there are privacy concerns, which may not occur in the case of hydrologic data, an open source design process might involve not sharing all the raw data. Figuring out which cases are which will be a challenge! Thanks for the pointer, Russ. —Chris Peterson

Sensor scenarios for nanotech-enabled chemical & biological defense

Thursday, September 3rd, 2009

A new book Nanotechnology for Chemical and Biological Defense, ed. Margaret Kosal (Springer, 2009), includes sensor scenarios for nanotech-based defense against chemical and biological attacks. As is usual with scenario planning, multiple versions are presented, in this case reaching out to the year 2030. Here’s one from the “Radical Game Changers” scenario:

A terrorist organization releases a stealth nanoparticle-encapsulated biochemical agent at eight separate airports outside of the continental US. The initial dissemination of the novel agent is undetected. Passive networks of sensors at two US points of entry, however, recognize an increase in the average elevated temperature of passengers at security checkpoints. Additional sensors show elevated levels of liver enzymes in airport waste streams. Mobile response laboratories, in coordination with National Guard Civil Support teams, are dispatched and identify the causal agent. Intensive forensics reveals that the nanoparticles are engineered to aerosolize easily and then accumulate in the human liver where they slowly release the agent. Countermeasures are administered within 12 hours. In the world of Radical Game Changers, such highly-evolved technologies require equally-evolved detection schemes.

Whew, a scary scenario indeed. Though the defense succeeds in this scenario, it’s clear that the world is a very dangerous place in this vision. One goal for Open Source Sensing would be to head off such scenarios entirely. Meanwhile, take a look at the book for both other long-term scenarios and much nearer-term issues — you can search inside the book at Amazon.com. More on this topic over at Foresight’s main blog Nanodot. —Chris Peterson

Code of Fair Sensing Practices?

Thursday, August 13th, 2009

Simson Garfinkel gave a talk a while back that examined the “Code of Fair Information Practices”, developed originally by a U.S. government task force and described thusly:

• There must be no personal data record-keeping systems whose very existence is secret.
• There must be a way for a person to find out what information about the person is in a record and how it is used.
• There must be a way for a person to prevent information about the person that was obtained for one purpose from being used or made available for other purposes without the person’s consent.
• There must be a way for a person to correct or amend a record of identifiable information about the person.
• Any organization creating, maintaining, using, or disseminating records of identifiable personal data must assure the reliability of the data for their intended use and must take precautions to prevent misuses of the data.
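As a thought experiment, a few of the practices above can even be expressed as code: a record store that tags each datum with the purpose it was collected for and refuses any secondary use without consent. This is purely an illustrative sketch, not any real system; every class and method name here is invented:

```python
# Hypothetical sketch of the Code of Fair Information Practices as an API.

class ConsentRequired(Exception):
    """Raised when data is requested for a purpose the subject never approved."""

class RecordStore:
    def __init__(self):
        self._records = {}  # subject -> list of tagged records

    def collect(self, subject, value, purpose):
        # Every datum carries the purpose it was collected for.
        self._records.setdefault(subject, []).append(
            {"value": value, "purpose": purpose, "consented": {purpose}}
        )

    def grant_consent(self, subject, purpose):
        # The subject may later consent to additional purposes.
        for rec in self._records.get(subject, []):
            rec["consented"].add(purpose)

    def query(self, subject, purpose):
        # Third practice: no secondary use without the subject's consent.
        out = []
        for rec in self._records.get(subject, []):
            if purpose not in rec["consented"]:
                raise ConsentRequired(f"{purpose!r} not consented by {subject!r}")
            out.append(rec["value"])
        return out

    def disclose(self, subject):
        # Second practice: the subject can see what is held and why.
        return [(r["value"], r["purpose"]) for r in self._records.get(subject, [])]

    def amend(self, subject, old_value, new_value):
        # Fourth practice: the subject can correct a record.
        for rec in self._records.get(subject, []):
            if rec["value"] == old_value:
                rec["value"] = new_value
```

Even a toy like this makes the third bullet's tension concrete: the consent check is a hard gate, which is exactly where the free-speech questions arise.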

Is this a useful model for how sensing data should be handled? It certainly is not being followed now. We do need to look at this list and ask whether it infringes on freedom of speech, though — see the third bullet above, for example. Sticky issues! —Chris Peterson

The main reason to care who gets sensing data about you

Tuesday, August 11th, 2009

An ITU paper spells out the main reason to care who gets sensing data about individuals:

From a political standpoint privacy is generally considered to be an indispensable ingredient for democratic societies. This is because it is seen to foster the plurality of ideas and critical debate necessary in such societies…

• Privacy is also a regulating agent in the sense that it can be used to balance and check the power of those capable of collecting data…

Lessig’s list of reasons for protecting privacy belongs to what Colin Bennett and Charles Raab have called the ‘privacy paradigm’—a set of assumptions based on more fundamental political ideas: ‘The modern claim to privacy … rests on the pervasive assumption of a civil society comprised of relatively autonomous individuals who need a modicum of privacy in order to be able to fulfil the various roles of the citizen in a liberal democratic state.’

So the main reason is to protect our political freedom. This is why I hope to find an alternative to the word ‘privacy’ in our discussions. Useful as it is, the word carries connotations of guilt or shame, which are inappropriate in this discussion of how to preserve and strengthen our freedoms. Any ideas on alternative terms? —Chris Peterson

How long to keep unneeded sensor data? 10 minutes

Monday, August 10th, 2009

A paper by researchers at the University of Washington, Intel, and Dartmouth reports on Exploring Privacy Concerns about Personal Sensing. Some interesting data:

In some cases, concerns about seemingly invasive sensors could be mitigated by changing the length of time that data were retained. While nearly half of the participants were unwilling to use GPS if the raw data (e.g., the latitude and longitude coordinates) were kept, all but one participant were willing to use it if the raw data were kept only for as long as was necessary to calculate the characteristics of detected physical activities (e.g., distance or pace of a run), and then promptly discarded. The exact length of the data window that the participants thought was acceptable varied, but most who wanted data purging thought that retaining one to 10 minutes of raw data at a time, unless a physical activity is being detected, was reasonable.

We found similar results for audio. A sliding data window of no more than one minute at a time of raw audio data was acceptable to 29% (7 of 24) of participants, although the majority (71%) found recording of any raw audio too invasive. Filtered audio fared better, however. If only a 10 minute sliding window of filtered audio was being saved, except for times when a physical activity is being detected, 62.5% (15 of 24) of participants were willing to use the microphone to get better activity detection.

And some recommendations:

Our results suggest at least three ways in which the acceptability of sensing can be increased, while respecting privacy. First, sensor data should be saved only when relevant activities are taking place. Results for both GPS and audio revealed that continuously purging the raw data increased user acceptance of both sensors. Second, whenever possible, a system’s core functionality should be based on minimally invasive sensing. The users can then be given a choice to decide whether to enable additional functionality that might require more invasive sensors. Physical activity detection, much of which can be done with a simple 3-D accelerometer, is a good example of a domain where such graded sensing could be implemented. And third, researchers should explore ways to capture only those features of the sensor data that are truly necessary for a given application. This means, however, that sensor systems might need to have enough computational power to perform onboard processing so that each application that uses a sensor can capture only the information that it needs.

We also note that users can make informed privacy trade-offs only if they understand what the technology is doing, why, and what the potential privacy and security implications are. Building visibility into systems so that users can see and control what data is being recorded and for how long supports informed use. Determining how this can best be done is a difficult, but important, design challenge.
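The continuous-purging idea the participants endorsed can be sketched as a short ring buffer: raw samples older than the window are dropped unless an activity is currently being detected, and only derived features ever leave the device. This is my own illustrative sketch, not the authors' implementation; the window length and feature function are assumptions:

```python
from collections import deque

class PurgingSensorBuffer:
    """Sketch of a sliding-window buffer: raw samples are retained only
    for `window_s` seconds unless an activity is being detected."""

    def __init__(self, window_s=60.0):
        self.window_s = window_s
        self._samples = deque()          # (timestamp, raw_value) pairs
        self.activity_in_progress = False

    def add(self, timestamp, raw_value):
        self._samples.append((timestamp, raw_value))
        self._purge(timestamp)

    def _purge(self, now):
        # While an activity is detected, purging is suspended so the full
        # episode can be summarized; otherwise old raw data is dropped.
        if self.activity_in_progress:
            return
        while self._samples and now - self._samples[0][0] > self.window_s:
            self._samples.popleft()

    def end_activity_and_summarize(self, feature_fn):
        # Only the derived feature (e.g. distance or pace) leaves the
        # device; the raw samples are discarded immediately afterwards.
        self.activity_in_progress = False
        values = [v for _, v in self._samples]
        summary = feature_fn(values)
        self._samples.clear()
        return summary
```

Note how this also embodies the third recommendation: the onboard `feature_fn` is the only path by which sensor information escapes the buffer.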

More work along these lines, please. —Chris Peterson

Separating raw sensor data from processed inferences

Thursday, August 6th, 2009

The sticky issue of who gets sensor data has been addressed by Guruduth Banavar and Abraham Bernstein in “Challenges in Design and Software Infrastructure for Ubiquitous Computing Applications” in the book Advances in Computers, Vol. 62, parts of which you can view at Amazon or Google Books:

Gathering data of any kind irrevocably leads to privacy concerns. Where should the data be stored and what boundaries shouldn’t it cross? Who should have access and who doesn’t? These questions aren’t new to ubiquitous computing. But the pervasiveness of these sensors adds a new layer of complexity to understanding and managing all the possible data streams. Can one subpoena the data collected by ubiquitous computing systems? As the answer is probably yes, there might be a demand for ubiquitous computing systems where the raw sensor data cannot be accessed at all, but only processed inferences from the data, like “burglar entry,” can.
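One way to read that demand is as an API constraint: the sensor node keeps raw data strictly private and exposes only inference events. A hypothetical sketch, where the event name, threshold, and batch size are all invented for illustration:

```python
class InferenceOnlySensor:
    """Sketch: raw readings stay inside the object; callers can only
    poll high-level inferences such as "entry-detected"."""

    def __init__(self, threshold=5.0):
        self._threshold = threshold
        self._raw = []            # private; deliberately no accessor
        self._events = []

    def ingest(self, reading):
        # Raw data is processed in small batches and then discarded, so
        # there is nothing durable to subpoena but the inferences themselves.
        self._raw.append(reading)
        if len(self._raw) >= 3:
            if max(self._raw) - min(self._raw) > self._threshold:
                self._events.append("entry-detected")
            self._raw.clear()

    def poll_inferences(self):
        events, self._events = self._events, []
        return events
```

Of course, in a real system the guarantee would have to be enforced in hardware or by certified firmware, not by a Python access convention.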

Quite right, there is such a demand. How do we move forward from the demand to the reality? —Chris Peterson

Intuitive control, by you, of data sensed about you

Wednesday, August 5th, 2009

David Kotz over at Dartmouth has been doing some interesting work on helping individuals control the data sensed about them:

As pervasive environments become more commonplace, the privacy of users is placed at increased risk. The numerous and diverse sensors in these environments can record users’ contextual information, leading to users unwittingly leaving “digital footprints.” Users must thus be allowed to control how their digital footprints are reported to third parties. While a significant amount of prior work has focused on location privacy, location is only one type of footprint, and we expect most users to be incapable of specifying fine-grained policies for a multitude of footprints. In this paper we present a policy language based on the metaphor of physical walls, and posit that users will find this abstraction to be an intuitive way to control access to their digital footprints. For example, users understand the privacy implications of meeting in a room enclosed by physical walls. By allowing users to deploy “virtual walls,” they can control the privacy of their digital footprints much in the same way they control their privacy in the physical world. We present a policy framework and model for virtual walls with three levels of transparency that correspond to intuitive levels of privacy, and the results of a user study that indicates that our model is easy to understand and use.
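As a rough sketch of the virtual-walls idea, assuming the three transparency levels map to "report everything," "report presence only," and "report nothing" (my paraphrase for illustration, not necessarily the authors' exact semantics):

```python
# Hypothetical sketch of a virtual-wall policy with three transparency levels.
TRANSPARENT, TRANSLUCENT, OPAQUE = "transparent", "translucent", "opaque"

class VirtualWall:
    """A per-room policy that filters which digital footprints third
    parties may see, by analogy with physical walls."""

    def __init__(self, transparency):
        self.transparency = transparency

    def report(self, footprint):
        # footprint: e.g. {"person": ..., "activity": ...}
        if self.transparency == TRANSPARENT:
            return footprint                        # everything visible
        if self.transparency == TRANSLUCENT:
            return {"person": footprint["person"]}  # presence only
        return None                                 # opaque: nothing leaves
```

The appeal of the metaphor is that users already know what each wall type means physically, so a single per-room setting replaces a multitude of fine-grained footprint policies.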

Sounds great! One quibble about “Users must thus be allowed to control how their digital footprints are reported to third parties” — who is the second party here, the sensor itself or the sensor operator, and how do users control what that party gets? In either case, that is also something to address up front.

I was interested and admittedly surprised to see that this research was funded by the Bureau of Justice Assistance at the U.S. Department of Justice. —Chris Peterson

Ethical contracts for emotion sensors

Tuesday, August 4th, 2009

Principled sensing will often involve getting permission from those being sensed. We can get some ideas about how to think about this process from the paper Affective Sensors, Privacy, and Ethical Contracts by two MIT Media Lab researchers, Carson Reynolds (now at U. Tokyo) and Prof. Rosalind Picard. While not a new paper, it seems like a good place to get started for newcomers to the goal of appropriate sensing. From the abstract:

Sensing affect raises critical privacy concerns, which are examined here using ethical theory, and with a study that illuminates the connection between ethical theory and privacy. We take the perspective that affect sensing systems encode a designer’s ethical and moral decisions: which emotions will be recognized, who can access recognition results, and what use is made of recognized emotions. Previous work on privacy has argued that users want feedback and control over such ethical choices. In response, we develop ethical contracts from the theory of contractualism, which grounds moral decisions on mutual agreement. Current findings indicate that users report significantly more respect for privacy in systems with an ethical contract when compared to a control.

A later quote: “Our theory asserts that ethical decisions are encoded by interaction technology.” Sounds right to me. See the Affective Computing Group for more recent papers. —Chris Peterson

Mass vehicle surveillance: the wrong way and the less-wrong way

Thursday, July 16th, 2009

Roger Clarke has a paper titled The Covert Implementation of Mass Vehicle Surveillance in Australia, which examines Automated Number Plate Recognition (ANPR) and finds it being implemented in two different ways:

This paper outlines two alternative architectures for ANPR, referred to as the ‘mass surveillance’ and ‘blacklist-in-camera’ approaches. They reflect vastly different approaches to the balance between surveillance and civil liberties.

Basically it sounds like the wrong way is to collect all vehicle data in a centralized location regardless of whether the vehicle is suspected, and the less-wrong way is to have a list in the camera of numbers being looked for. About the latter:

Further key requirements of the ‘Blacklist in Camera’ design include: certified non-accessibility and non-recording of any personal data other than that arising under the above circumstances

This requirement is the kind of thing that Open Source Sensing advocates: note the word “certified”.
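The difference between the two architectures is easy to state in code. In this hypothetical sketch (function names and plate values are invented), the blacklist-in-camera design does the match test on the device, so non-matching plates are never stored or transmitted:

```python
def blacklist_in_camera(plate, blacklist, transmit):
    """Sketch of the less-wrong ANPR design: only plates on the
    in-camera blacklist are recorded or sent anywhere; everything
    else is discarded on the spot, leaving no central trail."""
    if plate in blacklist:
        transmit(plate)       # only suspected vehicles leave the camera
        return True
    return False              # non-matches are not logged at all

def mass_surveillance(plate, blacklist, transmit):
    """The wrong way, for contrast: every plate is sent to a central
    store regardless of suspicion, and matching happens later."""
    transmit(plate)
    return plate in blacklist
```

The certification Clarke calls for would amount to verifying that the deployed camera runs something shaped like the first function and not the second — which is exactly the kind of auditability open source makes possible.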

Apparently something somewhat similar to the latter method is done in Canada, but Australia is headed in the wrong direction, according to the author. —Chris Peterson

Scenarios of pervasive sensing & intelligent environments

Thursday, July 9th, 2009

Prof. Vic Callaghan of University of Essex (UK) brings to our attention a paper addressing issues of privacy and intelligent environments, which includes a number of scenarios that help make vivid what the future is bringing. His email is worth a read:

I just watched the video of your talk “Open Source Physical Security: Can we have both privacy and safety?“.

I think you raise a number of very important points about the potential for misuse of technology. I research in pervasive computing (Intelligent Environments, Pervasive Sensing, Digital Homes, Smart Homes, etc.), having previously been heavily involved in robotics. In that work I became aware of how technology could be misused, in a similar way to the nanotechnology you describe. We became so concerned that we gave a talk to the UN (as we felt it needed legislation or guidance at a very high level). More recently we wrote this up as an academic paper, which suffered some opposition and modification before we were able to find an outlet willing to publish it (it’s a rather unpopular message). We are mainstream researchers in intelligent environments who have spent most of our lives promoting this technology, so it was, perhaps, a little unusual that we wrote an article that might be counter to its unfettered deployment. (more…)