Archive for August, 2009

Code of Fair Sensing Practices?

Thursday, August 13th, 2009

Simson Garfinkel gave a talk a while back that examined the “Code of Fair Information Practices”, developed originally by a U.S. government task force and described as follows:

• There must be no personal data record-keeping systems whose very existence is secret.
• There must be a way for a person to find out what information about the person is in a record and how it is used.
• There must be a way for a person to prevent information about the person that was obtained for one purpose from being used or made available for other purposes without the person’s consent.
• There must be a way for a person to correct or amend a record of identifiable information about the person.
• Any organization creating, maintaining, using, or disseminating records of identifiable personal data must assure the reliability of the data for their intended use and must take precautions to prevent misuses of the data.

Is this a useful model for how sensing data should be handled? It certainly is not being followed now. We do need to look at this list and ask whether it infringes on freedom of speech, though — see the third bullet above, for example. Sticky issues! —Chris Peterson

Electronic surveillance includes your physical location

Wednesday, August 12th, 2009

Not everyone realizes that “electronic” surveillance can include not just what we think of as electronic information (email, etc.) but physical data as well. In an EFF article on the UK’s half million intercepts of communications data in 2008 — which has no judicial review — this is explained:

These orders can reveal lists of websites visited, email headers, name and address lookups, and, perhaps most controversially, the real-time location of a particular mobile telephone.

So your cell phone is continually reporting your location, which in the UK sounds like pretty easy info for authorities to get. This is a lousy idea from a civil liberties perspective, to put it mildly. For those of you who trust the authorities in your own country, think of the ones elsewhere that you don’t trust: they could have this technology too. (Credit: Mark Finnern) —Chris Peterson

The main reason to care who gets sensing data about you

Tuesday, August 11th, 2009

An ITU paper spells out the main reason to care who gets sensing data about individuals:

From a political standpoint privacy is generally considered to be an indispensable ingredient for democratic societies. This is because it is seen to foster the plurality of ideas and critical debate necessary in such societies…

• Privacy is also a regulating agent in the sense that it can be used to balance and check the power of those capable of collecting data…

Lessig’s list of reasons for protecting privacy belongs to what Colin Bennett and Charles Raab have called the ‘privacy paradigm’—a set of assumptions based on more fundamental political ideas: ‘The modern claim to privacy … rests on the pervasive assumption of a civil society comprised of relatively autonomous individuals who need a modicum of privacy in order to be able to fulfil the various roles of the citizen in a liberal democratic state.’

So the main reason is to protect our political freedom. This is why I hope to find an alternative to the word ‘privacy’ in our discussions. While a useful word, it has connotations of guilt or shame, which are inappropriate in this discussion of how to preserve and strengthen our freedoms. Any ideas on alternative terms? —Chris Peterson

How long to keep unneeded sensor data? 10 minutes

Monday, August 10th, 2009

A paper by researchers at the University of Washington, Intel, and Dartmouth, “Exploring Privacy Concerns about Personal Sensing”, includes some interesting data:

In some cases, concerns about seemingly invasive sensors could be mitigated by changing the length of time that data were retained. While nearly half of the participants were unwilling to use GPS if the raw data (e.g., the latitude and longitude coordinates) were kept, all but one participant were willing to use it if the raw data were kept only for as long as was necessary to calculate the characteristics of detected physical activities (e.g., distance or pace of a run), and then promptly discarded. The exact length of the data window that the participants thought was acceptable varied, but most who wanted data purging thought that retaining one to 10 minutes of raw data at a time, unless a physical activity is being detected, was reasonable.

We found similar results for audio. A sliding data window of no more than one minute at a time of raw audio data was acceptable to 29% (7 of 24) of participants, although the majority (71%) found recording of any raw audio too invasive. Filtered audio fared better, however. If only a 10 minute sliding window of filtered audio was being saved, except for times when a physical activity is being detected, 62.5% (15 of 24) of participants were willing to use the microphone to get better activity detection.
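The retention scheme the participants favored — a short sliding window of raw data, with purging suspended while an activity is being detected — can be sketched in Python. This is a minimal illustration; the class name, parameters, and API are mine, not the paper's:

```python
from collections import deque
import time

class SlidingSensorBuffer:
    """Retain raw sensor samples only for a bounded window, purging
    anything older unless an activity is currently being detected."""

    def __init__(self, window_seconds=60.0):
        self.window = window_seconds
        self.samples = deque()          # (timestamp, raw_value) pairs
        self.activity_in_progress = False

    def add(self, value, timestamp=None):
        ts = time.time() if timestamp is None else timestamp
        self.samples.append((ts, value))
        self._purge(ts)

    def _purge(self, now):
        # While an activity is detected, keep everything so its
        # characteristics (e.g. distance or pace of a run) can be computed.
        if self.activity_in_progress:
            return
        while self.samples and now - self.samples[0][0] > self.window:
            self.samples.popleft()

    def finish_activity(self, summarize):
        """Compute derived characteristics, then discard the raw data."""
        result = summarize(list(self.samples))
        self.samples.clear()
        self.activity_in_progress = False
        return result
```

Under this scheme, raw GPS or audio older than the window simply never exists on the device unless an activity is in progress, and even then only the derived summary survives the activity.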

And some recommendations:

Our results suggest at least three ways in which the acceptability of sensing can be increased, while respecting privacy. First, sensor data should be saved only when relevant activities are taking place. Results for both GPS and audio revealed that continuously purging the raw data increased user acceptance of both sensors. Second, whenever possible, a system’s core functionality should be based on minimally invasive sensing. The users can then be given a choice to decide whether to enable additional functionality that might require more invasive sensors. Physical activity detection, much of which can be done with a simple 3-D accelerometer, is a good example of a domain where such graded sensing could be implemented. And third, researchers should explore ways to capture only those features of the sensor data that are truly necessary for a given application. This means, however, that sensor systems might need to have enough computational power to perform onboard processing so that each application that uses a sensor can capture only the information that it needs.

We also note that users can make informed privacy trade-offs only if they understand what the technology is doing, why, and what the potential privacy and security implications are. Building visibility into systems so that users can see and control what data is being recorded and for how long supports informed use. Determining how this can best be done is a difficult, but important, design challenge.
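The third recommendation — onboard processing that captures only the features an application truly needs — might look roughly like the sketch below. The specific features and the motion threshold are illustrative placeholders, not values from the paper:

```python
import math

def extract_features(accel_samples):
    """Reduce a raw 3-D accelerometer trace to a few aggregate
    features on the sensor itself, so raw samples never leave it."""
    magnitudes = [math.sqrt(x * x + y * y + z * z) for (x, y, z) in accel_samples]
    n = len(magnitudes)
    mean = sum(magnitudes) / n
    variance = sum((m - mean) ** 2 for m in magnitudes) / n
    return {"mean_magnitude": mean, "variance": variance, "n_samples": n}

def report_activity(accel_samples, motion_threshold=0.5):
    # Only the derived features cross the sensor boundary; the raw
    # trace is discarded here. The threshold is an untuned placeholder.
    features = extract_features(accel_samples)
    return {"moving": features["variance"] > motion_threshold, **features}
```

An application asking "is the user moving?" then receives a boolean and a few aggregates rather than the full acceleration trace.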

More work along these lines, please. —Chris Peterson

New EFF whitepaper on responsible sensing technology

Friday, August 7th, 2009

Randall Lucas brings to our attention a new whitepaper over at EFF that will sound familiar to readers of this site:

We can’t stop the cascade of new location-based digital services. Nor would we want to — the benefits they offer are impressive. What urgently needs to change is that these systems need to be built with privacy as part of their original design. We can’t afford to have pervasive surveillance technology built into our electronic civic infrastructure by accident. We have the opportunity now to ensure that these dangers are averted.

Authors Andrew Blumberg of Stanford and EFF’s Peter Eckersley make specific suggestions on how to do this, without giving up the benefits widespread sensing will bring. Great to see EFF stepping up on sensing! —Chris Peterson

Separating raw sensor data from processed inferences

Thursday, August 6th, 2009

The sticky issue of who gets sensor data has been addressed by Guruduth Banavar and Abraham Bernstein in “Challenges in Design and Software Infrastructure for Ubiquitous Computing Applications” in the book Advances in Computers, Vol. 62, parts of which you can view at Amazon or Google Books:

Gathering data of any kind irrevocably leads to privacy concerns. Where should the data be stored and what boundaries shouldn’t it cross? Who should have access and who doesn’t? These questions aren’t new to ubiquitous computing. But the pervasiveness of these sensors adds a new layer of complexity to understanding and managing all the possible data streams. Can one subpoena the data collected by ubiquitous computing systems? As the answer is probably yes, there might be a demand for ubiquitous computing systems where the raw sensor data cannot be accessed at all, but only processed inferences from the data, like “burglar entry,” can.
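One way such a demand might be met, as a rough sketch (the interface and names here are hypothetical, not from the chapter): wrap the raw feed behind a component that persists nothing and exposes only the classifier's output:

```python
class InferenceOnlySensor:
    """Expose only processed inferences (e.g. "burglar entry") from a
    raw sensor feed. Raw samples live in a transient buffer that is
    cleared on every poll and never written to storage."""

    def __init__(self, classify):
        self._classify = classify  # callable: list of raw samples -> label or None
        self._buffer = []

    def feed(self, sample):
        self._buffer.append(sample)

    def poll_inference(self):
        label = self._classify(self._buffer)
        self._buffer.clear()  # raw data discarded once classified
        return label
```

A subpoena of a system built this way could reach only the stream of inferences, because nothing else is ever retained.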

Quite right, there is such a demand. How do we move forward from the demand to the reality? —Chris Peterson

Intuitive control, by you, of data sensed about you

Wednesday, August 5th, 2009

David Kotz over at Dartmouth has been doing some interesting work on helping individuals control data sensed about us:

As pervasive environments become more commonplace, the privacy of users is placed at increased risk. The numerous and diverse sensors in these environments can record users’ contextual information, leading to users unwittingly leaving “digital footprints.” Users must thus be allowed to control how their digital footprints are reported to third parties. While a significant amount of prior work has focused on location privacy, location is only one type of footprint, and we expect most users to be incapable of specifying fine-grained policies for a multitude of footprints. In this paper we present a policy language based on the metaphor of physical walls, and posit that users will find this abstraction to be an intuitive way to control access to their digital footprints. For example, users understand the privacy implications of meeting in a room enclosed by physical walls. By allowing users to deploy “virtual walls,” they can control the privacy of their digital footprints much in the same way they control their privacy in the physical world. We present a policy framework and model for virtual walls with three levels of transparency that correspond to intuitive levels of privacy, and the results of a user study that indicates that our model is easy to understand and use.

Sounds great! One quibble about “Users must thus be allowed to control how their digital footprints are reported to third parties” — who is the second party, and how do users control what that party gets? The sensor itself, or the sensor operator? In either case, that is also something to address up front.
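The three transparency levels could be modeled along these lines; the level names and exactly what each level leaks are my illustrative reading of the wall metaphor, not the paper's precise semantics:

```python
from enum import Enum

class Transparency(Enum):
    TRANSPARENT = "transparent"  # footprints fully visible, as through glass
    TRANSLUCENT = "translucent"  # only coarse presence information leaks
    OPAQUE = "opaque"            # nothing is reported outside the wall

class VirtualWall:
    """A 'virtual wall' policy: filter a digital footprint before it
    is reported to a third party, according to the wall's transparency."""

    def __init__(self, room, transparency):
        self.room = room
        self.transparency = transparency

    def report(self, footprint):
        if self.transparency is Transparency.TRANSPARENT:
            return footprint
        if self.transparency is Transparency.TRANSLUCENT:
            return {"room": self.room, "occupied": True}
        return None  # OPAQUE: nothing crosses the wall
```

The appeal of the metaphor is that a user sets one familiar knob per room rather than a fine-grained policy per footprint type.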

I was interested and admittedly surprised to see that this research was funded by the Bureau of Justice Assistance at the U.S. Department of Justice. —Chris Peterson

Ethical contracts for emotion sensors

Tuesday, August 4th, 2009

Principled sensing will often involve getting permission from those being sensed. We can get some ideas about how to think about this process from the paper Affective Sensors, Privacy, and Ethical Contracts by two MIT Media Lab researchers, Carson Reynolds (now at U. Tokyo) and Prof. Rosalind Picard. While not a new paper, it seems like a good place to get started for newcomers to the goal of appropriate sensing. From the abstract:

Sensing affect raises critical privacy concerns, which are examined here using ethical theory, and with a study that illuminates the connection between ethical theory and privacy. We take the perspective that affect sensing systems encode a designer’s ethical and moral decisions: which emotions will be recognized, who can access recognition results, and what use is made of recognized emotions. Previous work on privacy has argued that users want feedback and control over such ethical choices. In response, we develop ethical contracts from the theory of contractualism, which grounds moral decisions on mutual agreement. Current findings indicate that users report significantly more respect for privacy in systems with an ethical contract when compared to a control.

A later quote: “Our theory asserts that ethical decisions are encoded by interaction technology.” Sounds right to me. See the Affective Computing Group for more recent papers. —Chris Peterson