Archive for the ‘Projects’ Category

When to share the raw data & when not to

Monday, September 7th, 2009

Russ Nelson sends this nomination:

This website encourages people to publish the raw data directly from their hydrologic sensors. Seems to me like they’re the poster child for open source sensing.

http://his.cuahsi.org/index.html

For scientific purposes, sharing the raw data (in addition to any interesting conclusions) is the way to go. In sensing situations where there are privacy concerns, which may not occur in the case of hydrologic data, an open source design process might involve not sharing all the raw data. Figuring out which cases are which will be a challenge! Thanks for the pointer, Russ. —Chris Peterson

How long to keep unneeded sensor data? 10 minutes

Monday, August 10th, 2009

A paper by researchers at the University of Washington, Intel, and Dartmouth reports on Exploring Privacy Concerns about Personal Sensing. Some interesting data:

In some cases, concerns about seemingly invasive sensors could be mitigated by changing the length of time that data were retained. While nearly half of the participants were unwilling to use GPS if the raw data (e.g., the latitude and longitude coordinates) were kept, all but one participant were willing to use it if the raw data were kept only for as long as was necessary to calculate the characteristics of detected physical activities (e.g., distance or pace of a run), and then promptly discarded. The exact length of the data window that the participants thought was acceptable varied, but most who wanted data purging thought that retaining one to 10 minutes of raw data at a time, unless a physical activity is being detected, was reasonable.

We found similar results for audio. A sliding data window of no more than one minute at a time of raw audio data was acceptable to 29% (7 of 24) of participants, although the majority (71%) found recording of any raw audio too invasive. Filtered audio fared better, however. If only a 10 minute sliding window of filtered audio was being saved, except for times when a physical activity is being detected, 62.5% (15 of 24) of participants were willing to use the microphone to get better activity detection.
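To make the retention idea concrete, here is a minimal sketch (in Python) of the kind of purging buffer the participants describe: raw samples are dropped once they fall outside a short sliding window unless an activity is currently being detected. The class name, window length, and API are illustrative assumptions, not code from the paper.

```python
import time
from collections import deque

class PurgingSensorBuffer:
    """Keep raw sensor samples only inside a short sliding window,
    unless an activity is currently being detected."""

    def __init__(self, window_seconds=600):
        # 10-minute window, the upper bound most participants found acceptable
        self.window_seconds = window_seconds
        self.samples = deque()            # (timestamp, raw_sample) pairs
        self.activity_in_progress = False

    def add(self, raw_sample):
        self.samples.append((time.time(), raw_sample))
        self._purge()

    def _purge(self):
        # While an activity is being detected, retain everything;
        # otherwise drop samples older than the sliding window.
        if self.activity_in_progress:
            return
        cutoff = time.time() - self.window_seconds
        while self.samples and self.samples[0][0] < cutoff:
            self.samples.popleft()
```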

And some recommendations:

Our results suggest at least three ways in which the acceptability of sensing can be increased, while respecting privacy. First, sensor data should be saved only when relevant activities are taking place. Results for both GPS and audio revealed that continuously purging the raw data increased user acceptance of both sensors. Second, whenever possible, a system’s core functionality should be based on minimally invasive sensing. The users can then be given a choice to decide whether to enable additional functionality that might require more invasive sensors. Physical activity detection, much of which can be done with a simple 3-D accelerometer, is a good example of a domain where such graded sensing could be implemented. And third, researchers should explore ways to capture only those features of the sensor data that are truly necessary for a given application. This means, however, that sensor systems might need to have enough computational power to perform onboard processing so that each application that uses a sensor can capture only the information that it needs.

We also note that users can make informed privacy trade-offs only if they understand what the technology is doing, why, and what the potential privacy and security implications are. Building visibility into systems so that users can see and control what data is being recorded and for how long supports informed use. Determining how this can best be done is a difficult, but important, design challenge.
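As an illustration of the third recommendation, onboard processing that keeps only the features an application needs, here is a hedged sketch: it derives distance and pace from a run's GPS fixes and returns only those features, discarding the raw coordinates. The function and field names are my own; the paper does not specify an implementation.

```python
import math

def extract_run_features(gps_fixes):
    """gps_fixes: list of (timestamp_seconds, latitude_deg, longitude_deg).
    Returns only derived features; the raw coordinates are not retained."""
    def haversine_m(lat1, lon1, lat2, lon2):
        r = 6371000.0  # Earth radius in metres
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp = math.radians(lat2 - lat1)
        dl = math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    distance_m = sum(
        haversine_m(a[1], a[2], b[1], b[2])
        for a, b in zip(gps_fixes, gps_fixes[1:])
    )
    duration_s = gps_fixes[-1][0] - gps_fixes[0][0] if len(gps_fixes) > 1 else 0
    pace_min_per_km = (duration_s / 60) / (distance_m / 1000) if distance_m else None
    return {"distance_m": distance_m,
            "duration_s": duration_s,
            "pace_min_per_km": pace_min_per_km}
```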

More work along these lines, please. —Chris Peterson

Intuitive control, by you, of data sensed about you

Wednesday, August 5th, 2009

David Kotz over at Dartmouth has been doing some interesting work on helping individuals control the data sensed about them:

As pervasive environments become more commonplace, the privacy of users is placed at increased risk. The numerous and diverse sensors in these environments can record users’ contextual information, leading to users unwittingly leaving “digital footprints.” Users must thus be allowed to control how their digital footprints are reported to third parties. While a significant amount of prior work has focused on location privacy, location is only one type of footprint, and we expect most users to be incapable of specifying fine-grained policies for a multitude of footprints. In this paper we present a policy language based on the metaphor of physical walls, and posit that users will find this abstraction to be an intuitive way to control access to their digital footprints. For example, users understand the privacy implications of meeting in a room enclosed by physical walls. By allowing users to deploy “virtual walls,” they can control the privacy of their digital footprints much in the same way they control their privacy in the physical world. We present a policy framework and model for virtual walls with three levels of transparency that correspond to intuitive levels of privacy, and the results of a user study that indicates that our model is easy to understand and use.

Sounds great! One quibble about “Users must thus be allowed to control how their digital footprints are reported to third parties”: who is the second party (the sensor itself, or the sensor operator), and how do users control what that party gets? In either case, that is also something to address up front.
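To give a feel for how a wall-based policy might be encoded, here is a minimal sketch; the level names and the occupancy check are plausible guesses at the model, not the paper's actual policy language.

```python
from enum import Enum
from dataclasses import dataclass

class WallLevel(Enum):
    # Hypothetical names for the paper's three transparency levels.
    TRANSPARENT = 1   # footprints inside the wall visible to anyone
    TRANSLUCENT = 2   # only coarse/summary information leaves the wall
    OPAQUE = 3        # footprints visible only to people inside the wall

@dataclass
class VirtualWall:
    place: str          # e.g. "meeting-room-210" (illustrative)
    level: WallLevel
    occupants: set      # user ids currently "inside" the wall

def may_observe(wall, observer_id, wants_detail=True):
    """Decide whether an observer may see footprints generated inside the wall."""
    if wall.level is WallLevel.TRANSPARENT:
        return True
    if wall.level is WallLevel.TRANSLUCENT:
        # Outsiders get only coarse information; occupants see everything.
        return observer_id in wall.occupants or not wants_detail
    return observer_id in wall.occupants  # OPAQUE
```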

I was interested and admittedly surprised to see that this research was funded by the Bureau of Justice Assistance at the U.S. Department of Justice. —Chris Peterson

Tracking the sensor revolution, for big bucks or for free

Thursday, July 23rd, 2009

[Image: Wireless Sensor Networks report cover]

Tracking what’s happening with sensors today is an intimidating task. If you have US$2700 you can get a big report on Wireless Sensor Networks from Bharat Book Bureau, which appears to be based in India. If you don’t have this amount to spare, you can get a feel for what’s happening by just reading the long ad for the report, including the detailed table of contents. The summary has helpful orientation material:

Many now refer to traditional active RFID as First Generation. Examples of this include the device that opens your car from a distance and the device in your car windshield that uses a battery to incur and record non-stop tolling charges. Another example is the widespread tracking of military supplies and assets by electronically recording when they have been near an electronic device that reads the tag using radio waves. Real Time Location Systems (RTLS), that continuously interrogate the tag from a distance, are called Second Generation active RFID and WSN is called Third Generation because it works in yet another completely different manner to provide its unique benefits…

Progress is now rapid and the much smaller size of the latest WSN tags is one indication of this. While the original concept was for billions or even trillions of tags the size of dust, the first ten years of development of USN have more often seen expensive tags, some the size of a videotape or, more recently, palm sized. However, further miniaturisation and cost reduction are now imminent.

The ToC lists many intriguing projects and companies worth a web search. There is a section on Impediments, which lists privacy concerns first. We can help with that! —Chris Peterson

Mass vehicle surveillance: the wrong way and the less-wrong way

Thursday, July 16th, 2009

Roger Clarke has a paper titled The Covert Implementation of Mass Vehicle Surveillance in Australia, which examines Automated Number Plate Recognition (ANPR) and finds it being done in two different ways:

This paper outlines two alternative architectures for ANPR, referred to as the ‘mass surveillance’ and ‘blacklist-in-camera’ approaches. They reflect vastly different approaches to the balance between surveillance and civil liberties.

Basically it sounds like the wrong way is to collect all vehicle data in a centralized location regardless of whether a vehicle is under suspicion, and the less-wrong way is to keep a list, in the camera itself, of the plate numbers being looked for. About the latter:

Further key requirements of the ‘Blacklist in Camera’ design include: certified non-accessibility and non-recording of any personal data other than that arising under the above circumstances

This requirement is the kind of thing that Open Source Sensing advocates: note the word “certified”.
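For illustration, the “blacklist-in-camera” check can be sketched in a few lines; the hotlist contents and reporting callback below are hypothetical, and a real deployment would also need the certified non-recording guarantee Clarke describes.

```python
HOTLIST = {"ABC123", "XYZ789"}   # illustrative plate numbers loaded into the camera

def handle_plate_read(plate_number, frame, timestamp, report):
    """Blacklist-in-camera: report a sighting only when the plate is on the
    in-camera list; otherwise discard the read immediately and store nothing."""
    if plate_number in HOTLIST:
        report({"plate": plate_number, "time": timestamp, "image": frame})
    # Non-matching reads fall through and are never recorded or transmitted.
```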

Apparently something somewhat similar to the latter method is done in Canada, but Australia is headed in the wrong direction, according to the author. —Chris Peterson

The Economist on mobile phone sensing pluses & minuses

Friday, July 10th, 2009

Alexandra Carmichael, co-founder of the open source health research site CureTogether, brings our attention to a piece in The Economist summarizing *some* of the current work on sensing using mobile phones. It concludes:

The technology is probably the easy part, however. For global networks of mobile sensors to provide useful insights, technology firms, governments, aid organisations and individuals will have to find ways to address concerns over privacy, accuracy, ownership and sovereignty. Only if they do so will it be possible to tap the gold mine of information inside the world’s billions of mobile phones.

This may be true, but these projects seem to be moving ahead in any case… —Chris Peterson

Scenarios of pervasive sensing & intelligent environments

Thursday, July 9th, 2009

Prof. Vic Callaghan of the University of Essex (UK) brings to our attention a paper addressing issues of privacy and intelligent environments, which includes a number of scenarios that help make vivid what the future is bringing. His email is worth a read:

I just watched the video of your talk “Open Source Physical Security: Can we have both privacy and safety?“.

I think you raise a number of very important points about the potential for misuse of technology. I do research in pervasive computing (Intelligent Environments, Pervasive Sensing, Digital Homes, Smart Homes, etc.), having previously been heavily involved in robotics. In this work I became aware of how technology could be misused, in a similar way to the nanotechnology you describe. We became so concerned that we gave a talk to the UN (as we felt it needed legislation or guidance at a very high level). More recently we wrote this up as an academic paper, which suffered some opposition and modification before we were able to find an outlet willing to publish it (it's a rather unpopular message). We are mainstream researchers in intelligent environments who have spent most of our lives promoting this technology, so it was, perhaps, a little unusual that we wrote an article that might be counter to its unfettered deployment.

Smartphone sensing in privacy-aware environments

Wednesday, July 8th, 2009

Steve Omohundro brings to our attention a talk at PARC on a sensing system that pays some attention to “privacy-by-design”, apparently:

This talk describes how the mobile internet is changing the face of traffic monitoring at a rapid pace. In the last five years, cellular phone technology has bypassed several attempts to construct dedicated infrastructure systems to monitor traffic. Today, GPS-equipped smartphones are progressively morphing into a ubiquitous traffic monitoring system, with the potential to provide information almost everywhere in the transportation network. Traffic information systems of this type are one of the first instantiations of participatory sensing for large scale cyberphysical infrastructure systems.


Sensor network is parasitic on living trees

Tuesday, July 7th, 2009

Following up on our “they really will be everywhere” theme, Laurie Sullivan of RFID Journal reports that sensor networks do not even need direct solar energy to operate now:

Forest-Monitoring Sensors Harvest Energy From Trees

The U.S. Forest Service is deploying a climate sensor network powered by energy harvested from living trees

July 2, 2009—The U.S. Forest Service has confirmed that it will purchase a climate sensor network this summer from Voltree Power that is powered by energy harvested from living trees. The system employs low-power radio transceivers, sensors and patented bioenergy-harvesting technology to predict and detect fires.

Using the word ‘parasitic’ here is more an attempt at humor than a complaint; the bioenergy-harvesting is a very clever technical achievement. Our point here is that soon there won’t be anywhere that sensors can’t operate… —Chris Peterson

Sensors at Google Internet Summit: see video

Monday, June 29th, 2009

Increasingly, when we look at sensing we need to look at wireless technology as well. Both topics were covered in one session at this year’s Google Internet Summit. Speakers were Craig Partridge (BBN), Larry Alder (Google), Sumit Agarwal (Google Mobile), Kevin Fall (Intel), and Deborah Estrin (UCLA).

For those of us who missed the conference, videos are posted. The session lasts 90 minutes; I haven’t tackled it yet, but the talks by Estrin, Agarwal, and Partridge look relevant to our interests, and maybe the others as well. Partridge refers to a 10-20 year timeframe, by which time this whole field should be doing amazing things (for good and maybe ill as well).

One problem with the “video-ization” of the world, as opposed to accessing text, is the difficulty of skimming to see whether something is worth viewing. I look forward to a future YouTube feature that lets you see a few seconds of clip from every five minutes or so. Thanks to bestechvideos.com for the pointer.