Behind Carnegie Mellon’s heated privacy dispute

Carnegie Mellon University researchers set out to create advanced smart sensors called Mites. The sensors are designed to collect 12 types of environmental data, including motion, temperature, and scrambled audio, in a more private and secure way than existing Internet of Things devices. But after hundreds of the sensors were installed throughout a new campus building, the project took a surprising turn: some students and faculty accused the researchers of violating their privacy by not asking their permission first.

The debate at the Software and Societal Systems Department has been heated and complicated, highlighting just how nuanced the questions surrounding privacy and technology are. These are issues we all have to contend with as data collection balloons in our homes, streets, cars, workplaces, and most other places. As we wrote in the article, if the technologists whose research shapes these systems can't reach consensus on privacy, where does that leave the rest of us?

It took us more than a year to report the story. We have tried to present different perspectives on privacy, consent, and the future of IoT technology, while recognizing the roles that power, process, and communication play in how technologies get deployed.

One truth the story makes clear: privacy is subjective. There is no settled standard for what counts as privacy-protecting technology, even in academic research. In CMU's case, people on all sides of the debate were advocating for a better future based on their own understanding of privacy. David Widder, a doctoral student who focuses on tech ethics and is a central character in our story, told us, "I'm not willing to accept the premise of a future… where these kinds of sensors are everywhere."

But the researchers he criticized were also trying to build a better future. Department chair James Herbsleb encouraged people to support the Mites research. "I want to reiterate that this is a very important project… if you want to avoid a future where surveillance is normal and inevitable!" he wrote in an email to members of the department.

Big questions about the future ran through the CMU debate, and they reflect the same questions we all struggle with. Is a world full of IoT devices inevitable? Should we invest our time and effort in making this new technological world safer and more secure, or should we reject the technology altogether? When should we choose one option over the other, and how do we make those decisions, both collectively and individually?

Questions around consent and how to communicate about data collection became flashpoints in the debate at CMU, and they are central to discussions about technology regulation. In Europe, for example, since the adoption of the EU's General Data Protection Regulation, regulators have wrestled with consent-based rules for data collection, a debate most visible in the flurry of pop-ups across the internet. Companies use the pop-ups to comply with the law, but such notices are widely seen as ineffective at informing users about data collection and terms of service.

Throughout the story, we similarly focus on the difference between technical definitions of privacy and social norms around things like notice and consent. Important techniques like edge computing can help protect privacy, but they can't take the place of asking people whether they want to participate in data collection in the first place. We also encountered confusion about what the project was and what data was being collected, and communications about data collection were often unclear and incomplete.
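To make the edge-computing point concrete, here is a minimal, hypothetical sketch of the general idea: raw sensor data (say, one second of audio) is reduced to coarse summary features on the device itself, and only those features are ever transmitted. The function names and the choice of features are illustrative assumptions, not a description of the actual Mites pipeline.

```python
import numpy as np

def featurize_audio(raw_samples: np.ndarray, n_bands: int = 8) -> np.ndarray:
    """Reduce a raw audio frame to a few frequency-band energies.

    The raw waveform stays inside this function; only the coarse
    summary is returned for transmission off the device.
    """
    spectrum = np.abs(np.fft.rfft(raw_samples)) ** 2      # power spectrum of the frame
    bands = np.array_split(spectrum, n_bands)             # split into coarse frequency bands
    return np.array([band.mean() for band in bands])      # one energy value per band

def on_device_loop(transmit):
    """Process a frame of (simulated) microphone data locally, then send only features."""
    frame = np.random.randn(16_000)      # stand-in for one second of raw audio at 16 kHz
    features = featurize_audio(frame)
    transmit(features)                   # only 8 numbers leave the device
    del frame                            # raw audio is discarded, never stored or sent

if __name__ == "__main__":
    on_device_loop(lambda feats: print("sending features:", np.round(feats, 2)))
```

The design choice this illustrates is that intelligible audio cannot be reconstructed from a handful of band energies; even so, as the story argues, that technical safeguard does not by itself answer whether people consented to being sensed at all.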
