A recent WikiLeaks document dump, purportedly from the CIA, claimed that the agency can hack smart TVs and place them in “fake-off” mode, allowing an owner’s private conversations to be recorded and sent to a covert server.
Although many were surprised, the capability wasn’t entirely new: in 2013, CNN reported on a flaw in Samsung TVs that would have allowed hackers to remotely switch on the TVs’ cameras without alerting their owners.
But the WikiLeaks exposure of possible CIA spycraft highlights the unraveling of cherished ideas of privacy at home as we enter the era of the Internet of Things — a world where many, if not all, of the objects surrounding us are “smart” (and therefore accessible to hackers).
Are we prepared for ubiquitous computing and its evil twin, ubiquitous surveillance?
Foreseeing such a future, the researchers behind the Helsinki Privacy Experiment explored the long-term psychological consequences of surveillance in the home. Though participants responded to the constant intrusion of cameras in their private space by changing their behavior to gain some control over when they might be recorded, over time most simply got used to it.
Privacy, it turns out, may not be so valuable after all.
A rose by any other name
The researchers began their 12-month experiment, the results of which were published five years ago in an ACM journal, by placing “behavioral observation systems” in 10 homes.
The technologies served two purposes: They functioned as media centers, equipped with TVs, DVD players and WiFi access; and they collected, stored and transferred network data as well as audio and video captured by cameras positioned around the homes.
Living in the tricked-out homes were 12 people, most in their 20s, though one 60-year-old also volunteered to be a privacy guinea pig. Five were female, seven male. Six were students, three had full-time jobs, one was unemployed, one was on maternity leave, and one was partially retired.
Each month, participants responded to questionnaires, and after six months, the researchers interviewed them. None of the volunteers reported feeling stressed, but some said they were annoyed, anxious and even enraged by the surveillance. Specifically, they said that being video-recorded while in the nude or having sex troubled them, and not being able to find solitude and isolation in their own homes, where it is expected, had disrupted their lives.
Among participants’ gravest concerns was the possibility of a public viewing of the videos. In particular, some feared that the material would be edited with some intention to misrepresent them. They’d be most unhappy, participants said, if private footage were shown to the authorities, their friends and their employers.
During the experiment, all but one participant began “privacy-seeking” behaviors: stopping some favorite activity entirely, for instance, or hiding it from the sensors. To avoid having their online searches tracked — which bothered participants as much as the cameras did — some began to visit Internet cafés.
Though all the participants worried about the experience in the earliest days, after months of observation, 10 of the 12 reported becoming accustomed to the lack of privacy.
Is a technological invasion of privacy no different and possibly no worse than the nosy neighbor we all learn to avoid?
There’s always been surveillance, acknowledges Antti Oulasvirta, lead author of the Helsinki Privacy Experiment.
Neighbors have always witnessed some of our private moments, for example, while “the people we live with are always sort of surveilling us,” said Oulasvirta, an associate professor who teaches cognitive science, modeling, human performance and experience at Aalto University in Finland. However, there are major differences between human neighbors or partners and tech systems capable of observing and recording our private moments.
“With a neighbor and with a significant other, you can negotiate. You can say, ‘please don’t do that.’ Or you can close the blinds,” he said. “But with (technology), you cannot necessarily do that.”
Oulasvirta also acknowledges that technological privacy violations are nothing new. Concerns about this possibility “started with the internet in the late ’90s, when there were cookies and people were starting to be tracked, and it got worse with smartphones,” he said. “And now we have smart TVs, and eventually we will have IoT”: the Internet of Things.
Although there’s no real map for the road ahead, the history of “cookies” provides some insights that might come in handy.
The forgetful years
In the earliest years of the Web, websites could not recall users. The underlying protocol was stateless: each time you went to a different page, a site would forget you and any action you had taken. Naturally, this had real drawbacks.
“This is a bit like talking to someone with Alzheimer disease. Each interaction would result in having to introduce yourself again, and again, and again,” programmer Lou Montulli, formerly of Netscape, wrote in a blog entry. To solve this, Montulli envisioned storing data — a short string of unique text dubbed a “cookie” — on a user’s hard drive so that a website would recognize the device each time a user visited.
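For readers curious about the mechanics, here is a minimal sketch in Python (not anything Netscape actually shipped) of the idea Montulli describes: the site hands the browser a short unique string, the browser sends it back on later requests, and an otherwise forgetful website suddenly remembers you. The handler and field names below (CookieDemoHandler, visitor_id) are invented for illustration.

# Minimal sketch of the cookie idea: hand the browser a unique token,
# then recognize that token when the browser sends it back.
import uuid
from http.cookies import SimpleCookie
from http.server import BaseHTTPRequestHandler, HTTPServer

class CookieDemoHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        cookies = SimpleCookie(self.headers.get("Cookie", ""))
        visitor_id = cookies["visitor_id"].value if "visitor_id" in cookies else None

        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        if visitor_id is None:
            # First visit: mint a token and ask the browser to store it.
            visitor_id = uuid.uuid4().hex
            self.send_header("Set-Cookie", f"visitor_id={visitor_id}")
            body = "Hello, stranger."
        else:
            # Return visit: the browser sent the token back, so the site recognizes the device.
            body = f"Welcome back, visitor {visitor_id}."
        self.end_headers()
        self.wfile.write(body.encode())

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), CookieDemoHandler).serve_forever()

Run it locally and reload http://localhost:8000 a few times: the first response greets a stranger, and every later one greets the same visitor ID, which is exactly the property publishers and advertisers would later put to work.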
In fall 1994, Netscape released its new browser, complete with cookie specifications written by Montulli. Within a year, the browser had become the most popular in the world.
Soon, other programmers began to make use of cookies in unforeseen ways. “Most of those uses were fantastic, some of them were concerning,” Montulli wrote.
For example, online publishers began to count unique readers, while advertisers began to track online behavior to tailor their ads to consumers’ highly specific tastes.
By now, the business community generally agrees that Web tracking is essential, according to a report written for the Federal Trade Commission. Because this form of behavioral advertising often pays the bills, some people even argue that it benefits most of us who would rather not be charged for cruising the Internet. Besides, tracking policies are disclosed in those agreements we all consent to (without reading).
Still, nine out of 10 adults already believe that they have lost control of how their personal information is collected and used, according to the Pew Research Center. And while slightly more than half of Americans consider surveillance cameras at work an acceptable safety tradeoff, only about a quarter find it acceptable for “smart thermostats” in their homes to gather data on their comings and goings, reports Pew.
Future thoughts
Looking ahead to the Internet of Things, most tech experts predict that as the objects in our homes — TVs as well as innocuous-seeming dolls and toys — gain the ability to see and potentially record us, all of our former notions of privacy will vanish.
As Oulasvirta suggests, companies and governments have already found ways, through cookies and smartphone spy hacks, of monetizing or surveilling us for their own purposes. Why would they stop now?
Meanwhile, we are told that this “ubiquitous surveillance” is emerging primarily from our own “voluntary” choices. After all, we’re the ones who buy internet TVs and click the “I accept” button on privacy disclosure agreements.
Yet the standard smartphone operating systems don’t permit you to choose safeguards to prevent being surveilled, Oulasvirta observed.
At the same time, Lorrie Cranor, a professor of computer science and engineering and public policy at Carnegie Mellon University, questions “how voluntary our choices to use Internet-connected computers and mobile phones are in today’s society.” Applying for most jobs and keeping many jobs — those that require the use of a company phone and computer — require online participation, she noted. Even schools increasingly require students to remain connected.
“With the Internet of Things, people have a sense that there’s some privacy issues, but I don’t think they really understand what data is being collected, or how or why,” Cranor said.
Though not a psychologist, Cranor has conducted a great deal of behavioral research to better understand “usable privacy and security.” One of her behavioral studies exploring regret, for example, found that people talking to someone in person were more likely to feel bad about being critical, but on Twitter, they were more likely to regret revealing too much. Potholes in the online world apparently are not always easy to see.
“We conducted studies on advertising tracking a few years ago, and people kind of didn’t really know that it was happening” to the extent that it was, said Cranor, who has also served as chief technologist at the US Federal Trade Commission. “People would say, ‘Wow, it seems like they’re doing it behind my back. I didn’t know this was happening. It’s really creepy.’ ”
She thinks people are becoming increasingly aware of surveillance, but the extent to which it’s happening still surprises people. Looking ahead to homes chock full of smart devices, she expects “that people will wonder and not really know when they’re being tracked.”
Her research suggests that people will deal with this in different ways. Some will respond by giving up.
“They basically say, ‘there’s nothing you can do; you’ve lost all your privacy. I can’t live my life feeling suspicious all the time, so I’m just not going to worry about it,’ ” Cranor said.
At the other end of the spectrum, people say they’ll just avoid the Internet, social networks, online banking and any sort of activity that might lead to surveillance.
Most people fall somewhere in the middle, picking a few focal points to worry about — social media, say — and leaving the rest alone, Cranor said.
“We’ve also seen that people are concerned when they don’t know who’s watching,” she said.
“The uncertainty and what people imagine is often actually worse than what’s really happening,” she said. “But not always.”
Sometimes, it’s worse than people imagine, and the possible consequences might be, too. After all, even impersonal data in a database — “not your image, not your video” — can always be taken out of context, Cranor explained. A completely innocuous purchase pattern could be framed in a way that makes it look as though you did something wrong.
“There are skeptics who say ‘privacy invasion doesn’t really hurt anybody’ or ‘show me the monetary damage,’ ” she said. “They’ll say, ‘If you have nothing to hide, you have nothing to worry about.’ But I don’t buy that. I think people are impacted in a serious way. These privacy concerns are very real.”
‘Visual privacy’
According to Bob Briscoe, chief research scientist in communications systems with Simula Research Laboratory, most of us have had countless benign experiences whenever we’ve given up our privacy.
These many experiences have taught us that revealing private information allows both commercial and public organizations to make our lives easier by targeting our needs. Rarely, if ever, does a problem occur. As a result, we feel complacent, and this leads to a general lack of concern about our privacy.
“However, once information is released, it can never be unreleased,” Briscoe said. “I look at history (e.g. McCarthyism), and I consider the possibility of a future when those with power take control over information about us stored many years before, that we did not realize was so revealing when we released it. For instance, how could I have realized that my recorded eye movements would reveal so much about my inner self?”
In Oulasvirta’s view, there’s no conjecture when it comes to the consequences of privacy invasions. “If you have surveillance, there’s no reported positive effects,” he said. “There’s a really interesting theory that we build on that sort of tells why the Internet of Things and surveillance are sort of attacking us strongly.”
As explained by Nancy M. Wells, an environmental psychologist at Cornell University, “from an environmental psychology perspective, privacy is control of access to self.” She wrote in an email, “we often think of privacy as the regulation of social interaction as well as regulation of access to self.”
Environmental psychology identifies several dimensions of privacy: solitude, reserve (the ability to limit what we tell others), isolation (the ability to control our physical distance from others), anonymity (the ability to get away from social pressures and recover from social injuries) and intimacy.
According to Oulasvirta, “surveillance is potentially attacking all (these dimensions). It’s attacking the social and psychological foundation of humans.”
Wells explained that one aspect is informational privacy, but a more striking aspect is “visual privacy.”
Not being able to control who has visual access to us may lead to social withdrawal, she said. For example, research on people living in crowded conditions, who have little ability to control interactions with others, shows that some respond by withdrawing.
“The inability to regulate access to self due to surveillance could have a similar effect,” said Wells, who also noted that the unpredictability or “unknown” nature of surveillance is an important consideration.
“My sense is that the uncertainty of whether one is being observed will result in stress, which, over time, could erode mental health,” Wells said, adding, “the need to be perpetually vigilant is cognitively taxing” as well. The result, over time, would be mental fatigue “characterized by difficulty focusing and irritability.”
“On the personal level, you might be less effective at your job,” she said. “This sort of irritability could have implications for just coping out in the world and general interaction with people — not feeling patient in a way.”
Suspecting privacy invasions in our very homes, we might all become a little chillier, reluctant to extend a neighborly hand.
“On the other hand, humans do adapt — often to things that are not good for us — so we may ultimately become inured to privacy violations and to surveillance,” Wells said. “I hope not.”
For Oulasvirta, the most important question concerning privacy is simple: What are we going to do about it?
“It’s a really, really tricky question because it’s such a complex sociotechnical system we are talking about that involves, you know, social norms and legislation and people’s practices and homes and their devices and the Internet,” he said.
In the privacy experiment, Oulasvirta and his colleagues found that they could lower the perceived threat to privacy simply by making known the intentions and identities of those doing the surveillance.
Based on this finding, informing people “in an easily digestible form” of how, when and by whom data will be collected when they buy a smart item could alleviate the potential downside of smart living.
“Create a sort of ingredients label for IT,” Oulasvirta said. “That way, consumers could predict what is going to happen when they buy the device.”
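To picture what such a label might contain, here is a brief, purely hypothetical sketch of one in machine-readable form, written in Python; the device name, field names and values are invented for illustration and do not come from Oulasvirta’s work or any existing standard.

# Hypothetical "ingredients label" for a smart device, expressed as structured data.
# Every field and value here is illustrative, not drawn from any real product or spec.
import json

privacy_label = {
    "device": "ExampleCo Smart TV X100",  # hypothetical product
    "data_collected": ["viewing history", "voice audio", "ambient room audio"],
    "collection_trigger": "whenever the microphone wake word is detected",
    "recipients": ["manufacturer", "third-party advertising partners"],
    "retention_period_days": 365,
    "user_controls": ["microphone hardware switch", "opt out of ad tracking"],
}

print(json.dumps(privacy_label, indent=2))

In principle, a retailer or an app store could render a label like this next to the price tag, much as nutrition facts appear on food packaging, so shoppers could see who would be collecting what before bringing a device home.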
“But that’s a small hack, right, and it doesn’t really solve the problem,” he said, adding that in his experiment, everyone had volunteered. But in the real world, you buy a TV “and you get surveillance on top of it. Nobody volunteers for that.”
He dwells on the fact that even the people in his experiment who were initially most critical and worried about privacy invasions ultimately adjusted to the surveillance.
“This is what has been happening with the Internet and the smartphone and social media … and maybe will happen with IoT,” Oulasvirta said. Despite a lack of concern revealed in both his experiment and the world at large, he expects the prevalence of privacy issues to increase.
“There’s tremendous improvements in our ability to fabricate video and fabricate voice,” Oulasvirta said. Collected data can form the basis of a generated video where you are talking “and it would look pretty good. This is happening now. Within five years, this will be even more sophisticated.”