
The Hubris and Humility of Opting In

by Bryan Goodpaster
7 min read
As we opt in to giving more and more of ourselves over to technology, we risk trading dignity for convenience.

In the wake of the National Security Agency’s big-data dragnet—and a slew of security gaffes prompting debate in the UK and Canada—digital users share an increased sense of vulnerability. The scale of our exposure highlights the need to re-evaluate our daily interactions with technology and their impact on our privacy. There is a growing general sense of imbalance in this evolving relationship. We now wake more often to email than to the sunrise, and we are watching the digitization of our daily interactions slowly and subtly take the reins from our grasp.

In hindsight, our ascent into the digital age has always had strings attached. Historically speaking, our human-technological interdependency is rather young. In fact, just over a decade ago, only 25% of the world’s information was stored digitally. Today, according to Big Data: A Revolution That Will Transform How We Live, Work, and Think by Viktor Mayer-Schönberger and Kenneth Cukier, less than 2% of the world’s information is stored in non-digital form, and technology is constantly becoming more deeply embedded in every moment of our lives. With the advent of the smartphone, 10% of all photos ever taken in history were taken in the last twelve months. Near the end of 2013, Business Insider noted that more than 14 million photos were uploaded to Facebook every hour. Time recently cited a Nielsen report stating that “Twitter activity reaches new people, and those new people … actually change [their] behavior based on a tweet.”

As surveillance technologies shrink in cost and grow in sophistication, we are increasingly unaware of the vast, cumulative data we offer up. According to a Wall Street Journal analysis, a typical American’s data is collected in more than 20 different ways during the course of everyday activity. In an era when cellphone data, web searches, online transactions, and social-media commentary are actively gathered, logged, and cross-compared, we’ve seemingly surrendered to the inevitability of trade-offs in a digital future.

It’s easy to understand how we arrived here. Our new vernacular is a series of logins, sign-ups, replies, tags, upvotes, check-ins, likes, loves, links, tweets, and retweets. These memes and seemingly involuntary responses that populate our days feel ephemeral and momentary—practically inconsequential. Yet the details we offer up are more concrete and cumulative than they seem. Our digital breadcrumbs are everywhere, and they extend well beyond a Facebook profile or the constant GPS data our mobile phones transmit from our pockets and purses.

We have opted into an unfathomable interdependency. Kevin Kelly, author of What Technology Wants, said “In the future, we’ll find it easier to love technology. Machines will win our hearts with every step they take in evolution.” Undoubtedly, this is a co-evolution. It’s a symbiotic relationship where we are becoming more and more enmeshed and less aware of the capacity of this evolving interconnection. Arguably, we are no more mindful of the bits and bytes that we tap, swipe, and key than we are of our own breathing. It’s a compulsory affair built on convenience and reward.

The inevitable price of the convenience of opting in is compromise

The rhetoric of data “sharing” assumes reciprocity. Although it may be logical to assume that we retain the power to control our digital privacy, like the bar-coded plastic membership cards that dangle from our keychains, our privacy is quickly slipping through our fingers. The true heirs of this data are platforms like Facebook, Google, and Microsoft, to which we have gifted seemingly insignificant data—under the guise of “sharing.” As search engines, social sites, and dating platforms share more and more user searches and preferences, it is increasingly clear that the true reciprocity in this relationship is more often between other social platforms and third parties. In fact, in January of this year Facebook formalized its “reciprocity policy,” stipulating “… if you use any Facebook APIs (application programming interfaces) to build personalized or social experiences, you must also enable people to easily share their experiences back with people on Facebook.” Translation: I’ll share my (user) data if you share yours.

More recently, information scholar Michael Zimmer published what has been called “The Zuckerberg Files.” The archive is a scholarly collection hosted by the University of Wisconsin-Milwaukee that attempts to track every public utterance of the Facebook CEO. Of note is the number of times Zuckerberg has publicly spoken the word “privacy”: according to the archive, a total of three times to date. In October, Facebook scrapped the privacy setting that let users mask their profiles from name searches, calling the feature outdated.

The next day, Google revealed a program that allows the company to use its customers’ own words and likenesses in ads for products they comment on, follow and “+1” (Google’s equivalent of a “like”). Google’s policy states that “you’re in control of what you share.” Facebook has asserted similar messaging, but “Public” has been its default setting since it radically changed its privacy options without notifying users in 2009.

This spring, Microsoft launched a campaign for its new Do Not Track browser. Describing the company as a committed advocate of consumer privacy, Ryan Gavin, Microsoft’s general manager for Windows, contended, “Privacy is core to the Microsoft brand. It’s not a flash in the pan for Microsoft.” Given that Microsoft is one of the companies accused of trafficking data to the National Security Agency, some view this release as little more than feigned concern for the privacy of consumer data. Still, the “good guy” approach certainly doesn’t hurt when attempting to differentiate from competitor Google, which paid a $22.5 million settlement to the Federal Trade Commission for privacy violations and faces antitrust and privacy challenges from the European Union.

Of the participants implicated in the NSA’s PRISM scandal exposed by Edward Snowden, Google appears to be getting squashed in the media. Charles Arthur, technology editor for The Guardian, asserts “Google+ isn’t a social network; it’s The Matrix.” Arthur makes the case that, while many may evaluate Google+ by social-media standards, it’s not a social-media platform. He argues that in reality Google+ functions as an invisible veil between users and the web. Arthur and others have asserted that Google has objectives beyond indexing the web using its analytic capabilities, which extend to its users.

Faced with suspect privacy policies, governments and institutions alike are scrambling to anonymize data, but this is proving to be a difficult—if not futile—endeavor. In an article for The Guardian this year, author Cory Doctorow illustrated how easy de-anonymizing data could be. He wrote, “There are lots of smokers in the health records, but once you narrow it down to an anonymous male black smoker born in 1965 who was presented at the emergency room with aching joints, it’s actually pretty simple to merge the ‘anonymous’ record with a different ‘anonymized’ database and out pops the near-certain identity of the patient.”
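Doctorow’s scenario is a classic linkage attack: two datasets that are each “anonymized” on their own can be joined on shared quasi-identifiers (sex, birth year, smoker status) to recover identities. The sketch below illustrates the idea with invented toy data and field names—nothing here comes from a real dataset:

```python
# Toy illustration of a linkage attack on "anonymized" data.
# Both datasets are hypothetical; the field names are invented.

# "Anonymized" hospital records: names removed, quasi-identifiers kept.
health_records = [
    {"sex": "M", "birth_year": 1965, "smoker": True,  "complaint": "aching joints"},
    {"sex": "F", "birth_year": 1972, "smoker": False, "complaint": "migraine"},
]

# A second dataset (say, a leaked loyalty-card list) that carries the
# same quasi-identifiers alongside actual identities.
loyalty_list = [
    {"name": "J. Doe",   "sex": "M", "birth_year": 1965, "smoker": True},
    {"name": "A. Smith", "sex": "F", "birth_year": 1980, "smoker": False},
]

QUASI_IDENTIFIERS = ("sex", "birth_year", "smoker")

def link(records, identities, keys=QUASI_IDENTIFIERS):
    """Join two datasets on their shared quasi-identifiers."""
    index = {tuple(p[k] for k in keys): p["name"] for p in identities}
    matches = []
    for rec in records:
        name = index.get(tuple(rec[k] for k in keys))
        if name is not None:
            matches.append((name, rec["complaint"]))
    return matches

print(link(health_records, loyalty_list))
# → [('J. Doe', 'aching joints')]
```

A single unique combination of quasi-identifiers is enough: the “anonymous” male smoker born in 1965 pops out with a name attached, which is why stripping names alone rarely anonymizes anything.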

Unfortunately, our Orwellian course has been set. Not only have we already offered up oceans of data; the world we live in is saturated with sensors and surveillance technologies. No longer is a keystroke needed for digital exchange. Soon, biometrics will replace all password transactions. At September’s TechCrunch Disrupt Conference, Google’s security executive avowed that passwords are dead. The proliferation of biometric ATMs that launched in Japan in 2006 (and, more recently, fingerprint recognition released with this year’s Apple iPhone 5S), among other biometric technologies, will usher in the next generation of data exchange. Gestural semantics, retinal communication, and facial recognition will emerge as part of future bionetic interfaces. While facial recognition has not yet reached maturation, Facebook has the largest biometric database in the world.

Massive amounts of biometric data offer a brand the potential to access user data discreetly, without the assistance of a handheld device. Simple surveillance technologies can use facial recognition to assess buying habits, establish credit potential, or predict any number of purchase behaviors. Last year, a handful of retailers deployed bionic mannequins equipped with facial recognition software, surveillance cameras, and sensors—capable of tracking shoppers’ gender, age, race, facial expressions, and more. Other technologies, which use customers’ in-store Wi-Fi connections to track traffic patterns and shopper frequency, are being used by retailers ranging from Family Dollar to Warby Parker.

These are the tools of the future of retail. Advancing bionetic interfaces may mean that the future of brand engagement will evolve into equal parts biology and psychology. Product, place, price, and promotion will be addressed with surgical precision. The galaxies of data most of us have already offered up may simply be waiting for the right technology to contextualize them—and it all begins with opting in.

Benjamin Franklin said, “Those who surrender freedom for security will not have, nor do they deserve, either one.” The inevitable price of the convenience of opting in is compromise. The promise of big data cannot be segregated from this price. Embracing the radical transparency at our threshold, many see a potentiality that far outweighs the threat—after all, what do we have to hide? Yet, privacy is not secrecy—and while there are things we should be comfortable bearing, our dignity should not be one of them. Whistleblower Edward Snowden said his biggest fear was that we “won’t be willing to take the risks necessary to stand up and fight to change things.” So, knowing what you know now, do you choose the red pill or the blue one?

Image of drinking sheep courtesy Shutterstock

Bryan Goodpaster
Bryan Goodpaster is a creative director at LPK, where he is often called upon for his non-traditional approach and strategic consultancy—helping crack wicked brand problems and strategic conundrums for many category-leading brands. Part semiotician, part psychologist, Bryan has known what you really meant by that for over 15 years.


