
The Hubris and Humility of Opting In

by Bryan Goodpaster

As we opt in to giving more and more of ourselves over to technology, we risk trading dignity for convenience.

In the wake of the National Security Agency’s big-data dragnet—and a slew of security gaffes prompting debate in the UK and Canada—digital users share a heightened sense of vulnerability. The scale of our exposure highlights the need to re-evaluate our daily interactions with technology and their impact on our privacy. There is a growing sense of imbalance in this evolving relationship. We now wake more often to email than to the sunrise, and the digitization of our daily interactions is slowly and subtly taking the reins from our grasp.

In hindsight, our ascent into the digital age has always had strings attached. Historically speaking, our human-technological interdependency is rather young: just a little over a decade ago, only 25% of the world’s information was stored digitally. Today, according to Big Data: A Revolution That Will Transform How We Live, Work, and Think by Viktor Mayer-Schönberger and Kenneth Cukier, less than 2% of the world’s information is stored in non-digital form, and technology is becoming ever more deeply embedded in every moment of our lives. With the advent of the smartphone, 10% of all photos ever taken were taken in the last twelve months. Near the end of 2013, Business Insider noted that more than 14 million photos were uploaded to Facebook every hour. Time recently cited a Nielsen report, stating that “Twitter activity reaches new people, and those new people … actually change [their] behavior based on a tweet.”

As surveillance technologies shrink in cost and grow in sophistication, we are increasingly unaware of the vast, cumulative data we offer up. According to a Wall Street Journal analysis, a typical American’s data is collected in more than 20 different ways during the course of everyday activity. In an era when cellphone data, web searches, online transactions, and social-media commentary are actively gathered, logged, and cross-compared, we’ve seemingly surrendered to the inevitability of trade-offs in a digital future.

It’s easy to understand how we arrived here. Our new vernacular is a series of logins, sign-ups, replies, tags, upvotes, check-ins, likes, loves, links, tweets, and retweets. These memes and near-involuntary responses that populate our days seem ephemeral and momentary—practically inconsequential. Yet the details we offer up are more concrete and cumulative than they appear. Our digital breadcrumbs are everywhere, and they extend well beyond a Facebook profile or the constant GPS data our mobile phones transmit from our pockets and purses.

We have opted into an unfathomable interdependency. Kevin Kelly, author of What Technology Wants, said, “In the future, we’ll find it easier to love technology. Machines will win our hearts with every step they take in evolution.” Undoubtedly, this is a co-evolution: a symbiotic relationship in which we are becoming more and more enmeshed, and less aware of the scope of this evolving interconnection. Arguably, we are no more mindful of the bits and bytes we tap, swipe, and key than we are of our own breathing. It’s a compulsive affair built on convenience and reward.

The inevitable price of the convenience of opting in is compromise

The rhetoric of data “sharing” assumes reciprocity. It may be logical to assume that we retain the power to control our digital privacy, but, like the bar-coded plastic membership cards that dangle from our keychains, our privacy is quickly slipping through our fingers. The true heirs of this data are platforms like Facebook, Google, and Microsoft, to which we have gifted seemingly insignificant data under the guise of “sharing.” As search engines, social sites, and dating platforms share more and more user searches and preferences, it is increasingly clear that the real reciprocity in this relationship is between the platforms themselves and third parties. In fact, in January of this year Facebook formalized its “reciprocity policy,” stipulating that “… if you use any Facebook APIs (application programming interfaces) to build personalized or social experiences, you must also enable people to easily share their experiences back with people on Facebook.” Translation: I’ll share my (user) data if you share yours.

More recently, information scholar Michael Zimmer published what has been called “The Zuckerberg Files,” a scholarly archive hosted by the University of Wisconsin-Milwaukee that attempts to track every public utterance of the Facebook CEO. Of note is how rarely Zuckerberg has publicly spoken the word “privacy”: according to the archive, only three times to date. In October, Facebook scrapped the privacy setting that let users mask their profiles from being searched by name, saying the feature was outdated.

The next day, Google revealed a program that allows the company to use its customers’ own words and likenesses in ads for products they comment on, follow, and “+1” (Google’s equivalent of a “like”). Google’s policy states that “you’re in control of what you share.” Facebook has asserted similar messaging, but “Public” has been its default setting since it radically changed its privacy options without notifying users in 2009.

This spring, Microsoft launched a campaign around the Do Not Track setting enabled by default in its new browser. Describing the company as a committed advocate of consumer privacy, Ryan Gavin, Microsoft’s general manager for Windows, contended, “Privacy is core to the Microsoft brand. It’s not a flash in the pan for Microsoft.” Given that Microsoft is one of the companies accused of trafficking data to the National Security Agency, some view this release as little more than feigned concern for the privacy of consumer data. Still, the “good guy” approach certainly doesn’t hurt when attempting to differentiate from competitor Google, which doled out a $22.5 million settlement to the Federal Trade Commission for privacy violations and faces antitrust and privacy challenges from the European Union.

Of the companies implicated in the NSA’s PRISM scandal exposed by Edward Snowden, Google appears to be taking the hardest beating in the media. Charles Arthur, technology editor for The Guardian, asserts, “Google+ isn’t a social network; it’s The Matrix.” Arthur makes the case that, while many may evaluate Google+ by social-media standards, it is not a social-media platform; in reality, he argues, Google+ functions as an invisible veil between users and the web. Arthur and others contend that Google’s analytic capabilities serve objectives beyond indexing the web, and that those objectives extend to its users.

Faced with suspect privacy policies, governments and institutions alike are scrambling to anonymize data, but this is proving to be a difficult—if not futile—endeavor. In an article for The Guardian this year, author Cory Doctorow illustrated how easy de-anonymizing data can be. He wrote, “There are lots of smokers in the health records, but once you narrow it down to an anonymous male black smoker born in 1965 who was presented at the emergency room with aching joints, it’s actually pretty simple to merge the ‘anonymous’ record with a different ‘anonymized’ database and out pops the near-certain identity of the patient.”
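What Doctorow describes is a classic linkage attack: joining two “anonymized” datasets on the quasi-identifiers they share. As a minimal sketch of those mechanics (Python with pandas; the records, names, and columns are entirely invented for illustration):

```python
import pandas as pd

# "Anonymized" health records: direct identifiers removed, but
# quasi-identifiers (sex, race, birth year, smoker status) remain.
health = pd.DataFrame([
    {"sex": "M", "race": "black", "birth_year": 1965, "smoker": True,
     "complaint": "aching joints"},
    {"sex": "F", "race": "white", "birth_year": 1972, "smoker": False,
     "complaint": "migraine"},
])

# A second, separately "anonymized" dataset -- say, a marketing list --
# that happens to retain names alongside the same quasi-identifiers.
marketing = pd.DataFrame([
    {"name": "J. Doe", "sex": "M", "race": "black", "birth_year": 1965,
     "smoker": True},
    {"name": "A. Roe", "sex": "F", "race": "white", "birth_year": 1972,
     "smoker": False},
])

# Joining on the shared quasi-identifiers re-identifies the patients:
# each unique combination collapses to a single named individual.
reidentified = health.merge(
    marketing, on=["sex", "race", "birth_year", "smoker"]
)
print(reidentified[["name", "complaint"]])
```

Real datasets make for noisier joins, but the principle holds: the more quasi-identifiers two datasets share, the fewer people each combination can describe, and the closer “anonymous” gets to a name.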

Unfortunately, our Orwellian course has been set. Not only have we already offered up oceans of data; the world we live in is saturated with sensors and surveillance technologies. No longer is a keystroke needed for digital exchange, and biometrics may soon replace password-based transactions altogether. At September’s TechCrunch Disrupt conference, Google’s security executive avowed that passwords are dead. The biometric ATMs that began proliferating in Japan in 2006 and, more recently, the fingerprint recognition released with this year’s Apple iPhone 5S, among other biometric technologies, will usher in the next generation of data exchange. Gestural semantics, retinal communication, and facial recognition will emerge as part of future bionetic interfaces. And while facial recognition has not yet reached maturity, Facebook already holds the largest biometric database in the world.

Massive amounts of biometric data offer a brand the potential to access user data discreetly, without the assistance of a handheld device. Simple surveillance technologies can use facial recognition to assess buying habits, establish credit potential, or predict any number of purchase behaviors. Last year, a handful of retailers deployed bionic mannequins equipped with facial-recognition software, surveillance cameras, and sensors—capable of tracking shoppers’ gender, age, race, facial expressions, and more. Other technologies, which use customers’ in-store Wi-Fi connections to track traffic patterns and shopper frequency, are being used by retailers ranging from Family Dollar to Warby Parker.

These are the tools of the future of retail. Advancing bionetic interfaces may mean that the future of brand engagement will evolve into equal parts biology and psychology. Product, place, price, and promotion will be addressed with surgical precision. The galaxies of data most of us have already offered up may simply be waiting for the right technology to contextualize them—and it all begins with opting in.

Benjamin Franklin is often credited with the warning, “Those who surrender freedom for security will not have, nor do they deserve, either one.” The inevitable price of the convenience of opting in is compromise, and the promise of big data cannot be segregated from that price. Embracing the radical transparency at our threshold, many see a potential that far outweighs the threat—after all, what do we have to hide? Yet privacy is not secrecy, and while there are things we should be comfortable baring, our dignity should not be one of them. Whistleblower Edward Snowden said his biggest fear was that we “won’t be willing to take the risks necessary to stand up and fight to change things.” So, knowing what you know now, do you choose the red pill or the blue one?

Image of drinking sheep courtesy Shutterstock

Bryan Goodpaster
Bryan Goodpaster is a creative director at LPK, where he is often called upon for his non-traditional approach and strategic consultancy—helping crack wicked brand problems and strategic conundrums for many category-leading brands. Part semiotician, part psychologist, Bryan has known what you really meant by that for over 15 years.
