
Designing Agentive Technology: AI That Works for People

by Christopher Noessel

An excerpt from Chapter 12 of the book by Christopher Noessel: “Utopia, Dystopia, and Cat Videos”

 

No technology is neutral. Not even the humble thermostat. It is apparent to any layperson that general artificial intelligence stands to transform culture in far-reaching and fundamental ways. It is frightening to consider that we don’t know what life looks like should super artificial intelligence evolve. But what about agentive technology? What inherent biases does it have? How will it be abused? What does it afford? What does it want?

I’m only an armchair philosopher, so please forgive me if this is less a unified theory of agency and more a collection of forward-looking questions and answers that have come up as I worked on, discussed, and wrote about agentive technology with people all over the world.

But before we get there, let me explain the title of the chapter. Genevieve Bell is currently the Vice President and Fellow at Intel leading the Corporate Sensing and Insights group. Over the past few years, she and I have wound up speaking at a few of the same events. At Interaction12 in Dublin, I saw her give one of my favorite talks, in which she reviewed the predictions that preceded the advent of some older technologies. Across several case studies, she noted that predictions tend toward the extremes: the coming technology will either usher in a new golden age or bring about a new dark age, with little nuance in between. Her main example recounted some of the public predictions made at the turn of the prior century about the new-fangled electric light. *gasp*

The Tyranny of the Light Bulb

On one side, you had some people at the time who were really excited about the coming age of light. Crime would disappear, the argument went, since we could easily light every street, alley, and walkway brilliantly, giving the criminals nowhere to ply their dark trade. It would even mean universal education, since people could continue reading after the sun went down and there was no more work to do.

On the other side, you had some dire predictions as well. The working class would suffer greater exploitation since fat-cat business owners could illuminate their factories and eliminate sundown as an excuse to go home. Others worried that our circadian rhythms would fall out of sync with the sun and cause us no end of physical maladies.

Of course, now that we’re over 100 years into electric light, each of these predictions seems naively extreme. The truth turned out to be more mundane, somewhere in the middle. Crime doesn’t like well-lit areas, but it didn’t disappear, because we didn’t light everything all the time, and some crime happens just fine in well-lit places. Electric light didn’t mean universal education, either, because people had other things they wanted to do with the extra hours, like talking or playing parlor games. It wasn’t just more reading.

Similarly, yes, factory interiors were illuminated, but that meant a more predictable work schedule throughout the year, which benefitted businesses and workers alike. And factories weren’t the only places illuminated. Nearby pubs were lit by electric light as well, giving those same workers the opportunity to socialize a bit with coworkers before heading home on roads lit by electric light, which was a decidedly positive addition to their days. And for circadian rhythms, well, OK, maybe that prediction was spot on. It’s hard to say. I need a disco nap.

The sum effects of electric light weren’t exactly neutral, either. It enabled around-the-clock living, study, book-writing, and work. It enabled thicker and deeper buildings, with people squirreled away inside, cut off from natural sunlight. It drove the creation of our electricity infrastructure, which powered a great deal more than just light bulbs, and drove us to pollute more of the world. It added new shadows to our lives. It gave us movies and saved us countless minutes of stoking fires and lighting candles. It meant fewer things, you know, burning down. I can see positive and negative ways to interpret each of these effects.

But Will the Internet Save Us?

Bell recounts similar prediction patterns at the advent of the internet. Some predicted the complete loss of any sense of identity and a globally externalizing economy that would enslave us all. Others predicted the falling away of cultural differences and animosities since anyone could interact with anyone else around the globe at any time. A true golden age of peace. The reality, Bell notes, is much more mundane; we use this amazing worldwide connectivity machine to watch cat videos.

[Image: a cat video playing on a screen]

We should keep this recurring pattern in mind as we discuss agentive tech. Will the new age it ushers in be golden or dark? Although it has been around on factory floors and in computer science for a while, it still feels new to us. Our tendency may be to valorize or demonize it, when we should keep in mind that the truth will be something in between with much more nuanced — even if far-reaching — effects.

Dr. Jekyll and Mr. Agent

Let’s start with the big one. Should you be scared of agentive tech? Will it tend toward evil? How might it be used for ill gain? Throughout most of this book, I’ve presented a progressive view: that agents will have a positive effect on the lives of their users. I’m a designer by training: I want to make things work. But I’m also a skeptic, so let’s take some time to consider how they might be used perniciously.

The definition and qualities of agentive tech provided in earlier sections give you some foundational things to consider.

  • Agents are software that acts on your behalf.
  • That software monitors data streams and responds to triggers with rules.
  • In the best cases, you tune them over time to be more effective and tailored for your needs.
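To make those three qualities concrete, here is a minimal sketch, in Python, of the monitor-trigger-rule loop they describe. Everything in it (the ThermostatAgent, the Rule type, the threshold) is hypothetical, invented for illustration.

```python
# A minimal sketch (not from this book) of the agentive pattern described
# above: software that acts on your behalf by monitoring a data stream and
# responding to triggers with rules. All names here are hypothetical.

from dataclasses import dataclass
from typing import Callable, Iterable, List

@dataclass
class Rule:
    trigger: Callable[[float], bool]  # condition checked against each reading
    action: Callable[[float], None]   # what the agent does when it fires

class ThermostatAgent:
    def __init__(self, rules: List[Rule]):
        self.rules = rules

    def monitor(self, stream: Iterable[float]) -> None:
        """Watch a stream of readings and respond to any triggered rules."""
        for reading in stream:
            for rule in self.rules:
                if rule.trigger(reading):
                    rule.action(reading)

# Tuning the agent over time can be as simple as adjusting this threshold.
comfort_threshold = 19.0
agent = ThermostatAgent([
    Rule(trigger=lambda t: t < comfort_threshold,
         action=lambda t: print(f"{t} C is chilly; turning the heat on")),
])
agent.monitor([21.5, 20.1, 18.4])  # fires only on the last reading
```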

Pull on the threads of these descriptions, and you can unravel some potential problems.

That it is software means it will come with all the problematic aspects of that medium. Notably, most users of software cannot open it up to investigate what’s inside if they suspect some problem or malfeasance. Worse, agents often operate when they’re out of your attention, and if they are doing nefarious things, you may not even know it.

Agents may not even need to be hijacked by criminals to behave in unsavory ways. In 2008, rumors began to circulate across the internet that Apple’s “shuffle” algorithm felt less than random. David Braue of CNET ran a series of well-controlled tests on the software and showed that, yes, the “random” algorithm appeared to favor music that had been purchased from Apple’s iTunes store over music that had been ripped from CDs, and it also favored artists from certain labels, notably Universal and Warner, presumably as a result of closed-door deals.
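To see how a shuffle can be biased and yet still feel plausibly random to any one listener, consider this hedged Python sketch of a weighted shuffle. It is emphatically not Apple’s actual code; the tracks, sources, and weights are invented.

```python
# A weighted "shuffle": tracks with higher weights tend to surface earlier
# and more often. Any single playlist still looks random, which is why the
# bias took well-controlled testing, like Braue's, to expose.

import random

# (track, source, weight) triples; the weights are hypothetical
library = [
    ("Track A", "itunes_purchase", 3.0),
    ("Track B", "ripped_cd", 1.0),
    ("Track C", "itunes_purchase", 3.0),
    ("Track D", "ripped_cd", 1.0),
]

def biased_shuffle(tracks):
    """Draw tracks without replacement, favoring higher-weighted ones."""
    pool = list(tracks)
    order = []
    while pool:
        weights = [weight for _, _, weight in pool]
        pick = random.choices(pool, weights=weights, k=1)[0]
        pool.remove(pick)
        order.append(pick[0])
    return order

print(biased_shuffle(library))  # purchased tracks tend to land earlier
```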

Obviously, companies can control their message very precisely to avoid promising objectivity in their agents’ behaviors, and as long as the effects are subtle, they will be able to get away with it. Imagine if the Narrative camera were programmed to favor, from the thousands of images taken across the day, those that included certain brands. Or worse, to insert those brands as subtly as possible into the images it took. Once I had iOS autocorrect the word “principled,” a perfectly valid English word that did not need correcting, to the brand name “Pringles,” and my first thought was, “Was this paid advertising?” Ethical product managers will need to take pains to prevent abuse of the algorithmic nature of agents and to ensure that their agents can be examined.

Deliberate malice may not even be the biggest concern. Any particular algorithm can formally encode the unconscious social or cognitive biases of its authors. You’ve probably already seen the early face-recognition software from HP that couldn’t recognize dark-skinned people, or Nikon’s camera AI that, after every single snap, asked photographers whether subjects with epicanthic folds were blinking. I trust that neither was the result of deliberate malice on the part of the developers or the organizations, but the result is that the software is biased, working well for some people and poorly for others. On the bright side, once encoded, these biases can be explicitly exposed, critiqued, and corrected.

You might think that open-source agents are the answer, but general consumers won’t have the programming expertise to understand what they see even if they do look under the hood. Still, with open source, at least other experts (or other agents) would be able to examine the code or run tests like Braue’s, and raise the alarm for general consumers through social media or news channels. If, of course, consumers care. The furor over Apple’s non-random shuffle was short-lived: Apple officially denied it, and the story died down. iTunes is still out there.

It’s not just the agents themselves that will be gamed. Once agents begin to have an effect on the marketplace, companies that depend on those effects will do their best to game the data that drives the agent. Search engines and websites have been locked in a “better mousetrap” arms race for most of the history of the web. If you’re not familiar with it, programs called web crawlers go from website to website and “read” each page to determine its content, quality, and relevance to people’s searches. Because ranking high in the search results is critical for some businesses, a whole profession of specialists has emerged to reverse-engineer the key attributes those web crawlers look for, and then to heavily modify web pages to match those attributes. In this way, these search engine optimizers are gaming the data that drives the web crawler. So even if you could engineer open-source agents whose behavior holds no surprises, you would still need to make sure that the data streams they monitor are as clean as possible. Which might mean more agents.
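As a hedged illustration of why that data stream can be gamed, here is a toy relevance score of the kind a crawler might compute. Real ranking algorithms are far more sophisticated; the pages and the scoring rule below are invented.

```python
# Naive term-frequency "relevance": what fraction of the page is the query
# term? Stuffing a page with that term inflates the score, which is exactly
# the attribute a search engine optimizer would reverse-engineer and game.

def relevance(page_text: str, query: str) -> float:
    words = page_text.lower().split()
    if not words:
        return 0.0
    return words.count(query.lower()) / len(words)

honest = "we sell handmade oak tables and chairs for your dining room"
stuffed = "tables tables cheap tables best tables buy tables tables now"

print(relevance(honest, "tables"))   # modest score
print(relevance(stuffed, "tables"))  # inflated by keyword stuffing
```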

A last concern from the definition of agents comes from the idea that an agent becomes increasingly tailored to its user over time. That’s good from the perspective of service, but it poses a risk: sophisticated agents will slowly build a detailed model of their users, and that model is a dark temptation to identity hackers. Even if the data an agent holds is as simple as emails and calendar events like birthdays, gaining access to the agent’s model of its user can support more serious identity theft.
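As a hypothetical sketch of what such a user model might look like, here are a few lines of Python. The categories and values are invented, but they show how individually mundane observations accumulate into exactly the profile an identity thief wants.

```python
# A toy user model of the kind an agent might build up over time.

from collections import defaultdict

class UserModel:
    def __init__(self):
        self.facts = defaultdict(list)

    def observe(self, category: str, value: str) -> None:
        """Record one fact gleaned from the user's data streams."""
        self.facts[category].append(value)

model = UserModel()
model.observe("email", "pat@example.com")  # from mail monitoring
model.observe("birthday", "1984-03-12")    # from calendar events
model.observe("home_location", "Dublin")   # from photo metadata
# Each fact is benign alone; together they answer common security questions.
```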

All told, the nature of agents poses some fairly serious threats via malicious actors. The security pressures on agents will be great. But I should note that not all agents can be misused. I’m not worried about either the Orbit Yard Enforcer or the Roomba. At least not in their current, well-constrained forms.

If you want to read the whole book, you can find it at Rosenfeld Media.
