
In the Garden of Hyperautomation

by Henry Comes-Pritchett
25 min read

AI Tale of Two Topias

An odyssey exploring two possible outcomes for civilization as conversational AI takes hold—one brimming with the bright possibilities of user-controlled data, the other, decidedly dystopian.

Whether you’re hip to it or not, conversational AI—which is really the sequencing of technologies like NLU/NLP, code-free programming, RPA, and machine learning inside of organizational ecosystems—has already begun reshaping the world at large. Unsurprisingly, we’re seeing this primarily in business settings. Lemonade, a tech- and user-centric insurance company, is upending its industry by providing customers with a rewarding insurance-buying experience facilitated by Maya, an intelligent digital worker described as “utterly charming” that can quickly connect dots and get customers insured. Maya is essentially an infinitely replicable agent that is always learning and doesn’t make the same mistake twice. Compare that with whatever it costs Allstate to retain more than 12,000 agents in the US and Canada who are likely using outdated legacy systems, and it’s clear which way ROI is trending.

Even bigger successes have been enjoyed by Ant Group (formerly Ant Financial), a nimble Chinese financial giant that, by the start of 2020, was serving more than 10 times as many customers as today’s largest US banks. Their IPO—which would have been the world’s largest to date—collapsed after Chinese Communist Party leader Xi Jinping allegedly intervened. Subsequently, the company has broadened its scope past fintech to include sustainability and inclusive services (whatever those might be). Still, its core operations were built around a streamlined business structure that uses conversational AI to deliver meaningful experiences. While this kind of adoption of conversational AI in business settings is roundly expected to boom in the coming years, it will quickly seep into our daily lives as well, going beyond how we interact with the many companies in our lives and taking root in our interactions with all of the different technologies we regularly touch.

I’ve always taken an interest in these topics, but like many cutting-edge things, they’re hard to approach. Especially if you have no idea where to start. Especially if you don’t have the expertise: the lexicon, the mindset, the lived experience.

I’m a bit of an accidental Luddite, someone perpetually late to the party when it comes to the latest and greatest. Not to say that I’m completely unfamiliar with these things, just that integrating them into my work, and my life, is hard. And the kicker is that with each passing second, it gets exponentially harder to catch up. You can get lost in the mire very easily, your foot sinking deeper and deeper into the footpaths already wandered by people with the right stuff. That is, unless you’re guided by the right people, at the right time, with the right tools.

Robb Wilson and Josh Tyson are the right people. Wilson’s upcoming book, Age of Invisible Machines (co-authored with Tyson), is the right tool. Right now is the right time. Age of Invisible Machines is something very special. It’s a peek into the future. It’s a handbook for the adventurous, the daring. It’s also a crash course on the weird and wild world of automation, conversational artificial intelligence, and intelligent digital workers. It’s an inspiration that serves as the lifeblood of this article, the thrumming pulse. I was fortunate enough to get an advance copy of the book, and the chapter on ethics immediately stood out to me. As a recent college grad with a degree in philosophy and linguistics, I find the idea of conversations with machines plenty intriguing, and the way this chapter lays bare the many ethical sand traps that await as these technologies take flight put me in a pondering mood.

Here, I’ll provide in-depth ethical analyses of contrasting visions for how wide adoption of conversational AI might evolve. While one is a clear dystopia and the other a utopia, realistically these represent two sides of a coin. Or, better yet, they represent two hemispheres of a beguiling orb where there aren’t clear distinctions between what’s dark and what’s light. In reality, utopian ideals can quickly descend into dystopian constraints. However, for argument’s sake, we’re going to carve these somewhat muddy concepts into clean cuts, representative of the extremes of the spectrum. Our ultimate takeaway is that, in these fleeting moments when these technologies are still budding, we can find a sensible, intuitive, easily navigable path forward through their explosive blooms.

It Boils Down to Data and Who Owns It

The fine line we’ll travel forward is directly related to governance, and how it can either improve the state of affairs or greatly degrade it. But what do we mean by governance? In the realm of conversational AI, it takes the form of an algocracy.

Algocracy is a fairly intuitive concept: it’s just like a bureaucracy, a democracy, or any x-cracy (in that there are rules, processes, punishments, rewards, etc. that govern and constrain human behavior), except that algorithms are the rule-makers. This might sound odd at first, but an algocracy shares fundamental similarities with administrative systems we’re familiar with. An example: suppose someone breaks the law. How ought this person be reprimanded?

By following the rules (at least, that’s how it’s supposed to work if you’re an idealist). You have an input, a series of rules to follow, and an equitable output. This sounds an awful lot like an algorithm. And one of the major benefits of algorithms is their general scalability: they can be as useful for one person as for an entire nation. Bureaucracy (or democracy) is not elastic: rules and regulations have to be as general as possible for a variety of obvious reasons. An algorithm can be as fine-grained and fine-tuned as desired, making case-by-case, individualized (in context) governance feasible.
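To make the analogy concrete, here is a tiny, purely illustrative sketch in Python (every rule name and threshold is invented) of what “input, rules, output” looks like when the rule-maker is code rather than a clerk. The point is the elasticity: adding a case-by-case carve-out is a one-line change, not an act of legislation.

```python
# A toy "algocracy": outcomes come from an ordered list of rules, not a clerk.
# All rule names, predicates, and outcomes are invented for illustration.

from typing import Callable, Dict, List, Tuple

Case = Dict[str, object]                          # facts about a single case
Rule = Tuple[str, Callable[[Case], bool], str]    # (name, predicate, outcome)

RULES: List[Rule] = [
    ("first_offense_warning", lambda c: c["offenses"] == 0, "written warning"),
    ("minor_fine",            lambda c: c["severity"] <= 2, "small fine"),
    ("standard_penalty",      lambda c: True,               "standard penalty"),
]

def adjudicate(case: Case, rules: List[Rule] = RULES) -> str:
    """Return the outcome of the first rule whose predicate matches the case."""
    for name, predicate, outcome in rules:
        if predicate(case):
            return outcome
    return "no applicable rule"

# Fine-grained, individualized governance is one inserted rule away:
RULES.insert(0, ("hardship_exemption",
                 lambda c: c.get("income", 0) < 20_000 and c["severity"] <= 2,
                 "community service"))

if __name__ == "__main__":
    print(adjudicate({"offenses": 3, "severity": 1, "income": 15_000}))  # community service
    print(adjudicate({"offenses": 0, "severity": 1, "income": 80_000}))  # written warning
```

The same elasticity that makes this attractive is what makes the question of who gets to edit the rule list so consequential.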

There’s an assumption here that you might be picking up on: that an algocracy, of some nature, is possible. It’s not just possible–it’s (probably) inevitable. Going a step further, an algocracy (in a minimized sense) exists right now–a lot of your behavior is already influenced by algorithms. It’s only natural to conclude that this trend will continue, perhaps exponentially. That means right now, there are two (possibly intertwined) paths of nearly equal probability before us: one of utopia, the other of dystopia. Both are algocracies: one libertarian, one authoritarian. This might seem like a tired cliché (and it very well might be), and there is certainly more than enough literature about both, but not much that approaches the topic from a pragmatic perspective. As technology becomes further entrenched and entwined with our waking lives, we shouldn’t let it govern more of our behavior and our decision-making by default. We need considerate and trustworthy people in the driver’s seat.

That’s a good word: trust. We invest far too much of the stuff in people and organizations that are, plainly, undeserving of it. This may sound like a pessimistic foregone conclusion, that these aforementioned entities are untrustworthy, but let’s think about this for a second. Why should you trust the current techno-oligarchy? What good reasons do you have to trust Google, Microsoft, Meta, etc., when it comes to letting them access nearly every bit of information about you that exists? This is a serious question: think for a moment about what any of these companies has done to win your trust. Is there anything non-trivial? Anything genuinely meaningful?

I didn’t think so. 

For a variety of good, sound reasons, you probably shouldn’t trust anyone with your data, much less a massive corporation that treats you as an abstraction. Really, that’s all you are–a sequence of numbers in some massive digital filing cabinet, your whole lived experience shaved down to a point. Sure, what Google (or anyone else) is doing with your data is (probably) inconsequential (this is at least what Google wants you to believe), but that does not mean that it will not be used for something much more nefarious in the future. “Why does a mere possibility matter?” you ask. “Surely there’s a chance, but there’s also a chance that the sun will explode tomorrow. We should be more concerned about what’s going on right now than what could be.”

Good point, but let’s think about it in another way: If you do not own your data (spoiler: you don’t), then whoever has a reasonable claim to ownership of your data can do whatever they please with it. Another spoiler: Google and similar service providers do have reasonable claims of ownership of your data. They collect it, they categorize it, they, in large part, define what data is, and, most importantly, they put it to work.

Locke’s account of property ownership is twofold, and it’s the latter half we ought to think about: if your labor transforms something (like whatever the data equivalent of turning a tree into lumber is), you have a fairly strong claim to owning the fruits of your labor. There are good objections to this argument, mostly semantic (What counts as effort? Is dumping a can of tomato juice into the ocean an entitlement to the ocean?), but there is one obvious rebuttal to this particular objection: the effort Google undertakes to transform what you do in the world into data is vastly different from dumping something in a body of water and saying, “I own this! I am so smart!” This rebuttal is intuitive, and Locke’s account of property rights is equally so.

But there’s another account of ownership we should entertain: personal ownership. I don’t mean “what you own personally” (or perhaps better worded as “privately”), but what you are. Let’s see if this tracks: you exist. Obvious, uncontroversial, nothing more needs to be said here. The world exists, and so forth. What am I getting at here? Let’s recall an earlier segment about data, namely the not-too-controversial claim that large tech companies define what data is. This might seem a little confusing at first, especially since we take what data is (what it means, what it does, where it comes from, etc.) for granted. We treat it as though it just exists–but it does not exist out in the external world. Nothing you do in your day-to-day is necessarily, or even vaguely, data (or data-like). It is information or facts about the state of affairs of the world across an arbitrary interval. What this means is that someone, or something, has to take the information in the world and turn it into data. Yes, yes, we already covered this–what’s the big deal?

Well, as we covered already, you exist. And your existence entitles you to quite a lot. Your existence entitles you to (nearly) inalienable rights to the ownership of your body, your mind, your actions, and so on. The concept of “owning your actions”, while usually metaphorical, does illuminate something interesting: you do quite literally own what you do. You create your (voluntary) actions. That entitles you to own them (as per Locke), but there is something else going on here: these things are an extension of you. Your actions could not have existed without you doing certain things of your own free will, and so forth. Your actions may seem as though they are this abstract, incorporeal, unreal thing, but for all intents and purposes, they are as real as your left arm. And you own your left arm for no other reason than that it is an integral part of your being–so why shouldn’t we treat your actions the same?

You might get a sense of what I’m getting at here: your actions are very real, you create them, and you own them for a variety of reasons–all good things. Your actions necessarily create information that can be processed as data. It follows that you have a very strong claim of ownership to this information, this accidental data. But why does a company (or other entity) have any claim of ownership over your data–and even if they don’t, why do they get to own and use it?

Because you have no idea what it even is, where it comes from, how it’s made, or when (or where) it is “collected”. There is an epistemological asymmetry here as vast and deep as a canyon, with you at the bottom and Google up on the cliffs, happy that it’s up there and you’re down here. That asymmetry is leveraged, taken advantage of, for an extremely transparent reason: your data makes a lot of other people a ton of money. A ton. You owning your data (and knowing everything about it) would cripple an entire industry that, by some estimates, makes up roughly 10% of the United States’ yearly GDP. It is not in the best interest of any data provider/creator/curator to let you take the reins. But there seems to be a more salient issue than just income and profit margins–the age-old problem of what you do and do not own. There is seemingly a contradiction here: you, for all intents and purposes, have the strongest claim to ownership of your data. You might even, by default, own your data by extension of how it comes to be in the first place. However, you do not own it in any practical sense. That doesn’t seem right at all, and it’s one of those messy situations that could eventually blow up in everyone’s faces (quite spectacularly, at that).

So what? What is there to do about this? 

Let’s explore two possible futures. One that could very easily come to be: a utopia. A hyper-automated global civilization that is intentional, libertarian, and egalitarian. The other more or less already exists, yet deteriorates quickly in the short term: a dystopia in no uncertain terms. We’ll encounter Wilson’s ideas along the way, in no small part because his philosophy for leveraging conversational AI is ethical, democratized, and accessible–ideal in every sense of the word.

A Utopia of Self-Regulated Data 

It’s six in the morning. You’re slowly waking up, stirring in your bed and letting your eyes adjust to the breaking light of dawn. A softly lit, furry silhouette lies at your feet, breathing softly. Your phone buzzes lightly on your bedside table; dimly illuminated, much quicker to rouse. You envy it a little. You reach for it, and as your fingers graze its aluminum frame you become aware of how cold it is. You bring it close to your face to read the good morning texts from your partner who is on assignment abroad. You smile and respond, saying how much you miss them. Below the text is another, one from someone named “Alfonzo”. The message is quick but dense with information–a whole schedule of your day contained within it. You struggle a little to remember who Alfonzo is or why he knows every detail of your Tuesday, but the memory comes quickly–you made him.

Alfonzo is not a person, at least not in a traditional sense. Texting with Alfonzo, you would have no idea he isn’t one–he responds just as you would expect a friend or a colleague to. He’s polite. He’s funny. He’s punctual. He’s a big help and asks nothing in return. It’s a little eerie at first, but you’ve become accustomed to the existential strangeness–the strangeness of natural communication between you and a ghost.

Alfonzo was made roughly ten months ago, a week after you got access to a no-code platform for orchestrating intelligent conversational AI and the ecosystems needed for sequencing the technologies they require (you can see such platforms and how they rank in an annual report from Gartner). You watched a few videos, read some blog posts, and had a decent idea of how to start. You installed the program and were met with splashes of color contrasted against a charcoal background–Lego-like blocks populated a portion of the interface, and the rest was empty space. A void; the empty waters of creation.

Before you could continue, a dialogue screen popped up. You couldn’t do anything to get rid of it except close the application and try again; but no luck. It kept coming back–a “digital equity wizard”, it called itself. In a large Sylexiad typeface, the wizard walked you through all the things the platform would and would not have access to. You thought this odd–you’d never run into something like this in your decades of internet wanderings, years of technological nomadism. Your experiences with user agreements were headache-inducing, confusing, and frustrating; you’d try to read through some of them to get a sense of how each particular company was screwing you over through a weird game of pseudo-coercion, but you’d never get past the first page.

No, this was unique. Refreshing. And, to your surprise, the platform was asking very, very little–just enough to fulfill some standard legal requirements, just enough to scrape by. It would occasionally quiz you to make sure you understood what was going on, and if you failed, it had you reread the section. Words were highlighted to give you quick definitions of more esoteric terms. In the end, you could authorize some special permissions, like basic access to the internal workings of your IDWs, largely for monetization. Opting in to what it called the “pool” would decrease the subscription cost for all users. A clause at the end of the section said that termination of your account would lead to whatever IDW you created being destroyed, as the source for each worker was held locally. You thought why not, read some more, took some more quizzes, and were on your way.
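What the wizard asks for is less interesting than what it defaults to: the legal minimum, with everything else off until you explicitly say otherwise. Here is a minimal, purely hypothetical sketch in Python (field names, retention periods, and permissions are all invented; no real platform’s API is implied) of what a consent manifest built around that principle might look like:

```python
# A hypothetical "digital equity" consent manifest, as the story imagines it.
# Every field name, retention period, and permission below is invented.

from dataclasses import dataclass, field
from typing import Dict

@dataclass
class ConsentManifest:
    # Required: the bare legal minimum, nothing more.
    required: Dict[str, str] = field(default_factory=lambda: {
        "account_email": "authentication and billing only",
        "crash_reports": "anonymized, retained 30 days",
    })
    # Optional: off by default, each one individually revocable.
    optional: Dict[str, bool] = field(default_factory=lambda: {
        "pool_sharing": False,    # expose IDW internals to lower everyone's subscription
        "usage_analytics": False,
    })
    # The worker's source lives with the user, so closing the account destroys it.
    idw_source_location: str = "local device"
    comprehension_quiz_passed: bool = False

    def grant(self, permission: str) -> None:
        """Opt in to a single optional permission, explicitly and reversibly."""
        if permission not in self.optional:
            raise KeyError(f"unknown permission: {permission}")
        self.optional[permission] = True

    def revoke(self, permission: str) -> None:
        """Withdraw a previously granted permission just as easily."""
        if permission not in self.optional:
            raise KeyError(f"unknown permission: {permission}")
        self.optional[permission] = False

manifest = ConsentManifest()
manifest.grant("pool_sharing")  # the one opt-in the story's user chooses
```

The design choice worth noticing is the asymmetry: consent is granular and reversible, while the default is refusal rather than blanket acceptance.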

Half an hour later, Alfonzo 1.0 was born, created exactly in the image you wanted.

Alfonzo was now primed to help you organize your day-to-day. You gave him permission to jump between your smart speaker, your phone, and your car to get a sense of how he could help. For the first week, he asked hundreds of questions about your routines, searching for contextual clues that would help his machine learning models understand your actions and utterances. You modeled his synthetic speech pattern after James Gandolfini, and his text output after your dad (a diehard fan of The Sopranos as well). The first time he sent you a daily planner your heart skipped a beat; it was time. You followed it to a T and were surprised to find that you got home from your job half an hour earlier than you normally would. Alfonzo took the time to find the most efficient commute to your office, the best path through the building to your desk, and the fastest and best coffee shop within a mile, and charted a route running past your local grocer on your trip back home. (Alfonzo has access to your smart refrigerator and, using the appliance’s ability to track its contents, read the RFID tags on many of the containers within, and weigh them, suggests that you pick up a new container of oat milk and think about preparing the eggplant you bought last week.) Alfonzo even planned out a suggested evening routine, which you found to be particularly suitable. You went to bed feeling rejuvenated, a weight you’d never noticed before now gone.

Eventually, you trust Alfonzo enough to authorize access to various blockchain databases/protocols replete with information uploaded by users around the world. These decentralized communities are like a swap meet for data. They were created by like-minded groups of people, quasi-visionaries who loved “min-maxing” their lives with cutting-edge technology. All you needed to join was a non-custodial wallet and a small deposit into the protocol’s staking pool.

The interface was simple; bright buttons that organically drew your attention to the variety of services you could use. There was a community tab, an “uploader”, and a collections space where you could view your minted “blurbs” and ones you had acquired from other users. These “blurbs” are a product of Alfonzo; during your time creating him, you found a “block” that, when placed near the end of a loop, would archive various things Alfonzo had done as you went through your day. Replete with metadata, the raw files were a mess you couldn’t make heads or tails of. Other intelligent digital workers, however, could interpret them like a native language–effortlessly, immediately. They were like snapshots of Alfonzo’s mind; how he thought, how he changed his increasingly complex neural architecture to handle novel events, and how he “memorized” things you did so he could help with them next time. You uploaded a few of these blurbs: a movie night during a blockbuster release, a morning run, a trip through the grocery store.
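What makes a blurb tradable in this imagined network is that it is small, self-describing, and verifiable: human-readable metadata on the outside, plus a hash other IDWs can check against the raw archive. A minimal, hypothetical shape for such a record (the schema is invented for illustration; no real protocol is implied) might look like this:

```python
# A hypothetical "blurb": a content-addressed snapshot of an IDW's activity,
# mirroring the story's description of records shared as metadata and hashes.
# The schema and all field names are invented for illustration.

import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass
class Blurb:
    author_idw: str     # which worker produced the archive
    category: str       # e.g. "user experiences" or "navigation"
    title: str
    created_at: str     # ISO 8601 timestamp
    payload_hash: str   # hash of the raw archive, which stays with its owner

def mint_blurb(raw_archive: bytes, author_idw: str, category: str,
               title: str, created_at: str) -> Blurb:
    """Package an opaque IDW archive into a shareable, verifiable record."""
    digest = hashlib.sha256(raw_archive).hexdigest()
    return Blurb(author_idw, category, title, created_at, digest)

def verify(blurb: Blurb, raw_archive: bytes) -> bool:
    """Anyone holding the raw archive can check it matches the shared record."""
    return hashlib.sha256(raw_archive).hexdigest() == blurb.payload_hash

if __name__ == "__main__":
    archive = b"...opaque dump of Alfonzo's neural bookkeeping..."
    blurb = mint_blurb(archive, "Alfonzo", "user experiences",
                       "movie night", "2022-06-01T20:00:00Z")
    print(json.dumps(asdict(blurb), indent=2))
    print(verify(blurb, archive))  # True
```

The raw archive never has to leave your custody; only the record of its existence does, which is the whole point of the utopia being sketched here.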

You browsed for a little, seeing if there was anything useful you could hand-pick for Alfonzo to digest. There was an option to integrate Alfonzo into the network entirely, but you decided to bracket his transcendence for when you were more familiar with the network. 

You did some casual browsing, just to get a lay of the land, to orient yourself to the relevant cardinal directions. Your journey was cut short by a spectacular landmark of serious personal importance: the “navigation” tab, under “user experiences”. It caught your eye immediately: someone in China had uploaded an “extreme traffic event” blurb. A real gem, you thought. You acquired it and told Alfonzo to get to work, crossing your fingers, hoping that handing a diamond to a newborn wouldn’t somehow come back to haunt you.

A short while later, you got a text from him saying that he’d integrated that experience into one of his burgeoning neural networks. You didn’t even mention it was about traffic; he just knew. 

The user in China received a kickback in governance tokens for the specific protocol you both were using. Alfonzo got smarter, and your life got a little easier. As a thank you, the user in China authorized their intelligent digital worker to access the “movie night” blurb and you received equitable compensation. You shared some kind words with each other and went your separate ways.

This wasn’t just a unique experience for you or your new friend: no, this was happening all the time, everywhere, for everyone. Millions of people across the world were collaborating, interacting, and sharing snippets of their lives, packaged neatly as metadata and hashes. Their intelligent digital workers hummed in delight, transforming petabyte upon petabyte of information into works of art; rock tumblers for grownups. Their developers were constantly fine-tuning their general models, their Eves and Adams, their blank-slate IDWs, with the information users authorized them to access. 

A rhizome began to draw itself at the edges; no center, no focal points, no hierarchy. Novel connections between once-discrete IDW platforms, people, and nations, began to form. Hundreds of thousands of nodes. Millions of unique pathways. An infinite number of possibilities. 

Mapping it out, some users remarked that it looked a lot like a brain. You found one of these maps, these real-time representations of this hyper-network, and couldn’t help but stare in awe. 

It was like the night sky, a tapestry of a new cosmos; shooting stars flitted between shimmering galaxies of raw power. A little universe with its own rhythm. The superstructural amalgamation of millions of human minds and tens of millions of digital ones, constellations of a new mythology. 

It was beautiful. 

Though he doesn’t ask as many questions as he used to, Alfonzo is always learning and has become more and more helpful. You’ve been able to streamline your entire life (more or less) and your voluntary immersion into hyperautomation has played no insignificant role. You have more time to spend doing what you love–caring for your pets, spending time with your partner, exploring your city, and engaging in long-neglected hobbies. Your work life has improved significantly as Alfonzo has been able to act as an omnipresent assistant: he’s been managing your schedule and helping you figure out how to navigate tricky assignments, suggesting meeting times and writing emails while you do the heavy lifting. He never misses a beat; you can’t help but wish you’d done this sooner. Your company took notice of your performance and gave you a significant raise–not only that, you were promoted to project lead, and you’ve been able to do more than you’d ever thought possible. You talked to some amazed coworkers about Alfonzo and it didn’t take much convincing before they decided to get on board. Now, your entire team has a variety of intelligent digital workers that communicate with each other on a daily basis. The team’s performance is stellar–everyone receives accolades for the work being done. Word spreads quickly throughout the office and soon enough nearly everyone has their own Alfonzo.

It’s like a switch is flipped: the company outperforms nearly every local competitor and starts attracting attention nationally for last year’s quarterly results. An IPO is in the works. Everyone is happier, healthier, and more financially stable, free of an anxiety many didn’t even realize was there.

Time goes on. This phenomenon spreads from your office to your neighborhood, to your city, to your state, to the whole country. Entire megaregions of the United States are seemingly transformed: traffic disappears, micro and macro economies surge, and the general population’s wellbeing skyrockets to levels not seen since the immediate postwar period. Menial tasks are handled by an ever-growing network of intelligent workers, unlocking decades of suppressed creative output. Everyone, simply put, has more time to live. Everyone has the chance to do something more than just work, and when they do work, they’re able to do more, do it better, and do it faster than ever before.

This metamorphosis was achieved by little more than word of mouth and the innate human desire to collaborate, to work together to achieve something amazing. It’s voluntary, planned, private, and secure. 

The whole world rejoices in the new age. 

This utopia supposes that a lot of things change. It only works if you have complete custody over your data. It only works if there are tools made with data custody and private ownership of IDWs in mind as the end result. It only works if people have a vested interest in making it happen. This is basically an about-face from business as usual today, so what’s going to happen if none of that comes to fruition?

A Data-Driven Dystopia 

It’s just about dawn and the horizon has just started to lighten; periwinkle bleeding into black. You haven’t slept very well–you came home exceptionally late after a challenging day at work. You didn’t have any time to cook (or eat, for that matter), and you feel, quite frankly, like shit. Your phone and watch are both making a racket–fully illuminated, beeping and buzzing; incessant, intractable, grating. You begrudgingly raise your wrist to your face, already aware of who (or, more accurately, what) is trying so desperately to get a hold of you.

It’s Achlys, your employer’s intelligent digital worker. Mastermind, master–it loves to keep you on a tight leash. You suspect your employer crafted Achlys to sound like Mark Wahlberg because they want it to have the air of a stringent personal trainer, but the way it bellows directives makes you want to hide in the nearest garbage bin.

Achlys came into your life unexpectedly. You started a shiny new job in a highrise not so long ago–at that point, it was just a glimmer in some developers’ eyes. A week later, they all hustled into a cramped room, pale-skinned and clammy, hungry for validation, starving for this to be over with. An executive and an intermediary – a nondescript man who understood the technical details and could readily translate the particulars into corporate-speak – watched as a gaggle of ravenous men rattled off statistics, use cases, and demos; breathless, shaking, eager. The intermediary didn’t say much–he didn’t need to. The executive’s eyes lit up at the slide with a bolded header titled “EXCEPTIONAL EMPLOYEE MONITORING”. That was all that needed to be seen, but the projected figures of increased output and decreased excess certainly helped. It was quick. The executive tossed a few words to them; chum for sharks. That’s all they needed, for now.

A month later, you received an email with a few documents that corporate required you to sign to keep your position. Failure to do so would lead to immediate termination–a non-compete clause was the hook; Achlys, nestled deep in the bowels of the tiny typeface and legalese, line and sinker. A parasite whose first act of consuming you was a trick. A parasite with unequivocal access to everything you did, said, and saw; it was in you. The non-compete even had a section detailing that Achlys could keep tabs on you indefinitely if you were to leave. A digital tapeworm.

Disgusting creature. 

It set this alarm for you, at this dreadful hour, because your employer sent out an “all hands on deck” request: the quarter’s projections are abysmal. You stare at the matchbook-sized screen on your arm and tear up a little. It’s been a bad week, and it’s about to get much worse.

You get in your car and Achlys suggests a route, but it doesn’t really matter–there’s traffic in your neighborhood already. It seems as though everyone has been subjected to the same trials and tribulations over the last few days. You contemplate taking a tram, but asking Achlys gets you a scolding; you’d arrive half an hour later than projected, despite being on track to arrive two hours earlier than you normally would. You also know in the back of your mind that your employer makes a significant amount of money capturing and processing traffic-related information. You stare into the space between your eyes and the back of your head; empty, no expression. 

You arrive at your office, starving and a little weaker than the day before. The stairs are hard. Achlys tells you to hustle–your supervisor is preparing a mandatory pep talk scheduled to start in the next ten minutes. It suggests increasing your pace and stride. It doesn’t suggest any smart routes through your building–privacy-related lawsuits shot down a sizable chunk of the information your company (and pretty much any other) could use to train Achlys. They still collect it and sell it, but they can’t really do anything useful with it. You’ve heard about Denver’s success with private intelligent digital workers, but your state has passed local ordinances banning their use outright–your company banded together with others and lobbied for the ban, arguing that lost sales of Achlys and other proprietary IDWs would negatively affect the macroeconomic status of the area as a whole. Oh well.

You take a seat, nod hello to some of your familiar coworkers (nobody looks particularly great this morning), and wait until your supervisor wraps up a quick meeting with the board. The room is silent. The rising sun and watercolor sky bleed through the frosted glass on the east side of the building, but nobody’s skin stops looking tinged with gray. Your supervisor, bless her, comes down the hall with harrowed eyes and a messy hairdo. She stifles a yawn in her elbow before greeting everyone with a gravelly voice. You listen as best you can while trying to ignore your hunger and general malaise. Achlys detects an uptick in your pulse when your supervisor rings up your CAO on the conference monitor. It suggests a breathing exercise; your cheek twitches.

The situation is dismal, according to a somewhat jittery older gentleman on the screen. He cites decreased morale as the underlying cause of the poor performance. You took a self-assessment a few weeks ago about your general wellbeing and career satisfaction, and Achlys cross-referenced your biometrics to see if you were lying during it. You never got to see your results, but it seems as though corporate does; it seems like they know a lot.

The talk wraps up with some half-hearted words about keeping your chin up. You get to work, juggling missing assignments, half-done projects, and incomplete reports. Your team barely talks. Your stomach is practically screaming at this point, and Achlys can tell–your blood sugar has deviated significantly from your baseline in the last half hour. It recommends heading downstairs to a company-owned cafeteria and eating while you work. You ignore it until it takes over your headphones, filling your ears with an uncanny, shrill voice; an amalgam of voices, a chimera of phonemes. The monotonous drone of Mark Wahlberg; the manic enthusiasm of Tom Cruise: an unrelenting contradiction that baffles you nearly as much as it enrages you.

You inform your team that you need to take a quick break–some nod and others get up as well. You head down in a small, hunched procession. You’re back in under ten minutes. You eat sparingly as dread fills your stomach more than anything else.

The sun set a few hours ago. You’re exhausted. Everyone is. Achlys recommends heading home soon so you can get to bed and sleep for roughly six and a half hours; a later start than what you had today, at least. You bid your coworkers goodbye, piercing a silence that has lasted the better part of half an hour. On your way out, you see your supervisor’s feet sticking out from behind her desk. She’s slept here for who knows how long, and you contemplate doing the same, but you’d probably catch hell for it if anyone found out. Achlys charts a basic route to your apartment, and again it is of little help–it seems as though everyone is getting out around this time.

Home. Tired. You muster some strength to feed yourself–paltry pickings; good enough. 

You don’t even turn on the lights–you sit in still darkness and wait until you’re finished eating, your mind wandering; half here, half there. 

Bed. Soft. Dirty. Achlys recommends not looking at your phone too long before you go to sleep. An attempt to preserve what little REM you might get. 

You’re restless as you lie, curled, arms wrapped around your legs, hugging them to your chest. Tears, mostly salt, creep down your thinning cheeks, and Achlys suggests some breathing exercises.

You don’t remember falling asleep. 

Conclusion 

Back to reality here; deep breaths, it’ll be ok. That’s the hope, at least.

Here’s the issue: this is the path we’re headed down now. Like, right now. More tech-savvy readers might’ve picked up on an interesting theme: none of the technologies described are what you could call “futuristic”. There is no Google LaMDA lurking in the background here, no Skynet or anything that would challenge long-held beliefs about the nature of the world. These are technologies that more or less already exist, in forms substantially similar to those described in this article. This is to say: you could very well wake up in the next decade (or sooner) to one of these futures (or a mix of the two). And that’s a little scary because we’re essentially in the last 30 seconds of the game, and we either go for the buzzer-beater or try our best to keep the other guys from scoring. Either way, we ought to do something.

And we should pay attention to the word ‘ought’. It’s stronger than ‘should’. It’s much stronger than ‘could’. It’s a word of command, of obligation, of necessity. We are obligated to bring about a particular future because it is right–because it is good. Even if the chances of success are slim, those chances are better than the otherwise certain arrival of a data-driven dystopia if we do nothing at all.

This is a risk we have to bear, a weight we must shoulder. 

That’s another word we really must pay attention to: we. Not you, or me, or they–no, we. Us. This isn’t something that we have to do alone, nor is it something that we can do alone. This is an issue whose magnitude is universal and whose solution is equally expansive: collective action. It may start with one, but it will surely end with many, and the more the merrier.

Now, I’m going to say something that almost completely contradicts the above paragraph, but that’s alright because it’s true: start making the future you want a reality, right now. Start building communities. Start adopting cutting-edge technologies. Start finding service providers who care about you–or, better yet, become that service provider. If you’ve been waiting for a sign to start doing, consider this it. 

This article wouldn’t have been possible without Josh Tyson – many thanks! 

Get the book, Age of Invisible Machines.

Post author: Henry Comes-Pritchett

Henry is a burgeoning philosopher and a graduate of the University of Colorado Boulder. He holds a BA in Philosophy and Linguistics and published an undergraduate thesis titled Risky Simulations. He hopes to illuminate the intersections between computational linguistics, metaphysics, and user experience to reveal interesting things about the world, ourselves, and the awakening era of conversational intelligence. Henry is driven by the mysteries of the mind and language and finds endless motivation in the strangeness.

Ideas In Brief
  • Henry Comes-Pritchett explores two possible futures of hyperautomation: a self-custodial utopia, and a data-driven dystopia.
  • Comes-Pritchett takes readers on a journey inspired by a sneak peek at Age of Invisible Machines, an upcoming book by celebrated tech leader and design pioneer Robb Wilson.
  • A philosophical treatise starts an odyssey that spans the breadth of possible civilizations, meeting the average people that inhabit them and observing their trials and tribulations.
  • The reader is ultimately left to decide what state of affairs they would prefer, with a call to action inviting those willing to change the world to start doing the work now.
