The talk is the next step in the evolution of the thinking from this post from last year, Good for your health: Design Philosophy from the technology of healing, which first explored the idea of looking to health and healing practices for inspiration in technology design.
It also brings together a lot of other ideas I’ve been thinking about over the years, from the concept of “UX Cruelty,” which often results from “delightful” engagement-driving experiences that neglect the reality of anyone who isn’t the “ideal user,” to workflow planning methods and tools to avoid this cruelty by identifying different user scenarios and the experience requirements to support their needs.
A practical guide for designing products that help people get stuff done
An interesting thing about the interface experiences we all spend so much of our time with is that they are designed for the same type of use case:
Consumption.
Netflix, for example, would love nothing more than for you to do as little as possible. Their algorithmic recommendations reduce the time you spend searching for content you’ll want to watch, and each successive episode plays automatically after the last one ends, without any input from you, the user, at all.
Beyond just media platforms, content consumption drives retail consumption as well:
The undisputed heavyweight of optimization positions both browsing use cases with an essentially identical interface:
(Get you a consumption-oriented UX that can do both.)
The sheer ubiquity of these interfaces, designed not just to be consumed but to, in turn, consume our time, can make it difficult to realize that taking design cues from consumption-driven products — and their practitioners’ many design philosophies — may not be right for every product.
At athenahealth we make the software doctors use to manage patients’ clinical data and run their businesses. Our entire product is oriented around work-critical application flows, not passive consumption.
Workflow: A sequence of tasks to produce a desired outcome.
This is where workflow planning matters most. Unlike a typical consumption-driven experience, maximizing users’ time spent is not a success indicator for us. In fact, it gets us further away from our actual goal — helping doctors interact with their patients instead of their screens.
If, like us, you’re building a product that is about helping people get complex work done, then You Won’t Believe These Six Crazy — just kidding, you know why you’re here. You want to know how you can design more effectively to help your users achieve their goals. So let’s get into it.
Understanding workflows.
“As a user, I want to do A, B, & C, so that I can complete the work I need to do.”
What would be the best design solution for this user story?
Is it like this…
Or maybe it’s like this…
Either approach would support the stated user need, but through two very divergent processes and, inevitably, interfaces. User stories tell you what to design. Workflow planning tells you how best to design it. And there are consequences for diving into designing for user stories before a solid workflow foundation is in place.
Without A Workflow Planning Methodology:
Design begins before downstream implications are understood
The resulting experience becomes a bunch of siloed pages or steps strung together vs. an integrated flow
Users end up faced with feature overload because the totality of the process has not been taken into account
And, worst case scenario, you end up reworking design (or even code) late in the game
How we do workflow planning.
Here’s our simple formula for how to break down workflows.
A workflow is the goal a user wants to accomplish — the thing that the product, or a particular part of it, is designed to help them get done — multiplied by scenarios — a set of circumstances that can affect how users accomplish said goal.
For example: if, as a patient, your goal is to prepare for an upcoming doctor’s appointment (filling out health history information, signing consent forms, paying the copay, etc.), the way you accomplish that goal would be affected by whether this is a brand new doctor you are going to see for the first time or if you’re coming in for a standing, weekly appointment. From the patient’s point of view, the goal is the same — you need to confirm the appointment and do any advance prep necessary — but the scenarios are completely different, and dictate distinct workflow solutions.
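In code terms, the formula amounts to enumerating the cross product of goals and scenarios. Here’s a minimal sketch — the names and structure are illustrative, not taken from any actual athenahealth tooling:

```python
from itertools import product

# A workflow is a (goal, scenario) pairing: the same goal can demand
# a very different flow under different circumstances.
goals = ["Prepare for an upcoming appointment"]
scenarios = ["First visit with a brand new doctor", "Standing weekly appointment"]

# Every goal x scenario combination is a candidate workflow to design for.
workflows = [{"goal": g, "scenario": s} for g, s in product(goals, scenarios)]

for wf in workflows:
    print(f"Goal: {wf['goal']} | Scenario: {wf['scenario']}")
```

One goal crossed with two scenarios yields two distinct workflows to account for — which is exactly the patient example above.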
To solve for this, we’ve developed a 6-step process for workflow planning, outlined below.
Step 1. Define your users’ goal / job to be done (the reason they opened your app or came to your site).
Step 2. Identify any scenarios that will impact this goal. What circumstances can be true about the world, the user, or the business objectives in different situations?
Step 3. Document the different workflows that result from these combinations.
Step 4. Are there other goals that a user wants to accomplish when they use your product, or a particular part of your product? Add them.
— Keep in mind, goals are not granular user stories; this is not the place to document an exhaustive list of tasks. A goal is the big picture thing a user wants to have accomplished — order a car, book a flight, file taxes — at the end of the workflow.
Step 5. Repeat the workflow documentation process. Add any additional scenarios you might encounter as you go along.
If you find you’re adding a lot of new scenarios for a specific goal, you might actually be better served by rethinking it as a different product / section altogether.
Step 6. Prioritize your workflows — because you can’t build everything at once.
Keep in mind, not every cell in the matrix will require its own separate workflow. One workflow may be able to support multiple goals and scenarios with some affordances for discrete needs. The point is to make sure you have your bases covered before diving into design.
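To make Step 6 concrete, here’s one way you might sketch the matrix and a prioritization pass in code. The scenario weights here are invented for illustration; in practice they would come from user research and product strategy:

```python
from itertools import product

goals = ["Confirm appointment", "Complete health history", "Pay copay"]
scenarios = ["New patient", "Returning patient", "Standing weekly visit"]

# Hypothetical weights: higher = more comprehensive information need.
scenario_weight = {
    "New patient": 3,
    "Returning patient": 2,
    "Standing weekly visit": 1,
}

# Steps 3 & 5: document every goal x scenario combination.
matrix = [
    {"goal": g, "scenario": s, "weight": scenario_weight[s]}
    for g, s in product(goals, scenarios)
]

# Step 6: build for the heaviest scenarios first, so that lighter
# variations can later be derived by trimming rather than reworking.
backlog = sorted(matrix, key=lambda cell: cell["weight"], reverse=True)

for cell in backlog:
    print(f"priority {cell['weight']} | {cell['goal']} / {cell['scenario']}")
```

Three goals times three scenarios gives nine cells to review; sorting by weight puts the new-patient flows at the top of the backlog, mirroring the prioritization approach described below.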
Here’s a sample of a workflow planning process we did for the pre-appointment patient experience described above:
There are various ways to approach prioritization. It can be based on your product timeline or development cadence. Or you can focus first on the workflows that will solve the biggest pain points or address the most common scenarios. It’s up to you, so long as it gives you a roadmap for how to proceed. In the example above, we prioritized building first for the scenarios with the most comprehensive information needs, so that we could then easily modify those flows for a range of lighter variations.
Once you’ve got your workflows documented and prioritized, you can move into diagramming the workflow steps.
Algorithms are the perfect tool for delivering individualized exploitation to billions. They may yet have potential for mobilizing collective power at scale.
Buy Me a Sandwich
Out in California, my friend is working on a new kind of energy storage startup. Here’s what they do. They buy electricity from the grid when it’s cheap (middle of the night), and store it in a battery in your house for you to use during the day, when demand is high. Participants will be able to save 20%-30% on their energy bills. And the best part is you don’t even have to do anything. The whole thing runs on algorithms.
Customers in the UK will soon find out. Recent reports suggest that three of the country’s largest supermarket chains are rolling out surge pricing in select stores. This means that prices will rise and fall over the course of the day in response to demand. Buying lunch at lunchtime will be like ordering an Uber at rush hour.
But what if an algorithm bought it for you at midnight instead?
Amid the infinite churn of natural and political terrors — juxtaposed with photos of babies made by people I love — blowing up my Facebook feed, I keep seeing a hipster hostage video selling me something called Paribus.
“Stores change their prices all the time — and many of them have price guarantees,” explains Paribus’ App Store description. “Paribus gets you money when prices drop. So if you buy something online from one of these retailers and it goes on sale later, Paribus automatically works to get you the price adjustment.”
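The mechanic described — track what you paid, watch the current price, claim the difference when it drops — is simple enough to sketch. This is a toy model of the idea, not Paribus’s actual implementation (which isn’t public):

```python
from dataclasses import dataclass

@dataclass
class Purchase:
    item: str
    paid: float

def price_adjustments(purchases, current_price):
    """Yield (item, refund) for each purchase whose price has dropped.

    `current_price` is any callable that returns today's price for an
    item (or None if unknown); a real service would query or scrape
    the retailer here and then file the price-guarantee claim.
    """
    for p in purchases:
        now = current_price(p.item)
        if now is not None and now < p.paid:
            yield p.item, round(p.paid - now, 2)

prices_today = {"headphones": 39.99, "kettle": 24.00}
orders = [Purchase("headphones", 49.99), Purchase("kettle", 24.00)]

for item, refund in price_adjustments(orders, prices_today.get):
    print(f"Claim ${refund} back on your {item}")
```

The hard part of such a service isn’t this loop, of course — it’s reliably reading your receipts and navigating each retailer’s price-guarantee process. But the core logic really is this small.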
Most likely the reason Facebook expects I’d be interested in this service (more on this later) is because I already have an app duking it out with Comcast to lower my bill.
It sends me updates via Facebook Messenger to let me know its progress:
And I message it back choose-your-own-adventure-style responses:
This feature is part of a service called Trim, which positions itself as “a personal assistant that saves money for you.” (It makes money by taking a cut of the recovered charges.)
Trim, knowingly, calls its Comcast negotiator bot “Comcast Defense.”
And, indeed — we should all be thinking about the possibilities for algorithm defense.
Buy Me a Monopoly
The day Amazon announced its purchase of Whole Foods, shares in both Kroger and Walmart — which generates more than half its revenue from grocery sales — went into free-fall. Kroger’s in particular fell 8% in a matter of hours.
“Amazon has demonstrated that it is willing to invest to dominate the categories that it decides to compete in,” said Mark Baum, a senior vice president at the Food Marketing Institute. But the way Amazon “decides to compete” is to actually make the category uncompetitive, driving other players out until it has become the category.
Amazon isn’t abandoning online retail for brick-and-mortar. Rather, it’s planning to fuse the two. It’s going to digitize our daily lives in ways that make surge-pricing your groceries look primitive by comparison. It’s going to expand Silicon Valley’s surveillance-based business model into physical space, and make money from monitoring everything we do.
Concepts like Paribus, or Trim’s “Comcast Defense” are still primitive now, too, but extrapolate the possibilities out at scale. Imagine hundreds of thousands of people on a negotiation platform like this. That becomes the ability to exert real power. To pit Comcast and Verizon and AT&T against each other for who will offer the best deal, and have the leverage to switch all your members over en masse.
Reading Trim’s about page, it seems possible the thought could have crossed their minds:
So far, we’ve saved our users millions of dollars by automating little pieces of their day-to-day finances.
Now we’re starting to work on the hard stuff. Will I have enough money to retire someday? Which credit card should I get? Can my car insurance switch automatically to a cheaper provider?
Maybe Trim’s already thinking of the power of collective negotiating. Or maybe someone else will. The idea isn’t even all that new. It’s essentially how group insurance plans work. And unions. We’ve come up with it before. But we’ve never really tried it with code.
Buy Me a Pen
art by Curtis Mead
“When mass culture breaks apart,” Chris Anderson wrote a decade ago, in The Long Tail, “It doesn’t re-form into a different mass. Instead it turns into millions of microcultures, which coexist and interact in a baffling array of ways.”
On this new landscape of “massively parallel culture,” as Anderson called it, hyper-segmentation has become our manifest destiny.
Now we atomize the universal, dividing into ever nicher niches. We invent new market subsegments where none previously existed, or need to. We splash pink onto pens to create “differentiated” product lines:
Why?
Because segmentation sells.
Those pink pens “For Her”? They cost up to 70% more than Bic’s otherwise identical ones for everyone.
And the algorithms got this memo: there’s incentive to create ever more segmented “filter bubbles.”
Facebook knows your phone ID and can add it to your Facebook ID. It puts that together with the rest of your online activity: not just every site you’ve ever visited, but every click you’ve ever made. Facebook sees you, everywhere. Now, thanks to its partnerships with the old-school credit firms, Facebook knows who everybody is, where they live, and everything they’ve ever bought with plastic in a real-world offline shop. All this information is used for a purpose which is, in the final analysis, profoundly bathetic. It is to sell you things via online ads.
Buy Me Clicks
All day long, Facebook’s News Feed algorithm “is mapping your brain, seeking patterns of engagement,” writes Tobias Rose. “It can predict what you’ll click on better than anyone you know. It shows you stories, tracks your responses, and filters out the ones that you are least likely to respond to. It follows the videos you watch, the photos you hover over, and every link you click on.”
And Facebook follows you even when you’re not on Facebook. “Because every website you’ve ever visited (more or less) has planted a cookie on your web browser,” writes Lanchester, “when you go to a new site, there is a real-time auction, in millionths of a second, to decide what your eyeballs are worth and what ads should be served to them, based on what your interests, and income level and whatnot, are known to be.”
Facebook’s algorithms can create not only a private, personal pipeline of media, but an entirely individualized reality where information is repackaged for you — like pink pens — dynamically, in real-time, to whatever color, shape, or price point will extract the most value out of you specifically.
Four researchers based in Spain creat[ed] automated [shopper] personas to behave as if, in one case, ‘budget conscious’ and in another ‘affluent’, and then checking to see if their different behaviour led to different prices. It did: a search for headphones returned a set of results which were on average four times more expensive for the affluent persona. An airline-ticket discount site charged higher fares to the affluent consumer. In general, the location of the searcher caused prices to vary by as much as 166 per cent.
An anti-Clinton ad repeating a notorious speech she made in 1996 on the subject of ‘super-predators’… was sent to African-American voters in areas where the Republicans were trying, successfully as it turned out, to suppress the Democrat vote. Nobody else saw the ads.
“As consumers,” Tarnoff writes, “we’re nearly powerless, but as citizens, we can demand more democratic control of our data. The only solution is political.”
I’ve had the same thought. In an article last year about how technology has gotten so good at degrading us that even some of its creators have had enough, I wrote, “We take it for granted now, that cars have seat-belts to keep the squishy humans inside from flying through a meat-grinder of glass and metal during a collision. But they didn’t always. How did we ever get so clever? Regulation.”
We’ve been classified and stereotyped and divided and conquered by algorithms. Lines of code deliver custom-targeted exploitation to billions of earthlings at once.
Can our individual fragments of power be scaled towards something bigger by them as well?
“Addressing our biggest issues as a species — from climate change, to pandemics, to poverty —” (to Jesus Christ, have you tried ever canceling your Comcast account?) “— requires us to have a common narrative of the honest problems we face,” writes Rose. “Without this, we are undermining our greatest strength — our unique ability to cooperate and share the careful and important burdens of being human.”
As individuals we are indeed basically powerless, and algorithms have proven a stunningly effective tool for extracting ever greater value out of our atomization. We have perhaps yet to imagine the potential of what algorithms deployed to concentrate our individual power into a group force can achieve at a global scale.
One thing is for certain — we need to start thinking about defense.
Oh, and if you’re a homeowner in California (or have a cool landlord) and want to lower your energy bill (and “inadvertently” accelerate the adoption of renewables to the grid), go sign up to be a beta tester for my friend’s energy startup! They’ll install a battery for free at your home to reduce your electric bill if you let them train their algorithms to predict when you’re going to need power.
The story of the biggest transformation of our time has a marketing problem: no one knows it’s happening.
There were many important events that happened in 2016. Some were deafening, trumpeting the seemingly inexplicable ascent of backwards-facing forces. But one event of great historical significance went largely unremarked upon.
In 2016 solar power became the cheapest form of new electricity on the planet and for the first time in history installed more new electric capacity than any other energy source.
Amid the sepia haze oozing from the past’s rusting, orange pipeline, humanity was placing a serious bet on a new kind of future. And you didn’t even know about it.
That’s a problem.
Powering Disruption
It was a bit like if you had a source of whale blubber in the 1840s and it could be used as fuel. Before gas came along, if you traded in whale blubber, you were the richest man on Earth. Then gas came along and you’d be stuck with your whale blubber. Sorry mate — history’s moving along.
— Brian Eno
“The beginning of the end for fossil fuels,” according to Bloomberg, occurred in 2013. “The world is now adding more capacity for renewable power each year than coal, natural gas, and oil combined. And there’s no going back…. The shift will continue to accelerate, and by 2030 more than four times as much renewable capacity will be added.”
The International Energy Agency’s Executive Director, Fatih Birol, said, “We are witnessing a transformation of global power markets led by renewables.”
“While solar was bound to fall below wind eventually, given its steeper price declines, few predicted it would happen this soon,” notes Bloomberg.
Later campaigns for the iPhone didn’t even show the product at all:
The product became the conduit to the experience. And the experience that solar has to sell is Future.
– Claim the Narrative of Future –
Two decades ago — back when it was still possible to talk about the future as anything but dystopia — a series of ads painted a striking vision of how that future was going to unfold. “Have you ever borrowed a book from thousands of miles away,” asked the ad voice. “Crossed the country without stopping to ask for directions? Or watched the movie you wanted to, the minute you wanted to?”
“You will,” said the voice, “and the company that will bring it to you: AT&T.”
Today I use a device to do basically 90% of what those ads predicted. (OK, I’ve never sent a fax from the beach, or tucked a baby in from a phone-booth, but you can’t get the Future 100% right). All of these things are so obvious and mundane now we barely even remember — some of us never knew — there was a time before. But, indeed, there was a point when this fantastical world was the future, and the future still seemed like a fantastical world.
There are no grand visions for the future now, no scenarios for humanity that don’t fill us with dread. A dying oligarchy tells us dissolution is freedom; regression is hope. It has disfigured our understanding of what’s happening in our world. The result is a gaping void in our collective vision when we look ahead. 17 years into our new century, there is a desperate hunger for a bright vision for the future, and at the moment arguably no one outside the world of clean energy has a legitimate claim to one. In the end, it’s not about utility bills or net metering laws or even solar panels for that matter. It’s about a vision of a Future worth demanding. Solar has the opportunity to be the voice of that vision for decades to come with a simple, cohesive, culture-focused messaging strategy.
Technology has gotten so good at degrading people that even its creators have had enough.
Last year I had what you could call an existential crisis. This psychological hangover was induced by the dissolution of an app I’d launched, which had amassed millions of users but not enough money to keep it alive; a divorce from a second startup I co-founded, which cost me lawyer’s fees and the price of a dear friendship; and, topping it all off, a roofie of heartbreak.
It felt for a while like I had fallen through the trap door of the universe. I had discovered what everyone up on the floorboards above, walking noisily about, convinced they were going somewhere, was too blind to see: nothing mattered.
While in the void, I saw Apple unveil the Apple Pencil and thought about how vehemently opposed to a product like this Steve Jobs had been during his lifetime. “God gave us 10 styluses,” proclaimed a man not known for a belief in a power higher than himself, “let’s not invent another.” This prophetic aversion had led directly to the invention of the iPhone, a touch-screen device that in a few years had wholly remade the world in its own image.
“As soon as you have a stylus, you’re dead,” Jobs had famously declared. But, of course, at 56, Steve Jobs was dead, and as soon as he was, Apple had a stylus.
“Think about that when you’re fighting for some creative idea at work today,” I’d tell people.
All meaning we ascribed to anything we did with our lives seemed ridiculous; an absurd delusion. I wanted to do nothing. I wanted to pursue nothing. I wanted to create nothing. There was no point to any of it.
All of this felt compounded — and if I stared too long into the glare of the digital sun, perpetuated — by an increasingly paralyzing rupture I was watching run down the fabric of the culture.
“Our technology is turning us into objects,” I’d written in a 2013 post titled Objectionable, tracing the trajectory from selfies to self-objectification to dehumanization. In a ubiquitously mediated environment, I argued, re-conceiving ourselves and one another as media products was inevitable. “We are all selfies. We are all profiles. We are all objects. And we can’t stop. And the trouble is, it doesn’t matter how you treat objects,” I wrote. “It’s not like they’re people.”
Then came Tinder.
“By reducing people to one-time-use bodies and sex to an on-demand exchange, dating apps have made what they yield us disposable and cheap,” I wrote in An App For Love. “From swipe to sex, the relentless, grinding repetitiveness inherent in every aspect of the ‘swipe app’ experience sabotages the very mechanics that trigger the brain system for romantic love,” I wrote, referencing Helen Fisher’s extensive neuroscience research on the phenomenon. “Love knows what it likes. And we could be building tech engineered for it — creating a match between the research and the product experience, as it were. But we aren’t. Love has been, literally, written out of the code for a generation afraid to catch feelings. Instead we swipe through thousands of instant people. We learn nothing of them and share nothing of ourselves to be known. We strip ourselves down to anatomy and we become invisible.”
We had succeeded, I wrote in the summer of 2015, in disrupting love. That would be the last thing I’d write before nothing became worth writing about.
In the fall of 2016 there is a toxic sense pervading the culture that no one is deserving of dignity anymore. You can feel it too as you sense your own being swiped away. It’s not just in our politics. It’s in our pockets. It’s in the nameless dread we feel of our devices now. Their myriad manipulations invading our attention, degrading our autonomy, making us sick. One third of us would rather give up sex than our smartphone. The rest of us can’t even get out of bed in the morning without first tapping the black, glass syringe. We have accepted our addiction to digital dopamine so thoroughly we take its unrelenting coercion for granted. We all live under the tyranny of technology now. It’s gotten to the point that even some technology creators are recoiling at the results their creations have wrought on our brave new world, and trying to find ways to design different.
.
PRISONERS OF OUR OWN DEVICE
In “The Scientists Who Make Apps Addictive,” published in The Economist’s 1843 Magazine’s October issue, Ian Leslie writes about Addiction by Design, a book exploring machine gambling in Las Vegas, by anthropologist Natasha Dow Schüll:
The capacity of slot machines to keep people transfixed is now the engine of Las Vegas’s economy. Over the last 20 years, roulette wheels and craps tables have been swept away to make space for a new generation of machines: no longer mechanical contraptions (they have no lever), they contain complex computers produced in collaborations between software engineers, mathematicians, script writers and graphic artists.
The casinos aim to maximise what they call “time-on-device”. The environment in which the machines sit is designed to keep people playing. Gamblers can order drinks and food from the screen. Lighting, decor, noise levels, even the way the machines smell — everything is meticulously calibrated. Not just the brightness, but also the angle of the lighting is deliberate: research has found that light drains gamblers’ energy fastest when it hits their foreheads.
But it is the variation in rewards that is the key to time-on-device. The machines are programmed to create near misses: winning symbols appear just above or below the “payline” far more often than chance alone would dictate. Losses are thus reframed as potential wins, motivating [players] to try again. Mathematicians design payout schedules to ensure that people keep playing while they steadily lose money. Alternative schedules are matched to different types of players, with differing appetites for risk: some gamblers are drawn towards the possibility of big wins and big losses, others prefer a drip-feed of little payouts (as a game designer told Schüll, “Some people want to be bled slowly”). The mathematicians are constantly refining their models and experimenting with new ones, wrapping their formulae around the contours of the cerebral cortex.
Gamblers themselves talk about “the machine zone”: a mental state in which their attention is locked into the screen in front of them, and the rest of the world fades away. “You’re in a trance,” one gambler explains to Schüll. “The zone is like a magnet,” says another. “It just pulls you in and holds you there.”
Of course, these days we’re all captive to a screen explicitly designed to exploit our psychology and maximize time-on-device every waking moment, everywhere we go. The average person checks their phone 150 times a day, and each compulsive tug on our own, private slot machine is not the result of conscious choice or weak willpower. It’s engineered.
“There’s a thousand people on the other side of the screen whose job is to break down whatever responsibility I can maintain,” explains Tristan Harris, formerly Google’s Design Ethicist.
Designers, technologists, product managers, data scientists, ad sales executives — the job of all these people, as Harris says, “is to hook people.”
“I saw the best minds of my generation destroyed by madness, starving hysterical,” wrote the beat poet Allen Ginsberg in his 1956 magnum opus, Howl.
.
There is a huge incentive for technology companies to keep us in thrall to their machine zone. From small startups to the massive, public corporations that are now America’s most valuable companies, the dual motivators of reward from advertisers, and punishment from stockholders or investors compel them to make sure they keep you, me, and everyone we know constantly and increasingly engaged.
The emails that induce you to buy right away, the apps and games that rivet your attention, the online forms that nudge you towards one decision over another: all are designed to hack the human brain and capitalise on its instincts, quirks and flaws.
When you get to the end of an episode of “House of Cards” on Netflix, the next episode plays automatically. It is harder to stop than to carry on. Facebook gives your new profile photo a special prominence in the news feeds of your friends, because it knows that this is a moment when you are vulnerable to social approval, and that “likes” and comments will draw you in repeatedly. LinkedIn sends you an invitation to connect, which gives you a little rush of dopamine — how important I must be! — even though that person probably clicked unthinkingly on a menu of suggested contacts. Unconscious impulses are transformed into social obligations, which compel attention, which is sold for cash.
The same month the article above came out, Bianca Bosker wrote in The Atlantic:
Sites foster a sort of distracted lingering partly by lumping multiple services together. To answer the friend request, we’ll pass by the News Feed, where pictures and auto-play videos seduce us into scrolling through an infinite stream of posts — what Harris calls a “bottomless bowl,” referring to a study that found people eat 73 percent more soup out of self-refilling bowls than out of regular ones, without realizing they’ve consumed extra. Checking that Facebook friend request will take only a few seconds, we reason, though research shows that when interrupted, people take an average of 25 minutes to return to their original task. The “friend request” tab will nudge us to add even more contacts by suggesting “people you may know,” and in a split second, our unconscious impulses cause the cycle to continue on the recipient’s phone.
Food companies engineer flavors to exploit our biological desire for sugary, salty, and fatty foods. Social technology taps into our psychological cravings. We are social creatures, after all.
Bosker writes:
The trend is toward deeper manipulation in ever more sophisticated forms. Harris fears that Snapchat’s tactics for hooking users make Facebook’s look quaint. Facebook automatically tells a message’s sender when the recipient reads the note — a design choice that activates our hardwired sense of social reciprocity and encourages the recipient to respond. Snapchat ups the ante: Unless the default settings are changed, users are informed the instant a friend begins typing a message to them — which effectively makes it a faux pas not to finish a message you start. Harris worries that the app’s Snapstreak feature, which displays how many days in a row two friends have snapped each other and rewards their loyalty with an emoji, seems to have been pulled straight from [Stanford experimental psychologist B.J.] Fogg’s inventory of persuasive tactics. Research shared with Harris by Emily Weinstein, a Harvard doctoral candidate, shows that Snapstreak is driving some teenagers nuts — to the point that before going on vacation, they give friends their log-in information and beg them to snap in their stead. “To be honest, it made me sick to my stomach to hear these anecdotes,” Harris told me.
But aren’t we all feeling a little bit sicker?
.
UNCOMFORTABLY NUMB
“Like many addicts, I had sensed a personal crash coming,” Andrew Sullivan wrote in a September New York Magazine essay about how his relationship to technology nearly destroyed him:
For a decade and a half, I’d been a web obsessive, publishing blog posts multiple times a day, seven days a week, and ultimately corralling a team that curated the web every 20 minutes during peak hours. Each morning began with a full immersion in the stream of internet consciousness and news, jumping from site to site, tweet to tweet, breaking news story to hottest take, scanning countless images and videos, catching up with multiple memes. Throughout the day, I’d cough up an insight or an argument or a joke about what had just occurred or what was happening right now. And at times, as events took over, I’d spend weeks manically grabbing every tiny scrap of a developing story in order to fuse them into a narrative in real time.
I was, in other words, a very early adopter of what we might now call living-in-the-web. And as the years went by, I realized I was no longer alone. Facebook soon gave everyone the equivalent of their own blog and their own audience. More and more people got a smartphone — connecting them instantly to a deluge of febrile content, forcing them to cull and absorb and assimilate the online torrent as relentlessly as I had once. It was ubiquitous now, this virtual living, this never-stopping, this always-updating. I remember when I decided to raise the ante on my blog in 2007 and update every half-hour or so, and my editor looked at me as if I were insane. But the insanity was now banality; the once-unimaginable pace of the professional blogger was now the default for everyone.
If the internet killed you, I used to joke, then I would be the first to find out. Years later, the joke was running thin. In the last year of my blogging life, my health began to give out. Four bronchial infections in 12 months had become progressively harder to kick. Vacations, such as they were, had become mere opportunities for sleep. My dreams were filled with the snippets of code I used each day to update the site. My friendships had atrophied as my time away from the web dwindled. My doctor, dispensing one more course of antibiotics, finally laid it on the line: “Did you really survive HIV to die of the web?”
In an essay on contemplation, the Christian writer Alan Jacobs recently commended the comedian Louis C.K. for withholding smartphones from his children. On the Conan O’Brien show, C.K. explained why: “You need to build an ability to just be yourself and not be doing something. That’s what the phones are taking away,” he said. “Underneath in your life there’s that thing … that forever empty … that knowledge that it’s all for nothing and you’re alone … That’s why we text and drive … because we don’t want to be alone for a second.”
He recalled a moment driving his car when a Bruce Springsteen song came on the radio. It triggered a sudden, unexpected surge of sadness. He instinctively went to pick up his phone and text as many friends as possible. Then he changed his mind, left his phone where it was, and pulled over to the side of the road to weep. He allowed himself for once to be alone with his feelings, to be overwhelmed by them, to experience them with no instant distraction, no digital assist. And then he was able to discover, in a manner now remote from most of us, the relief of crawling out of the hole of misery by himself. As he said of the distracted modern world we now live in: “You never feel completely sad or completely happy, you just feel … kinda satisfied with your products. And then you die. So that’s why I don’t want to get a phone for my kids.”
“When you’re feeling uncertain,” says Nir Eyal, author of Hooked: How to Build Habit-Forming Products, “before you ask why you’re uncertain, you Google. When you’re lonely, before you’re even conscious of feeling it, you go to Facebook. Before you know you’re bored, you’re on YouTube. Nothing tells you to do these things. The users trigger themselves.”
Technology’s relentless intrusion isn’t just hijacking our attention; it’s being deliberately designed to disrupt another kind of human experience:
Endless distraction is being engineered to prevent us from experiencing emotion. That is, quite literally, a form of psychological abuse.
What is this doing to our mental health?
The American College Health Association’s Spring 2016 survey of 95,761 students found that 17% of the nation’s college population had been diagnosed with or treated for anxiety problems during the past year, and 13.9% for depression, up from 11.6% and 10.7% respectively just five years earlier.
Ohio State has seen a 43% jump in the past five years in the number of students being treated at the university’s counseling center. At the University of Central Florida in Orlando, the increase has been about 12% each year over the past decade. At the University of Michigan in Ann Arbor, demand for counseling-center services has increased by 36% in the last seven years.
There is a mental health crisis of literally epidemic proportions underway.
The Wall Street Journal suggests it is “unclear why the rates of mental-health problems seem to be increasing.” Anxiety and depression rising in tandem with cohorts of young people who have spent more and more of their lives enmeshed in addictive technology with each successive year may simply be a coincidence.
Last year, Nancy Jo Sales’s Vanity Fair article, Tinder and the Dawn of the ‘Dating Apocalypse,’ provided an ethnography of dystopian dating app culture. But unlike her other subjects, Justin McLeod wasn’t just another helpless user at the mercy of dating app technology: he is the founder of the dating app Hinge.
“It was crazy,” McLeod said. “I had $10 million in the bank. I had resources. I had a team. But as a C.E.O. I felt more powerless than I did when I had, like, no money in the bank and this thing was just getting started.”
“When your article came out,” McLeod had written to Sales back in August, “it was the first among many realizations that Hinge had morphed into something other than what I originally set out to build. Your honest depiction of the dating app landscape has contributed to a massive change we’re making at Hinge later this fall. I wanted to thank you for helping us realize that we needed to make a change.”
McLeod, 32, had launched Hinge in early 2013, fresh out of Harvard Business School, with the hope of becoming the “Match for my generation” — in other words a dating site that would facilitate committed relationships for younger people who were less inclined to use the leading and yet now antiquated (in Internet years) service. He was a bit of a romantic; last November a “Modern Love” column in the New York Times told the story of how he made a mad rush to Zurich to convince his college sweetheart not to marry the man she was engaged to (she and McLeod plan to marry this coming February). So nothing in his makeup nor his original plans for his company fit in with it becoming a way for Wall Street fuckboys to get laid. (“Hinge is my thing,” said a finance bro in my piece, a line McLeod says made him blanch.)
Sales’s article was a reckoning. Within a few months of its publication the Hinge team began conducting user research that, as McLeod writes, “would reveal how alarmingly accurate its indictment of swiping apps was.”
7 in 10 surveyed women on the leading swiping app have received sexually explicit messages or images.
6 in 10 men on the leading swiping app are looking primarily for flings or entertainment.
30% of surveyed women on swiping apps have been lied to about a match’s relationship status.
22% of men on Hinge have used a swiping app while on a date.
54% of singles on Hinge report feeling lonely after swiping on swiping apps.
21% of surveyed users on the leading swiping app have been ghosted after sleeping with a match.
81% of Hinge users have never found a long-term relationship on any swiping app.
Unsurprisingly, among the main design conventions McLeod and his team identified that have become standard in swipe apps are: a “slot-machine interface” that encourages people to keep swiping, and the dehumanizing representation of “people as playing cards.” These design choices not only “lead to the pathologically objectifying way many choose to engage with the real humans on the other side of the app” but also serve the business purpose of orienting users “towards engagement, not finding relationships.”
On the eve of Hinge’s relaunch as a completely overhauled product, (sans “swiping, matching, timers and games”), McLeod wrote:
The most popular swiping app boasts that users login on average 11 times per day spending up to 90 minutes per day swiping, and have accumulated on average over 200 matches. However, for the vast majority of users this has led to exactly zero relationships.
Like a casino, a swiping app isn’t designed to help you win; it’s designed to keep you playing so the house wins. Swiping is an addictive game designed to keep you single.
Given the current state of our culture, it’s now more critical than ever that there exist a service that helps those bold enough to seek real relationships find meaningful connections. With it, we hope we can pave the way for a new normal in dating culture that treats people with dignity and helps those seeking relationships find what they’re really looking for.
On October 11 the new Hinge launched on iOS in the US, UK, Canada, Australia, and India.
“What responsibility comes with the ability to influence the psychology of a billion people?” asks Tristan Harris. “Behavior design can seem lightweight because it’s mostly just clicking on screens. But what happens when you magnify that into an entire global economy? Then it becomes about power.”
Prior to his appointment as Google’s Design Ethicist, Harris worked on the Gmail Inbox app, which is where he began to think about the ways experience design choices can have ramifications on society at a mass scale. In 2016 Harris left Google to found Time Well Spent, a consultancy that works with tech startups proactively seeking to create more conscious user experiences, and to raise awareness of how digital technology is exploiting our psychology in ways we’ve come to take for granted.
As Bosker writes, Harris’s team at Google “dedicated months to fine-tuning the aesthetics of the Gmail app with the aim of building a more ‘delightful’ email experience. But to him that missed the bigger picture: Instead of trying to improve email, why not ask how email could improve our lives — or, for that matter, whether each design decision was making our lives worse?”
Like McLeod and Harris, other technology creators seem to be starting to ask themselves these questions.
“We suck at dealing with abuse and trolls on the platform,” Dick Costolo, Twitter’s former CEO, wrote in a leaked memo in 2015. “We’ve sucked at it for years. It’s no secret. We lose core user after core user by not addressing simple trolling issues that they face every day. I’m frankly ashamed of how poorly we’ve dealt with this issue during my tenure as CEO…. I take PERSONAL responsibility for our failure to deal with this as a company.”
In February of 2015 Costolo was confident the platform would “start kicking these people off right and left,” but the problem may have proven too big for him to solve. Before the end of the year, he was out. Now Costolo’s newly announced post-Twitter venture, Chorus, is a fitness product aiming, unambiguously, to make people’s lives better.
Gabe Zichermann literally wrote the book on gamification, Gamification by Design. Now he focuses his energy on a new startup called Onward, which aims to “transform how people relate to addictive and compulsive behaviors in their lives, giving them more control and overall satisfaction.” The company is working with UCLA-based clinical advisors to design a software product to help people “achieve tech-life balance by reducing the pull from social media, porn, sex, gambling, games or shopping.” Not surprisingly, “Onward is designed and built by a team of expert technologists whose lives have been personally affected by addiction.”
“A new class of tech elites [is] ‘waking up’ to their industry’s unwelcome side effects,” Bosker writes. “For many entrepreneurs, this epiphany has come with age, children, and the peace of mind of having several million in the bank.” Bosker quotes Soren Gordhamer, the creator of Wisdom 2.0, a conference series about maintaining “presence and purpose” in the digital age: “They feel guilty,” Gordhamer says.
Perhaps. Or perhaps the realization of how much power they wield may, for some, eventually broaden their scope of what it is they can do with it. It is worth noting that this nascent trend has arrived as many of the technologists and entrepreneurs responsible for shaping the digital experiences of our lives have matured from scrappy upstarts with something to prove into established industry players. Has success felt as meaningful as they’d imagined it would? Did all their hard work and late nights actually create something of value? Has their impact on the world turned out to be what they’d hoped?
Confronted by this self-reflection some are discovering they have the power, and time yet, to make a change.
.
FLESH AND BONE BY THE TELEPHONE
Y Combinator, the famous startup incubator that has helped create successes like Airbnb, Dropbox, Instacart, Reddit, and more, has a motto: “Make something people want.”
This is an awesome mantra for the founders of a commercial enterprise (and for VCs hoping to get a big return on their investment to preach to startups). The trouble is, just because people want a thing doesn’t always mean the thing is all that awesome. People, for example, want heroin. That doesn’t necessarily mean your company ought to make it.
What people want can be hacked. Addiction, in fact, changes the structure of the brain. There is a very fine line between making a thing people want and manufacturing the desire in the first place. By exploiting our psychological susceptibilities, as Harris says, technology companies “are getting better at getting people to make the choices they want them to make.”
Through Time Well Spent Harris is championing the adoption of design standards that treat users’ psychology with respect rather than as an exploitable resource, a concept he refers to as a “Hippocratic oath” for software creators.
Dating back to the physicians of ancient Greece, the Hippocratic oath states the obligations and proper conduct of doctors. The pledge is taken upon graduation from medical school and is often summarized in the shorthand: “first, do no harm.”
But industrialists are not doctors. The psychological motivations that compel someone to endure years and years of grueling, expensive training in order to heal people, it turns out, are often not the same as those of someone who self-selects to drop out of college to start a company that gives people what they want, even when it’s bad for them.
The biggest obstacle to incorporating ethical design and “agency” is not technical complexity. According to Harris, it’s a “will thing.” And on that front, even his supporters worry that the culture of Silicon Valley may be inherently at odds with anything that undermines engagement or growth. “This is not the place where people tend to want to slow down and be deliberate about their actions and how their actions impact others,” says Jason Fried, who has spent the past 12 years running Basecamp, a project-management tool. “They want to make things more sugary and more tasty, and pull you in, and justify billions of dollars of valuation and hundreds of millions of dollars [in] VC funds.”
In the Atlantic, Josh Elman, “a Silicon Valley veteran with the venture-capital firm Greylock Partners,” offers another analogous example. “Elman compares the tech industry to Big Tobacco.”
New guidelines released by the American Academy of Pediatrics in October exhort app developers to “cease making apps for children younger than 18 months until evidence of benefit is demonstrated” and to “eliminate advertising and unhealthy messages on apps” for kids five and younger.
But who’s going to hold them accountable when, inevitably, they don’t? Big Tobacco did not decide to stop advertising to children, and fast food chains did not decide to start disclosing the caloric content of their products, out of the goodness of their black lungs and sclerotic hearts. Relying on the personal conscience of each individual app developer to guide them toward moral integrity is unlikely to be a scalable solution.
We take it for granted now that cars have seat belts to keep the squishy humans inside from flying through a meat grinder of glass and metal during a collision. But they didn’t always.
How did we ever get so clever?
Regulation. It took legislation to make seat belts standard in cars, and more legislation to require their use. It has saved lives. Regulation changes not only what’s legal, but what’s normal. Car manufacturers didn’t use to talk about seat belts because they made people think of the reality of car crashes, and cars were sold on the “fantasy” of driving. Now cars are sold on safety.
In the wake of a national mental health crisis and corporations setting out to explicitly exploit our psychology, it’s worth considering the options we have as citizens, not just consumers.
.
SOME COURTESY, SOME SYMPATHY, AND SOME TASTE
In the early 2010s, Wired founder Kevin Kelly’s book, What Technology Wants, became quite fashionable with a certain type of tech person looking to absolve himself of any moral or ethical responsibility for his creations. According to this mechanistic faith, technology just wants to, like, live its true self; who are its creators to have any say in the inevitable?
In this vision we are all trapped in a Möbius loop of technological determinism. Product creators are powerless to do anything but give people what they want, users are helpless to resist coercion into what they’re given, and all of us are slaves to whatever technology wants. No one is accountable, while everyone loses dignity.
Yet the very power to create products used by millions or billions of people is the power to shape, or misshape, the world. Technology wants to irradiate us as much as it wants to cure us. Technology wants to bomb our cities as much as it wants to power them. Technology wants to destroy us as much as it wants to save us. Which is to say, technology wants NOTHING. Technology has no innately arising desires of its own independent of the desires of the people who create it.
At some point in the past 70 years we seem to have forgotten what we knew in the aftermath of WWII: what technology wants is in the eyes of its creators. And engineering technology for objectification, addiction, psychological exploitation, and mental abuse is degrading. It’s degrading to the people who use it, and it’s degrading to the people who make it. It takes a certain kind of personality to feel good about depriving people of their humanity, and not everyone has it. You might be one of those with a resistance.
From Andrew Carnegie to Bill Gates, some of the most successful technology entrepreneurs, and humans, in history have ended up pursuing ways to contribute to the world after retiring from their commercial endeavors. Perhaps for some in the new generation the prospect of segregating the two is no longer quite so appealing.
In the end, after every single tech CEO alive today is dead, their company, should it outlive them, will make the product they never wanted to see in the world. Some, it seems, are starting to ask themselves whether it’s what they want to be doing while they’re still living.