We work directly with technologists to create a new definition of success—one that honors human nature, grows responsibly, and helps us live in alignment with our deepest values.
Publicly launching in early fall 2021, Fundamentals of Humane Technology explores the personal, societal, and practical challenges of being a humane technologist. Participants will leave the course with a strong conceptual framework, hands-on tools, and an ecosystem of support from peers and experts.
Before you dive into the principles, it is helpful to understand the context that drives our work. Tech culture needs an upgrade. To enter a world where all technology is humane, we need to replace old assumptions with deeper understanding of how to add value to people’s lives.
Watch this presentation by Tristan Harris at What's Now SF
(An updated written version of the principles follows below.)
Some technologists believe that technology is neutral. But in truth, it never is—for three reasons.
First, our values and assumptions are baked into what we build. Anytime you put content or interface choices in front of a user, you are influencing them, whether by selecting a default, choosing what content is shown and in what order, or providing a recommendation. Since it is impossible to present all available choices with equal priority, what you choose to emphasize is an expression of your values.
Second, just as our values and assumptions are baked into what we build, the values and assumptions of the world shape the effects of new technology, regardless of the inventor's intentions. Economic pressures (for example, the pressure to grow sales for shareholders) or social dynamics (for example, one ethnic group wielding powerful tech against a marginalized ethnic group) can have profound unintended consequences. Most often, the result is a widening of inequity in the world.
The third way technology is not neutral is that every single interaction a person has, whether with people or products, changes them. Even a hammer, which seems like a neutral tool, makes our arm stronger when we use it. Just as real-world architecture and urban planning influence how people feel and interact, digital technology shapes us online. For example, a social media environment of likes, comments, and shares shapes what we choose to post, and reactions to our content shape how we feel about what we posted.
Neutrality is a myth. Humanity’s current and future crises need your hands on the steering wheel.
To see the full implications of technology being values-laden, we must consider the vulnerabilities of the human brain. Many books have been written about the myriad cognitive biases evolution has left us with, and our tendency to overestimate our agency over them. To quickly understand this, think of the last time you watched one more YouTube video than you had intended. YouTube’s recommendation algorithm is expert at figuring out what makes you keep watching—it doesn’t care what you intend to do with the next minutes of your life, let alone help you honor that intention.
Simple engagement metrics like watch time or clicks often fail to reveal a user’s true intent because of our many cognitive biases. When you ignore these biases, or optimize for engagement by taking advantage of them, a cascade of harms emerges.
Confirmation bias causes us to engage more with content that supports our views, leading to filter bubbles and the proliferation of fake news. Present bias, which prioritizes short-term gains, leads us to binge-watch as self-medication when we’re stressed instead of addressing the source of our stress. The need for social acceptance drives us to adopt toxic behavior we see others using in an online group, even when we would not normally behave that way.
Aggressively optimizing for engagement metrics is like taking your hand off the steering wheel. It puts users’ paleolithic, inherently vulnerable brains in charge of determining what is valuable for your product. This approach, combined with the latest machine learning and A/B testing techniques, results in a broad series of harms unleashed at scale, which we call human downgrading.
Our vision is to replace the current harmful assumptions that shape product development culture with a new mindset that will generate humane technology. Integrating this new paradigm will require process changes, time, resources, and energy within the product organization and beyond.
We realize systemic cultural change is never easy and faces many opposing forces. Join the conversation if you have ideas for how to help move this change forward or specific requests that you think CHT may be positioned to fulfill.
The Center for Humane Technology and many other organizations are creating these conditions through a combination of pressures from the media, parents, kids, investors, shareholders, tech employees like you, regulation, and more.
Together, we can chart the path forward and lay the foundation for more humane technologies. Read more about how we're rebuilding the system.
This new paradigm is for technologists who accept that technology is increasingly shaping our social fabric and want to apply their exceptional skills to realign technology with humanity.
When you obsess over engagement metrics, you will fall into the trap of assuming you are giving people what they want, when you may actually be preying on inherent vulnerabilities. Outrageous headlines make us click even when we know we should be doing something else. Seeing someone has more followers than we do makes us feel inferior. Knowing our friends are together without us makes us feel left out. And false information, once we believe it, is very hard to displace.
Instead, you can be values-driven while still being informed by metrics. You can spend your time thinking about the specific values (e.g., health, well-being, connection, productivity, fun, creativity…) you intend to create with your product or feature. Those values can be a source of inspiration and prioritization. You can measure your success directly by investing in mechanisms of understanding that match the complexity of what you value, such as qualitative research and outside expertise. A simple sketch of this idea follows below.
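As a purely illustrative sketch (the metric names, survey scale, and thresholds here are hypothetical assumptions, not anything CHT prescribes), a team might track an engagement signal alongside directly measured values, and let the value metrics, not engagement, define success:

```typescript
// Hypothetical sketch: pairing an engagement metric with directly measured
// value metrics, and letting the value metrics define "success".

interface FeatureMetrics {
  avgWatchMinutesPerDay: number;      // engagement signal (easy to measure)
  selfReportedWellBeing: number;      // 1-5 survey score (harder to gather, closer to the value)
  statedIntentCompletionRate: number; // share of sessions where users did what they came to do
}

// Success is defined by the values the team intends to create; engagement
// is kept as supporting context, never as the goal in itself.
function isSuccessful(m: FeatureMetrics): boolean {
  return m.selfReportedWellBeing >= 4.0 && m.statedIntentCompletionRate >= 0.7;
}

// Example: high watch time alone does not count as success.
const release: FeatureMetrics = {
  avgWatchMinutesPerDay: 95,
  selfReportedWellBeing: 3.1,
  statedIntentCompletionRate: 0.45,
};
console.log(isSuccessful(release)); // false: engaging, but not serving the stated values
```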
Not everything needs an upgrade. Under the right conditions, humans are highly capable of accomplishing goals, connecting with others, having fun, and doing many other things technology seeks to help with. Technology can give space for that brilliance to thrive, or it can displace and atrophy it. In each design choice, you can support the conditions in which brilliance naturally occurs.
For example, Living Room Conversations was created with the understanding that when people find similarities with each other and connect as human beings, they can more easily find common ground and shared perspective. Another example is online group technology like MeetUp that encourages in-person get-togethers to deepen connections.
Ideally your organization would clearly understand the harms it creates and would perfectly incentivize mitigating them. In practice these harms are complex, shifting, and difficult to understand. Because of this, it is important to build a visceral, empathetic connection between product teams and the users they serve.
While many of today’s best practices use personas, focus groups, or “jobs to be done” to gain empathy for the user, humane technology requires that you internalize the pain your users experience, as if it were your own. Imagine the following scenarios:
This mindset leads to a drive for deeper understanding and greater caution. It’s a mindset that your decision-makers and product team must share, on behalf of the many people impacted by your work: your users, the people around them (friends, family, colleagues, etc.), different socioeconomic populations (age, income level, disabilities, cultures, etc.), and so on.
As the world becomes increasingly complex and unpredictable, our capacity to understand our emerging reality and make meaningful choices can quickly become overwhelmed. As a technologist, you can help people make choices in ways that are informed, thoughtful, and aligned with their values as well as the fragile social and environmental systems they inhabit.
For example, when presenting new information, appropriate framing can help people make good decisions. The same information lands differently when framed in a more relatable context. Hearing that COVID-19 has a 1% case fatality rate might not mean much to you. But hearing that COVID-19 is several times deadlier than the flu helps anyone immediately understand it in relation to something they already know. When people are presented with information in an intuitive way, they are empowered to make wise choices.
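As a small worked sketch of this relational framing, the comparison above amounts to dividing the unfamiliar rate by a familiar baseline; the seasonal-flu figure used here is an illustrative assumption, not a medical reference:

```typescript
// Hypothetical sketch of relational framing: translating an absolute rate
// into a comparison with something people already understand.
// The flu baseline is an illustrative assumption, not a medical claim.

const covidCaseFatalityRate = 0.01; // the "1%" figure from the text
const fluCaseFatalityRate = 0.001;  // assumed ~0.1% for seasonal flu (illustration only)

const timesDeadlier = Math.round(covidCaseFatalityRate / fluCaseFatalityRate);

console.log(`Absolute framing: a ${(covidCaseFatalityRate * 100).toFixed(0)}% case fatality rate`);
console.log(`Relational framing: roughly ${timesDeadlier}x deadlier than seasonal flu`);
```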
In a world where apps are constantly competing for our attention, our mindfulness is under attack. Mindfulness is being aware, in a calm and balanced way, of what's happening in our mind, in our body, and around us. Mindfulness allows us to act with intention and to avoid a life that becomes a series of automatic actions and reactions, often based on fear and a scarcity mindset. But like any other capacity, mindfulness can be developed. You can help your users regain and increase their capacity for awareness, rather than racing to win more of their attention.
For example, a mail application that by default makes a sound and puts up a notification when mail is received could instead have the user opt in to turn notifications on. Another example is the Apple Watch Breathe app, which supports people in periodically taking a moment to simply focus on their breath.
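A minimal sketch of the opt-in default described above might look like the following; the setting names are hypothetical, not a real mail client's API:

```typescript
// Hypothetical sketch of opt-in notification defaults for a mail application.
// Setting names are illustrative assumptions, not a real product's API.

interface NotificationSettings {
  playSoundOnNewMail: boolean;
  showBannerOnNewMail: boolean;
}

// Notifications start off; the user deliberately turns them on,
// rather than having to discover how to turn them off.
const defaultSettings: NotificationSettings = {
  playSoundOnNewMail: false,
  showBannerOnNewMail: false,
};

// Called only after an explicit user choice, e.g., from a settings screen.
function enableNotifications(settings: NotificationSettings): NotificationSettings {
  return { ...settings, playSoundOnNewMail: true, showBannerOnNewMail: true };
}
```

The design choice here is simply which state requires effort: the default protects attention, and the user's intention, not the app's, is what switches interruptions on.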
Today’s technology has increasingly asymmetric power over the humans who use it. Machine learning, microtargeting, recommendation engines, and deepfakes are all examples of technologies that dramatically increase the opportunity for creating harm, especially at scale. To mitigate this, you can invest in understanding the delicate cognitive, social, economic, and ecological systems that your technology operates in, what harms your product may be generating, and ways to mitigate those harms.
For example, a product originally intended only for adults will almost inevitably be exposed to children as it scales; a platform originally intended for entertainment can become a target for disinformation; and a product that mostly benefits people in an industrialized country may mostly produce harms in a third world country. What feels like a remote possibility when you first launch becomes a guarantee as you scale to millions of people.
And even “good” things, when done to an extreme, will have unintended consequences. At first glance, Likes seem like a great signal about what the user wants to see more of. But at scale, they end up creating filter bubbles and fragmenting shared truth.
A book by Batya Friedman and David G. Hendry on how to put values at the center of your design process
A book by Sasha Costanza-Chock on an approach to design that is led by marginalized communities
A book by Richard H. Thaler and Cass R. Sunstein on the concept of “choice architecture” and how to enable wise choices
A book by Dan Ariely on how we make decisions and on understanding cognitive biases. See also Ariely’s Irrational Labs.
A comprehensive list of our inherent biases
Science-based insights for a meaningful life; inspiration and research to obsess over values and strengthen natural brilliance
Our take on organizing common human vulnerabilities and a guide to assess your current product