User experience literature is rife with discussion of trust. Your software should aim to earn the trust of its users and build a sense of credibility, because that trust improves both usability and perceived quality.

All of our interactions are, in some sense, rooted within a particular stratum of trust. We interact with the coffee shop barista in a very different way than we do with our spouse or a stranger on the street. I’ve got nothing against Denise at the local Starbucks, but I’m just not going to share my deepest emotions with her, nor (hopefully) will she share hers with me.

This social interaction analogy, however, reveals a missing component in the discussion of trust as it relates to UX. It’s always described as one-way.

We as software designers and developers need to gain the trust of our users, but what about the other way around? Our social relationships come with tacit two-way bindings, so to speak. I’ll tell you mine if you tell me yours. There is little discussion of when and how we should trust our users.

Your user isn’t a moron

Well, they might be, but whatever level of moron or genius they are, show some faith that they have at least a minimum understanding of your software’s domain. Your software is designed to help its users complete a specific set of tasks. Presumably they know what those tasks are and possess the skills and knowledge that make those tasks relevant to them in the first place.

By developing a deep understanding of the types of people our software serves, we can both improve the software itself and streamline the process of creating it. One of the most useful things a UX designer can do is to study their users and generate personas—documents that characterize specific fictional users: their goals, their personalities, and their knowledge. Not only does this help us decide what we need to build into our software, but also—and sometimes more importantly—what not to build, and why our users deserve our trust.

An example

I was recently involved in a two-week discovery process for an ambitious project: a suite of apps designed to help physicians learn more effectively and improve medical outcomes. We spent several days outlining user journeys and creating stories.

An important component of the app asked physicians to contribute content by summarizing medical literature, so the question of copyright came up. How would we, the app designers and builders, enforce copyright? How would we prevent nefarious quacks from simply reprinting professional journal material as their own?

Obviously we couldn’t program our way through a task this insurmountably complex and nuanced.

What about the company that owns the product? Surely they could implement a system of review whereby all submitted content was checked for copyright infringement. In that case, though, they would cease to be a medical content company and morph into a massive legal review department (with a small offshoot left to maintain some… what were they in the business of again… medical content?). Not to mention, all copyright liability would henceforth fall squarely in their lap.

This was neither advisable nor sustainable.

Ultimately it hit us: how about we just trust them? Considering the target user—highly educated medical professionals with a strong work ethic and a desire to wield influence—what was the likelihood of serious and prevalent copyright infringement? As soon as we took a step back and reviewed our well-defined user personas, it was clear. This was a very unlikely problem that did not warrant such an all-encompassing solution. A simple content flagging mechanism would suffice.
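
To make that concrete, here is a minimal sketch of what such a flagging mechanism might look like. Everything in it (the names, the review threshold, the in-memory store) is a hypothetical illustration, not the system we actually built:

    // Hypothetical sketch: trust contributors by default, but let readers
    // flag suspect content. Repeated, independent flags trigger human review.

    type FlagReason = "copyright" | "inaccurate" | "spam";

    interface ContentFlag {
      contentId: string;
      reporterId: string;
      reason: FlagReason;
    }

    const REVIEW_THRESHOLD = 3; // distinct reporters before a human looks

    const flagsByContent = new Map<string, ContentFlag[]>();

    function flagContent(flag: ContentFlag): "recorded" | "needs-review" {
      const flags = flagsByContent.get(flag.contentId) ?? [];

      // One flag per reporter per item, so no single user can pile on.
      if (!flags.some((f) => f.reporterId === flag.reporterId)) {
        flags.push(flag);
        flagsByContent.set(flag.contentId, flags);
      }

      return flags.length >= REVIEW_THRESHOLD ? "needs-review" : "recorded";
    }

    // Three different physicians flag the same summary:
    flagContent({ contentId: "summary-42", reporterId: "dr-a", reason: "copyright" });
    flagContent({ contentId: "summary-42", reporterId: "dr-b", reason: "copyright" });
    flagContent({ contentId: "summary-42", reporterId: "dr-c", reason: "copyright" });
    // The third call returns "needs-review".

The point is the inversion of responsibility: instead of reviewing everything before it goes out, the system trusts contributions by default and escalates only when the community objects.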

Use your user

Covet the knowledge and skills your users possess, and put them to use if you can. For the project we just discussed, this was a clear guiding principle from the outset.

Crowdsourcing isn’t new; it has long been an effective means of gathering amounts of data and content that would otherwise be impossible to collect. Generally, the more user contribution and user vetting a crowdsourcing system employs, the better the quality of its content.
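
As a rough illustration of that vetting principle, here is a tiny, hypothetical scoring function; the smoothing scheme and the numbers are my own invention, not any particular product’s. The more independent users confirm a report, the more confidence the system places in it:

    // Hypothetical sketch: score a crowdsourced report by its votes.

    interface Report {
      confirmations: number; // users who said "yes, still there"
      denials: number;       // users who said "no, it's gone"
    }

    // Laplace-smoothed confidence: 0.5 with no votes, moving toward the
    // observed confirmation rate as independent votes accumulate.
    function confidence(report: Report): number {
      const { confirmations, denials } = report;
      return (confirmations + 1) / (confirmations + denials + 2);
    }

    console.log(confidence({ confirmations: 0, denials: 0 })); // 0.5, unknown
    console.log(confidence({ confirmations: 8, denials: 1 })); // ~0.82, trusted
    console.log(confidence({ confirmations: 1, denials: 7 })); // 0.2, discounted

A system built this way could, for instance, stop displaying a hazard once its confidence falls below some cutoff, letting the crowd both create and retire its own content.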

Another example

[Image: Waze app report screen. So many “waze” to report things!]

I have recently started using Waze, a largely fantastic and popular turn-by-turn navigation app that has embraced its users in a way other navigation apps have not. It relies heavily on user-submitted reports of traffic, road hazards, police activity, and more, allowing it to provide relatively accurate real-time information that other apps cannot. When, for example, I am warned about an upcoming hazard, and lo and behold there is indeed a large potted geranium rolling down the highway, I feel prepared, connected, and empowered. I am happy.

Waze has essentially said to its users, “we trust you to tell us what’s happening.” Consequently, users feel a sense of ownership of and loyalty to the app. It’s not just an app they use; they are part of the app and its community.

The consequences of mistrust

Waze is a great app, but mistrust can creep into even the most well-intentioned software. There is one feature of this app that bugs the heck out of me, revealing a dark side that is unfortunately ever-present.

[Image: Waze app danger warning]

Imagine you’re driving, with your iPhone caddied alongside you and turn-by-turn navigation in progress. Suddenly, for some unusual but irrelevant reason, you realize you need to go somewhere else instead. You touch the Waze interface to enter a new destination and, bam, an alert pops up warning you that you shouldn’t use the app while driving. There’s an option to cancel, or to declare that you are a passenger, in which case everything’s OK.

I hope the massive problem with this interface workflow is clear.

  1. User decides to do something marginally dangerous.
  2. App claims it does not want you to do this, and presents a choice.
  3. User now burdened with reading material and decision making while driving.
  4. Situational danger has now increased.
  5. User swerves to avoid the compact car they swear wasn’t there before.
  6. User lies to the app and decides to impersonate a passenger.

I suspect one or more lawyers seeking to avoid liability were responsible for this feature, since it does not fit the M.O. of an otherwise excellent set of user-focused features.

By deciding to mistrust the user in this situation, the app has most likely increased the very danger it is supposedly aiming to reduce. Had Waze just trusted the user’s discretion and allowed them to do what they needed to do, it probably would have been safer. In most cases, the user is just going to lie, say they’re a passenger, and then do what they need to do anyway. Any interaction that is based on a lie is fundamentally flawed, and the bond of trust that Waze has worked so hard to build is eroded.
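
To sharpen the contrast, here is a hypothetical sketch of the two designs. The interface and function names are mine, invented for illustration; this is not Waze’s actual code:

    // Hypothetical sketch of two designs for the same moment.

    interface Ui {
      showBlockingDialog(msg: string, options: string[]): Promise<string>;
      showTransientNotice(msg: string): void;
    }

    // The mistrusting design: halt everything and demand a decision,
    // which the driver will most likely answer with a lie.
    async function editDestinationBlocking(ui: Ui): Promise<boolean> {
      const choice = await ui.showBlockingDialog(
        "Typing while driving is dangerous.",
        ["Cancel", "I'm a passenger"],
      );
      return choice === "I'm a passenger"; // proceeds only via the lie
    }

    // The trusting design: warn without blocking, and let the user's
    // own judgment govern what is safe.
    function editDestinationTrusting(ui: Ui): boolean {
      ui.showTransientNotice("Careful: consider pulling over to type.");
      return true; // always proceed
    }

The trusting version takes the reading and decision making off the driver’s plate entirely; the warning is still there, but it never stands between the user and what they were going to do anyway.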

A big fat scary caveat

Or, the consequences of trust

A long time ago, on a web far away, we were warned not to trust anyone or anything we found on the internet. Who knows what happened to all those psychopaths who used to populate chat rooms. My, how things change. Today, the internet is just about the largest source of truth in existence. People find loving spouses, buy groceries and medications, get paid, file income taxes, and read the news on the internet. We have all learned to trust in the security and veracity of websites (even if they don’t necessarily trust us).

But is it premature to forget your mama’s warning from 1998? As creators of the web’s software, we designers and developers have an ethical duty to do what we can to promote the truth, to justify the trust we engender. But this is not so simple; as Facebook CEO Mark Zuckerberg puts it:

Identifying the “truth” is complicated. While some hoaxes can be completely debunked, a greater amount of content, including from mainstream sources, often gets the basic idea right but some details wrong or omitted. An even greater volume of stories express an opinion that many will disagree with and flag as incorrect even when factual.

The psychopaths are still out there. Or more accurately, the politically motivated, and the misinformed, and the highly opinionated are still out there. The internet, fortunately and unfortunately, provides an accessible, democratic medium for anyone to expound upon their views, and occasionally spread misinformation.

There is no simple answer to this; it’s an ethical dilemma whose answer will be borne out over time. For those of us who build and design the internet, the least we can do is to expect and encourage our users to maintain some skepticism about our own motivations.

In summary

Just as we try to establish relationships of varying levels of trust with our family, friends, coworkers, and acquaintances, so too should we software designers and developers try to create appropriately trusting, two-way relationships with our users.

Trust your users where appropriate, but don’t expect them to trust you. Be honest and ethical, and they will reward you with reciprocated trust and loyalty.

Just trust me.