Posted on June 19, 2012 by hillel on Making Things Special

The Dark Art of Defining Success in User Experience Design

The technology industry is filled with engineers. Logic and reason are the tools they love best. This makes sense: programming is essentially a logical puzzle. However, when it comes to user experience, these tools are not always the right fit. When a piece of software isn’t performant, we measure its speed and its footprint, and make changes until it conforms to our definition of success. This is pretty straightforward and makes a lot of sense. And when it comes to specific user experience problems (e.g. users are deleting things by accident, users can’t find this button, etc.) the same approach is relatively effective: identify the problem in very specific terms, measure the baseline performance of the UX, propose changes, test the changes with a representative audience sample, measure, iterate, deploy. Done and done.
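
For these narrow, well-defined problems, the loop really is mechanical. Here’s a minimal sketch, with invented numbers and a hypothetical “accidental deletion” metric, of the measure-change-measure step: a plain two-proportion z-test comparing the baseline design against a proposed fix.

```python
import math

def two_proportion_z_test(hits_a, n_a, hits_b, n_b):
    """Two-sided z-test: are these two rates plausibly the same?"""
    p_a, p_b = hits_a / n_a, hits_b / n_b
    pooled = (hits_a + hits_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return p_a, p_b, p_value

# Invented numbers: accidental deletions per 10,000 sessions,
# old design (baseline) vs. the proposed fix.
baseline, fixed, p = two_proportion_z_test(142, 10_000, 97, 10_000)
print(f"baseline {baseline:.2%}, proposed {fixed:.2%}, p = {p:.4f}")
if p < 0.05:
    print("The drop is unlikely to be noise: deploy, then keep iterating.")
```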

The problem comes when we are designing v1 products or doing major redesigns of existing products. In the latter case we at least have the previous version of the product against which we can measure. But even there, with these broader design efforts, there are so many factors at play that the measurements become a muddle. Let’s take a simple example and see just how complicated it can be to understand the results:

Let’s say company X has two goals in redesigning their software. First, they’d like to increase their customers’ engagement with the product. They measure engagement based on things like page views, minutes spent on the site, etc. Second, they’d like to streamline the clunky user interface for some of the main functions of their software. To solve these problems, the software designers create a polished, streamlined new user experience with some key new features designed to pique the user’s interest. Sounds good, right?
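
If “engagement” really is just page views and minutes, the metric itself is trivial to compute. A sketch, with hypothetical session records, of what company X is actually optimizing:

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class Session:
    user_id: str
    page_views: int
    minutes: float

def engagement_summary(sessions: list[Session]) -> dict[str, float]:
    """The crude aggregates the redesign will be judged on."""
    users = {s.user_id for s in sessions}
    return {
        "sessions_per_user": len(sessions) / len(users),
        "avg_page_views": mean(s.page_views for s in sessions),
        "avg_minutes": mean(s.minutes for s in sessions),
    }

# Hypothetical data: a few sessions recorded under the current design.
sample = [Session("u1", 12, 9.5), Session("u1", 4, 2.0), Session("u2", 7, 6.3)]
print(engagement_summary(sample))
```

Note that a wasted page view and a delighted one are indistinguishable in these numbers, which is exactly the trap below.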

The new version of the software is rolled out and what happens? Engagement goes way, way down. Are the new “engaging” features failing, or is the streamlining eliminating a bunch of page views and minutes that users used to waste in the old clunky design? Perhaps if we could measure exactly how much time was wasted in the old design, we could compare that to the deltas we’re seeing, and the remaining change would give us an indication of whether the new features are having any impact. Additionally, we should measure specific user interaction with the new features. That will give us some sense. But in the meantime, the overall engagement numbers are way, way down, and the subtleties of these tradeoffs are lost in the discussion with the CEO and the CFO. Furthermore, the deadline and constrained resources for the project mean that the analytics code needed to measure the efficacy of the new features never actually got written. But wait… there’s more.
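
For what it’s worth, that analytics code doesn’t have to be elaborate. A sketch, assuming a hypothetical track() helper posting to a made-up endpoint, of the feature-tagged event logging that would let you read the new features’ engagement separately from the site-wide totals:

```python
import json
import time
from urllib import request

ANALYTICS_URL = "https://analytics.example.com/events"  # hypothetical endpoint

def track(user_id: str, feature: str, action: str) -> None:
    """Record one interaction, tagged by feature, so new-feature
    engagement can be queried apart from overall page-view totals."""
    event = {
        "user_id": user_id,
        "feature": feature,  # e.g. "activity_feed" (new) vs. "legacy_nav"
        "action": action,    # e.g. "open", "click", "dismiss"
        "ts": time.time(),
    }
    req = request.Request(
        ANALYTICS_URL,
        data=json.dumps(event).encode(),
        headers={"Content-Type": "application/json"},
    )
    request.urlopen(req, timeout=2)  # fire and forget; response body ignored

# With events tagged this way, "are the new features failing?" becomes a
# query over feature == "activity_feed" rather than a guess from site totals.
```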

A small percentage of the site’s users are complaining. Loudly. About the redesign. The “streamlining” moved some things around and they are pissed off. What percentage of the user base do they represent? It’s impossible to know. Half a percent? Ten percent? Would you piss off ten percent of your users to make the other ninety percent way happier? Which ten percent did you anger: your most valuable customers or your least valuable? Do you even have the tools to measure that? Are they leaving the service or just venting? Oh yeah, and the press could go either way. They may love it. Or hate it. Or ignore it altogether. And it’s not clear their opinion is anything other than ranting, or that it has any impact on your potential customers’ purchase decisions.
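
If, and it’s a big if, you can tie complaints back to accounts, a few joins against revenue and activity data go a long way toward answering those questions. A sketch with invented records:

```python
# Invented records: trailing-12-month revenue per user, sessions in the
# 30 days after the redesign, and the users who wrote in to complain.
revenue = {"u1": 1200.0, "u2": 0.0, "u3": 90.0, "u4": 45.0}
sessions_after = {"u1": 22, "u2": 0, "u3": 15, "u4": 1}
complainers = {"u1", "u4"}

def complainer_report(revenue, sessions_after, complainers):
    share = len(complainers) / len(revenue)
    rev_share = sum(revenue[u] for u in complainers) / sum(revenue.values())
    # A complainer with zero post-redesign sessions looks like churn;
    # one who is still logging in is probably just venting.
    churned = [u for u in complainers if sessions_after.get(u, 0) == 0]
    return {
        "complainer_share": share,
        "complainer_revenue_share": rev_share,
        "complainers_apparently_churned": len(churned),
    }

print(complainer_report(revenue, sessions_after, complainers))
```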

It gets complicated.

Some companies solve this problem by just measuring overall customer satisfaction or financial metrics. These methods don’t validate the design changes individually, just the bigger picture, which, frankly, is the point of the design in the first place. And when it comes to a new product, there’s no previous version against which you can compare.

What’s a company to do? Throw up their hands? Trust the UX practitioners to not screw it up too badly? Listen to their gut instincts? Spend weeks or months coming up with elaborate mechanisms and schemes to measure the things that can be measured? (You’d be surprised how much money, time, and energy is often wasted in this pursuit because some software executive won’t accept the degree of ambiguity involved in these broad-stroke design efforts.)

We believe there are three things that help you feel good (and responsible) when it comes to major new design work:

1) Acceptance. Understand that the factors in success are often too complicated to tease apart, and there are some things you simply won’t be able to measure in as detailed a fashion as you would like.
2) Big Picture Perspective. Focus on the bigger picture around customer satisfaction, winning key reviews, customer testimonials, and the excitement of your own employees. (Don’t underestimate this. Having a great design can do wonders for morale.)
3) Focus on the Few. When you need to focus on the details, make them count. The screen on which you ask for money is a great place to get very specific about measurement, conversion rates, and lots of iterative testing (see the sketch below).
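
As one concrete version of point 3, here’s a sketch, with hypothetical numbers, of the kind of funnel measurement that pays off on the screen where you ask for money. The biggest relative drop-off tells you where to aim the next iteration:

```python
# Hypothetical funnel counts for one iteration of the payment screen.
funnel = [
    ("reached_payment_screen", 9_800),
    ("entered_card_details",   4_100),
    ("clicked_pay",            3_650),
    ("payment_succeeded",      3_400),
]

def funnel_report(funnel):
    top = funnel[0][1]
    prev = top
    for step, count in funnel:
        print(f"{step:<24} {count:>6}  "
              f"{count / top:6.1%} of top  {count / prev:6.1%} of previous")
        prev = count

funnel_report(funnel)
# Here the big relative drop is entering card details (41.8% of previous),
# so that's the step to redesign and re-measure first.
```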

In a world of fast iteration, small teams, and constrained resources, the industrial-strength testing and measurement efforts required to shed light on even a fraction of the mechanics behind the success or failure of a broad user experience design effort are simply not realistic for most projects.

Executives at companies large and small don’t like it when questions have fuzzy answers. It makes them feel like people aren’t really trying. And some engineers are all too happy to tell them that UX design can be reduced completely to a science. But sometimes reality is fuzzy. Once you accept that, you can focus on a few things that matter, and not get distracted by the broader set of things that are simply harder to know.
