Assessing User Impact

Josh Smith

With hundreds of methodologies a designer could employ to craft a design solution for their business, there comes a point where the effort invested exceeds the value gained from the solution, for both the business and its customers.

Although designers may feel comfortable with a tried-and-true design process that includes both collaborative and individual user research, modeling, or prototyping, it may not be necessary to do everything every time. In some instances the business may value several cycles of usability testing and high-fidelity end-to-end prototypes; in others, a quick whiteboard concept with developers may suffice.

So how do designers determine the right-size design process for each business initiative? How do we figure out when to go big on our design process, and when to go lean?

Implement a risk-based decision-making framework that informs design strategy and investment.

Quantifying User Impact

As a design leader over the past decade, I’ve had the honor and opportunity to partner with my teams to build and refine a framework for product and user experience designers to quickly assess how much time and effort to invest in each initiative for their business.

The approach is to rate five indicators of user impact on various scales, add up the numbers, and group ranges by impacts of High, Medium, Low, and Marginal. Then, your design team determines the fidelity of the design process you want to apply for each impact.

Below are the five indicators we’ve determined over the years are the most helpful.

1. Volume

What percentage of total users within the business line are likely impacted by the outcomes of this initiative?

Scale: 1–10

2. Frequency

How frequently will users likely interact with the new or changed experiences?

Scale: 1–10

3. Criticality

How critical to these users’ operations are the experiences we are changing or introducing?

Scale: 1–5

4. Complexity

How complex are this initiative’s workflows or tasks for users to accomplish?

Scale: 1–5

5. Ubiquity

How unique to the business and its intellectual property are both the problem and the interactions that comprise the solution?

Scale: 1–5

The final impact score is the sum of these five indicators. A final score ranges from 5 to 35 and can be bucketed into the following impact levels.

  • High (26–35)
  • Medium (16–25)
  • Low (6–15)
  • Marginal (5)
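The scoring and bucketing above can be sketched in a few lines of code. This is a minimal illustration, not part of the article’s framework: the function names are mine, while the indicator scales and bucket thresholds come directly from the lists above.

```python
def impact_score(volume, frequency, criticality, complexity, ubiquity):
    """Sum the five indicators into a single impact score (5-35).

    Volume and frequency are rated 1-10; criticality, complexity,
    and ubiquity are rated 1-5, per the scales above.
    """
    assert 1 <= volume <= 10 and 1 <= frequency <= 10
    assert all(1 <= v <= 5 for v in (criticality, complexity, ubiquity))
    return volume + frequency + criticality + complexity + ubiquity


def impact_bucket(score):
    """Map a score to the High / Medium / Low / Marginal buckets."""
    if score >= 26:
        return "High"
    if score >= 16:
        return "Medium"
    if score >= 6:
        return "Low"
    return "Marginal"  # only a score of 5 lands here


# Example: a daily-use, company-wide experience (think authentication)
# of moderate criticality and low complexity/uniqueness.
score = impact_score(volume=9, frequency=10, criticality=3,
                     complexity=2, ubiquity=2)
level = impact_bucket(score)  # score of 26 lands in the High bucket
```

Note how the example mirrors the point made later in the article: an experience that is neither critical, complex, nor unique can still score High purely on volume and frequency.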

Why the different scales for volume and frequency?

Over time, our teams have realized that volume and frequency can be measured on a finer 10-point scale, and that they can carry more weight in the impact score. For instance, even if an experience is not critical, complex, or unique, if every user is using it every day (e.g. authentication, navigation), then it’s going to have a high impact. It’s also easier to define criticality, complexity, and ubiquity on a 5-point scale, because the delta between the points is greater.

The assessment is intended to be done on the fly in conversations with your business or technical partners, so you can immediately communicate the effort and complexity of the design work the initiative needs.

Based on this initial assessment, you can define various design processes to ensure you create a solution that balances risk, cost, and speed. Below is an example of what our teams have used in the past.

This is just a starter to get you running with your own assessment. Your team can make any determinations about what to do at each impact level. You may want to go super granular about methodologies at each level (e.g. flow diagrams, wireframes, ideation sessions, etc.), or you may want to impose forcing functions at each level for review or acceptance, as we have done. It’s up to what makes the most sense for your design team at your business.

An important benefit of a framework like this, I believe, is that it improves the efficiency of your team by aligning effort to impact: spending less time on business initiatives with low impact, and more time on those with higher impact. You can use it to determine when to say no to design requests, and when to say, ‘This initiative poses too significant a risk to not include us, and here’s why…’

It may also help train your team to think about the relationship between user and business impact, and to communicate with your business partners (e.g. product owners, product managers) about the relationship between design effort and business-user value.

A decision-making framework also supports articulating and escalating risk

Sometimes the business comes to design asking for a solution in a week, but the user impact is so high that the framework calls for effort known to take multiple weeks. This is where the framework can be most helpful: you can show your business partner the level of effort needed to reduce risk, cross off the methodologies that must be sacrificed to meet the time constraint, and explain the impact of those trade-offs.

If this is helpful, feel free to use it. If you noticed ways to improve it, all the better!