How to Measure Any Design Decision
People hire value-based designers with the fundamental expectation that they’ll economically benefit the business. And measurement, which is the process of determining the effects of design decisions, is a natural extension of the value-based designer’s business focus.
Measurement is the only rational way to justify the existence of design work. All value-based design decisions must be measured with respect to a primary metric that directly affects the business’s revenue or expenses.
Primary metrics
Some of the most commonly used primary metrics include:
- Average revenue per user (ARPU): The total amount of money the business makes in new transactions over a given period, divided by the total number of visitors during the same period.
- Conversion rate: The percentage of visitors who become paying customers over a given period. Only revenue-generating transactions count toward a business’s conversion rate. For example, conversions from trial to paid plans factor into the overall conversion rate, but signups for free trials or mailing lists do not, since no money changes hands.
- Average order value (AOV): The sum of all revenue over a given time period, divided by the number of orders during the same period. This metric is especially useful for online stores. When AOV increases, the business increases its top-line revenue without needing to increase ad spend or other inbound traffic initiatives. Upsells, downsells, cross-sells, and add-ons are the most common ways for online stores to increase their AOV.
- Lifetime value (LTV): Especially useful for subscription and software businesses, LTV is the average amount the customer spends for the duration of their relationship with the business. This is averaged across all customers and frequently segmented by plan type, signup date, and/or business size.
- Repeat customer rate: The proportion of customers who come back to place subsequent orders. Related to this metric is average orders per customer, as well as 30-, 60-, 90-, and 365-day reorder rates.
- Churn: The share of customers who cancel their subscription plans over a 30-day period. Reducing churn is obviously a good thing, and its impact compounds, since retention multiplies month over month: at 5% monthly churn, only about 54% of a cohort (0.95^12) is still subscribed after a year.
- Monthly recurring revenue (MRR): The sum of all revenue a business makes from subscription or other recurring charges in a month, minus the revenue lost from churn. Monthly recurring revenue is the lifeblood of most subscription businesses, and conversion rate increases usually map to a corresponding boost in MRR. You might also see annual recurring revenue for semiannual or annual subscription businesses.
- Upgrade rate: The proportion of customers who take specifically targeted upgrade offers before checking out. For subscription businesses, this is often the monthly share of customers who upgrade their plans minus the monthly share of customers who downgrade their plans.
- Refund rate: The share of customers who request refunds (when they contact the business) or chargebacks (when they contact their credit card issuer) over a 30-day period.
Most stores will want to pay attention to average revenue per user, conversion rate, and average order value as their primary metrics; the sketch below shows the arithmetic for those three.
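To make that arithmetic concrete, here’s a minimal Python sketch. All of the figures are hypothetical, invented purely for illustration:

```python
# Hypothetical figures for one reporting period.
visitors = 48_000      # unique visitors
order_count = 1_104    # orders placed during the same period
revenue = 104_880.00   # total revenue from those orders

arpu = revenue / visitors                 # average revenue per user
conversion_rate = order_count / visitors  # share of visitors who paid
aov = revenue / order_count               # average order value

print(f"ARPU:            ${arpu:.2f}")            # ≈ $2.19
print(f"Conversion rate: {conversion_rate:.2%}")  # ≈ 2.30%
print(f"AOV:             ${aov:.2f}")             # ≈ $95.00
```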
Secondary metrics
In addition to the above, every business has many secondary metrics, which are any non-revenue-generating metrics that can be used as a reasonable proxy for primary business goals. Put another way, customer behaviors predict business performance – if you find the right behaviors to measure.
For example, corporate messaging platform Slack discovered that teams that send 2,000 messages in aggregate are 93% likely to convert to paid plans. It stands to reason, then, that one of Slack’s goals is to get team members to send more messages. They’ll get more value from Slack as a result, and Slack will perform better as a business. Here, sending messages is the secondary metric for the primary metric of conversion to paid plans.
Finding useful metrics
Finding useful secondary metrics for a business is not easy, and they may shift over time. Yet doing so will result in a greater focus on what matters to the business – both experientially (for the customers) and economically (for the business’s continued success).
All secondary metrics are rooted in observable customer behavior. For example, completing onboarding is an observable behavior, while conversion (and hence increased ARPU) is the end result for the business.
Secondary and primary metrics correlate when changes in one correspond to changes of similar magnitude in the other. Use your analytics tool to track each secondary metric as an event, and then see what happens to your primary metrics when a given event increases or decreases in frequency.
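As a sketch of that correlation check, here’s what it might look like once you’ve exported weekly totals from your analytics tool. The event counts are invented, and `pearson` is a hand-rolled helper rather than any particular library’s API:

```python
from math import sqrt

# Hypothetical weekly totals exported from your analytics tool:
# a secondary-metric event and the primary metric for the same weeks.
onboarding_completions = [310, 295, 340, 372, 365, 401, 388, 420]
weekly_conversions     = [41, 38, 45, 49, 47, 55, 52, 58]

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / sqrt(var_x * var_y)

r = pearson(onboarding_completions, weekly_conversions)
print(f"Correlation: {r:.2f}")  # values near 1.0 suggest a useful proxy
```

Correlation isn’t causation, of course, but a secondary metric that doesn’t even correlate with a primary one is safe to discard.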
Some correlations are well established. For example, most software businesses see a significant reduction in churn if a greater share of new customers successfully completes the product’s onboarding. You may want to measure the share of customers who successfully complete onboarding – and segment those who complete your onboarding from those who don’t, to see if there are any major differences in behavior.
And online stores tend to increase their AOV when they add upsells and minimum-order thresholds for free shipping. Upsell take rate acts as a good secondary metric for most stores to measure.
Explore which secondary metrics apply to the industry you’re designing for, and think about which design improvements could move them for the business.
Primary metrics supersede all other goals
Value-based designers only focus on goals that have a quantitative economic benefit to the business. Goals that are often treated as economically valuable, but that don’t commonly have direct economic impact, include:
- Mailing list signups. Yes, you should be running a mailing list, and you should be selling to that mailing list consistently. As a result, new subscribers can have a clear economic value. But that value changes as your list grows, and you may not have a clear way to calculate it right away. Unless you know the specific dollar value of a new subscriber, you can’t count on new signups having economic value.
- Engagement. People can click all they want on a page, but the only click that matters to a business is on the “place order” button.
- Pages per session. If people are browsing your site a lot, are they fascinated or confused? Would a focused, short interaction help or harm the business, instead?
- Adds to cart. This might result in increased transaction volume, but it might not. It’s far better to measure every step of a conversion funnel (product → cart → checkout → confirmation, for example) to gain a comprehensive portrait of customers’ behavior; see the sketch after this list.
- Views of the pricing or signup pages. This is a similar issue to “adds to cart,” above. Increased upper-funnel traffic is not an adequate predictor of conversion lift.
- Social shares. Interest in your brand is great, but social performance is a poor predictor of revenue generation. In fact, social outlets often represent the least engaged, and least wallet-out, of all traffic to the business.
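Here’s a minimal sketch of the funnel measurement mentioned under “adds to cart,” using invented step counts:

```python
# Hypothetical visitor counts at each step of an online store's funnel.
funnel = [
    ("product page", 12_400),
    ("added to cart", 2_170),
    ("checkout", 1_030),
    ("confirmation", 640),
]

# Step-to-step conversion shows where customers drop off.
for (prev_step, prev_n), (step, n) in zip(funnel, funnel[1:]):
    print(f"{prev_step} -> {step}: {n / prev_n:.1%}")

print(f"overall: {funnel[-1][1] / funnel[0][1]:.1%}")
```

A large drop at any single step tells you where design attention is likely to pay off.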
If a value-based designer can’t make the connection between a given metric and its economic impact, it is, without exception, not worth their focus. Businesses neglect their revenue-generating abilities at their peril, and they should be well aware of this when determining how to best leverage their design talent.
Analytics
Value-based designers measure the results of their design decisions through analytics. In addition to the analytics you look at as part of your research process, you should:
- Assess any revenue-leaking issues, both across segments (e.g., what works for one browser might not in another) and pages (e.g., what works on one product page may not on another).
- Maintain a record of when all new design decisions were made, so that you can track the long-term progress of design improvements. The best way to do this is through annotations, which we’ll explore shortly.
- Create reports to show how the business is improving as a result of better design.
Now, let’s go through how to actually measure the impact of your work. The best way to do this – in fact, usually the only way that people measure design at all – is through your business’s analytics tools.
So, let’s talk about how to work with Google Analytics to figure out the ramifications of your design work.
Annotations
The most common way to track changes over time in analytics is through annotations, which indicate a specific date that a change was made. Annotations are essential for tracking:
- When you kicked off an experiment.
- When you stopped an experiment.
- When you rolled out a winning variation.
- When you made any one-off change to a design.
Annotations give you a master record of your design changes, which is useful for measuring how they affect the business.
As nice-to-haves, it also helps to establish annotations for:
- Every change in test parameters (a bandit variant lost, for example).
- Ad campaigns.
- Holidays (if you do a Black Friday sale, for example).
- Outages.
- New feature rollouts.
- New blog posts.
- Significant events regarding your competitors.
Annotations are terrific for explaining your Google Analytics data, tying it to real-world events. Why did conversions drop on this day? Why was there a spike in sales here?
Creating annotations is easy. On any view, click the little arrow tab at the bottom of the date range graph. Doing so reveals a link at the right to create a new annotation. Use the date picker and description field to enter your annotation.
Do what you can to ensure that annotations have a consistent nomenclature. I use “Type: Event” for mine. For example:
- Launch: New pricing page test
- Stop: New pricing page test
- Design change: New pricing page
- AdWords: Are you still suffering from high prices?
This allows you to refer back to annotations more easily.
Reference
On the graph, an annotation appears as a little quote balloon at the bottom. You can view a whole date range’s annotations by opening the same tab that you used to create them.
Annotations let you gauge the impact of design decisions (“we launched the redesign here and this happened”), clarify events (“we launched a huge AdWords campaign on this date, so the conversion rate dropped as less qualified traffic came in”), and specify long-term projects (“our sales are seasonal and began around March 15 last year; let’s watch out for that this year”).
For example
At Draft, annotations helped us accurately measure how much we helped skincare brand BOOM! improve their conversion rate and average order value after we did our initial round of one-off fixes.
According to GA, their average page load time was 16.8 seconds. That is crazy high, and it was almost certainly harming conversion. I cut it by a third, to roughly 11 seconds, by compressing all images, removing redundant JavaScript calls, and uninstalling a couple of apps that were bloating the page weight.
To be clear, a one-third improvement still doesn’t result in the world’s best load time. We could always improve this – but then we’d be cutting into how the page currently operates, and I wanted to keep everything else fairly constant to start.
After we launched these changes, BOOM!’s conversion rate went up by 9.51%.
So, to recap, Draft:
- Looked at a store’s analytics for 5 minutes.
- Did a few things that took maybe an afternoon to implement.
- Made the store a lot of extra money, in perpetuity.
And we were able to point to the time periods before and after those changes were made, as well as the year-over-year change, to assess the magnitude of improvement.
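The before-and-after arithmetic itself is simple. Here’s a sketch with invented conversion rates (not BOOM!’s actual figures):

```python
# Hypothetical conversion rates over equal-length windows.
before = 0.0210  # 2.10% in the window before the changes launched
after = 0.0230   # 2.30% in the window after

lift = (after - before) / before
print(f"Relative lift: {lift:.2%}")  # ≈ 9.52% with these invented numbers
```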
Impact may have lag time
Remember, though, that there may not be a clean relationship between when you do something and its corresponding impact. Checking secondary metrics can be useful, since they tend to respond more quickly, but they can also mislead you about what matters most to the business’s health.
If you run an online store, changes will be felt quickly, unless the store has a lot of repeat customers. If the past month’s share of repeat orders is higher than usual, it’s likely that the repeat customers are already familiar with the store and might be less swayed by any changes to it. In this case, track existing customers in a separate segment, and run separate reports on them when measuring your impact.
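Here’s a minimal sketch of that segmentation, assuming you can export orders with a customer ID and a flag for whether it’s the customer’s first order. The data is invented:

```python
# Hypothetical export: (customer_id, order_total, is_first_order).
orders = [
    ("c1", 74.00, True), ("c2", 129.00, True), ("c1", 58.50, False),
    ("c3", 212.00, True), ("c2", 95.00, False), ("c4", 61.00, True),
]

new_totals = [total for _, total, first in orders if first]
repeat_totals = [total for _, total, first in orders if not first]

print(f"New:    {len(new_totals)} orders, AOV ${sum(new_totals) / len(new_totals):.2f}")
print(f"Repeat: {len(repeat_totals)} orders, AOV ${sum(repeat_totals) / len(repeat_totals):.2f}")
print(f"Repeat share: {len(repeat_totals) / len(orders):.1%}")
```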
If you work for a software business, impact is likely to have a non-negligible lag time. With a 14-day trial, for example, you’ll understand the impact of significant changes 14 days after deploying them. In your analytics tool, add 14 days to any time-scale calculations that you perform.
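Here’s a sketch of that offset, assuming a 14-day trial; the deploy date is invented:

```python
from datetime import date, timedelta

TRIAL_LENGTH = timedelta(days=14)  # your trial period
deploy_date = date(2024, 3, 1)     # hypothetical launch of a design change

# Conversions can't reflect the change until trials that started after
# the deploy have had a chance to finish, so shift the window forward.
window_start = deploy_date + TRIAL_LENGTH
window_end = window_start + timedelta(days=30)  # e.g., a 30-day report

print(f"Measure conversions from {window_start} to {window_end}")
```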
Customer support scores
You shouldn’t look only at analytics when considering secondary metrics. For example, most support platforms offer internal tools that indicate customer satisfaction. The support team typically pays careful attention to these metrics, using them to understand how well the team is doing. The value-based designer can also use them to measure improvement over time. Good design means a lower support load, after all.
Ask for access to the business’s customer support tool. Then, review the monthly volume of support tickets, as well as any metrics on customer satisfaction. Track how they’re changing over time.
If you’re using support inquiries to create new design decisions, you should also track whether there’s a significant change in the volume of inquiries after new design decisions roll out to all customers.
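Here’s a minimal sketch of that before-and-after check, assuming monthly ticket counts exported from your support tool. The counts are invented:

```python
# Hypothetical monthly support ticket volume, before and after a design
# change rolled out to all customers at the end of June.
before = {"Apr": 412, "May": 398, "Jun": 405}
after = {"Jul": 361, "Aug": 344, "Sep": 350}

avg_before = sum(before.values()) / len(before)
avg_after = sum(after.values()) / len(after)
change = (avg_after - avg_before) / avg_before

print(f"Tickets/month: {avg_before:.0f} -> {avg_after:.0f} ({change:+.1%})")
```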
Be skeptical
When it comes to your data, it’s extremely healthy and reasonable to be skeptical of what is being reported. If something looks too good to be true, it might be. Keep digging to find the real story. It falls on the value-based designer to craft an honest portrait of the business’s health, no matter what that may look like.
Move the needle. Act without fear.