How to Measure Any Design Decision
People hire value-based designers with the fundamental expectation that they’ll economically benefit the business. And measurement, which is the process of determining the effects of design decisions, is a natural extension of the value-based designer’s business focus.
Measurement is the only rational way to justify the existence of design work. All value-based design decisions must be measured with respect to a primary metric that directly affects the business’s revenue or expenses.
Some of the most oft-cited primary metrics include:
- Average revenue per user (ARPU): The total amount of money the business makes in new transactions over a given period, divided by the total number of visitors during the same period.
- Conversion rate: The percentage of visitors who become paying customers over a given period. Only revenue-generating transactions apply to a business’s conversion rate. For example, conversions from trial to paid plans would factor into the overall conversion rate, but people who sign up for free trials or mailing lists would not, since no money changes hands.
- Average order value (AOV): The sum of all revenue over a given time period, divided by the number of orders during the same period. This metric is especially useful for online stores. When AOV increases, the business increases its top-line revenue without needing to increase ad spend or other inbound traffic initiatives. Upsells, downsells, cross-sells, and add-ons are the most common ways for online stores to increase their AOV.
- Lifetime value (LTV): Especially useful for subscription and software businesses, LTV is the average amount the customer spends for the duration of their relationship with the business. This is averaged across all customers and frequently segmented by plan type, signup date, and/or business size.
- Repeat customer rate: The proportion of customers who come back to place subsequent orders. Related to this metric is average orders per customer, as well as 30-, 60-, 90-, and 365-day reorder rates.
- Churn: The share of customers who cancel their subscription plans over a 30-day period. Reducing churn is obviously a good thing, and its impact compounds, since the share of retained customers multiplies month over month.
- Monthly recurring revenue (MRR): The sum of all revenue a business makes from subscription or other recurring charges in a month, minus the revenue lost from churn. Monthly recurring revenue is the lifeblood of most subscription businesses, and conversion rate increases usually map to a corresponding boost in MRR. You might also see annual recurring revenue for semiannual or annual subscription businesses.
- Upgrade rate: The proportion of customers who take specifically targeted upgrade offers before checking out. For subscription businesses, this is often the monthly share of customers who upgrade their plans minus the monthly share of customers who downgrade their plans.
- Refund rate: The share of customers who request refunds (when they contact the business) or chargebacks (when they contact their credit card issuer) over a 30-day period.
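To make these definitions concrete, here is a minimal sketch of how a few of the primary metrics above fall out of raw order data. The orders and visitor count are hypothetical, stand-in numbers, not figures from any real store:

```python
# Hypothetical 30-day sample: each order is (customer_id, amount).
orders = [("c1", 40.0), ("c2", 25.0), ("c1", 60.0), ("c3", 55.0)]
visitors = 400  # unique visitors over the same period
paying_customers = {cid for cid, _ in orders}

revenue = sum(amount for _, amount in orders)
arpu = revenue / visitors            # average revenue per user (per visitor)
aov = revenue / len(orders)          # average order value
conversion_rate = len(paying_customers) / visitors
repeat_rate = sum(
    1 for cid in paying_customers
    if sum(1 for c, _ in orders if c == cid) > 1
) / len(paying_customers)

print(f"ARPU: ${arpu:.2f}")                        # $0.45
print(f"AOV: ${aov:.2f}")                          # $45.00
print(f"Conversion rate: {conversion_rate:.1%}")   # 0.8%
print(f"Repeat customer rate: {repeat_rate:.1%}")  # 33.3%
```

Each metric is just arithmetic over the same period of data; the hard part is collecting that data reliably, which is what the rest of this chapter covers.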
In conjunction with the above, every business has many additional secondary metrics, which are any non-revenue-generating metrics that can be used as a reasonable proxy for primary business goals. Put another way, customer behaviors can predict business performance – if you find the right secondary metrics to measure.
For example, corporate messaging platform Slack discovered that teams that send 2,000 messages in aggregate are 93% likely to convert to paid plans. It stands to reason, then, that one of Slack’s goals is to get team members to send more messages. They’ll get more value from Slack as a result, and Slack will perform better as a business.
Finding useful metrics
Finding useful secondary metrics for a business is not easy, and they may shift over time. Yet doing so will result in a greater focus on what matters to the business – both experientially (for the customers) and economically (for the business’s continued success).
All secondary metrics are rooted in observable customer behavior. For example, completing onboarding is an observable behavior, while conversion (and hence increased ARPU) is the end result for the business.
Secondary and primary metrics correlate when changes in one correspond to changes of similar magnitude in the other. Use your analytics tool to track each secondary metric as an event, and then see what happens to your primary metrics when a given event increases or decreases in frequency.
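One simple way to check whether a candidate secondary metric tracks a primary one is to export both as weekly series and compute their correlation. This sketch uses made-up weekly counts (onboarding completions against conversions) purely for illustration:

```python
# Hypothetical weekly exports from an analytics tool: counts of a
# secondary event (onboarding completions) alongside a primary
# metric (conversions) for the same eight weeks.
onboarding_completions = [120, 95, 140, 160, 110, 180, 150, 130]
conversions            = [ 30, 24,  36,  41,  27,  45,  38,  33]

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x ** 0.5 * var_y ** 0.5)

r = pearson(onboarding_completions, conversions)
print(f"correlation: {r:.3f}")  # 0.998 -> strong positive relationship
```

A coefficient near 1 suggests the secondary metric moves with the primary one; remember that correlation alone doesn’t prove the behavior causes the conversion.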
Some correlations are well established. For example, most software businesses see a significant reduction in churn if a greater share of new customers successfully completes the product’s onboarding. You may want to measure the share of customers who successfully complete onboarding – and segment those who complete your onboarding from those who don’t, to see if there are any major differences in behavior.
And online stores tend to increase their AOV when they add upsells and minimum-order thresholds for free shipping. Upsell take rate acts as a good secondary metric for most stores to measure.
Explore what secondary metrics apply to the industry you’re designing for, and think about what design improvements could manifest for the business.
Primary metrics supersede all other goals
Value-based designers only focus on goals that have a quantitative economic benefit to the business. Goals that do not commonly have direct economic impact, but are often viewed as such, include:
- Mailing list signups. Yes, you should be running a mailing list, and you should be selling to that mailing list consistently. As a result, new subscribers can have a clear economic value. But that value changes as your list grows, and you may not have a clear way to calculate your list’s value right away. Unless you know the specific dollar amount of a new subscriber, you can’t count on new signups having economic value.
- Engagement. People can click all they want on a page, but the only click that matters to a business is on the “place order” button.
- Pages per session. If people are browsing your site a lot, are they fascinated or confused? Would a focused, short interaction help or harm the business, instead?
- Adds to cart. This might result in increased transaction volume, but it might not. It’s far better to measure every step of a conversion funnel (product → cart → checkout → confirmation, for example) to gain a comprehensive portrait of customers’ behavior.
- Views of the pricing or signup pages. This is a similar issue to “adds to cart,” above. Increased upper-funnel traffic is not an adequate predictor of conversion lift.
- Social shares. Interest in your brand is great, but social performance is a poor predictor of revenue generation. In fact, social outlets often represent the least engaged, and least wallet-out, of all traffic to the business.
If a value-based designer can’t make the connection between a given metric and its economic impact, it is, without exception, not worth their focus. Businesses neglect their revenue-generating abilities at their peril, and they should be well aware of this when determining how to best leverage their design talent.
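The funnel measurement recommended above (product → cart → checkout → confirmation) can be sketched as a series of step-through rates. The visitor counts here are hypothetical:

```python
# Hypothetical counts of visitors reaching each step of a store's
# conversion funnel over the same period.
funnel = [
    ("product page", 10_000),
    ("cart",          1_800),
    ("checkout",        900),
    ("confirmation",    450),
]

# Step-through rate: the share of people who advance from each step
# to the next. A weak step-through rate shows where the funnel leaks.
for (step, count), (next_step, next_count) in zip(funnel, funnel[1:]):
    print(f"{step} -> {next_step}: {next_count / count:.0%}")

overall = funnel[-1][1] / funnel[0][1]
print(f"overall conversion: {overall:.1%}")  # 4.5%
```

Measured this way, a raw “adds to cart” number becomes one step-through rate among several, rather than a goal in itself.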
Value-based designers measure the results of their design decisions through analytics. In addition to the analytics you look at as part of your research process, you should:
- Assess any revenue-leaking issues, both across segments (e.g., what works for one browser might not in another) and pages (e.g., what works on one product page may not on another).
- Maintain a record of when all new design decisions were made, so that you can track the long-term progress of design improvements. The best way to do this is through annotations, which we’ll explore shortly.
- Create reports to show how the business is improving as a result of better design.
Measurement with analytics
Now, let’s go through how to actually measure the impact of your work. The best way to do this – in fact, usually the only way that people measure design at all – is through your business’s analytics tools.
So, let’s talk about how to work with Google Analytics to figure out the ramifications of your design work.
First off, you should have revenue and conversion rate tracked in your business’s Google Analytics account.
Revenue and conversion rate are expressed in Google Analytics as goals. Go to “Admin” in the header navigation, and then go to “Goals” in the right-hand column. If you haven’t configured any goals, you’ll see something like this:
Hit “+ NEW GOAL” there, and create a new goal for a sale.
Let’s say you work at a SaaS, and you want to create a signup goal. Hit the “Create an account” radio button and then continue.
- Make the goal a destination – for example, the first page that people see when onboarding onto your product.
- Ascribe a monetary value (optional), which should be the LTV of your customer. Keep in mind that this only works for customers on a specific plan with a relatively inflexible LTV!
- Turn on the funnel you expect people to follow, and fill in the fields for each page. This is terrific for SaaS businesses.
- Finally, click “Verify This Goal” to ensure that it’s firing correctly. If it’s not, you’ll need to debug your goal configuration in Google Analytics.
Your final form should look something like this:
Next, create a goal for revenue. Go through the same process to create a goal, using the checkout thank-you page as your destination.
The most common way to track changes over time in analytics is through annotations, which indicate a specific date that a change was made. Annotations are essential for tracking:
- When you kicked off an experiment.
- When you stopped an experiment.
- When you rolled out a winning variation.
- When you made any one-off change to a design.
Annotations give you a master record of your design changes, which is useful for measuring how they affect the business.
As nice-to-haves, it also helps to establish annotations for:
- every change in test parameters (a bandit variant lost, for example)
- ad campaigns
- holidays (if you do a Black Friday sale, for example)
- new feature rollouts
- new blog posts
- significant events regarding your competitors
Annotations are terrific for explaining your Google Analytics data, tying it to real-world events. Why did conversions drop on this day? Why was there a spike in sales here?
Creating annotations is easy. On any view, click the little arrow tab at the bottom of the date range graph:
Doing so provides the ability to create a new annotation, using the link at right:
Using the date picker and description field, enter your annotation:
Do what you can to ensure that annotations have a consistent nomenclature. I use “Type: Event” for mine. For example:
- Launch: New pricing page test
- Stop: New pricing page test
- Design change: New pricing page
- AdWords: Are you still suffering from high prices?
This allows you to refer back to annotations more easily.
An annotation looks like this on the graph, with a little quote balloon at the bottom:
You can view a whole date range’s annotations by opening the same tab that you used to create them:
Annotations let you gauge the impact of design decisions (“we launched the redesign here and this happened”), clarify events (“we launched a huge AdWords campaign on this date, so the conversion rate dropped as less qualified traffic came in”), and specify long-term projects (“our sales are seasonal and began around March 15 last year; let’s watch out for that this year”).
At Draft, annotations helped us accurately measure how much we helped skincare brand BOOM! improve their conversion rate and average order value after we did our initial round of one-off fixes.
To be clear, a one-third improvement still doesn’t result in the world’s best load time. We could always improve this – but then we’d be cutting into how the page currently operates, and I wanted to keep everything else fairly constant to start.
After we launched these changes, BOOM!’s conversion rate went up by 9.51%.
So, to recap, Draft:
- Looked at a store’s analytics for 5 minutes.
- Did a few things that took maybe an afternoon to implement.
- Made the store a lot of extra money, in perpetuity.
And we were able to point to the time periods before and after those changes were made, as well as the year-over-year change, to assess the magnitude of improvement.
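The before-and-after comparison boils down to simple arithmetic on your analytics exports. This sketch uses hypothetical conversion rates, traffic, and AOV (not BOOM!’s actual figures) to show how relative lift and a rough revenue projection are computed:

```python
# Hypothetical conversion rates for equal-length periods before and
# after a design change shipped (taken from annotated analytics data).
before = 0.0221  # 2.21% conversion rate before the change
after  = 0.0242  # 2.42% after

relative_lift = (after - before) / before
print(f"relative lift: {relative_lift:.2%}")  # 9.50%

# Rough annualized impact, assuming traffic and AOV stay constant.
annual_visitors = 500_000
aov = 60.0
extra_orders  = annual_visitors * (after - before)
extra_revenue = extra_orders * aov
print(f"projected extra revenue: ${extra_revenue:,.0f}")  # $63,000
```

Note that lift is always expressed relative to the baseline: a 2.21% → 2.42% change is a 9.5% relative improvement, not a 0.21% one.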
Impact may have lag time
Remember, though, that there may not be a clean relationship between when you do something and its corresponding impact. Checking secondary metrics can be useful, since they tend to respond more quickly, but they can also mislead you about what matters most to the business’s health.
If you run an online store, changes will be felt quickly, unless the store has a lot of repeat customers. If the past month’s share of repeat orders is higher than usual, it’s likely that the repeat customers are already familiar with the store and might be less swayed by any changes to it. In this case, track existing customers in a separate segment, and run separate reports on them when measuring your impact.
If you work for a software business, impact is likely to have a non-negligible lag time. With a 14-day trial, for example, you’ll understand the impact of significant changes 14 days after deploying them. In your analytics tool, add 14 days to any time-scale calculations that you perform.
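Concretely, offsetting your measurement window by the trial length looks like this. The deploy date and 30-day reading are hypothetical:

```python
from datetime import date, timedelta

TRIAL_LENGTH = timedelta(days=14)  # the product's trial period

deploy_date = date(2024, 3, 1)     # hypothetical: when the change shipped

# Trials that start on the deploy date can't convert until the trial
# ends, so the "after" window begins one trial length later.
window_start = deploy_date + TRIAL_LENGTH
window_end = window_start + timedelta(days=30)  # a 30-day reading

print(f"measure from {window_start} to {window_end}")
# measure from 2024-03-15 to 2024-04-14
```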
Calculating long-term impact
Annotations should also be used to determine the beginning of value-based design work. That way, you can use the time period from that date until now to determine the overall impact of your work.
Set that as your time period in your analytics tool, and then compare your primary metrics against the previous time period, as well as any year-over-year changes.
Heat & scroll maps
On a monthly basis, generate heat & scroll maps on any pages where analyzing customer behavior would be valuable to the business. New heat and scroll maps should also be generated when any significant design decisions are rolled out to all customers.
Compare the new set of heat maps with the previous round you created, and determine if there are any significant differences in where people are focusing their attention, as well as anyplace new that might have been ignored.
Some heat-mapping tools allow you to hover over various elements, to see how many people interacted with each. If you’re trying to determine if more people interacted with an element after a design decision was made, you’ll want to run a chi-squared test against the total number of people who visited each heat map.
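A chi-squared test on a 2×2 table (clicked vs. didn’t click, before vs. after) can be computed by hand without a statistics library. The click counts here are hypothetical heat-map exports; the statistic is compared against 3.841, the critical value for one degree of freedom at p = 0.05:

```python
# 2x2 contingency table: did a visitor click the element, before vs.
# after the design change? Counts are hypothetical heat-map exports.
#                clicked  did-not-click
before = (130, 870)  # 1,000 visitors before the change
after  = (180, 820)  # 1,000 visitors after

observed = [before, after]
row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand = sum(row_totals)

# Pearson's chi-squared statistic: sum of (observed - expected)^2 / expected,
# where expected = row total * column total / grand total.
chi2 = sum(
    (observed[i][j] - row_totals[i] * col_totals[j] / grand) ** 2
    / (row_totals[i] * col_totals[j] / grand)
    for i in range(2)
    for j in range(2)
)

# Critical value for 1 degree of freedom at p = 0.05.
significant = chi2 > 3.841
print(f"chi-squared: {chi2:.2f}, significant: {significant}")
# chi-squared: 9.54, significant: True
```

If the statistic clears the critical value, the change in click behavior is unlikely to be random noise at the sample sizes involved.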
Scroll maps lend themselves to similarly quantitative analysis. When comparing scroll maps, analyze the points that 75%, 50%, 25%, and 0% of customers scroll to on a page, and take note of any differences between scroll maps as well.
Customer support scores
Most support platforms offer internal tools that indicate customer satisfaction. The support team typically pays careful attention to these metrics, using them to understand how well the team is doing. The value-based designer can also use these metrics to measure how customer support improves over time. Good design means a lower support load, after all.
Ask for access to the business’s customer support tool. Then, review the monthly volume of support tickets, as well as any metrics on customer satisfaction. Track how they’re changing over time.
If you’re using support inquiries to create new design decisions, you should also track whether there’s a significant change in the volume of inquiries after new design decisions roll out to all customers.
Business intelligence platforms
Business intelligence (BI) platforms are dashboards that track specific events across many tools. For example, online stores may use BI applications to track order fulfillment, customer service inquiries, long-term revenue projections, and website analytics.
BI tools can overlap with analytics in terms of what they provide, but they are not a substitute for direct access to analytics. BI tools usually work best when providing insight into the operational functions of the business – and they often provide summaries of data, not full data sets. Value-based designers should get access to both the business’s BI platform and its analytics tool in situations where both exist.
BI platforms are great for assessing long-term changes in business growth. They’re also useful for auditing customer service issues that can be used to address objections on a sales pitch. For example, if a designer discovers that many people express concerns about shipping, they might try incorporating free-shipping thresholds in a future experiment. In this case, they’re using the BI application more as a research tool than as a place to measure the impact of their decisions.
When it comes to your data, it’s extremely healthy and reasonable to be skeptical of what is being reported. If something looks too good to be true, it might be. Keep digging to find the real story. It falls on the value-based designer to craft an honest portrait of the business’s health, no matter what that may look like.