Prioritization
How do you know what to do next? Every optimization program shares a common thread: taking stock of every design decision you could test, and sorting them in an order that makes sense for the business.
You need a way to sort new design decisions consistently. When you do, individual egos take a back seat, the org chart flattens, and rational decision-making wins out.
Obviously, we have our own methodology in the Draft Method. Here’s what you can do to prioritize new test ideas quickly.
First, make sure your ideas are researched
Regardless of whether you’re making a one-off change or planning a new experiment, it must be supported by research. Researched test ideas always outperform vague guessing. If an idea is unresearched, then either confirm it with research or remove it.
Prioritizing new experiment ideas
In order to make sure you’re prioritizing sensibly, you should dive into each design decision and understand how easy it is to make, what potential impact it might have, and how well it fits with your business’s strategy.
Each of these parameters is scored from 0 to 10. Then you add them up, and the final score determines what to pursue first. Easy!
Parameter 1: Feasibility
First, how easy is it to build the thing? Note that high scores are good, so we’ll be starting from 10 (representing the easiest possible implementation) and working our way down to 0.
If you’re using a framework to build it yourself
The easiest tests are the ones that you can build yourself in your experimentation framework.
- If you’re making strictly copy changes or removing semantically-scoped elements, score a 10. You can do this yourself in your sleep.
- If you’re rearranging elements, incorporating different sets of changes for mobile & desktop, or writing any CSS to hide or unhide specific features or dynamic functionality, score a 9.
- If you have to incorporate JavaScript in your prototype, score an 8.
If you’re working with developers
If you need to enlist others to build out any server-side functionality, start at 8, and subtract a point for each of the following that’s true, stopping at 0:
- Separate versions need to be built for mobile & desktop.
- Dynamic functionality needs to be built for every product in a collection page.
- Filtering or sorting elements are affected in any way other than removal.
- New standard-issue elements are being added to a page.
- New scarcity plays are being added to a page.
- Simple dynamic functionality, like a countdown timer to a static time, is being added to a page.
And subtract two points for each of the following that’s true (a sketch of the full calculation follows this list):
- Specific features need to be rolled out for wildcard pages, such as different copy for every single product detail page.
- A high-level framework needs to be built out, such as a feature flag.
- Elements are affected that have direct functional ramifications for the customer’s transaction (such as the add-to-cart button, upsells, and so on), mandating more QA before launch.
- Multiple pages need to be adapted to handle edge cases or maintain a consistent customer experience.
- Significant portions of technical debt need to be navigated.
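Here’s a minimal sketch of this scoring in Python, under the assumption that you track each condition as a boolean. The function and argument names are hypothetical, not part of any Draft tooling.

```python
def feasibility_score(one_point_flags: list[bool], two_point_flags: list[bool]) -> int:
    """Start at 8, subtract 1 or 2 per true condition, and never drop below 0."""
    score = 8
    score -= sum(1 for flag in one_point_flags if flag)
    score -= sum(2 for flag in two_point_flags if flag)
    return max(score, 0)

# Example: separate mobile & desktop builds (-1) plus a feature flag (-2)
print(feasibility_score(
    [True, False, False, False, False, False],  # the six 1-point conditions
    [False, True, False, False, False],         # the five 2-point conditions
))  # -> 5
```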
Don’t throw away low scores
With our other two prioritization metrics, we recommend throwing away any design decisions that score a 5 or less. With this metric, though, you may encounter difficult implementations that remain high-reward. Impact and business-strategic fit matter more than the difficulty of implementation.
As a result, keep all ideas, even ones that happen to score a zero. You may find an easier way to build one in the future, and building it out may prove worth the effort. High-risk ideas can still be high-reward.
Parameter 2: Impact
Impact is the hardest and most important metric to measure, and it’s also the highest priority when vetting new experiment ideas in the middle of a downturn.
Now more than ever, you need to maximize the win rate of your experiments. Now is not the time to play fast & loose with your experimentation framework. You don’t have the time or the luxury.
Set 1: add a point for yes, subtract a point for no
- Will over 80% of customers see the change?
- Does the change apply to a segment of customers that converts at a rate at least one-third below the store average?
- Is the change mobile first?
- Is the change significant enough to affect over half of the page’s elements?
- Is the element located above the fold on mobile?
- Does the element map directly to customer motivations?
- Does the test occur on a page that’s within the conversion funnel (home, collection, product, cart, checkout)?
- Does the change specifically map to a purchasing decision?
Set 2: add a point for yes, do nothing for no
- Does the element promote scarcity?
- Does the element promote increased AOV (e.g. upsells, add-ons) or CLTV (e.g. subscriptions)?
- Does the element simplify the page?
- Is the element noticeable within 1 second, on average, in behavior recordings?
- Has a previous experiment been run on this element?
Set 3: add a point for a success, subtract 2 points for a failure or a null result
If you answered “yes” to the previous question (a previous experiment has been run on this element), did that experiment succeed? Add a point if it succeeded; subtract 2 points if it failed or returned a null result. Skip this question if you answered “no.”
Here’s how you score this (a code sketch follows the list):
- Start with 5 points.
- Go through each group of questions, and add & subtract points accordingly.
- Discard the idea if you score below a 7.
- Prioritize it like you normally would if you score a 7 or above.
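As a rough sketch, here’s how that tally might look in Python. The answer lists and the clamp to a 0-to-10 range are my assumptions (the clamp keeps the result compatible with the 0-to-30 aggregate described later); none of this is official Draft tooling.

```python
def impact_score(set1: list[bool], set2: list[bool],
                 prior_experiment: bool, prior_result: str | None = None) -> int:
    """Start at 5; Set 1 adds 1 for yes / subtracts 1 for no,
    Set 2 adds 1 for yes only, Set 3 scores the prior experiment."""
    score = 5
    score += sum(1 if answer else -1 for answer in set1)
    score += sum(1 for answer in set2 if answer)
    if prior_experiment:  # Set 3 only applies if an experiment was run
        if prior_result == "success":
            score += 1
        else:  # a failure or a null result both subtract 2
            score -= 2
    return max(0, min(score, 10))  # assumption: clamp to the 0-10 scale

idea = impact_score(set1=[True] * 6 + [False] * 2,   # eight Set 1 answers
                    set2=[True, False, True, False, False],
                    prior_experiment=True, prior_result="success")
print(idea)  # 5 + (6 - 2) + 2 + 1 = 12, clamped to 10; prioritize (>= 7)
```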
Parameter 3: Business Alignment
Finally, how closely does the decision jibe with your business’s long-term strategy?
First, you need a strategy
Optimization isn’t for businesses that lack a deliberate strategy. If you’re just playing defense, or if your strategy shifts day to day, then you’re probably reading the wrong blog.
That being said, as of this writing, we’re in the middle of a major shift in the economy and society. I think everyone gets a pass when changing their strategy. Heck, Draft is changing its strategy. So now is a great time to conceive of a strategy if you haven’t done so before, or to rework one that you might have set during a bygone time.
Let’s talk about how to do that.
How are you adapting to the new normal?
If you aren’t changing right now, your business is headed for the graveyard. Write down 5 things that your business is doing, or hopes to do soon, to adapt to the current state of society.
What products are you releasing over the next year?
If you’re planning on releasing anything new, write down everything that you’re planning on releasing between now and this time next year.
For every product you write down, also write down what you will need from the business with respect to messaging, marketing, traffic generation, and information architecture.
How are you addressing competition?
Since competitive analyses are vital components of any research routine, you’ll probably want to incorporate any response to competition into your business strategy.
First, if you haven’t run a competitive analysis yet, you need to do so. It’ll take about an afternoon.
Then, come up with your 3 biggest competitors, and lay out specific plans for addressing each one. Are you shifting your messaging? Targeting different customers? Researching new design decisions for their appropriateness to your store?
Are you entering any new markets?
Obviously, your business is contingent on the market that you’re serving. If you’re planning on shifting focus to serve a different group of people anytime in the next year, write out who you currently serve, who you plan on serving, and any changes in tactics that serving them better might necessitate.
Scoring new test ideas
Once you have your strategic plan together, compile every component of your strategy into a list. For each new decision, start at a score of 5 out of 10. Scoring goes as follows:
- Add 1 point if the decision supports the strategy.
- Subtract 2 points if the decision goes against the strategy.
- Do nothing if the decision isn’t applicable or could go either way.
Here, you’re subtracting more than you’re adding as a check on rash decisions that might go against your business’s long-term strategy. If you want to be even more cautious, subtract 3 points instead.
Cut any decisions that score a 4 or below, and sum up the rest with your other prioritization scores.
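In code, that might look like the sketch below. The verdict labels are placeholders of my own, and clamping to 0-10 is an assumption to keep the score on the same scale as the other two parameters.

```python
def alignment_score(verdicts: list[str]) -> int:
    """Score one decision against each strategy component:
    'supports' adds 1, 'against' subtracts 2, 'neutral' does nothing."""
    score = 5
    for verdict in verdicts:
        if verdict == "supports":
            score += 1
        elif verdict == "against":
            score -= 2  # subtract 3 here to be even more cautious
    return max(0, min(score, 10))

print(alignment_score(["supports", "neutral", "against", "supports"]))
# -> 5 (kept: only scores of 4 or below get cut)
```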
Add ‘Em Up
You now have an aggregate number from 0 to 30.
At this point, you can safely throw away any tests that score below 10; there’s always lower-hanging fruit to be found, even if it requires more research. Then, sort the rest from highest score to lowest.
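Tallying is just a sum, a filter, and a sort; here’s a hypothetical sketch with made-up ideas and scores.

```python
# Hypothetical ideas with (feasibility, impact, alignment) scores
ideas = {
    "shorten the checkout form": (9, 8, 7),
    "add a countdown timer": (7, 5, 4),
    "rewrite product-page copy": (10, 7, 8),
}

# Sum each idea's three scores, drop anything below 10, sort descending
ranked = sorted(
    ((sum(scores), name) for name, scores in ideas.items() if sum(scores) >= 10),
    reverse=True,
)
for total, name in ranked:
    print(f"{total:2d}  {name}")  # 25, 24, and 16, in that order
```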
Tool-wise, Trello – or some similar sort of kanban board – is a great way to manage the order of tests. Here’s the template we use at Draft for all of our clients, if you need a place to get started.
Prioritizing one-off changes
In short, we recommend prioritizing all experiments by three parameters:
- Feasibility
- Impact
- Business alignment
But what about one-off fixes? Do they get prioritized in the same way?
No.
Feasibility & impact still matter, but over time we’ve come to decide that a third metric matters more than business alignment when it comes to one-off fixes.
Why? Because fixing bugs always aligns with business strategy, so the metric can’t differentiate one fix from another. After all, on what planet will you encounter an issue on Android and hear “no, that’s not part of our strategy, let’s keep it”?
So let’s talk about what else to focus on, instead: context fit.
What is context fit?
I spent 34 pages describing context in my first book, so that’s the long answer.
The short answer is that context fit is defined by whether a new or changed element stylistically connects to the elements around it. Some elements should stand out, but in a consistent way.
For example, any call to action that moves a customer to the next step in the funnel should use a distinct, consistent color from step to step, one that isn’t used anywhere else in the store’s layout. Contextually, such elements stand out, but for a good reason.
Determining context fit
Here are the questions we answer at Draft to determine context fit. Start with 10 points.
- Does the changed element use the same styling as the rest of the layout? If no, deduct 2 points.
- Does the changed element use the same typography as the rest of the layout? If no, deduct 2 points.
- Does the changed or new element make a substantial change to the layout across devices? If yes, deduct 2 points.
- Does the changed or new element make a substantial change to the interaction model? If yes, deduct 2 points.
- If applicable, does the copy in the element fit the voice & tone of the rest of the business? If no, deduct 2 points.
- If significant changes are being made, what is the conversion- or usability-focused justification for them? For each deduction with a strong justification, add 1 point back.
Your final score is out of 10. Add this to your scores for feasibility & impact, and you have the same score, out of 30, that you normally would when prioritizing experiments.
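Here’s a minimal sketch of that checklist as code; the deduction flags and the `justified` count are hypothetical stand-ins for your own answers.

```python
def context_fit_score(deductions: list[bool], justified: int = 0) -> int:
    """Start at 10, deduct 2 per failed checklist item, then add 1 back
    per deduction with a strong conversion/usability justification."""
    score = 10 - 2 * sum(deductions) + justified
    return max(0, min(score, 10))

# Example: a new element changes the layout and the interaction model
# (2 deductions), one of which is well justified by usability research
print(context_fit_score([False, False, True, True, False], justified=1))  # -> 7
```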
Some design decisions might need to be changed to fit context. This is good for customers and good for your business. Nobody wants surprises while they’re browsing your store; surprises add cognitive overhead, which reduces conversion. The goal is to score higher on context fit, not to let poorly-fitting fixes slide down the prioritization scale.
Always keep generating ideas
Remember that you’re never done generating new test ideas. Creating as many researched ideas as possible is the best way to keep your optimization practice fresh!
Move the needle. Act without fear.
Get closer to the money with your design work. Stay relevant in a changing economy and an uncertain world. Join Draft today and learn how to design for the highest impact possible.