Prioritization

How do you know what to test next? There are tons of different resources for this, many of which come with their own three-letter acronyms:

  • PIE, by WiderFunnel.
  • ICE, a classic business-101 way to look at optimization problems.
  • PXL, by ConversionXL, which weighs many factors in what to test next.

Obviously, we have our own methodology in the Draft Method as well. Here’s what you can do to prioritize new test ideas quickly.

First, ignore any ideas that are not – or can’t be – supported by research. We’ve mentioned this over and over in the past, but researched test ideas always outperform vague guessing.

PXL’s framework accounts for this by devoting four criteria explicitly to research; the Draft Method accounts for it by making research a prerequisite for any test idea. If an idea hasn’t been researched, treat it as a hunch and confirm it through analytics, heat maps, usability tests, or customer interviews before you test it.

There are three parameters you should assess for every test idea, each scored from 1 to 10:

Parameter 1: Feasibility

In short, how hard is this to build out? Does it require development effort, new prototypes, wireframes, or conditional logic?

Score a 10 if you can knock this out in 5 minutes using your testing framework alone; score a 1 if it requires major refactors of the software, new testing frameworks, building out something in-house, etc.

Parameter 2: Impact

“Impact” is usually the area with the most guesswork: how likely is this change to make a difference to the site as a whole?

Here are some questions to ask when assessing impact:

  • Does the change apply to a large swath of prospective customers?
  • Does the change apply to a segment of customers that broadly converts below the norm?
  • How drastic is the change being made? Are you just changing a button color, or are you throwing the button away and replacing it with something entirely new?
  • Is the element located above the fold, especially on mobile?
  • Does the element map directly to customer motivations?
  • Does the element simplify the page?
  • Is the element noticeable within 5 seconds?
  • Does the test occur on a high-traffic page?
  • Does the test occur on a page that’s within the conversion funnel?
  • How much of the business’s revenue depends on this particular element performing better?

Score a 10 if this is changing a highly load-bearing element in a radical way; score a 1 if you’re changing your button from green to blue. (Don’t do that.)
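
If you want something more repeatable than pure gut feel, here’s a minimal sketch of one way to turn that checklist into a rough 1-to-10 impact score, in Python. The equal weighting – the share of “yes” answers maps linearly onto the scale – is an illustrative assumption, not part of the Draft Method; tune it to your own site.

# Illustrative sketch: derive a rough 1-10 impact score from yes/no
# answers to the checklist above. Equal weighting is an assumption,
# not part of the Draft Method.

IMPACT_QUESTIONS = [
    "Applies to a large swath of prospective customers",
    "Applies to a segment that converts below the norm",
    "Is a drastic change, not a cosmetic tweak",
    "Element sits above the fold, especially on mobile",
    "Element maps directly to customer motivations",
    "Change simplifies the page",
    "Element is noticeable within 5 seconds",
    "Occurs on a high-traffic page",
    "Occurs on a page within the conversion funnel",
    "A meaningful share of revenue depends on this element",
]

def impact_score(answers: list[bool]) -> int:
    """Map the share of 'yes' answers linearly onto a 1-10 scale."""
    if len(answers) != len(IMPACT_QUESTIONS):
        raise ValueError("answer every question before scoring")
    # Zero yeses scores a 1; all ten score a 10.
    return round(1 + 9 * sum(answers) / len(answers))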

Parameter 3: Business Alignment

On a scale from 1 to 10, how much does this align with your business’s goals?

Keep in mind that this is not a measure of how badly someone in sales wants the test to happen, nor of how passionately the CEO cares. Business strategy is always a long-term play.

If a test obviously contradicts the core branding, values, or terms of the business, it should only go live with the greatest of caution and care – and only after considerable research in support of it.

On the other hand, if the business’s long-term strategy jibes strongly with the sort of test you’re putting together, that’s a solid reason to launch the test sooner.

Add ’Em Up

You now have an aggregate number from 3 to 30.

At this point, I usually throw away any tests that score below 10; there’s always lower-hanging fruit to be found, even if it requires more research from the team.

And then I sort the rest, working slowly down the list from 30 to 10 – and always trying to find more high-ranking tests in the meantime. Tool-wise, Trello – or a similar kanban board – is a great way to manage the order of tests. Here’s the template we use at Draft for all of our clients, if you need a place to get started.
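
To make the arithmetic concrete, here’s a minimal sketch of the whole pipeline in Python: sum the three 1-to-10 scores, drop anything below 10, and sort what’s left from highest to lowest. The backlog entries and their scores are made up for illustration.

from dataclasses import dataclass

@dataclass
class TestIdea:
    name: str
    feasibility: int  # 1-10: how easy it is to build
    impact: int       # 1-10: likely effect on the site
    alignment: int    # 1-10: fit with long-term business goals

    @property
    def score(self) -> int:
        # Aggregate score runs from 3 to 30.
        return self.feasibility + self.impact + self.alignment

# Hypothetical backlog, for illustration only.
backlog = [
    TestIdea("Rewrite hero copy around customer motivations", 9, 8, 8),
    TestIdea("Collapse checkout into a single page", 3, 9, 7),
    TestIdea("Change the button from green to blue", 10, 1, 2),
    TestIdea("Add a third carousel to the homepage", 2, 2, 1),
]

# Throw away anything scoring below 10, then sort the rest high to low.
prioritized = sorted(
    (idea for idea in backlog if idea.score >= 10),
    key=lambda idea: idea.score,
    reverse=True,
)

for idea in prioritized:
    print(f"{idea.score:>2}  {idea.name}")

Note how the carousel idea scores a 5 and never makes the board, while the button recolor squeaks through on feasibility alone – a reminder that the aggregate score is a sorting tool, not a verdict.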

Remember that you’re never done generating new test ideas, and replenishing the list with high-priority, high-impact tests is the best way to keep an optimization practice fresh!
