Status Calls and Creating & Presenting Reports
If you’re anything like Draft, you get on client calls to chat about optimization every week. And since we like providing the same standard of service to all of our clients, and because it’s way easier to follow a consistent and repeatable plan, we have a rough format for our calls.
Let’s talk about what you should cover in your weekly optimization meetings, and how to keep things as focused and actionable as possible. Our meetings have five parts, and usually finish in under half an hour.
Here’s the format:
1. Testing progress
First, you’ll want to cover how things are going as far as new experiments are concerned.
- What experiments have you called lately? What were their results? What ramifications do these results have for the final design?
- What experiments are currently running, and when do you expect them to be completed?
- What experiments are ready to be launched? What are their hypotheses, and what research supports them? When will they launch? Do they need final client approval?
2. New research & prioritization
Next, go into what research has been performed for new test ideas, and talk about what test ideas are slated to be built next.
This is often where arbitrary opinions will come into play, and people will try reshuffling the board. Remember your prioritization framework, and try to rely on a clearer process for building out new experiments.
3. Building prototypes & blockers
Next, you’ll want to talk about build progress. Since you’ll usually have 2 or 3 experiments in the queue, you’ll want to coordinate with your developers about each experiment’s progress – as well as any blockers that depend on other people.
4. Long-term plans
Finally, it’s worth dedicating a section of each meeting to long-term issues that may come up. Black Friday seems to come sooner every year!
5. Recap of action items
In steps 2 and 3, you’ll be coming up with a set of things that people are responsible for. At the end of every meeting we hold at Draft, we list all of the things that people are responsible for, and when we expect them to be finished.
Creating & Presenting Reports
You’ve finished an experiment, and now it’s time to share it with the team. How do you frame your findings? How do you make next steps clearly legible to the team? And what sort of presentation should you employ?
What every report should have
First, let’s talk about what every single report should have, for every experiment that you run, no matter what.
You may have described these things to the team in the past, or you may know them like the back of your hand. Assume that the team is too busy to care about your experiments, and make sure that you reiterate everything that you can. Copy and paste if that’s easier for you.
Comprehensive summaries also help a lot with archiving, so you can refer back to the experiments you ran in the past – as well as the research that supported each of them.
The supporting research
Since all experiments must be supported by research, what was the line of thinking that led you to run this experiment? What evidence did you see?
This only needs to be a few sentences, but if you want to incorporate the research inline – especially if it’s visually oriented, like with heat maps – then that would probably be helpful.
The hypothesis
You already wrote up a hypothesis for this experiment, because you’re not a monster and need them for prioritization, anyway. Reiterate the hypothesis here, in order to make the team aware of the design decision at play, the primary metric, its intended lift, and the experiment’s sample size.
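If you want to show where a hypothesis’s sample size comes from, the standard two-proportion calculation is straightforward to script. A rough sketch – the function name and defaults are my own, and this uses the usual normal-approximation formula rather than any particular testing tool’s math:

```python
import math
from statistics import NormalDist

def sample_size_per_variant(baseline, relative_lift, alpha=0.05, power=0.8):
    """Visitors needed per variant to detect a relative lift in conversion
    rate, using the standard two-proportion normal-approximation formula."""
    p1 = baseline
    p2 = baseline * (1 + relative_lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided significance
    z_beta = NormalDist().inv_cdf(power)            # desired statistical power
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p2 - p1) ** 2)

# e.g. a 3% baseline conversion rate and a hoped-for 10% relative lift
# works out to tens of thousands of visitors per variant:
n = sample_size_per_variant(0.03, 0.10)
```

Bigger baselines and bolder lifts need far fewer visitors, which is one reason lower-traffic stores should test substantial changes rather than button colors.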
A description of each variant, where applicable
If you’re running more than one variant, you should list each variant in the experiment and why you decided on running each of them – no matter how subtle or small the differences may be.
The results
Only now do you get into the results of your experiment. List your primary metric, its observed lift, and the statistical confidence of the result. Then, list any secondary metrics, their corresponding lifts, and confidences. This can be in the form of a table, or as a few paragraphs.
Now might be a good time to remind people that 95% or higher indicates a conclusive result, and that 82% is not, in fact, a B- when it comes to statistics.
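If it helps to make that reminder concrete, confidence for a simple A/B comparison can be computed with a pooled two-proportion z-test. A minimal sketch – the function name and the example numbers are illustrative, and this is the normal approximation, not any specific testing tool’s calculation:

```python
import math
from statistics import NormalDist

def confidence(control_conversions, control_visitors,
               variant_conversions, variant_visitors):
    """One-sided confidence that the variant beats the control,
    via a pooled two-proportion z-test (normal approximation)."""
    p1 = control_conversions / control_visitors
    p2 = variant_conversions / variant_visitors
    pooled = ((control_conversions + variant_conversions)
              / (control_visitors + variant_visitors))
    se = math.sqrt(pooled * (1 - pooled)
                   * (1 / control_visitors + 1 / variant_visitors))
    z = (p2 - p1) / se
    return NormalDist().cdf(z)

# 3.00% vs 3.45% conversion over 10,000 visitors each:
confidence(300, 10_000, 345, 10_000)  # ≈ 0.96, just past the 95% bar
```

The same lift over a smaller sample would land well under 95% – which is exactly the B- territory you’re warning people about.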
This is also a good time to talk about the knock-on effects of an experiment, especially if you run per-variation heat maps or explore many lagging metrics as potential trade-offs to your design decision.
Next steps & recommendations
People will probably skim past the numbers and skip straight to this section, so you need to make sure you come across as poised and confident.
Fundamentally, the point of an experiment is to understand whether to roll a design decision out to all customers. Your next steps need to answer this question directly and clearly. The answer could also involve more research, a re-test of a similar idea, or simply moving on to tests on other parts of the page.
The format of the report
PDF is king. You should establish a clear template to fill out for each test, following the format above.
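As one possible shape for that template – the section names follow the format above, and the placeholders are obviously mine to adapt:

```markdown
# Experiment: [name]

## Supporting research
What evidence led to this test? Link or embed heat maps, surveys, etc.

## Hypothesis
Design decision, primary metric, intended lift, sample size.

## Variants
One line per variant: what changed, and why.

## Results
| Metric            | Lift  | Confidence |
|-------------------|-------|------------|
| Primary: [metric] | +x.x% | xx%        |
| Secondary: [...]  | ...   | ...        |

## Next steps & recommendations
Roll out, re-test, research more, or move on – and who owns what.
```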
Draft uses Trello cards, usually, but that’s because we’ve trained our clients to live in Trello.
Presenting the report
Now that you have the report together, how do you present it?
Draft works remotely for all of our clients, so we’ve obviously bolstered our tactics on this front. That said, these will be useful for any business.
Plot Successes on Trello
Your Trello board should have three columns for all completed A/B tests: wins, losses, and inconclusive/null results. Here’s the board that Draft uses with all of its clients.
This allows the rest of your team to understand what’s already won, so they don’t replicate the work you did in the past.
And the whole team can celebrate winning tests. A little-used feature of Trello is watching columns, which notifies you every time something has been dragged into, or updated in, a column.
This is especially useful for knowing when tests have been called – and for developers to know what’s won, so they can roll changes out to all customers.
All team members should watch the winners, losers, and inconclusive/null results columns, so they can follow testing progress – and know what should be done next.
Track Events in Google Analytics
Winning tests and conversion rate increases should be marked with annotations in Google Analytics.
Your entire team is going to ignore Google Analytics into perpetuity. This is because GA’s feral UX mess is your job, not theirs. That said, annotations still give you a better sense of how to create accurate reports within GA without having to sift through a calendar or project board. They also give you continuity when handing your work off to other optimization experts.
Share Wins, Replete with Animated GIFs, in Group Chat
This should be self-explanatory, but: if you get a win, you should be trumpeting it to the heavens.
In Real Life
Let’s say you have a physical office. What can you do to get folks excited in real life?
Set Up a Bulletin Board
Not a white board. Not a surface that is used for other things. You need an actual, dedicated, analog bulletin board that is used exclusively for optimization wins.
One backed with cork. One you use thumb tacks on. One, ideally, behind a locked door that only you have the key to.
A good way to get this past the folks in facilities management: start with an unlocked board and decorate it like you’re in a dang elementary school:
Go to a school supply shop near you, or hit up Amazon if you’re lazy. The only people who hate bulletin boards that look like this are jilted schoolteachers and wrong, sad individuals.
The bulletin board should contain two regions: one for winners, and one for current tests.
For winners, print out the change, hypothesis, screenshots, lift, and confidence level.
For current tests, print out the Trello card that corresponds to the test that’s currently active – and list the page and customer segment that’s receiving the test.
Announce at Meetings
Optimization wins should be announced at the biggest possible meeting you can get away with. If you’re on a small team, announce it there. If you’re at a big company that has all-hands meetings, try to get on the agenda.
Your announcement should be the same as what you throw on the bulletin board: announce the change, reason for it (with research!), screenshots, and then the test’s results in the form of lift and confidence level.
Pause for thunderous applause.
Literally Print Your PDF Reports and Place Them On the CEO’s Desk, With the Good News Highlighted in Yellow, and Action Items Highlighted in Orange or Pink
I don’t know what else I need to add to this bullet point. It seems pretty self-explanatory to me.
Creating and sharing experiment results is good for archival purposes, but it’s also good for justifying your continued experimentation and optimization operations.