We’ve all been there: that new feature you’ve been obsessively refining over the last few months is finally ready to go live. You’re glowing with pride at what you and your team have built. But when release day finally rolls around, it does… well, nothing. No movement in your KPIs and a lukewarm reaction from users. All those hours spent polishing the perfect experience and nothing to show for it.
Or perhaps you’ve gone too far the other way. You’re concerned about wasting time on low-impact features, so instead you hack together minimum viable products (MVPs) to test out your ideas. For a while, all seems well; you’re moving fast and your metrics are climbing. But one day you wake up to a product chock-full of barely working features, a codebase suffocating under mounting tech debt, and an increasingly exasperated user base.
My guess is that every product team will encounter these pitfalls sooner or later. It’s just really hard to maximise velocity, impact, and quality all at once.
We’ve certainly fallen into these traps multiple times at Thread. So last year we set out to solve the problem once and for all. Could we find a process that allowed us to move fast, uncovering value as efficiently as possible, whilst also ensuring we deliver high-quality experiences built for the long-term? I’m excited to share the Test & Delivery framework, which has enabled us to do exactly that.
Whenever you release a new feature or change to your product, there are two key questions you should ask yourself:

1. Did we actually build something that moved the needle?
2. Did we build something people will love to use?
The first question encompasses the value risk we face with any project: did we actually build something that moved the needle? It doesn’t matter how well we polished the feature if we didn’t get the impact that justified the project in the first place. The second question, by contrast, focuses on execution risk: did we build something people will love to use? Did we introduce any tech debt or other complexity we might pay for in the future?
The core insight of the Test & Delivery framework is that it’s really hard, if not impossible, to account sufficiently for both value and execution risk at the same time, in the same project. Think about it for a minute and this is kind of obvious: how can you validate an idea quickly whilst simultaneously building it into a polished, high-quality experience? It just can’t be done.
So in Test & Delivery, you don’t even try to reconcile this tension: instead, every project you work on is classified as one of two types. Either it’s a Test project, in which you expend as little time and effort as possible to validate your idea. Or it’s a Delivery project, in which you take an already-validated idea and invest in a high-quality execution. Put another way: in Test you account for the value risk and then in Delivery you account for execution risk.
I’ll talk through the details of the process we use at Thread below.
Let’s say you’ve got a great idea for a new feature. You’re excited to get stuck in building it, and think there’s a chance it’ll have a huge impact on your metrics and UX. However, it could take weeks to build it in a way that meets your quality bar. This means there’s quite a high value risk: what if you spend all that time building something that doesn’t end up moving the needle? So, unless you already have reason to be super-confident in the idea, you’ll probably want to validate it first. In this case, you’ll kick off a Test project.
To be clear: The goal of a Test project is not to prove scientifically, beyond all reasonable doubt, that the feature is going to be impactful. Instead, your aim is to get to a point where you and your team feel confident enough to invest the additional time in executing a high quality version of the feature. This is always going to involve a good deal of judgement: Test projects are about informing rather than driving your decisions.
A Test project begins with a pre-mortem: essentially, you try to identify all the reasons the feature could end up failing to move the needle. Ask yourself and your team: “Imagine we’re sitting in a meeting in a few weeks’ time to discuss why the feature was a failure. What do you think we’d be discussing?” For example, maybe too few users engaged with the feature in the first place? Or maybe the UI was too complex or unintuitive? Or maybe the content just wasn’t valuable enough to the user? The purpose of this exercise is to uncover the riskiest assumptions associated with the project. What are the assumptions which could easily turn out to be false and thus cause the project to fail?
Next, you design minimum viable tests (MVTs) to validate these risky assumptions. An MVT is simply the minimum amount of work you need to do to validate an assumption. This could take the form of an A/B test, a simple prototype for user testing, or even an engineering spike. Basically, you just do whatever you need to test out your assumptions as quickly as possible.
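To make the A/B-test flavour of MVT concrete, here’s a minimal sketch of how you might check whether a test variant beat the control. The numbers and the plain two-proportion z-test are illustrative assumptions on my part, not a method prescribed by the framework; remember too that Test projects are about informing judgement, not hitting a magic p-value.

```python
from math import erf, sqrt

def two_proportion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-sided z-test for a difference in conversion rates.

    conv_* are conversion counts, n_* are sample sizes.
    Returns (z, p_value).
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis of no difference.
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (via erf).
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical MVT results: control converted 120/2400, variant 156/2400.
z, p = two_proportion_z_test(conv_a=120, n_a=2400, conv_b=156, n_b=2400)
print(f"z = {z:.2f}, p = {p:.3f}")
```

Even a crude check like this is usually enough to decide whether a riskiest assumption survived contact with real users.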
Once you’ve gathered results and insights from your MVTs, you first stop and remove any code associated with the tests. This is an important point. Given that MVTs are designed to validate assumptions as fast as possible, they’ll naturally cut a few corners along the way. If you allow an ever-increasing array of MVTs to linger in your product, you’ll harm your product and code quality over time.
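One pattern that makes this teardown painless is gating all MVT code behind a single feature flag, so the throwaway branch is trivial to find and delete once the test concludes. A minimal sketch follows; the flag name, helper, and feed example are all hypothetical, not part of the framework itself.

```python
# Hypothetical in-memory flag store; in practice this might live in a
# config service. The flag is flipped off (and the branch deleted)
# once the MVT has run its course.
MVT_FLAGS = {"new_outfit_feed": True}

def is_mvt_active(flag: str) -> bool:
    """Return True if the named MVT flag is currently enabled."""
    return MVT_FLAGS.get(flag, False)

def render_feed(user_id: int) -> str:
    if is_mvt_active("new_outfit_feed"):
        # MVT branch: quick-and-dirty version, scheduled for removal.
        return f"experimental feed for user {user_id}"
    # Control: the existing, production-quality experience.
    return f"standard feed for user {user_id}"

print(render_feed(42))
```

Because every corner-cutting line sits inside one `if` branch, removing the MVT is a single, easily reviewed deletion rather than an archaeology project.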
Depending on the results of your MVTs, you can then go one of three ways. If you’re lucky, you’ll have reached sufficient confidence in the idea that you’re happy to invest in a Delivery project. On the other end of the spectrum, your tests could have been so disastrous that you completely kill the idea and move on to other things. The third possibility is that the results were promising, but not a slam-dunk. In this case, you might have ideas for iterations which you can test out in another Test project.
Let’s say your Test project was a success, and you feel confident enough to commit the time to building a polished version of the feature. In this case, you’ll want to kick off a Delivery project. From this point, you simply need to define and execute a high-quality realisation of the feature. This is where you think about all those edge-cases you glossed over when hacking together your MVTs. You obsess over the design details that’ll elevate the UX to something seamless and delightful. You also take steps to minimise technical debt, ensuring you can continue to move fast in the future.
Of course, it’s likely you’ll still need to make pragmatic compromises in Delivery projects. Even if you’re confident in a feature’s value, you probably don’t have the freedom to spend as long as you like on the final version. We’ve all got business goals, after all. For example, you might decide to introduce a small amount of technical debt in order to dramatically reduce the estimate. Or perhaps you’ll rein in the scope to exclude some costly UX polish. The goal in Delivery isn’t necessarily to realise the very best execution, but instead to build features into your product which are designed to stick around for the long-term.
At Thread we’ve noticed a number of key benefits from adopting Test & Delivery as a core part of our process, some intended and others unexpected.
The ultimate goal of product development is to create a seamless experience which repeatedly and reliably delivers value to its users. Test & Delivery is a super valuable tool for any team aspiring to achieve that goal. First, you move fast, validating your ideas, uncovering the unique ways you can solve your users’ problems and add value to their lives. Next, you pause, take stock, and use these insights to craft an experience which people won’t just use, but will love.