More on Minimum Viable Products
I’ve recently finished reading “The Startup Way” by Eric Ries. One of the themes focuses on testing assumptions around product or service offerings, something Eric Ries refers to as ‘Leap of Faith Assumptions’.
Unless they are tested, these assumptions are merely educated guesses, not something that significant company or personal resources can confidently be invested in. We have all seen companies invest in a product only to find, after months or years of development (and a good deal of money), that it's a flop. This can be attributed to a number of reasons, such as unexpected customer behaviour, the product not solving the customer's problem, the product creating a different problem, or the product failing to scale.
Start-ups need to ensure that their scarce resources are being used wisely. One method to help is experimentation to test the key assumptions underpinning the idea. A Minimum Viable Product (MVP) can test customer responses quickly and efficiently, allowing assumptions to be adjusted or changes made as required.
MVPs don't need to be perfect, and they are not a step towards scaling up into volume production. They can take a variety of forms depending on the idea and the assumptions being tested. In one example, PowerPoint slides were used to understand customer interaction with a touchscreen. The customer physically touched various menu items, which were static images; the controller clicked through to show the next screen. It took less than a day to put the images together and no coding was required.
MVPs should be treated as experiments with outcomes to learn from. These outcomes should be easily actioned and demonstrate causal behaviour: the effects of changes to the product should be clear. Some changes may be subtle, such as a colour tone; others may be more obvious, like a change of material, size, weight or interface. Best practice says the results from an MVP experiment should be reported in an accessible way so participants can understand them, not disguised in technicalities and data, even if that data would withstand the scrutiny of an audit.
When experiment results are available, the final step is to use the collected data to decide whether to continue with the current product or change something. This should be done regularly throughout the project. Even if you decide to change course, the overall vision should not change; the change might entail adding a new feature to the product or focusing on an alternate market. This decision point is sometimes known as a Pivot or Persevere meeting, and the key question at these meetings should be "Is our current strategy taking us closer to our vision?"
On its own a single MVP is of little use. The power comes from using the findings to inform your product development which is then tested with other MVPs. Taking a Build-Measure-Learn approach with an enthusiasm for experimentation and learning shifts the balance of product development success. This allows the original idea to be improved and refined not by luck or intuition but by the customers who will use the product.
Eric Ries' book reminded me of the importance and value of MVPs. It is something I've been talking to some of our clients' companies about for their electronic product design. Where our clients are well established in a market, have great data about their customers' needs and behaviours, and are launching the next generation of a product, there is less uncertainty and therefore less need for experimentation.
Where clients are setting their sights on a new market, the value of experimentation increases. Examples include modifying existing customer products to add the feature of interest, using an off-the-shelf electronic development kit to rapidly prototype a feature, or wiring together a sensor and actuator to quickly and efficiently prototype an idea.
Ultimately the product still needs to be developed in some guise. Isn’t it better to develop something that has been shaped and influenced by the very people that will buy and use it?