With timescales tight and budgets constrained, every aspect of your project's time and cost is under scrutiny. Traditionally, integration and test come towards the end of a project, when the pressure on you is greatest. Reducing product testing to meet a release date is on the radar of many project managers.
There isn't a simple, one-size-fits-all answer to whether this is a good idea. As with many aspects of projects and engineering, a good deal comes down to the risks and impact of a failure occurring and the options for rectifying a post-release problem. We do a lot of product testing, and we have four questions to help you assess how much product testing your product needs. We also explore the types of product testing you might need, such as environmental testing in climatic chambers.
Can or should your product be connected to the internet?
Remember the dark days before the internet provided ways of remotely updating software, adding features and fixing bugs? Software was either provided on a series of floppy disks or, in the case of gaming machines, programmed into ROM cartridges. Software didn’t have an easy means of being updated. A software release was a serious and final prospect. Software updates were few and far between and, most often, the next software release was effectively a new product that the consumer needed to purchase again.
Because of this, software development projects typically took a long time to complete. Most often, they used a waterfall model of delivery (requirements -> design -> development -> testing -> deployment).
Due to the finality of a software release, lots of time was dedicated to testing with users and for functionality. In the 1990s it was estimated that the time taken, from business need (requirements) to production software, could be up to three years. The requirements the business had when the project began were often very different to their needs when the software was finally delivered.
When Atari didn’t product test
Timescales often need to be compressed for commercial reasons. One notable example of this was Atari's development and release of the E.T. video game, which was completed in a couple of months ready for Christmas 1982. To do that, Atari decided to skip audience testing of the game. The software was programmed into ROM and millions of cartridges were sold to customers. The game was quickly discovered to have serious problems: the protagonist fell repeatedly into pits, making it almost impossible to play. The game is often listed as the "worst video game ever", and many of the unsold cartridges were dumped in a landfill in New Mexico. The rediscovery and excavation of the cartridges has been made into a documentary, Atari: Game Over.
When connectivity is not a good idea
The lack of remote update ability isn't limited to products developed in the 1980s and 1990s. There are still many types of products that cannot be updated remotely due to constraints including:
- compliance approvals
- available battery life
- communications bandwidth, or
- lack of writable memory in devices with ROM-based communications stacks.
What are the risks of not testing during electronic product development?
If you fail to test your electronics product at the various stages of development, you can end up launching an unreliable product. That leads to reputational damage, product recalls and significant redesign costs. Here are just some of the risks:
Failed software updates
Lots of software is readily updated via an internet connection. Windows operating systems, iPhones and even Tesla cars are frequently updated to bring new features and to address software bugs and security issues identified during use.
Where frequent software updates are expected, the update mechanism must be robust enough to recover from failed remote updates. Each software release pushed out brings an element of risk. Care is needed to ensure that the release is fit for purpose and, at the bare minimum, won't prevent the device from accepting a further update should problems arise. Google fell foul of this in 2019, when a firmware update bricked some of its Home devices (stopped them working such that their function was no greater than a house brick!), which then had to be replaced.
Where software can be remotely updated there is less need to test every single use case and include every feature at first release. Identified problems can be resolved later, and adding new features is easy. The released software does still need to be sufficiently well tested to avoid breaking users' trust, to maintain brand values and to confirm that further updates can be performed remotely.
Malware or Hacking
Internet connectivity means that, at least theoretically, anyone can connect to your device and make software changes or additions. As early as 2014, compromised IoT devices were sending out spam emails. Far more malicious attacks take place today, looking for passwords, financial details and more. Device security adds another layer of complexity to your product if you want to use internet connectivity to reduce the need for product testing.
Can automated testing recreate most user scenarios effectively?
For software products where there is no remote connectivity, there is clearly the need for exhaustive testing as there is no effective backup plan. Included in this category are:
- life safety systems under strict change control
- radio-based systems with very limited available bandwidth, and more.
Exhaustive testing can be accelerated and made more effective by automating testing where possible. The product requirements can often be distilled into a test specification that covers all of the required functionality. This is best done by an engineer or team separate from the coders. During the project, the coding team will have made certain assumptions; were they the right ones to make? Those assumptions need to be tested against the requirements.
Automated testing requires development of both test hardware and test software; making a custom HAT for a Raspberry Pi can be an effective shortcut to building the hardware platform. Automated testing can run many iterations of use-case testing and flush out where the implementation has bugs or misses corner cases.
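As a sketch of what table-driven automated testing can look like, the example below maps requirement IDs from a test specification to concrete checks against the device. Everything here is illustrative: the `DeviceStub` class, the requirement IDs and the voltage thresholds are hypothetical stand-ins, not from any real project.

```python
# Minimal sketch of table-driven automated testing.
# DeviceStub, requirement IDs and thresholds are all hypothetical.

class DeviceStub:
    """Stands in for the real device-under-test interface."""
    def set_input(self, volts):
        self.volts = volts

    def read_status(self):
        # Hypothetical behaviour: device reports "OK" only within 3.0-3.6 V
        return "OK" if 3.0 <= self.volts <= 3.6 else "FAULT"

# Each row ties a requirement from the test specification to a check:
# (requirement ID, stimulus, expected response)
TEST_TABLE = [
    ("REQ-001", 3.3, "OK"),     # nominal supply
    ("REQ-002", 2.5, "FAULT"),  # under-voltage corner case
    ("REQ-003", 4.0, "FAULT"),  # over-voltage corner case
]

def run_suite(device):
    """Run every row of the table and record pass/fail per requirement."""
    results = {}
    for req_id, volts, expected in TEST_TABLE:
        device.set_input(volts)
        results[req_id] = (device.read_status() == expected)
    return results

if __name__ == "__main__":
    print(run_suite(DeviceStub()))  # all True if the stub meets the spec
```

Because the suite is driven from a table rather than hand-written test cases, adding another corner case is a one-line change, which makes it cheap to run many iterations as the text above suggests.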
Are you confident the electronics are right?
Many embedded device development projects include customised electronics hardware for the unique combinations of sensors, outputs, communications, power sources etc. These also include software or firmware design and development.
Electronics cannot be remotely updated. Before shipping product, the hardware must be fully tested and verified for functionality, design tolerances, operating conditions and compliance (EMC etc). It is unlikely that hardware issues discovered in the field can be satisfactorily resolved with a remote firmware update.
Hardware testing depends upon the product application; at a minimum, the following should be considered:
- Power consumption for the major use cases, including the “off” state
- Power supply behaviour – power up and down sequencing, over and reverse voltage protection, noise filtering, supply regulation and margining, short circuit protection and dynamic load response
- Battery charge, discharge and safety
- Antenna impedance matching for maximum power transfer
- Watchdog performance
- Serial and parallel bus timings
- Sensor measurements vs calculated or simulated values
- Environmental performance (temperature, humidity, shock & vibration)
- Thermal hotspots from high power devices
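Several of the checklist items above boil down to comparing a measurement against a design limit, which is easy to automate during bring-up. The sketch below shows one way to do that; the limit names, ranges and readings are illustrative placeholders, not values from any real design.

```python
# Sketch: compare bring-up measurements against design limits.
# All limit names, ranges and readings below are illustrative.

LIMITS = {
    "sleep_current_uA": (0.0, 50.0),     # "off" state power budget
    "supply_3v3_V":     (3.135, 3.465),  # 3.3 V rail, +/-5 % tolerance
    "hotspot_temp_C":   (-40.0, 85.0),   # component temperature rating
}

def check_measurements(readings):
    """Return (name, value, low, high) for every out-of-limit reading."""
    failures = []
    for name, value in readings.items():
        low, high = LIMITS[name]
        if not (low <= value <= high):
            failures.append((name, value, low, high))
    return failures

if __name__ == "__main__":
    readings = {
        "sleep_current_uA": 12.0,
        "supply_3v3_V": 3.31,
        "hotspot_temp_C": 91.5,  # deliberately out of range
    }
    print(check_measurements(readings))  # flags the hotspot reading
```

Logging every reading against its limit, rather than just a pass/fail, also leaves a record of how much design margin each prototype actually has.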
Any hardware not tested during bring-up and verification must be assumed not to work and should not be relied upon. Field trials of hardware in representative environments are time well spent, both for user feedback and to build confidence in the developed solution.
After proving the fundamental hardware functionality is correct, the next most important aspect is to ensure a robust and reliable method of recovery from a failed firmware update exists. This should cope with power cuts and internet outages during receipt and programming of new software. Where possible it is prudent to include methods of recovering where the wrong software image has been released. This requires thought at the outset about defensive strategies for erroneous images.
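One common defensive strategy for this, sketched below rather than prescribed, is a dual-bank layout: the new image is written to the inactive bank, verified in full, and only then marked active, so a power cut or a corrupted download leaves the old image bootable. The class names, manifest fields and bank layout here are illustrative assumptions, not any particular bootloader's design.

```python
import hashlib

# Sketch of a dual-bank firmware update with verify-before-switch.
# The bank layout and update flow are illustrative assumptions.

class DualBankDevice:
    def __init__(self, current_image: bytes):
        self.banks = {"A": current_image, "B": b""}
        self.active = "A"  # the bank the bootloader will run

    def receive_update(self, image: bytes, expected_sha256: str) -> bool:
        """Write the new image to the spare bank; switch only if it verifies."""
        spare = "B" if self.active == "A" else "A"
        self.banks[spare] = image  # this step may be cut short by a power loss
        # Verify the complete image before switching banks. On any mismatch
        # (truncated download, wrong image) the old bank remains active,
        # so the device stays bootable and can accept a further update.
        if hashlib.sha256(self.banks[spare]).hexdigest() != expected_sha256:
            return False
        self.active = spare
        return True

if __name__ == "__main__":
    dev = DualBankDevice(b"firmware-v1")
    good = b"firmware-v2"
    print(dev.receive_update(good, hashlib.sha256(good).hexdigest()),
          dev.active)  # update verifies, bank B becomes active
    bad = b"firmware-v2-truncat"  # simulate a corrupted download
    print(dev.receive_update(bad, hashlib.sha256(good).hexdigest()),
          dev.active)  # verification fails, bank B stays active
```

The cost of this approach is memory for two full images, which is exactly the kind of trade-off best thought about at the outset, as the paragraph above recommends.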
Can your brand withstand the risk?
The examples mentioned above did not significantly damage either brand. Both brands are big enough and complex enough to withstand setbacks. Can your business say the same? In today's media-driven world, an issue can quickly be in front of millions of people worldwide. If that issue could have been prevented by more product testing, was the saving really more than the cost of the brand damage? In our opinion, it is better to delay (and possibly announce the delay as being due to further testing) and get it right than to launch and run the risk.
There is always some requirement for product testing prior to release. Customers have a right to expect that the product will do what it was sold to them to do. In cases where there is no possibility of remote update, there must be a robust process from requirements through to verification. Automated testing, especially that developed by a separate engineer or team, can help with both the robustness and the completeness of testing.
Where there is the possibility of remote updates there should still be as much thorough testing as can be accommodated. The main use cases and the software update mechanisms should be the primary focus.
If there has been any hardware development then this should be thoroughly tested and verified prior to any volume rollout as errors and issues here are difficult, if not impossible, to work around once released.
Of course, if we can help you with your product testing, or you would simply like to talk this article through, give us a call on 0115 772 2825, or use our contact form.