The Economics of Software Testing
Speech at the EuroSTAR 2000 conference, December 2000, Copenhagen, by Les Hatton, Oakwood Computing, Surrey, U.K., and the Computing Laboratory, University of Kent. mailto:email@example.com. Minutes by Ebbe Munk.
Risk and Uncertainty
- Risk is when you don’t know what will happen but you do know the probabilities
- Uncertainty is when you don’t even know the probabilities
It is fundamentally impossible to quantify risk
- “If a guy tells me that the probability of failure is 1 in 10^5, I know he’s full of crap.” (Richard P. Feynman, Nobel laureate, commenting on the NASA Challenger disaster)
People want to take risks
- 500 motorcyclists a year are killed in accidents in the U.K., but motorcycles are not banned. If they were banned, the motorcyclists would take up skydiving or some other risky pursuit instead.
Everybody has a propensity to take risk
- The propensity varies between people
- Risk-taking is influenced by the rewards
- Perceptions of risk are influenced by experience of losses - one’s own and others’
- Risk-taking involves a balance between the propensity to take risk and the perceived risk
Conclusions about risk
- Everyone else is seeking to manage risk too
- Everybody is guessing; if they knew, it would not be risk
- Guesses are strongly influenced by beliefs
- The behaviour of others and the behaviour of nature are your risk environment
- Unless people’s propensity to take risk is reduced:
- Safety intervention simply leads to responses which re-establish the level of risk
- Safety intervention redistributes risk but does not reduce it
- Science will continue to invent new risks
- In the dance of the risk thermostats, the music never stops
(Source: The risk thermostat, J. Adams 1984)
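The risk thermostat is a feedback loop: each actor adjusts behaviour until the risk they perceive matches the risk they are prepared to accept. A minimal simulation sketch of that loop in C; the model is Adams’, but the code and every parameter in it are invented purely for illustration:

```c
#include <stdio.h>

/* Toy model of Adams' "risk thermostat": an actor adjusts how much
 * risky behaviour they engage in until the risk they perceive matches
 * the risk they are willing to accept (their propensity).
 * All numbers are invented for illustration. */
int main(void)
{
    double propensity = 1.0;   /* level of risk the actor will accept  */
    double hazard     = 0.10;  /* risk per unit of risky behaviour     */
    double behaviour  = 10.0;  /* amount of risky behaviour undertaken */
    double gain       = 0.5;   /* how quickly the actor adapts         */

    for (int step = 0; step < 40; step++) {
        if (step == 20) {
            hazard *= 0.5;     /* a safety intervention halves the hazard */
            printf("-- safety intervention --\n");
        }
        double perceived = hazard * behaviour;
        /* feedback: too safe -> take more risk; too risky -> back off */
        behaviour += gain * (propensity - perceived) / hazard;
        printf("step %2d  behaviour %6.2f  experienced risk %.3f\n",
               step, behaviour, hazard * behaviour);
    }
    return 0;
}
```

Halving the hazard does not reduce the experienced risk for long: behaviour expands until the risk is back at the level the actor was prepared to accept, which is exactly the point of the bullets above.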
Examples and Sources of Risk and Failure
Increased risk caused by very large-scale usage of embedded control systems:
- In July 1999, General Motors had to recall 3.5 million vehicles because of a software defect. Stopping distances were extended by 15-20 meters. Federal investigators received reports of 2,111 crashes and 293 injuries.
- In September 2000, production of the year-2001 models of the Ford Windstar, Crown Victoria, Mercury Grand Marquis and Lincoln was stopped because of a software defect that caused airbags to deploy on their own and seatbelts to tighten suddenly. This stopped production for several days at Ford of Canada and other sites.
E. N. Adams found in 1984 that about one third of all faults fail less often than once every 5,000 execution-years. In an embedded control system in a car with, say, 1,000,000 copies around the world, every such fault will still appear in some car roughly every four days.
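The arithmetic behind that figure, as a C sketch. The 5,000-execution-year rate and the fleet size are from the talk; the 50% duty cycle (each unit executing about half of each calendar day) is an assumption chosen to reproduce the quoted interval, and continuous execution would give one failure roughly every 1.8 days:

```c
#include <stdio.h>

/* How often does a "5000 execution-year" fault show up somewhere in a
 * fleet of 1,000,000 copies? The mean time between failures scales
 * inversely with the number of copies and with their duty cycle. */
int main(void)
{
    double mtbf_years = 5000.0; /* per-copy MTBF of a rare fault (Adams 1984) */
    double copies     = 1.0e6;  /* units deployed in the field                */
    double duty       = 0.5;    /* assumed fraction of the day each unit runs */

    double fleet_days = mtbf_years * 365.0 / (copies * duty);
    printf("one failure somewhere in the fleet every %.1f days\n", fleet_days);
    /* duty = 1.0 (continuous execution) gives ~1.8 days;
       duty = 0.5 gives ~3.7 days, the "every four days" quoted above. */
    return 0;
}
```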
Heavily used web sites exhibit exactly the same kind of behaviour. There is clear evidence, for both embedded control systems and web development, that the increased risks produced by unusually large usage are being ignored.
Computer Languages Are Not Improved over Time
Language standardisation disobeys control process feedback in several important ways:
- It is characterised by often unconstrained creativity
- It completely ignores measurement
- The ‘must not break old code’ rule cripples feedback: things are continually added, but little is ever taken out in practice. Backward compatibility will cripple any language (a C illustration follows this list).
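A concrete illustration of the ‘little gets taken out’ problem, using C as an example language: gets() has no way to bound the input it reads, so it can never be used safely, yet the ‘must not break old code’ rule kept it in the standard long after the danger was well known.

```c
#include <stdio.h>
#include <string.h>

int main(void)
{
    char name[16];

    /* gets(name);          -- unbounded read: any input longer than 15
                               characters overruns the buffer. The call
                               was standardised in C89 and could not be
                               removed without breaking old code.        */

    /* The bounded replacement that was always available: */
    if (fgets(name, sizeof name, stdin) != NULL) {
        name[strcspn(name, "\n")] = '\0';   /* strip trailing newline */
        printf("hello, %s\n", name);
    }
    return 0;
}
```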
Programming languages are designed by individualists; control process feedback is a tool of the hierarchist.
The problem is that we are trying to apply a hierarchist technique to an individualist technology.
Conclusion: in general, reduced risk is compensated for by adding more features. We must therefore reduce the propensity of software managers and engineers to take risks:
- By making managers more aware of the cost of failure
- By making engineers and managers more aware of the ability of testing technology to reduce the cost of failure (a toy cost calculation follows this list)
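One way to make the cost of failure concrete is a simple expected-value comparison: extra testing is worth buying while it removes more expected failure cost than it costs to perform. A toy sketch in C, in which every number is invented purely for illustration:

```c
#include <stdio.h>

/* Toy expected-cost model: compare shipping as-is against spending on
 * extra testing that removes some fraction of field failures.
 * Every number here is invented purely for illustration. */
int main(void)
{
    double failures      = 200.0;    /* expected field failures, untested    */
    double cost_per_fail = 10000.0;  /* average cost of one field failure    */
    double test_cost     = 500000.0; /* cost of the extra testing effort     */
    double removed       = 0.40;     /* fraction of failures testing removes */

    double cost_ship   = failures * cost_per_fail;
    double cost_tested = test_cost + failures * (1.0 - removed) * cost_per_fail;

    printf("expected cost, ship as-is:   %10.0f\n", cost_ship);
    printf("expected cost, with testing: %10.0f\n", cost_tested);
    printf("testing pays off: %s\n", cost_tested < cost_ship ? "yes" : "no");
    return 0;
}
```

On these invented numbers the testing pays for itself; the prediction below is that the saving will be spent on more features rather than banked as reliability.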
A prediction: Improvements in software testing will not in general lead to improved reliability. They will simply lead to more features.