MYTH - Software reliability doesn't apply to us because we are doing Agile, Spiral, Incremental, or Continuous development.
FACT - The models can be, and are, used with any incremental development model. You can either use the models on each increment and combine the results, or use the models on the final increment. The models can and should be used across several sequential releases to ensure that there isn't "defect pileup" caused by scheduling releases too close together (with too many new features).
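The per-increment approach above can be sketched in a few lines. The sizes, defect density, and the assumption that predicted defects are additive across increments of new code are hypothetical placeholders, not calibrated values:

```python
# Hypothetical sketch: apply a defect density model to each increment
# of new code and combine the results. All numbers are placeholders.
increment_sizes_ksloc = [20, 15, 25]   # new code in each increment (KSLOC)
defect_density = 0.4                   # predicted defects per KSLOC

# Predicted defects per increment, assumed additive across increments.
per_increment = [defect_density * size for size in increment_sizes_ksloc]
total_predicted_defects = sum(per_increment)
print(total_predicted_defects)   # combined prediction across all increments
```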
MYTH - The requirements for our software change too often to predict reliability.
FACT - 100% of the organizations in our database had changing requirements. Changing requirements are a fact of life.
MYTH - Our company doesn't sell software so software reliability models won't work for us.
FACT - 100% of the organizations in our database were not in the "software" business. They were selling systems that have software in them.
MYTH - Software engineers should have "X" years of experience with a particular language to be considered for a job position.
FACT - Experience with the application domain correlated significantly more strongly than the number of years with the language. Our results show that a software engineer who can write good software in one language can learn to do it in another far more easily than they can learn a new application type.
MYTH - The depth of nesting in a function should be limited to 3-5.
FACT - Our benchmarking results concluded the exact opposite. When people try to reduce the complexity of complex logic, they only make it LOOK less complex. The key advantage of having it look MORE complex is that the complexity is visible to every software engineer. Also, one fast way to reduce the depth of nesting is to remove exception handling - which is generally not a good thing.
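To illustrate the point (the function names and logic below are my own, hypothetical example, not from the benchmarking data): the "flat" version has less nesting only because the handling of missing, empty, and malformed input was stripped out, not because the underlying logic got simpler:

```python
def parse_count_guarded(text):
    # Deeper nesting, but every failure mode is handled explicitly.
    if text is not None:
        text = text.strip()
        if text:
            try:
                return int(text)
            except ValueError:
                return None
    return None

def parse_count_flat(text):
    # LOOKS simpler only because the None/empty/exception handling
    # above was removed; this version raises on bad input.
    return int(text.strip())
```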
MYTH - Software can't fail.
FACT - "Fail" applies to the ability of a system to perform its required function. The definition of "failure" is not specific to any particular failure mode or mechanism. Over the years, reliability engineers have incorrectly interchanged the words "fail" and "wear out". Software does not wear out, but it does fail.
MYTH - There will never be a universal software reliability prediction method.
FACT - The framework for performing a software reliability prediction is standardized: predict defect density, multiply it by normalized size to get predicted defects, then apply a transformation ratio to the predicted defects to get failure rate and MTTF. This framework has been used for decades. The prediction models vary in how they predict defect density, but not in how that defect density is used within the framework. All of the defect density prediction models use some prior correlation between development practices and fielded defects as the basis for the model.
The executive summary is that when it comes right down to it - industry does agree on how to predict software reliability.
The question is whether there will ever be exactly one model for predicting defect density.
The answer is no for one simple reason. Why do we need one? An analyst should be able to use the defect density prediction model that best fits the industry type and development practices employed on the current project. Hardware reliability analysts have this flexibility when predicting hardware failure rates.
Finally, there is not ONE method for predicting hardware reliability either. There is significantly less variation in the industry-accepted framework for predicting software reliability than there is between the various methods for predicting hardware reliability. So why be concerned with having exactly one model for predicting defect density?
MYTH - If we are doing a system allocation, software reliability can be assumed to be 1 and the failure rate can be assumed to be 0.
FACT - Unless your system has absolutely no software, this would not be an accurate allocation. In this decade, just about everything has some amount of software.
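A minimal series-system sketch (the failure rates below are hypothetical) shows why allocating a failure rate of 0 to software is not harmless: it understates the system failure rate and overstates system MTTF.

```python
# Hypothetical failure rates in failures per hour for a simple
# series allocation (system fails if either part fails).
hw_failure_rate = 0.0005
sw_failure_rate = 0.0010   # nonzero, as the fact above argues

with_sw = hw_failure_rate + sw_failure_rate   # realistic allocation
without_sw = hw_failure_rate                  # "software never fails"

mttf_with = 1.0 / with_sw        # roughly 667 hours
mttf_without = 1.0 / without_sw  # 2000 hours: overly optimistic
```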