NASA Defect Classification

Defect Classification
The following describes the severity and defect type classification to be used when recording defect information:
a. Severity of Defect: Each defect in the inspected product is classified according to its severity as one of the following:

(1) Major Defect: A defect in the product under inspection which, if not corrected, would either cause a malfunction which prevents the attainment of a primary mission objective or system safety, or would result in a significant budget or schedule impact.

(2) Minor Defect: A defect in the product under inspection which, if not fixed, would not prevent the attainment of a primary mission objective or system safety, or would not result in a significant budget or schedule impact, but could result in difficulties in terms of operations, maintenance, and future development.

(3) Clerical Defect: A defect in the product under inspection at the level of editorial errors, such as spelling, punctuation, and grammar.

b. Types of Defects: Defects are further classified according to a pre-defined defect taxonomy. This defect taxonomy would be defined as part of developing the inspection procedure. Headings on the checklists used for the inspection can be used to derive the defect taxonomy. The following is an example of an error taxonomy for code-related defects:
(1) Algorithm or method: An error in the sequence or set of steps used to solve a particular problem or computation, including mistakes in computations, incorrect implementation of algorithms, or calls to an inappropriate function for the algorithm being implemented.
(2) Assignment or initialization: A variable or data item that is assigned a value incorrectly, is not initialized properly, or whose initialization scenario is mishandled (e.g., incorrect publish or subscribe, incorrect opening of a file, etc.).
(3) Checking: Software contains inadequate checking for potential error conditions, or an inappropriate response is specified for error conditions.
(4) Data: Error in specifying or manipulating data items, incorrectly defined data structures, pointer or memory allocation errors, or incorrect type conversions.
(5) External interface: Errors in the user interface (including usability problems) or the interfaces with other systems.
(6) Internal interface: Errors in the interfaces between system components, including mismatched calling sequences and incorrect opening, reading, writing, or closing of files and databases.
(7) Logic: Incorrect logical conditions on if, case, or loop blocks, including incorrect boundary conditions ("off by one" errors are an example) being applied, or incorrect expression (e.g., incorrect use of parentheses in a mathematical expression).
(8) Non-functional defects: Includes noncompliance with standards, failure to meet non-functional requirements such as portability and performance constraints, and lack of clarity of the design or code to the reader, both in the comments and the code itself.
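The severity and type classifications above can be recorded together for each defect found. The sketch below is a hypothetical illustration (the guidance does not prescribe any tooling or data model): it encodes the three severity levels and the eight example defect types, and tallies the defects logged during an inspection.

```python
from collections import Counter
from dataclasses import dataclass
from enum import Enum

class Severity(Enum):
    MAJOR = "major"        # prevents a primary mission objective or system safety, or major budget/schedule impact
    MINOR = "minor"        # does not block mission objectives, but complicates operations, maintenance, or future development
    CLERICAL = "clerical"  # editorial: spelling, punctuation, grammar

class DefectType(Enum):
    ALGORITHM = "algorithm or method"
    ASSIGNMENT = "assignment or initialization"
    CHECKING = "checking"
    DATA = "data"
    EXTERNAL_INTERFACE = "external interface"
    INTERNAL_INTERFACE = "internal interface"
    LOGIC = "logic"
    NON_FUNCTIONAL = "non-functional"

@dataclass
class Defect:
    location: str          # e.g., file and line, or document section
    severity: Severity
    defect_type: DefectType
    description: str

def summarize(defects):
    """Tally defects by severity and by type for the inspection report."""
    by_severity = Counter(d.severity for d in defects)
    by_type = Counter(d.defect_type for d in defects)
    return by_severity, by_type

# Hypothetical defects logged during one inspection session.
defects = [
    Defect("module.c:42", Severity.MAJOR, DefectType.LOGIC, "off-by-one in loop bound"),
    Defect("module.c:77", Severity.MINOR, DefectType.CHECKING, "return code of file open not checked"),
    Defect("design doc s3", Severity.CLERICAL, DefectType.NON_FUNCTIONAL, "typo in section heading"),
]
by_severity, by_type = summarize(defects)
```

A summary of this kind feeds directly into the inspection report, since each recorded defect carries exactly one severity and one taxonomy type.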

The inspection should include participants representing the following stakeholders:
a. Software engineers
b. Systems Engineers
c. Software Assurance personnel
d. Safety engineers (when appropriate)
e. Software Safety personnel (when appropriate)
f. Configuration management personnel (when inspecting the configuration management plan)

Note: When necessary, other experts may be included as participants to provide important technical expertise. This can include the project management for both the software and the system, although in this case, please review the caveats in Section 5.2, "The Inspection Team".


7.4 Architectural Design Inspection (I0)
Checklists for architectural (preliminary) design inspections should contain items which:
a. Check that the design meets approved requirements.
b. Address the validation of all interfaces among modules within each component.
c. Address the completeness of the list of modules and the general function(s) of each module.
d. Address the validation of fault detection, identification, and recovery requirements.
e. Check that the component structure meets the requirements.
f. Address the validation of the selection of reusable components.
g. Address the traceability of the design to the approved requirements.
h. Address the validation of the input and output interfaces.
i. Check that each design decision is a good match to the system's goal.
j. Check that the content of the design description fulfills the NPR 71
k. Check that safety controls and mitigations are clearly identified in the design document, when a safety critical system is under inspection (review system safety analyses in supporting documentation).
l. When inspecting object oriented or other design models:
(1) Check that the notations used in the diagram comply with the agreed-upon model standard notation (e.g., UML notations).
(2) Check that the design is modular.
(3) Check that the cohesion and coupling of the models are appropriate.
(4) Check that architectural styles and design patterns are used where possible. If design patterns are applied, validate that the selected design pattern is suitable.
m. Check the output of any self-performed or external static analysis tools.
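The coupling and cohesion check above can be illustrated concretely. The sketch below is hypothetical (the class names and numbers are invented for illustration): the first controller is tightly coupled to one concrete sensor class, which an inspector would flag; the second depends only on a small, cohesive interface, so any conforming sensor can be injected.

```python
from abc import ABC, abstractmethod

class BaroAltimeter:
    """A concrete sensor (hypothetical)."""
    def read_altitude_m(self) -> float:
        return 1234.5

class TightController:
    # Tightly coupled: the controller constructs one concrete sensor itself,
    # so it cannot be reused with, or tested against, any other sensor.
    def __init__(self):
        self.sensor = BaroAltimeter()

class Altimeter(ABC):
    """A small, cohesive interface: one responsibility, one method."""
    @abstractmethod
    def read_altitude_m(self) -> float: ...

class GpsAltimeter(Altimeter):
    def read_altitude_m(self) -> float:
        return 1230.0

class LooseController:
    # Low coupling: the controller depends only on the Altimeter interface,
    # and the concrete sensor is injected from outside.
    def __init__(self, altimeter: Altimeter):
        self.altimeter = altimeter

    def above_floor(self, floor_m: float) -> bool:
        return self.altimeter.read_altitude_m() > floor_m

ctrl = LooseController(GpsAltimeter())
```

At the design-model level, the same question is asked of the diagrams: does each component have one clear responsibility, and are its dependencies on other components narrow and explicit?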
The inspection should include participants representing the following stakeholders:
a. Persons responsible for defining requirements for the software components.
b. Persons who are responsible for detailed design of the software.
c. Persons who understand the quality needs of the software component (e.g., quality assurance, software assurance, and reliability).
d. Persons who are responsible for verifying and validating systems interfaces (e.g., system test engineers).

7.5 Detailed Design Inspection (I1)
Checklists for detailed design inspections should contain items which:
a. Check that the design meets the approved requirements.
b. Address the validation of the choice of data structures, logic, algorithms (when specified), and relationships among modules.
c. Check that the detailed design is complete for each module.
d. Address the traceability of the design to the approved requirements.
e. Check that the detailed design meets the requirements and is traceable to the architectural software system design.
f. Check that the detailed design is testable.
g. Check that the design can be successfully implemented within the constraints of the selected architecture.
h. Check output from any static analysis tools available.
The inspection should include participants representing the following stakeholders:
a. Persons who are responsible for defining the software component's requirements.
b. Persons who are responsible for defining the software's requirements.
c. Persons who are responsible for implementing the software's requirements.
d. Persons who understand the quality needs of the software component (e.g., quality assurance, software assurance, safety, and reliability).
e. Persons who are responsible for implementing Software Safety (if the inspection involves a safety critical system).
f. Persons who are responsible for designing tests for the software (e.g., Test Engineers).
The inspection should include the following materials as reference documents:
a. Work products documenting the software architectural and preliminary designs.
b. For safety critical software, the system hazard report and any software portions therein.
c. Any software reliability analyses available, along with their critical items list and software design recommendations for meeting fault tolerance.

The moderator should limit the amount of work product to be inspected in order to maintain an acceptable inspection rate. Prior data and experiences suggest a starting metric for this type of inspection of at most 20 pages per hour.
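As a worked example of applying the 20-pages-per-hour ceiling, the sketch below computes how many inspection sessions a work product needs. The two-hour session length is an assumption for illustration, not part of the guidance.

```python
import math

MAX_PAGES_PER_HOUR = 20   # starting metric from the guidance above
SESSION_HOURS = 2         # assumed length of one inspection meeting (not prescribed)

def sessions_needed(pages: int) -> int:
    """Sessions required to cover a work product without exceeding the inspection rate."""
    pages_per_session = MAX_PAGES_PER_HOUR * SESSION_HOURS
    return math.ceil(pages / pages_per_session)

# A 90-page detailed design, at 40 pages per two-hour session, needs 3 sessions.
print(sessions_needed(90))
```

The moderator would plan the inspection schedule from this figure rather than attempt the whole work product in one sitting, since rushing past the rate limit is known to reduce defect yield.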
7.6 Source Code Inspections (I2)
Checklists for source code inspections should contain items which:
a. Address the technical accuracy and completeness of the code with respect to the requirements.
b. Check that the code implements the detailed design.
c. Check that all required standards (including coding standards) are satisfied.
d. Check that latent errors are not present in the code, including errors such as index out of range errors, buffer overflow errors, or divide by zero errors.
e. Address the traceability of the code to the approved requirements.
f. Address the traceability of the code to the detailed design.
g. When static or dynamic code analysis is available, check the results of these tools.
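Item d above can be made concrete. The hypothetical snippet below shows two latent errors of the kind a code inspection targets, a divide-by-zero and an off-by-one index overrun, each paired with a guarded version an inspector would expect to see.

```python
def mean_buggy(values):
    # Latent divide-by-zero: fails only when values happens to be empty.
    return sum(values) / len(values)

def mean_guarded(values):
    # Guarded: the error condition is checked explicitly (checklist item d).
    if not values:
        raise ValueError("cannot take the mean of an empty sequence")
    return sum(values) / len(values)

def contains_buggy(items, target):
    # Latent off-by-one: the loop bound overshoots by one, so the last
    # iteration indexes past the end of the list.
    for i in range(len(items) + 1):
        if items[i] == target:
            return True
    return False

def contains_guarded(items, target):
    # Correct bound: indices run 0 .. len(items) - 1.
    for i in range(len(items)):
        if items[i] == target:
            return True
    return False
```

Both buggy versions pass casual testing on typical inputs, which is exactly why the checklist directs inspectors to look for them by reading the code, and why item g asks that static analysis results be reviewed alongside the inspection.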
Since the correctness of automatically generated code may be difficult to verify, it is recommended that such code be subject to inspection, especially if the code is part of a safety critical system.
In addition to addressing the above checklist items, inspections of automatically generated code should also:
a. Address the correctness of the model used to generate the code.
b. Check that the code generator is correctly configured for the target environment.
c. Check that the interface between the generated code and the rest of the code base is consistent and correct.
d. Check that any known problems with the code generator are avoided or mitigated.
e. Check that any known issues or existing problems with the code generator are documented.
The inspection should include participants representing the following stakeholders:
a. Persons who are responsible for defining the software's requirements.
b. Persons who are responsible for implementing the software's requirements.
c. Persons who are responsible for designing tests for the software (e.g., Test engineers).
d. Persons who understand the quality needs of the software component (e.g., quality assurance, software assurance, and reliability), when appropriate.
e. Persons who are responsible for implementing Software Safety (if the inspection involves a safety critical system).

The moderator should limit the amount of work product to be inspected in order to maintain an acceptable inspection rate. Prior data and experiences suggest a starting metric for this type of inspection of at most 10 pages per hour.

The inspection should include the following materials as reference documents:
a. Work products documenting the detailed software design.
b. Output from static analysis of the code, if available.

7.7 Test Plan Inspection (IT1)
Checklists for test plan inspections should contain items which:
a. Check that the purpose and objectives of testing are identified in the test plan and that they contribute to the satisfaction of the mission objectives.
b. Check that all new and modified software functions will be verified to operate correctly within the intended environment and according to approved requirements.
c. Check that the resources and environments needed to correctly verify software functions and requirements are identified.
d. Check that all new and modified interfaces will be verified.
e. Address the identification and elimination of extraneous or obsolete test plans.
f. Check that each requirement will be tested.
g. Check that the tester has determined the expected results before executing the test(s).
h. For safety critical software systems:
(1) Check that all software safety critical functions or hazard controls and mitigations will be tested. This testing should include ensuring that the system will enter a safe state when unexpected anomalies occur.
(2) Check that safety and reliability analyses have been used to determine which failures and failure combinations to test for.
i. Check that the content of the test plan fulfills NPR 7.
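The safe-state requirement in item h(1) is a testable property. The sketch below is a deliberately simplified, hypothetical illustration (real flight software is far more involved): any unexpected anomaly during command processing drives the system into a safe state, which is exactly the behavior a test plan for safety critical software should exercise.

```python
from enum import Enum

class Mode(Enum):
    NOMINAL = "nominal"
    SAFE = "safe"

class FlightSoftware:
    """Hypothetical sketch: unexpected anomalies must lead to a safe state."""
    def __init__(self):
        self.mode = Mode.NOMINAL

    def enter_safe_state(self):
        # A real system would also shed non-essential loads, secure
        # actuators, and await ground contact.
        self.mode = Mode.SAFE

    def step(self, command: str):
        try:
            self._execute(command)
        except Exception:
            # Checklist item h(1): any unexpected anomaly ends in SAFE,
            # rather than propagating and leaving the system in an
            # undefined state.
            self.enter_safe_state()

    def _execute(self, command: str):
        if command == "thrust":
            pass  # nominal command processing would go here
        else:
            raise RuntimeError(f"unknown command: {command}")

fsw = FlightSoftware()
fsw.step("thrust")    # nominal command: mode stays NOMINAL
fsw.step("garbage")   # injected anomaly: software transitions to SAFE
```

A test plan satisfying h(1) would enumerate anomaly injections like the second call above, one per identified hazard control, and assert the safe-state transition for each.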
The inspection should include participants representing the following stakeholders:
a. Persons who are responsible for defining the software component's requirements.
b. Persons who are responsible for implementing the software's requirements.
c. Persons who understand the quality needs of the software component (e.g., quality assurance, software assurance, and reliability).
d. Persons who are responsible for designing tests for the software (e.g., Test engineers).
e. Persons who are responsible for implementing Software Safety (if the inspection involves a safety critical system).
f. Persons who are responsible for verifying and validating systems interfaces (e.g., system test engineers).