Software Review-Fagan Inspection (Best Tutorial 2019)


Software review and inspection has been considered a best practice for over four decades. There are many flavors of reviews and inspections. We focus on the classic Fagan inspection and active design review, as well as their various extensions.


Fagan inspection is not only the first formal inspection method but also the foundation of many other modified versions. This tutorial explains software review with Fagan inspection, using the best techniques available in 2019.


Many factors impact the performance of software reviews and inspections, chief among them individual factors. It is well established that software review and inspection is primarily an individual, not a group, activity.


How the reviewer reads and extracts information from the software artifact under review impacts his or her performance.


Inspection is “A visual examination of a software product to detect and identify software anomalies, including errors and deviations from standards and specifications,” and review is “A process or meeting during which a software product, a set of software products, or a software process is presented to project personnel, managers, users, customers, user representatives, auditors or other interested parties for examination, comment or approval.”


The standard defines five types of reviews (management reviews, technical reviews, inspections, walkthroughs, and audits), and software inspections are a kind of review.


We use review as a general term. This blog describes a generic procedure for software review, then treats Fagan inspection and active design review in more detail.


A Generic Software Review Procedure


All software review procedures share some commonalities. IEEE Std 1028 lists the following five steps for all five types of reviews: (1) planning the review, (2) overview of the procedures, (3) preparation, (4) examination/evaluation/recording of results, and (5) rework/follow-up.


A widely used reference model for software inspection processes has six phases: planning, overview, defect detection, defect collection, defect correction, and follow-up.


Tian abstracted a generic software review procedure with three steps, and various software review procedures can be considered as extensions to or specialization of this generic one. We discuss Tian’s abstraction in this section.


The generic software review procedure has three stages of activities: planning and preparation, conducting the review, and corrections and follow-up.


In the planning and preparation stage, one typically defines the objectives of a review and decides what artifacts are subject to review, who will create them, who will review them, who else will be involved and in what capacity, when the review will happen, and what the overall process and follow-up activities are, if needed.


Before conducting a review, the document authors assemble the material, decide the venue for the review, and handle the review logistics.


In the reviewing stage, people get together as a team face-to-face or online, synchronously or asynchronously. The team goes through the material under review in some pre-determined manner, discusses issues reported before the meeting or spotted in session, and agrees on observations or dismisses false positives.


The focus of this stage is to uncover and collate issues in the document under review, and hence it is often called collection. In the end, the review team agrees whether a follow-up review session is warranted.


In the correction and follow-up stage, the author corrects issues that have surfaced during the review. The dispositions of the issues shall be agreed upon by the review team, and the corrections or fixes shall be verified.


A follow-up review can be conducted for that purpose if the extent of changes is large; otherwise, a lightweight follow-up shall suffice.


We discuss two classical software review procedures, namely Fagan inspection and active design review, each of which can be considered an extension of this generic review. We use the term Fagan inspection for historical reasons.


Software review or inspection is independent of software development models such as waterfall or agile. A software review can be applied to a software artifact as soon as it is ready for review. Software review is considered a best practice and is an important activity of software quality assurance.


Fagan Inspection and Extensions


The earliest and most influential software review procedure was proposed by Fagan. The method was initially intended for design and code inspection and later adapted to inspect virtually any software artifacts such as requirements, user documentation, and test plans and test cases as long as such artifacts can be made visible and readable. 


Fagan inspection has been so influential that it is almost synonymous with the term inspection.


 Fagan Inspection


Fagan inspection consists of six steps, or operations as they were originally called: planning, overview, preparation, inspection, rework, and follow-up.

We discuss these six steps in the context of design and code inspection. The principal ideas behind Fagan inspection can be applied to inspecting any software artifacts.
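The six steps above can be sketched as a simple ordered workflow with an optional loop back to inspection after follow-up. This is only an illustrative model; the function and variable names are hypothetical, not part of Fagan's method.

```python
# Minimal sketch of the Fagan inspection flow as an ordered checklist.
# The step names come from the text; everything else is illustrative.

FAGAN_STEPS = ["planning", "overview", "preparation",
               "inspection", "rework", "follow-up"]

def next_step(current, reinspect=False):
    """Return the step after `current`; after follow-up, loop back to
    inspection when the moderator requests another round, else finish."""
    if current == "follow-up":
        return "inspection" if reinspect else None  # None = inspection done
    return FAGAN_STEPS[FAGAN_STEPS.index(current) + 1]

assert next_step("planning") == "overview"
assert next_step("follow-up", reinspect=True) == "inspection"
assert next_step("follow-up") is None
```

The optional loop mirrors the moderator's decision, described below, on whether another round of inspection is needed.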



The objectives of the planning step are to define inspection entry criteria for the materials subject to inspection, to arrange the availability of the appropriate participants, and to arrange the meeting place and time.



The objectives of the overview step are communication and education, as well as assigning the inspection roles to participants. This step involves the whole inspection team.


Typically a meeting is held, during which the project overview and the specifics of the artifact to be inspected are given. The inspection materials are distributed at the end of the meeting.



The objective of the preparation step is for participants to study the material individually to fulfill their respective roles. One of the key ideas in the inspection is to assign different roles to the individual participants based on their respective expertise. The roles of the participants are discussed below.


To facilitate the preparation, a checklist of recent error types can be used, or other kinds of reading techniques can be adopted. Software reading techniques are discussed in the rest of the blog.



The objective of the inspection step is to find the errors in the material under inspection. A formal meeting is held and the entire team participates in the discussion. At the beginning of the meeting, if code files are under inspection, then the implementer (author) can show the implementation of the design.


In the course of the meeting, errors are discussed; false positives are dismissed and true errors are recognized and noted, with possible error type classification and severity identification.


It is important to note that the team should not hunt for a solution nor discuss alternatives. After the inspection has been held, a written report of the findings is released in a timely manner.



The objective of the rework step is to fix all errors or provide other responses. The author of the software artifact is responsible for the rework and responses.



The objective of the follow-up step is to ensure all fixes are effective and there are no newly introduced problems. The moderator decides if another round of inspection is needed.


For example, if the errors are minor and the changes are limited, the moderator can declare there is no need for another round of inspection. Regardless of whether there is another round, the team needs to pay attention to “bad fixes.” Empirical data show that almost one of every six fixes is incorrect or creates other defects.


We can view Fagan inspection in the framework of the generic review. The first three steps of the Fagan inspection—planning, overview, and preparation—fit into the “planning and preparation” step of the generic review; the inspection step maps directly to the “review” block; and the last two steps—rework and follow-up—map to the “correction and follow-up” step. Both the generic review and the Fagan inspection allow an optional iteration.


As mentioned above, the Fagan inspection defines the participant roles that each participant plays. There are four roles: moderator, author, reader, and tester. The moderator leads the inspection team and takes care of logistics; the other roles represent the viewpoints of those with their respective expertise during the inspection.


The moderator is the key person in a successful inspection. He or she possesses strong interpersonal and leadership skills, coaches and guides the inspection team, and handles meeting logistics, including scheduling the meeting and publishing the outcome of the inspection. The moderator must be neutral and objective.


The author is the person who created the software artifacts under inspection. The author is responsible for producing the artifacts and fixing the errors in the artifacts, with possible help from others.


The reader is an experienced peer who can be a subject matter expert on the software artifact under inspection. The tester is responsible for writing and/or executing test cases for the software module or the product.


The Fagan inspection team typically consists of four people, large enough to allow group interaction to detect errors in software artifacts but small enough to allow individual voices to be heard. To have a healthy group dynamic, an ideal mix of participants can include people with different background and experience.


The Fagan-style reviews have a few noticeable drawbacks. One of them is the heavy process involved, which requires a series of formal meetings and documentation.


This limitation is overcome by the introduction of modern lightweight reviews. The other drawback is that the quality of review varies widely, since the participants may be passively engaged in the review. This latter shortcoming is remedied by the active review, which is discussed in the next subsection.


 Extensions to Fagan Inspection

It has been 40 years since Fagan published the Fagan inspection. Fagan inspection has been studied by researchers and embraced by practitioners. The inspection procedure has been extended in different ways to further improve its efficiency or customize to unique situations. We summarize a few important extensions below.


Meeting or No Meeting


Proposed improvements to Fagan inspection often center on the importance and cost of group meetings, particularly the defect collection meeting. Fagan insisted on having a defect collection meeting, but other researchers questioned the importance of meetings. Reasons often cited to support a team meeting include:


  • Synergy: More people working together will find more defects than they would find working alone.
  • Learning and knowledge sharing: Meetings are a good opportunity for beginners to gain domain knowledge from experienced participants.
  • Milestone: Meetings serve as project milestones.


However, it takes time and effort to schedule a group meeting, particularly when it involves many people.

Researchers who questioned whether the meeting creates synergy reported that most defects were found during the individual preparation stage. This finding was confirmed by many others, who also reported that meeting-based reviews were significantly more costly than non-meeting-based reviews and did not detect significantly more defects.


However, meeting-based reviews were significantly better at reducing false positive defects, and reviewers preferred meeting-based reviews over non-meeting-based reviews.


The general consensus now is not to have a large group meeting, or at least not to emphasize it. A few alternatives have been proposed to replace full-team meetings. As an example, a few experts can go through defects reported by individual reviewers during their preparation and decide the nature of the defects (true vs. false positives).


Meeting-less inspections further evolved into modern lightweight inspections, which are discussed later.


What Is the Right Team Size?


Fagan suggested a team size of 4 people. A larger team presumably allows different kinds of defects to be found, since each reviewer has different expertise and experience.


The argument of cost-effectiveness favors a smaller team. Some researchers proposed a two-person inspection involving just the author and a reviewer, which makes inspection accessible to teams or organizations that don’t have access to larger team resources.


One in-situ code inspection experiment varied the number of inspectors on each inspection team (1, 2, or 4) and concluded that, while 1 inspector was significantly less effective than 2 or 4 inspectors, there was little difference in the inspection effectiveness of 2 or 4 inspectors.


Instead of using a single large team, some researchers split a large team into N smaller teams for critical projects, let the N smaller teams inspect requirements documents in parallel and independently, and aggregated defects from each smaller team at the end; this is known as N-fold inspection. They reported that independent teams found more defects than a single team.


Perhaps there is no optimal team size. The right team size will depend on the artifacts under review (types and complexity), organizational environment (whether it has access to large resources), etc.


For important documents such as a requirements specification, more points of view are certainly beneficial. It is also a good idea to have more people review the design than the code. Complex artifacts also warrant more independent reviewers with different expertise.


 Other Extensions

Gilb and Graham introduced a process brainstorming meeting right after the inspection meeting. This meeting’s function is root cause analysis so that similar defects can be prevented from happening in future projects or activities in the same project.


Knight and Myers studied a phased inspection (for code inspections), which consists of multiple phases or mini-inspections, each focusing on detecting one class of defects such as issues with language, code layout, programming constructs, etc. Defects have to be fixed before the next phase can start.


Many software artifacts are generated in the course of project development. It might be infeasible to inspect all documents due to resource constraints.


To address this concern, researchers developed a sampling-driven inspection, which utilizes a pre-inspection to identify a partial list of documents that can benefit from a focused inspection.


The decision can also be based on the historical defect data and the characteristics of the document itself, e.g., code complexity metrics.


Active Design Review and Extensions


The active design review was introduced by Parnas and Weiss. Although their publication came much later than Fagan’s, according to Weiss, they conceived the idea independently at about the same time that Fagan published his work.


 Active Design Review

The purpose of design review is to find errors in design and its documentation. There are many kinds of design errors—e.g., inconsistency (different assumptions), inefficiency (inefficient to implement or use), ambiguity (allows different interpretation or lack of clarity), and inflexibility (does not accommodate change).


Conventional reviews such as Fagan inspection tend to be incomplete and shallow. Parnas and Weiss noticed that reviews have variable quality and many factors contributed to it:


  • The amount, quality, and time of delivery of the design documentation varied widely
  • The time that the reviewers put in preparation varied widely
  • The participation of the reviewers varied widely
  • The expertise and roles of the reviewers varied widely


The active design review was proposed to reduce the variability and promote a consistent review quality. The key part of the active design review is the use of questionnaires to define the reviewer’s responsibilities and to ensure they play a more active role. The main ideas behind the active design review, when compared to the Fagan-style review, include:


  • The required knowledge and skills reviewers possess are explicitly identified before selecting the reviewers.
  • Reviewers focus their efforts on the design aspects related to their experience and expertise.


The designers pose questions to the reviewers rather than the reviewers asking questions. Each question is carefully designed such that its answer requires careful study of the design under review or some aspects of the design.


Reviewers are actively involved in the review and make positive assertions on the design instead of merely skimming over the design for obvious or trivial errors.

Reviewers and designers meet in a small group to resolve issues. An active design review has five phases:



1. Make the design reviewable.

A good design shall be well structured, simple, efficient, adequate, flexible, practical, implementable, and standardized. Design assumptions shall be made explicit.


The design document can include redundant information for error and consistency checking. The document shall be structured in such a way that modules and submodules can be reviewed separately.


2. Identify the review types.

A design review shall be focused and have a well-defined purpose. It is thus easy to identify expertise needed to support the review. Different reviews can concentrate on detecting different error types such as assumption validity, sufficiency, consistency between assumptions and functions, and adequacy.


3. Classify reviewers.

Reviewers shall be specialists, potential users, those familiar with the design methods and technologies used, or those skilled at finding issues. The review shall exploit the skills and knowledge of the reviewers to detect as many errors as possible.


4. Design questionnaires.

The questions shall not be trivial. They ensure the reviewers take an active role and use the design document to answer the questions. The questions shall be phrased to avoid yes/no answers.


5. Conduct the review.

There is no big meeting. Instead, the designer and reviewer have 1:1 or small group discussion. This phase has three stages:


  • a. An overview meeting to discuss the material under review and how the process works.


  • b. Assign reviewers to specific sections of the document, with a deadline by which the questions shall be answered and returned to the designer. The designer/reviewer meeting is also scheduled.


  • c. The designer collects and reads the completed questionnaires and meets with reviewers individually to understand and resolve questions. The designer also updates the design document afterward.


The active design review has a few challenges. It is usually hard to find subject matter experts to serve as reviewers and get their time commitment, as everyone has a busy schedule and other commitments in the same or different projects.


There is no big meeting, and review is managed as individual tasks, thus it takes diligence to keep the review on track and complete on time. Lastly, it takes significant effort to design a set of questions whose answers are not obvious and easy to find.


The questions for the reviewers to answer shall be carefully designed and non-trivial, forcing reviewers to play the role of a user of the design or of a programmer implementing the design.


Answering the questions makes the reviewer active. The questions shall also be tailored to the reviewer’s expertise and the aspects of the document under review.




 Extensions to the Active Design Review


Britcher (1988) took the active design review one step further by incorporating correctness arguments into inspection. The artifact author and inspectors collaborate in the pursuit of correctness, developing questions and answers together centering on four program attributes: topology, algebra, invariance, and robustness.


The purpose for this extension is not to improve inspections but to improve programming.


The active design review was later combined with the architecture tradeoff analysis method, and the new method is called active review for intermediate designs.


The hybrid method fills a niche in which only a portion of a system, not a complete architecture, is available, but the designer would like to get early feedback on the design approach. The ideas behind the active design review have also been used in scenario-based reading.


Other Types of Reviews

In addition to the Fagan Inspection and active design review and their extensions, many different kinds of software review exist in practice with different levels of discipline and flexibility. There are, however, no commonly agreed-upon definitions of terms.


The usages are generally inconsistent, which causes confusion in many cases. Even for the same type of review, the level of procedural detail can vary significantly.


Fagan, in his seminal paper, recognized that walkthroughs were practiced in many different places with varying regularity and thoroughness. It was also reported that one person’s walkthrough can be another person’s inspection.


Wiegers organized different types of reviews based on their levels of formality and rigor. He discussed, in the decreasing order of formality, inspection, team review, walkthrough, pair programming, peer desk-check or pass-around, and ad hoc review.


Another classification covers five types of review: formal inspections, over-the-shoulder reviews (screen or desk reviews), e-mail pass-around reviews, pair programming, and tool-assisted reviews.


The IEEE Standard for Software Reviews and Audits defined five types of software reviews (management reviews, technical reviews, inspections, walkthroughs, and software audits) (IEEE 1028). In terms of formality, walkthroughs are the least formal, inspections more formal, followed by management and technical reviews, with audits being the most formal.


These types of reviews and audits are considered systematic, with the following attributes: team participation, documented results of the review, and documented procedures for conducting the review.


The review or inspection procedures mentioned above are applicable to any software artifacts, with the exception of pair programming, which is most applicable to the source code.


There are specific review procedures for particular kinds of software artifacts as well. Given the importance of software architecture in the software life cycle, architecture review and evaluation methods were an active area of research and now have become mature.


We use techniques (or procedures) and methods as synonyms, although some authors distinguish them deliberately. The ways to assess software architectures are packaged as “methods” in literature. Interested readers can find more information in blogs on architecture evaluation.


We do not intend to define various types of reviews, rather suggest adopting the definitions in the IEEE standard. We do want to point out the trend in industry practices.


A lightweight process is strongly favored by practitioners due to their busy schedules and heavy workloads, and therefore the synchronous, real-time, face-to-face meeting in Fagan inspection can be impractical.


The synergy of the defect collection meeting has been questioned and the community generally agrees that its value is marginal.


Thus software reviews can be conducted without meetings. The IEEE standard permits reviews to be held without a physical meeting in a single location.


Given the advancement of telecommunication and telepresence, physical meetings can be replaced with a telephone conference, video conference, web conference, or other groupware and group electronic communications.


Software tools to support software reviews also have made progress. Web-based tools allow authors to upload software artifacts to web servers, invite reviewers, and set up a review online.


 Reviewers can enter review comments online and see each other’s comments instantly. Issues raised during the review are tracked by tools for closure.


Synchronous and asynchronous notifications keep the author and reviewers in the same loop on progress. Review metrics are automatically collected to provide input for future process improvement. With these features and capabilities, the overhead of classic inspections is alleviated.
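The comment tracking these tools provide can be sketched as a tiny in-memory data model. All class and field names below are hypothetical and do not correspond to any real tool's API.

```python
# Toy sketch of the data a web-based review tool tracks: an artifact,
# its reviewers, and the comments raised against it. Illustrative only.
from dataclasses import dataclass, field

@dataclass
class ReviewComment:
    reviewer: str
    line: int
    text: str
    resolved: bool = False   # tools track each issue to closure

@dataclass
class Review:
    artifact: str
    reviewers: list
    comments: list = field(default_factory=list)

    def add_comment(self, reviewer, line, text):
        self.comments.append(ReviewComment(reviewer, line, text))

    def open_issues(self):
        """Comments still awaiting rework or dismissal."""
        return [c for c in self.comments if not c.resolved]

review = Review("parser.c", ["alice", "bob"])
review.add_comment("alice", 42, "possible off-by-one in loop bound")
assert len(review.open_issues()) == 1
```

A real tool would add persistence, notifications, and metrics collection on top of a model like this.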


At least for code review, many organizations, including Facebook, Google, and Microsoft, are adopting lightweight, tool-assisted reviews. Bacchelli and Bird called this modern code review.


Referring to Fagan’s inspection, only preparation, collection (inspection), and rework stages are present in the lightweight review, and defect collection is facilitated with tools. There is empirical evidence that supports the efficiency and effectiveness of the lightweight, tool-assisted modern practice.


Factors Impacting Software Reviews


When the purpose of software reviews is to detect defects in the software artifacts, one is interested in how many defects are detected and how quickly they are detected. The number of detected defects is related to the effectiveness of software reviews.


The more defects are detected, the more effective a software review is. How quickly defects are detected (number of detected defects per unit of time) is related to the efficiency and cost of software reviews.


The quicker the defects are detected, the lower the cost of software review. Many factors impact the effectiveness and efficiency of software reviews, and individual performance, meetings, preparation, the amount of inspected materials, team size, tools, and training are frequently mentioned factors. Meetings and team sizes have been discussed earlier.
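The two measures above, effectiveness (defects found) and efficiency (defects found per unit of time), can be put into a quick back-of-the-envelope calculation. The function name and the figures below are made up for illustration.

```python
# Effectiveness is the raw defect count; efficiency normalizes it by
# review effort. Illustrative numbers only.

def review_efficiency(defects_found, hours_spent):
    """Defects detected per hour of review effort."""
    return defects_found / hours_spent

# Two hypothetical reviews of the same artifact:
meeting_based = review_efficiency(defects_found=12, hours_spent=8.0)
individual    = review_efficiency(defects_found=10, hours_spent=4.0)
assert meeting_based == 1.5
assert individual == 2.5  # fewer defects found, but cheaper per defect
```

This is the trade-off discussed earlier: meeting-based reviews may be more effective at filtering false positives yet less efficient per hour spent.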


Both commercial and research tools exist to support software inspection, mostly in the areas of asynchronous communication, artifact comprehension and visualization, and defect tracking. We discuss the remaining factors here.


Individual performance. A large difference (more than 10 times) in individual performance in software engineering was observed a long time ago. Hatton recently reported the same in code inspection.


A few attributes contributed to the difference, including expertise on the programming language and software reading expertise. The rest of the blog is devoted to software reading techniques.


Preparation. The importance of individual preparation to defect detection has been reported by many authors. The more time a reviewer spends on preparation, the more defects are typically reported.


It is generally agreed that most defects are found during an individual’s preparation phase before defects are aggregated.


Amount of materials. Due to people’s limited attention span, when the amount of materials to be inspected is large, readers become overwhelmed and fatigued, which negatively impacts the review effectiveness.


For code inspection, the rate of inspection often suggested in the literature is 100-200 lines of code per hour. If readers are not given enough time to examine the material, then the inspection loses its rigor and readers tend to report trivial findings.
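The quoted rate of 100-200 lines of code per hour translates directly into a time budget for an inspection. A small sketch, using the rates from the text and otherwise hypothetical numbers:

```python
# Time budget for a code inspection at the literature-suggested rate of
# 100-200 LOC/hour. Function name and example size are illustrative.

def inspection_hours(loc, rate_low=100, rate_high=200):
    """Return (min_hours, max_hours) needed to inspect `loc` lines."""
    return loc / rate_high, loc / rate_low

lo, hi = inspection_hours(1500)
assert (lo, hi) == (7.5, 15.0)  # a 1,500-line artifact needs 7.5-15 hours
```

If the schedule allows far less time than this, the inspection risks losing its rigor, as the text notes.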


Training. The level of training readers receive impacts the review effectiveness. This should not be a surprise, given the importance of the individual’s skills in the inspection. Researchers found that practical training on defect-finding skills was more impactful than training on the process.


Software development is a highly personal endeavor. Without care, software review or inspection can easily cause anxiety and tension among participants. The social psychological effects of computer programming were recognized from the early days of programming. Weinberg published his famous book, The Psychology of Computer Programming, in 1971, before Fagan inspection was introduced and practiced. Authors should have a thick skin and keep their egos in check when participating in reviews. Reviewers should recognize the IKEA effect on the artifact authors.


The IKEA effect is a cognitive bias in which artifact creators place a disproportionately high value on the artifacts they created: more labor leads to deeper affection. Successful inspection depends on the individual capability of each team member and how well individuals work in teams.


Before we close this section, let’s discuss the eight maxims Kelly and Shepard compiled based on their observations, which focus on people forces at work in inspections.


1. Use structured inspection techniques. By inspection techniques, Kelly and Shepard meant reading techniques. The techniques shall be appropriate for the inspection goals and inspectors’ experience.


2. Set standards of acceptability. Inspections are expensive. There should be entrance criteria to start the inspection. Artifacts shall be cleaned up and superficial issues shall be rooted out, preferably with tools, before inspections start.


3. Match skills to tasks. If skills and tasks are matched, both effectiveness and the comfort level of the inspector improve. Inspectors who are familiar with the artifacts under review shall be chosen first. Skills here also include soft skills such as verbal and written communication skills, which are needed to interact with artifact authors.


4. Find the physical, mental, and schedule space. Inspections are mentally demanding and require concentration for an extended period of time. The inspection shall be conducted at a quiet place without interruption, and inspectors shall be given enough time to complete the inspection.


5. Encourage an inspection-based process. At a minimum, the inspection shall be planned, and the project schedule shall reflect that. Roles and responsibilities shall be clearly specified ahead of time.


6. Promote responsibility, ownership, and authority. Responsibility and ownership lead to improved inspection efforts. Inspectors and authors jointly own the artifact and are responsible for its quality. Inspectors shall be granted the authority to access needed documents and resources to complete the inspection.


7. Ensure clear inspection goals are set. Clear goals affect the scope of the inspection. It shall be clear to all participants if alternative solutions shall be proposed or not. Also, ensure terminologies are defined and used consistently to avoid potential confusion.


8. Use metrics cautiously. Metrics can be used and interpreted in different ways. There is no commonly agreed-upon definition of “defect,” nor its granularity and severity. The number of defects is strongly related to the complexity of the task itself. Metrics shall not be used to evaluate an individual’s job performance.




The detection cost is the cost of verification or evaluation of a product or service during the various stages of the development process. One of the detection techniques is conducting reviews.


Another technique is conducting tests. But it must be remembered that the quality of a software product begins in the first stage of the development process, that is to say when defining requirements and specifications.


Reviews will detect and correct errors in the early phase of development while tests will only be used when the code is available. So we should not wait for the testing phase to begin to look for errors.


In addition, it is much cheaper to detect errors with reviews than with testing. This does not mean we should neglect testing, since testing is essential for detecting errors that reviews cannot discover.


Unfortunately, many organizations do not perform reviews and rely on testing alone to deliver a quality product. It often happens that, given the many problems throughout development, the schedule and budget have been compressed to the point that tests are often partially, if not completely, eliminated from the development or maintenance process.


In addition, it is impossible to test a large software product completely. For example, for software that has barely 20 decisions (branches), there are more than 184,756 possible paths to test, and for software with 40 decisions, there are about 1.38E+11 possible paths to test.
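The figures above follow from simple combinatorics: with d binary decisions, one common illustration counts the lattice-path total C(d, d/2) — a modeling convention, since the exact count depends on the control-flow structure. A quick check in Python reproduces both numbers:

```python
from math import comb

def path_count(decisions: int) -> int:
    """Paths through a structure of `decisions` binary decisions,
    modeled as lattice paths: C(d, d//2).  (A modeling assumption --
    the exact count depends on the control-flow structure.)"""
    return comb(decisions, decisions // 2)

print(path_count(20))  # 184756 possible paths for 20 decisions
print(path_count(40))  # 137846528820, i.e., about 1.38E+11, for 40 decisions
```

Even at a generous rate of one test case per second, exhaustively covering the 40-decision case would take thousands of years, which is why reviews must complement testing.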

Informal reviews

In this blog section, we present reviews. We will see that there are many types of reviews ranging from informal to formal.

Informal reviews are characterized as follows:

  • There is no documented process to describe reviews and they are carried out in many ways by different people in the organization;
  • Participants’ roles are not defined;
  • Reviews have no stated objectives, such as a target fault detection rate;
  • They are not planned but improvised;
  • Measures, such as the number of defects, are not collected;
  • The effectiveness of reviews is not monitored by management;
  • There is no standard that describes them;
  • No checklist is used to identify defects.

Formal reviews will be discussed in this blog as defined in the following text box.


In this section, we present two types of review as defined in the IEEE 1028 standard: the walk-through and the inspection. We will also describe two reviews that are not defined in the standard: the personal review and the desk-check. These reviews are the least formal of all of the types of reviews.


They are included here because they are simple and inexpensive to use. They can also help organizations that do not conduct formal reviews to understand the importance and benefits of reviews in general and establish more formal reviews.


Peer reviews are product activity reviews conducted by colleagues during development, maintenance, or operations in order to present alternatives, identify errors, or discuss solutions. They are called peer reviews because managers do not participate in this type of review.


The presence of managers often creates discomfort as participants hesitate to give opinions that could reflect badly on their colleagues and the person who requested the review may be apprehensive of negative feedback from his own manager.


Note the presence of phase-end reviews, document reviews, and project reviews. These reviews are used internally or externally for meetings with a supplier or customer. 


It should be noted that each type of review does not target all of these objectives simultaneously. We will consider the objectives of each type of review in a subsequent section.


The types of reviews that should be conducted and the documents and activities to be reviewed or audited throughout the project are usually determined in the software quality assurance plan (SQAP) for the project, as explained by the IEEE 730 standard.


To produce a software product (e.g., documentation, code, or tests), source documents are usually used as inputs to the review process.


For example, to create a software architecture document, the developer should use source material such as the system requirements document, the software requirements, a software architecture document template, and possibly a software architecture style guide.


A review of just the software product, for example, a requirements document, by its author alone is not sufficient to detect a large number of errors. As illustrated in the figure, once the author has completed the document, the software product is compared by his or her peers against the source documents used.


At the end of the review, the peers who participated will have to decide if the document produced by the author is satisfactory as is, if significant corrections are required, or if the document must be corrected by the author and peer reviewed again.

Reviews can target many objectives, including the following:

  • Identify defects


  • Assess/measure the quality of a document (e.g., the number of defects per page)
  • Reduce the number of defects by correcting the defects identified
  • Reduce the cost of preparing future documents (i.e., by learning the type of defects each developer makes, it is possible to reduce the number of defects injected in a new document)


  • Estimate the effectiveness of a process (e.g., the percentage of fault detection)
  • Estimate the efficiency of a process (e.g., the cost of detection or correction of a defect)
  • Estimate the number of residual defects (i.e., defects not detected when software is delivered to the customer)
  • Reduce the cost of tests
  • Reduce delays in delivery
  • Determine the criteria for triggering a process
  • Determine the completion criteria of a process
  • Estimate the impacts (e.g., cost) of continuing with current plans, e.g. cost of delay, recovery, maintenance, or fault remediation
  • Estimate the productivity and quality of organizations, teams, and individuals
  • Teach personnel to follow the standards and use templates
  • Teach personnel how to follow technical standards
  • Motivate personnel to use the organization's documentation standards
  • Prompt a group to take responsibility for decisions
  • Stimulate creativity and the contribution of the best ideas with reviews
  • Provide rapid feedback before investing too much time and effort in certain activities
  • Discuss alternatives
  • Propose solutions, improvements
  • Train staff
  • Transfer knowledge (e.g., from a senior developer to a junior)
  • Present and discuss the progress of a project
  • Identify differences in specifications and standards
  • Provide management with confirmation of the technical state of the project
  • Determine the status of plans and schedules


  • Confirm requirements and their assignment in the system to be developed

The third option is only used when the revised document is very important to the success of the project.


As discussed below, when an author makes many corrections to a document, he or she may inadvertently introduce other errors. It is these new errors that we hope to detect with another peer review.
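Several of the measurement objectives listed above reduce to simple ratios once the underlying counts are available. The sketch below shows illustrative formulas only; as noted earlier, definitions of "defect" and of effort vary between organizations:

```python
def detection_effectiveness(found_in_review: int, total_known: int) -> float:
    """Percentage of defects caught by the review, out of all defects
    eventually known (including those found later in testing or use)."""
    return 100.0 * found_in_review / total_known

def detection_efficiency(effort_hours: float, found_in_review: int) -> float:
    """Average cost, in person-hours, to detect one defect."""
    return effort_hours / found_in_review

def defect_density(found_in_review: int, pages: int) -> float:
    """Defects found per page of the reviewed document."""
    return found_in_review / pages

# Invented example: 40 defects found in review, 10 more found later,
# 16 person-hours of review effort, 20-page document.
print(detection_effectiveness(40, 50))  # 80.0 (%)
print(detection_efficiency(16.0, 40))   # 0.4 person-hours per defect
print(defect_density(40, 20))           # 2.0 defects per page
```

As the tutorial cautions under "Use metrics cautiously," such figures should characterize the process, never an individual's job performance.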

peer review

The advantage of reviews is that they can be used in the first phase of a project, for example, when requirements are documented, whereas tests can only be performed when the code is available.


For example, if we depend on tests alone and errors are injected when writing the requirements document, these will only become apparent when the code is available.


However, if we use reviews, then we can also detect and correct errors during the requirements phase. Errors are much easier to find and are less expensive to correct at this phase.


For illustration purposes, we used an error detection rate of 50%. Several organizations have achieved higher detection rates, that is, well over 80%. This figure clearly illustrates the importance of establishing reviews from the first phase of development.
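The compounding effect of a fixed detection rate can be sketched with a toy model. Assuming (hypothetically) 100 defects present at the start and a 50% detection rate at each successive review, the number of escaped defects roughly halves at every phase:

```python
def escaped_defects(injected: int, phases: int, detection_rate: float = 0.5) -> list:
    """Defects still present after each of `phases` reviews, where each
    review catches `detection_rate` of what remains (illustrative model)."""
    remaining = injected
    history = []
    for _ in range(phases):
        remaining = int(remaining * (1 - detection_rate))
        history.append(remaining)
    return history

# 100 defects, four review phases, 50% detection rate per phase:
print(escaped_defects(100, 4))  # [50, 25, 12, 6]
```

With detection rates well over 80%, as some organizations achieve, the escape count falls far faster, which is why establishing reviews from the first phase pays off.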



This section describes two types of reviews that are inexpensive and very easy to perform. Personal reviews do not require the participation of additional reviewers, while desk-check reviews require at least one other person to review the work of the developer of a software product.


Personal Review

A personal review is done by a person reviewing his or her own software product in order to find and fix as many defects as possible. A personal review should precede any activity that uses the software product under review.

The principles of a personal review are:

personal review

  • find and correct all defects in the software product;
  • use a checklist produced from your personal data, if possible, based on the types of defects that you are already aware of (the phase abbreviations commonly used in such data are listed below);
    • REQ = requirement
    • HLD = high-level architecture design
    • LLD = detailed design
    • CODE = coding and debugging
    • UT = unit testing
    • IT = integration testing
    • ST = system testing
    • SHIP = delivery to customer
    • KLOC = thousand lines of code
  • follow a structured review process;
  • use measures in your review;
  • use data to improve your review;
  • use data to determine where and why defects were introduced and then change your process to prevent similar defects in the future.




A checklist is used as a memory aid. A checklist includes a list of criteria to verify the quality of a product. It also ensures consistency and completeness in the development of a task. An example of a checklist is a list that facilitates the classification of a defect in a software product (e.g., an oversight, a contradiction, an omission).


The following practices should be followed to develop an effective and efficient personal review:

  • pause between the development of a software product and its review;
  • examine products in hard copy rather than electronically;
  • check each item on the checklist once completed;
  • update the checklists periodically to adjust to your personal data;
  • build and use a different checklist for each software product;
  • verify complex or critical elements with an in-depth analysis.

As we can see, personal reviews are very simple to understand and perform. Since the errors made often differ for each software developer, it is much more efficient to update a personal checklist based on errors noted in previous reviews.
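A personal checklist driven by past defect data can be sketched in a few lines. The defect types and counts below are invented for illustration; the idea is simply to promote one's most frequent defect types to the top of the checklist:

```python
from collections import Counter

# Personal defect log: (phase where injected, defect type) -- invented sample data.
defect_log = [
    ("REQ", "omission"), ("CODE", "off-by-one"), ("CODE", "off-by-one"),
    ("LLD", "interface mismatch"), ("CODE", "uninitialized variable"),
    ("CODE", "off-by-one"), ("REQ", "ambiguity"),
]

def build_checklist(log, top_n=3):
    """Return the most frequent defect types as checklist items,
    most common first, so the review targets personal weak spots."""
    counts = Counter(defect_type for _, defect_type in log)
    return [f"Check for {dtype} ({n} past occurrences)"
            for dtype, n in counts.most_common(top_n)]

for item in build_checklist(defect_log):
    print(item)
# The first item is the most frequent type: off-by-one (3 occurrences).
```

Re-running this after every review keeps the checklist aligned with one's current defect profile, as the practices below recommend.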



Roles:

  • None; the review is performed by the author alone

Entry criteria:

  • A software product to review

Inputs:

  • Checklist for the software product to be reviewed
  • Standard (if applicable)
  • A software product to review

Tasks:

  • Review the software product using the first item on the checklist and cross this item off when the review of the software product is completed
  • Continue the review of the software product using the next item on the checklist and repeat until all the items in the list have been checked
  • Correct any defects identified
  • Check that each correction did not create other defects

Exit criteria:

  • Corrected software product

Outputs:

  • Corrected software product

Measures:

  • The effort used to review and correct the software product, measured in person-hours with an accuracy of ±15 minutes


Desk-Check Reviews

A type of peer review that is not described in standards is the desk-check review, sometimes called the pass-around. It is important to describe this type of peer review because it is inexpensive and easy to implement. It can be used to detect anomalies and omissions, improve a product, or present alternatives.


This review is used for low-risk software products, or if the project plan does not allow for more formal reviews. According to Wiegers, this review is less intimidating than a group review such as a walk-through or inspection.


As shown in the figure, there are six steps. Initially, the author plans the review by identifying the reviewer(s) and a checklist. A checklist is an important element of a review as it enables the reviewer to focus on only one criterion at a time.


A checklist is a reflection of the experience of the organization. Then, individuals review the software product document and note comments on the review form provided by the author. When completed, the review form can be used as “evidence” during an audit.


Here is a list of some important features of checklists:

Checklist review

  • each checklist is designed for a specific type of document (e.g., project plan, specification document);
  • each item of a checklist targets a single verification criterion;
  • each item of a checklist is designed to detect major errors; minor errors, such as misspellings, should not be part of a checklist;
  • a checklist should not exceed one page; otherwise, it will be harder for reviewers to use;
  • each checklist should be updated periodically to increase its efficiency;
  • each checklist includes a version number and a revision date.
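Some of these features translate directly into mechanical checks on a checklist record. In the sketch below, the one-page limit is approximated by a maximum item count, which is an assumption on my part rather than something the text prescribes:

```python
from dataclasses import dataclass, field

@dataclass
class Checklist:
    document_type: str   # designed for a specific type of document
    version: str         # each checklist includes a version number...
    revision_date: str   # ...and a revision date
    items: list = field(default_factory=list)

    MAX_ITEMS = 25       # proxy for "should not exceed one page" (assumption)

    def add_item(self, criterion: str) -> None:
        """Each item should target a single verification criterion
        aimed at major errors (not misspellings)."""
        if len(self.items) >= self.MAX_ITEMS:
            raise ValueError("Checklist exceeds one page; split or prune it")
        self.items.append(criterion)

cl = Checklist("specification", "1.2", "2019-03-01")
cl.add_item("Are all requirements uniquely identified?")
print(len(cl.items))  # 1
```

Keeping the checklist as structured data also makes the periodic updates (and version bumps) easy to track.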



Entry criteria:

  • The document is ready for a review

Inputs:

  • A software product to review

Steps:

1. Plan the Desk-Check. The author:
  • identifies the reviewers;
  • chooses the checklist(s) to use;
  • completes the first part of the review form.

2. Send documents to reviewers. The author provides the following documents to the reviewers:
  • software product to review;
  • review form;
  • checklist(s).

3. Review the software product. Each reviewer:
  • checks the software product against the checklist;
  • completes the review form with comments and the effort to conduct the review.

4. Call a meeting (if needed). The author:
  • reviews the comments;
  • if the author agrees with all the comments, they are incorporated into the software product;
  • if the author does not agree with all the comments, or believes some comments have a significant impact, the author convenes a meeting with the reviewers and leads the meeting to discuss the comments and determine the course of action for each one: incorporate the comment as is, ignore the comment, or incorporate the comment with modifications.

5. Correct the software product. The author incorporates the comments received.

6. Complete the review form. The author:
  • completes the review form with the total effort (i.e., by all the reviewers) required to review the software product and the total effort required to correct the software product;
  • signs the review form.

Exit criteria:

  • Corrected software product

Outputs:

  • Corrected software product
  • Completed and signed review form

Measures:

  • The effort required to review and correct the software product (person-hours)

The author of the document collects these data and adds the time it took him or her to correct the document. The forms are retained by the author as “evidence” for an audit by the SQA group of the author’s organization, or by the SQA group of the customer.
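Totaling the effort data from the review form is straightforward; per the procedure above, the author adds his or her own correction time to the reviewers' review time. A minimal sketch:

```python
def total_review_effort(reviewer_hours, correction_hours):
    """Total person-hours for a desk-check: all reviewers' review effort
    plus the author's correction effort, as recorded on the review form."""
    return sum(reviewer_hours) + correction_hours

# Invented example: three reviewers spent 1.5, 2.0, and 1.25 person-hours;
# the author spent 3.0 person-hours correcting the document.
print(total_review_effort([1.5, 2.0, 1.25], 3.0))  # 7.75
```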


As an alternative to distributing hard copies to reviewers, one can place an electronic copy of the document, the review form, and the checklist in a shared folder on the intranet.


Reviewers are invited to provide comments as annotations to documents over a defined period of time. The author can then view the annotated document, review the comments, and continue the Desk-Check review as described above.



In this section, we present the ISO/IEC 20246 standard on work product reviews, the Capability Maturity Model Integration (CMMI) model, and the IEEE 1028 standard, which lists requirements and procedures for software reviews.

ISO/IEC 20246 Software and Systems Engineering: Work Product Reviews

The purpose of ISO/IEC 20246 Work Product Reviews is: “to provide an International Standard that defines work product reviews, such as inspections, reviews, and walk-throughs, that can be used at any stage of the software and systems lifecycle.”


It can be used to review any system and software work product. ISO/IEC 20246 defines a generic process for work product reviews that can be configured based on the purpose of the review and the constraints of the reviewing organization.


The intent is to describe a generic process that can be applied both efficiently and effectively by any organization to any work product. The main objectives of reviews are to detect issues, to evaluate alternatives, to improve organizational and personal processes, and to improve work products.


When applied early in the life cycle, reviews typically reduce the amount of unnecessary rework on a project.


Capability Maturity Model Integration

The CMMI for Development (CMMI-DEV) is widely used by many industries. This model describes proven practices in engineering. In this model, a part of the “Verification” process area is devoted to peer reviews. Other verification activities will be considered in more detail in a later blog.



Verification is a process area in the engineering category at maturity level 3 of the CMMI-DEV.



The purpose of the process area "Verification" is to ensure that selected work products meet their specified requirements. Peer reviews are an important part of the verification and are a proven mechanism for effective defect removal.


An important corollary is to develop a better understanding of the work products and the processes that produced them so that defects can be prevented and process improvement opportunities can be identified.


Peer reviews involve a methodical examination of work products by the producers’ peers to identify defects and other changes that are needed.

Examples of peer review methods include the following:

  • Inspections;
  • Structured walk-throughs;
  • Deliberate refactoring;
  • Pair programming.

  • Specific Goal 2: Perform Peer Reviews
  • Specific Practice 2.1: Prepare for Peer Reviews
  • Specific Practice 2.2: Conduct Peer Reviews
  • Specific Practice 2.3: Analyze Peer Review Data


The Process and Product Quality Assurance process area provides the following list of issues to be addressed when implementing peer reviews:

  • Members are trained and roles are assigned to people attending the peer reviews.
  • A member of the peer review team who did not produce the work product is assigned to perform the quality assurance role.
  • Checklists based on process descriptions, standards, and procedures are available to support quality assurance activity.
  • Non-compliance issues are recorded as part of the peer review report and are tracked and escalated outside the project when necessary.


According to the CMMI-DEV, these reviews are performed on selected work products to identify defects and to recommend other changes required. The peer review is an important and effective software engineering method, applied through inspections, walk-through or a number of other review procedures.


The IEEE 1028 Standard

The IEEE 1028-2008 Standard for Software Reviews and Audits [IEE 08b] describes five types of reviews and audits and the procedures required for the completion of each type of review and audit.


Audits will be presented in the next blog. The introductory text of the standard indicates that the use of these reviews is voluntary. Although the use of this standard is not mandatory, it can be imposed by a client contractually.


The purpose of this standard is to define reviews and system audits for the acquisition, supply, development, operation and maintenance of software. This standard describes not only “what to do” but also how to perform a review. Other standards define the context in which a review is performed and how the results of the review are to be used.


Examples of Standards that Require the Use of Systematic Reviews

  • ISO/IEC/IEEE 12207, Software Life Cycle Processes
  • IEEE 1012, IEEE Standard for System and Software Verification and Validation
  • IEEE 730, IEEE Standard for Software Quality Assurance Processes


The IEEE 1028 standard provides minimum acceptable conditions for systematic reviews and software audits including the following attributes:

  • team participation;
  • documented results of the review;
  • documented procedures for conducting the review.


Conformance to the IEEE 1028 standard for a specific review, such as inspection, can be claimed when all mandatory actions (indicated by “shall”) are carried out as defined in this standard for the review type used.


This standard provides descriptions of the particular types of reviews and audits included in the standard as well as tips. Each type of review is described with clauses that contain the following information:


Introduction to review: describes the objectives of the systematic review and provides an overview of the systematic review procedures;

Responsibilities: defines the roles and responsibilities needed for the systematic review.

Input: describes the requirements for input needed by the systematic review;

Entry criteria: describes the criteria to be met before the systematic review can begin, including the following:

  • authorization;
  • initiating event.

Procedures: details the procedures for the systematic review, including the following:

  • planning the review;
  • overview of procedures;
  • preparation;
  • examination/evaluation/recording of results;
  • rework/follow-up;


Exit criteria: describes the criteria to be met before the systematic review can be considered complete;

Output: describes the minimum set of deliverables to be produced by the systematic review.


Application of the IEEE 1028 Standard

Procedures and terminology defined in this standard apply to the acquisition of software, supply, development, operation, and maintenance processes requiring systematic reviews. Systematic reviews are performed on a software product according to the requirements of other local standards or procedures.


The term “software product” is used in this standard in a very broad sense. Examples of software products include specifications, architecture, code, defect reports, contracts, and plans.


The IEEE 1028 standard differs significantly from other software engineering standards in that it does not only enumerate a set of requirements to be met (i.e., “what to do”), such as “the organization shall prepare a quality assurance plan,” but it also describes “how to do” at a level of detail that allows someone to conduct a systematic review properly.


For an organization that wants to implement these reviews, the text of this standard can be adapted to the notation of the organization’s processes and procedures, adjusting the terminology to that which is commonly used in the organization and, after some use, improving the review descriptions.


This standard concerns only the conduct of reviews, not the need for them or the use of their results. The types of reviews and audits are:


Management review:

A systematic evaluation of a software product or process performed by or on behalf of the management that monitors progress, determines the status of plans and schedules, confirms requirements and their system allocation, or evaluates the effectiveness of the management approaches used to achieve fitness for purpose;


Technical review: a systematic evaluation of a software product by a team of qualified personnel that examines the suitability of the software product for its intended use and identifies discrepancies from specifications and standards;


Inspection: a visual examination of a software product to detect and identify software anomalies including errors and deviations from standards and specifications;


Walk-through: a static analysis technique in which a designer or programmer leads members of the development team and other interested parties through a software product, and the participants ask questions and make comments about any anomalies, violations of development standards, and other problems;


Audit: an independent assessment, by a third party, of a software product, a process, or a set of software processes to determine compliance with specifications, standards, contractual agreements, or other criteria.


In the following sections, walk-through and inspection reviews are described in detail. These reviews are described to clearly demonstrate the meaning of a “systematic review” as opposed to improvised and informal reviews.




“The purpose of a walk-through is to evaluate a software product. A walk-through can also be performed to create a discussion about a software product.” The main objectives of the walk-through are:

  • find anomalies;
  • improve the software product;
  • consider alternative implementations;
  • evaluate conformance to standards and specifications;
  • evaluate the usability and accessibility of the software product.


Other important objectives include the exchange of techniques, style variations, and the training of participants. A walk-through can highlight weaknesses, for example, problems of efficiency and readability, modularity problems in the design or the code, or non-testable requirements.


The figure shows the six steps of the walk-through. Each step is composed of a series of inputs, tasks, and outputs.


Usefulness of a Walk-Through

There are several reasons for the implementation of a walk-through process:

  • identify errors to reduce their impact and the cost of correction;
  • improve the development process;
  • improve the quality of the software product;
  • reduce development costs;
  • reduce maintenance costs.


Identification of Roles and Responsibilities


Four roles are described in IEEE 1028: leader, recorder, author, and team member. Roles can be shared among team members.


For example, the leader or the author may play the role of the recorder, and the author could also be the leader. However, a walk-through shall include at least two members.

The standard defines the roles as follows (adapted from IEEE 1028 [IEE 08b]):

Walk-through leader:

  • conduct the walk-through;
  • handle the administrative tasks pertaining to the walk-through (such as distributing documents and arranging the meeting);
  • prepare the statement of objectives to guide the team through the walk-through;
  • ensure that the team arrives at a decision or identified action for each discussion item;
  • issue the walk-through output.



Recorder:

  • note all decisions and identified actions arising during the walk-through meeting;
  • note all comments made during the walk-through that pertain to anomalies found, questions of style, omissions, contradictions, suggestions for improvement, or alternative approaches.

Author:

  • present the software product in the walk-through.

Team member:

  • adequately prepare for and actively participate in the walk-through;
  • identify and describe anomalies in the software product.


The IEEE 1028 standard lists improvement activities using data collected from walk-throughs. These data should:

  • be analyzed regularly to improve the walk-through process;
  • be used to improve the processes that produce software products;
  • highlight the most frequently encountered anomalies, which should be reflected in the checklists or in the assignment of roles;
  • be used regularly to assess the checklists for superfluous or misleading questions;
  • include preparation time, meeting time, and the number of participants, so that the relationship between the effort invested and the number and severity of anomalies detected can be determined.


To maintain the effectiveness of walk-throughs, the data should not be used to evaluate the performance of individuals.
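As an illustration of analyzing such data at the process level (never the individual level), the sketch below computes the overall anomaly yield per person-hour from a few invented walk-through records:

```python
# Per walk-through: (prep person-hours, meeting person-hours, anomalies found).
# Sample data invented for illustration.
walkthroughs = [
    (6.0, 4.0, 12),
    (2.0, 3.0, 4),
    (8.0, 4.0, 15),
]

def anomalies_per_hour(records):
    """Process-level yield: anomalies found per person-hour invested
    across all walk-throughs (preparation plus meeting time)."""
    total_hours = sum(prep + meeting for prep, meeting, _ in records)
    total_anomalies = sum(anomalies for _, _, anomalies in records)
    return total_anomalies / total_hours

print(round(anomalies_per_hour(walkthroughs), 2))  # 31 anomalies / 27 h = 1.15
```

Tracking this ratio over time is one way to spot, for example, that walk-throughs with too little preparation yield fewer anomalies per hour.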

IEEE 1028 also describes the procedures for conducting a walk-through.




This section briefly describes the inspection process that Michael Fagan developed at IBM in the 1970s to increase the quality and productivity of software development.


The purpose of the inspection, according to the IEEE 1028 standard, is to detect and identify anomalies of a software product including errors and deviations from standards and specifications.


Throughout the development or maintenance process, developers prepare written materials that unfortunately have errors. It is more economical and efficient to detect and correct errors as soon as possible. Inspection is a very effective method to detect these errors or anomalies.


According to the IEEE 1028 standard, inspection allows us to:

  a) verify that the software product satisfies its specifications;
  b) check that the software product exhibits the specified quality attributes;
  c) verify that the software product conforms to applicable regulations, standards, guidelines, plans, specifications, and procedures;
  d) identify deviations from the provisions of items (a), (b), and (c);
  e) collect data, for example, the details of each anomaly and the effort associated with its identification and correction;
  f) request or grant waivers for violation of standards where the adjudication of the type and extent of violations is assigned to the inspection jurisdiction;
  g) use the data as input to project management decisions as appropriate (e.g., to make trade-offs between additional inspections versus additional testing).


The IEEE 1028 standard provides guidelines for typical inspection rates for different types of documents, expressed in pages or lines of code per hour.


As an example, for the requirements document, IEEE 1028 recommends an inspection rate of 2–3 pages per hour. For source code, the standard recommends an inspection rate of 100–200 lines of code per hour.
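These recommended rates convert directly into planning estimates. A small sketch using the 2–3 pages/hour and 100–200 LOC/hour figures quoted above:

```python
def inspection_hours(size: float, rate_low: float, rate_high: float):
    """Return the (minimum, maximum) inspection hours for a document of
    `size` units, given a recommended rate range in units per hour.
    The fastest rate gives the minimum hours, the slowest the maximum."""
    return size / rate_high, size / rate_low

# A 30-page requirements document at 2-3 pages per hour:
print(inspection_hours(30, 2, 3))        # (10.0, 15.0) hours
# 1,500 lines of source code at 100-200 LOC per hour:
print(inspection_hours(1500, 100, 200))  # (7.5, 15.0) hours
```

Such estimates help the planner reserve enough schedule space for inspections rather than squeezing them in, echoing tip 4 above.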



Project Launch Review

In the SQAP of their projects, many organizations plan a project launch or kick-off meeting as well as a project assessment review, also called a lessons learned review.



The project launch review is a management review of the milestone dates, requirements, schedule, budget constraints, deliverables, members of the development team, suppliers, etc.


Some organizations also conduct kick-off reviews at the beginning of each of the major phases of the project when projects are spread over a long period of time (as in several years).


Before the start of a project, team members ask themselves the following questions: who will the members of my team be? Who will be the team leader? What will my role and responsibilities be?


What are the roles of the other team members and their responsibilities? Do the members of my team have all the skills and knowledge to work on this project?


The following text box describes the kick-off review meeting used for software projects at Bombardier Transport.


A project launch session is usually done at the beginning of a new project or at the beginning of a project phase. It can also be done for an iterative development project to prepare for the next iteration.


In this case, it is called a project relaunch session. This type of session is also well suited in cases where the performance of a project and/or process must be improved and when a project has to be rectified.


Depending on the size, complexity, and type of project (e.g., new development or re-use of critical software development), a typical project launch session will last 1–2 days in one location. During a project launch session, it is important that all team members are fully dedicated to this activity.


To reduce disturbances (e.g., telephone calls), the project launch meeting may be held outside of the project office or building. The following table shows a typical schedule for a one-day project launch session.


As the table shows, software project management processes, roles, and responsibilities are first discussed in agenda item 4 and then in item 8. Roles and responsibilities (R&R) are also discussed under the heading Software Quality Assurance and Verification & Validation.


A project launch review is a workshop, usually led by a facilitator, during which the project team members define the project plan, including activities, deliverables, and schedule.


The project launch workshop can last between one and three days. But for a typical project at Bombardier Transport, a one-day session is normally sufficient.


To illustrate the roles of team members, an example of project planning performed during the project launch session is described below. The objectives of the project launch review at Bombardier Transport are:

  • define the project plan using an integrated team approach;
  • ensure a common understanding of objectives, processes, deliverables, and the role and responsibilities (R&R) of all team members;
  • facilitate the exchange of information and provide “just in time” training to project members.

Typical Agenda of a Project Launch Meeting at Bombardier Transport


Time Agenda

08h30 Welcome, agenda review, and discussions regarding participant expectations
Meeting roles to assign: secretary and time-keeper

09h00 Overview of the software engineering process at Bombardier Transport

10h30 Software project management process:

1. Project inputs

2. Project scope, constraints and assumptions

3. Project iterations and their associated objectives

4. Structure of the project team and role assignment

5. High-level architecture

6. Tailoring of deliverables (e.g., for each iteration)

7. Personnel requirements

8. Relationships with other groups and associated roles/responsibilities

9. Identification and analysis of risks

12h00 Lunch break

13h00 Software project management process (continued)

14h30 Break

14h45 Software development process:

Define requirements and their attributes

Description of traceability

15h00 Software Configuration management process:

Process overview

Identification of configuration items

Identification of the baseline for each iteration

Audits and version control

15h45 Software Quality Assurance and Verification and Validation processes:

Identification of roles and activities

16h00 Software infrastructure and training:

Development environment

Test and validation environment

Qualification environment

Project training needs

16h30 Summary and conclusion

17h00 Adjournment


Project Retrospectives


If quality assurance is the poor cousin of software engineering, then the project retrospective is the poor cousin of quality assurance reviews.


It is ironic that a discipline such as software engineering, which depends as much as it does on the knowledge of the people involved, dismisses the opportunity to learn and to enrich the knowledge of an organization’s members.


The project retrospective review is normally carried out at the end of a project, or at the end of a phase of a large project. Essentially, we want to know what went well in this project, what went less well, and what could be improved for the next project. The following terms are synonymous: lessons learned, post-mortem, after-action review.


In the approach called the Experience Factory, experience gathered from software development projects is packaged and stored in an experience database. Packaging refers to the generalization, adaptation, and formalization of the experience until it is easy to reuse.


In this approach, a separate organization, distinct from the project organization, is responsible for capturing and packaging the experience.


A post-mortem review, conducted at the end of a phase of a project or at the end of a project, provides valuable information such as:

  • updating project data such as length, size, defects, and schedule;
  • updating quality or performance data;
  • a review of performance against plan;
  • updating databases for size and productivity;
  • adjustment of processes (e.g., checklists), if necessary, based on the data (notes taken on process improvement proposal (PIP) forms, changes in design or code, defect checklists, and so on).


There are several ways to conduct project retrospectives: some techniques focus on creating an atmosphere of discussion within the project team, others consider past projects, still others are designed to help a project team identify and adopt new techniques for their next project, and some address the consequences of a failed project.


This section presents a less stringent and less costly approach to capturing the experience of project members.


Since a retrospective session may create some tension, especially if the project discussed has not been a total success, we propose rules of behavior so that the session is effective. The rules of behavior at these sessions are:

  • respect the ideas of the participants;
  • maintain confidentiality;
  • do not assign blame;
  • do not make any verbal comment or gesture during brainstorming;
  • do not comment when ideas are retained;
  • request more details regarding a particular idea.


The assumption presented at the beginning of the session (see Step One below) outlines the basis of a successful retrospective.

The main items on the agenda during a project retrospective review are:

  • list the major incidents and identify the main causes;
  • list the actual costs and the actual time required for the project, and analyze variances from estimates;
  • review the quality of processes, methods, and tools used for the project;
  • make proposals for future projects (e.g., indicate what to repeat or reuse (methodology, tools, etc.), what to improve, and what to give up for future projects).


A retrospective session typically consists of three steps: first, the facilitator, together with the sponsor, explains the objectives of the meeting; second, the facilitator explains what a retrospective session is, along with the agenda and the rules of behavior; lastly, he or she conducts the session.

A retrospective session takes place as follows:


Step One

  • presentation of the facilitators by the sponsor;
  • introduction of participants;
  • presentation of the assumption: “Regardless of what we discover, we truly believe that everyone did the best job, given his or her qualifications and abilities, resources, and the project context”;
  • presentation of the agenda of a typical retrospective session, lasting approximately three hours:
  • introduction;
  • brainstorm to identify what went well and what could be improved;
  • prioritize items;
  • identify the causes;
  • write a mini action plan.


Step Two—Introduction to the retrospective session

  • what is a retrospective session?
  • when is a lesson really learned?
  • what is individual learning, team learning?
  • what is learning in an organization?
  • why have a retrospective session?
  • potential difficulties of a retrospective session;
  • session rules;
  • what is brainstorming? The rules of brainstorming are:
  • no verbal comments or gestures;
  • no discussion when ideas are retained.


Step Three—Conducting the retrospective session

  • chart a history (timeline) of the project (15–30 minutes);
  • conduct brainstorming (30 minutes);
  • individually, identify on post-it notes:
  • what went well during the project (e.g., what to keep)?
  • what could be improved?
  • were there any surprises?
  • collect ideas and post them on the project history chart;
  • clarify ideas (if necessary);
  • group similar ideas;
  • prioritize ideas;
  • find the causes using the “Five Why” technique:
  • what went well during the project?
  • what could be improved?
  • for this project, name what you would have liked to change;
  • for this project, name what you wish to keep;
  • write a mini action plan (what, who, when?);
  • end the session;
  • ensure the commitment to implement the action plan;
  • thank the sponsor and the participants.
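The “Five Why” technique mentioned above can be sketched in a few lines of code. This is a minimal illustration, not part of the original text: the function walks a chain of causes, asking “why?” up to five times, and the example incident and causes are entirely hypothetical.

```python
def five_whys(problem, answer_why, depth=5):
    """Drill down from a symptom toward a root cause by asking 'why?'.

    answer_why: a callable that, given a statement, returns the reason
    behind it, or None when no deeper cause is known.
    Returns the full chain; the last element is the candidate root cause.
    """
    chain = [problem]
    for _ in range(depth):
        cause = answer_why(chain[-1])
        if cause is None:
            break
        chain.append(cause)
    return chain

# Hypothetical causes captured on post-it notes during a retrospective
causes = {
    "Release slipped by two weeks": "Integration testing started late",
    "Integration testing started late": "Test environment was not ready",
    "Test environment was not ready": "No one was assigned to set it up",
}
chain = five_whys("Release slipped by two weeks", causes.get)
root_cause = chain[-1]  # "No one was assigned to set it up"
```

The root cause found this way feeds directly into the mini action plan (what, who, when): someone is assigned to prepare the test environment before the next iteration.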


Even if logic dictates that conducting project retrospective or lessons-learned sessions is beneficial for the organization, several factors can still affect these types of sessions:

  • leading lessons learned sessions takes time and often management wants to reduce project costs;
  • lessons learned mostly benefit future projects rather than the project that pays for the session;
  • a culture of blame (finger pointing) can significantly reduce the benefits of these sessions;
  • participants may feel embarrassed or have a cynical attitude;
  • the maintenance of social relationships between employees is sometimes more important than the diagnosis of events;
  • people may be reluctant to engage in activities that could lead to complaints, to criticism, or blame;
  • some people hold beliefs that work against lessons-learned sessions, such as “Experience is enough to learn” or “If you do not have experience, you will not learn anything”;
  • certain organizational cultures do not seem able or willing to learn.




Agile Meetings

For several years, agile methods have been used in industry. One of these methods, Scrum, advocates frequent short meetings. These meetings are held every day, or every other day, for about 15 minutes (and no more than 30 minutes).


The purpose of these meetings is to take stock and discuss problems. These meetings are similar to management meetings described in the IEEE 1028 standard but without the formality.


During these meetings, the “Scrum Master” typically asks three questions of the participants:

  • What have you accomplished, in the “to do” list of tasks (Backlog), since the last meeting?
  • What obstacles prevented you from completing the tasks?
  • What do you plan to accomplish by the next meeting?
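As a small illustration (all names and data below are hypothetical, not from the original text), the three-question status round can be captured as a simple record per participant, which also yields the informal meeting minutes mentioned by IEEE 1028:

```python
from dataclasses import dataclass

# The three questions asked by the Scrum Master (from the text above)
STANDUP_QUESTIONS = (
    "What have you accomplished since the last meeting?",
    "What obstacles prevented you from completing the tasks?",
    "What do you plan to accomplish by the next meeting?",
)

@dataclass
class StandupAnswer:
    member: str
    accomplished: str   # answer to question 1
    obstacles: str      # answer to question 2
    planned: str        # answer to question 3

def meeting_minutes(answers):
    """Produce a one-line-per-member summary of the daily meeting."""
    return [
        f"{a.member}: done: {a.accomplished}; "
        f"blocked by: {a.obstacles}; next: {a.planned}"
        for a in answers
    ]

minutes = meeting_minutes([
    StandupAnswer("Ana", "login screen", "none", "unit tests"),
])
```

Keeping the record to exactly these three fields mirrors the meeting rule itself: status only, no drifting into problem-solving.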


These meetings allow all participants to be informed of the status of the project, its priorities, and the activities that need to be performed by members of the team.


The effectiveness of these meetings depends on the skills of the Scrum Master, who should act as a facilitator and ensure that all participants answer the three questions without drifting into problem-solving.



Measures Related to Reviews

An entire blog post is devoted to measures; this section describes only the measures associated with reviews. Measures are mainly used to answer the following questions:

  • How many reviews were conducted?
  • What software products have been reviewed?
  • How effective were the reviews (e.g., number of errors detected per hour of review)?
  • How efficient were the reviews (e.g., number of hours per review)?
  • What is the density of errors in software products?
  • How much effort is devoted to reviews?
  • What are the benefits of reviews?


The measures that allow us to answer these questions are:

  • number of reviews held;
  • identification of the reviewed software product;
  • the size of the software product (e.g., number of lines of code, number of pages);
  • number of errors recorded at each stage of the development process;
  • effort expended to review and correct the defects detected.
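The questions above can be answered by simple arithmetic on these raw measures. The sketch below is illustrative (the function name and the sample numbers are assumptions, not from the original text): effectiveness is defects found per hour of review, and defect density is defects per thousand lines of code (KLOC).

```python
def review_measures(defects_found, review_hours, size_kloc):
    """Derive common review measures from the raw counts collected above."""
    return {
        # effectiveness: errors detected per hour spent reviewing
        "effectiveness_defects_per_hour": defects_found / review_hours,
        # density: errors per thousand lines of code in the reviewed product
        "defect_density_per_kloc": defects_found / size_kloc,
        # efficiency: effort devoted to this review
        "review_effort_hours": review_hours,
    }

# Hypothetical numbers for a single code inspection:
# 12 defects found in 4 hours, over a 2,000-line (2.0 KLOC) module
m = review_measures(defects_found=12, review_hours=4, size_kloc=2.0)
```

Tracking these ratios across projects is what allows an organization to compare review effectiveness between development stages and justify the effort spent on reviews.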



A software organization typically operates under one of the following business models:

  • Custom systems written on contract: The organization makes profits by selling tailored software development services to clients.
  • Custom software written in-house: The organization develops software to improve its own organizational efficiency.
  • Commercial software: The company makes profits by developing and selling software to other organizations.
  • Mass-market software: The company makes profits by developing and selling software to consumers.
  • Commercial and mass-market firmware: The company makes profits by selling software embedded in hardware and systems.


Each business model is characterized by its own set of attributes or factors: criticality, the uncertainty of needs and requirements (needs versus expectations) of the users, the range of environments, the cost of correction of errors, regulation, project size, communication, and the culture of the organization.


Business models help us understand the risks and the respective needs with regard to software practices. Reviews are techniques that detect errors and thus reduce the risk associated with a software product.


The project manager, in collaboration with SQA, selects the type of review to perform and the documents or products to review throughout the life cycle in order to plan and budget for these activities.

The following section explains the requirements of the IEEE 730 standard with regard to project reviews.




Reviews are central when it comes time to assess the quality of a software deliverable. For example, product assurance activities may include SQA personnel participating in project technical reviews, software development document reviews, and software testing.


Consequently, reviews are to be used for both product and process assurance of a software project. IEEE 730 recommends that the following questions be answered during project execution:


  • Have periodic reviews and audits been performed to determine if software products fully satisfy contractual requirements?
  • Have software life cycle processes been reviewed against defined criteria and standards?
  • Has the contract been reviewed to assess consistency with software products?
  • Are stakeholder, steering committee, management, and technical reviews held based on the needs of the project?
  • Have acquirer acceptance tests and reviews been supported?
  • Have action items resulting from reviews been tracked to closure?


The standard also describes how reviews can be done in projects that use an agile methodology. It states that “reviews can be done on a daily basis,” reflecting the agile culture of conducting such activities daily.


We know that SQA activities need to be recorded during the course of a software project. These records serve as proof that the project performed the activities, and they can be provided when requested.


Review results and completed review checklists can be a good source of evidence. Consequently, it is recommended that project teams keep a record of the meeting minutes for all technical and management reviews they conduct.


Finally, an organization should base process improvement efforts on the results of in-process as well as completed projects, gathering lessons learned, and the results of ongoing SQA activities such as process assessments and reviews.


Reviews can play an important role in organization-wide process improvement of software processes. Preventive actions are taken to prevent the occurrence of problems that may occur in the future. Non-conformances and other project information may be used to identify preventive actions.


SQA reviews propose preventive actions and identify effectiveness measures. Once the preventive action is implemented, SQA evaluates the activity and determines whether the preventive action is effective. The preventive action process can be defined either in the SQAP or in the organizational quality management system.



Although reviews are relatively simple and highly effective techniques, there are several factors that can greatly help their effectiveness and efficiency.


Conversely, many factors can undermine reviews to the point where they are no longer used in an organization. Some factors related to an organization’s culture that can promote the development of quality software are listed below.


Factors that Foster Software Quality

Visible management commitment

  • provide the resources and time to conduct reviews such as inspections;
  • ensure that reviews are planned in the project plan or in the quality assurance plan;
  • maintain reviews (e.g., inspections) even when the schedule is tight;
  • occasionally review the overall results of reviews and consider proposals to improve the process;
  • attend the training sessions;
  • conduct reviews with colleagues (e.g., inspect a project plan or a software quality assurance plan).


Good team spirit

  • reviews (e.g., inspections) are made by team members in order to help each other and increase product quality.