Quality control terms in software engineering - Part 1

1.
Why should we put in more effort in reviews?

The aim of reviews is to detect as many defects as possible and fix them at the earliest stage, when the cost of fixing a defect is the least. If defects are not detected and slip through to subsequent phases, they affect more and more work products, and the cost of fixing them keeps increasing.

The real motivation for putting more effort into reviews is that this effort pays off in subsequent phases by minimising unnecessary rework.
2.
What are Milestone reviews?

Milestone reviews are phase-end reviews, or reviews that take place at defined milestones. These reviews follow the regular review process, using the Milestone review summary and Milestone review checklist. Depending on the outcome of the review, the project plan is updated. Milestone reviews ensure that the deliverables of that phase/milestone are complete and as per requirements.
3.
What is the difference between review & audit?

Normally it is a work product that is reviewed. During audits, what is really under test is the process.

Reviews are conducted by a review team that includes domain experts; audits are carried out by auditor(s).

The intention of the reviews is to identify as many defects as possible in the item being reviewed.
 

In audits, the premise is that each project is an instance of the process and if there are any weaknesses in the process, they will surface during execution of its instance i.e., project execution. Weaknesses of process manifest as non-compliance.

Just as correction of review defects may involve correction of one or more work items, correction of non-compliance observed may require correction within the project or may need correction in the process itself.
4.
What is Code Coverage, Test Case and Test Suite ?

Code Coverage is a set of measurements that tell us which lines of source code (Line Coverage) and which branch points (Branch Coverage) have been executed by one or more sets of input data. Line Coverage is the number of lines executed, expressed as a percentage of the total lines of executable code. Branch Coverage is the number of branch statements executed, expressed as a percentage of the total number of branch statements. Each set of input data is called a test case or test vector. Each collection of test cases is called a test suite. Code Coverage tells us how complete our test suite is. This allows us to work backwards and design test cases for better coverage and more comprehensive testing, and hence better product quality.
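As an illustration of the two definitions, the percentages can be computed directly (a minimal sketch; the function names and counts are ours, not from any particular coverage tool):

```python
def line_coverage(lines_executed, total_executable_lines):
    """Percentage of executable source lines hit by the test suite."""
    return 100.0 * lines_executed / total_executable_lines

def branch_coverage(branches_executed, total_branches):
    """Percentage of branch points (if/else arms, loop exits) exercised."""
    return 100.0 * branches_executed / total_branches

# A suite that executes 180 of 200 lines and 30 of 40 branch points:
print(line_coverage(180, 200))    # 90.0
print(branch_coverage(30, 40))    # 75.0
```

In practice, a tool such as coverage.py (Python) or gcov (C) gathers the executed-line and executed-branch counts automatically.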
5.
When should testing start?

Testing should start once the developer has completed the implementation of the design segment he is working on.

In our case, code is handed over to the test engineer after the developers have completed the implementation and their unit testing is over.

By handing over the code to the test team, the developers are saying: 'We have completed the implementation, this implementation is as per the design, we have completed our testing, and we can uncover no more defects.' When this statement is true, it is time to start testing.
6.
When to stop testing?

The time to stop testing is when we are sure that all requirements have been met.

This is ensured by executing various test cases. If all the identified test cases fail to uncover any defect, it is time to stop testing. But the key here is how exhaustively the test cases themselves have been identified.

Initially, testing uncovers many defects. But progressively it becomes more and more difficult to uncover new defects; it takes more time and effort to detect them. Yet testing cannot go on indefinitely, so project schedules often also dictate when testing should be terminated.

In a Level 4 organization, this question is answered quantitatively: unless a specified number of defects (in each severity category) has been detected, the product won't come out of the testing phase.
7.
Why should testing be independent of development? How is it ensured?

As long ago as 1969, Dijkstra said that "program testing can be used to show the presence of bugs, but never their absence!".

So, no amount of testing can ensure that your software product is totally defect free. In this situation, the question is not whether all the bugs have been found, but whether the program is sufficiently tested. Within the time available for testing, the objective should be to find as many defects as possible and fix them. This is best achieved by performing testing independent of development.

According to Glenford Myers, "it is impossible to test your own program". If the tester and developer are the same person, he/she would likely be myopic to the defects he/she introduced. Also, the objective of testing is to detect a previously undetected defect; but if the test suite is developed by the programmer, there will be a strong inclination to design test cases that prove the program works correctly. The mind-set required of a test engineer is always to think "how do I break this program?". It would be extremely difficult for the programmer to have this mind-set while testing his/her own program. All of us have had the experience of not being able to detect some obvious mistake in our own work, which is easily pointed out by others after just a glance.
8.
What are the different strategies of software testing?

Here are two strategies of software testing: 

(a) Black Box testing:
In using this strategy, the tester views the program as a black box, completely unconcerned with its internal behavior and structure. The tester is only interested in finding circumstances in which the program misbehaves. This strategy involves input/output-driven testing.
 

(b) White Box testing:
This is a logic-driven testing strategy, in which the tester derives test data from an examination of the program's logic. White box testing is concerned with the degree to which test cases exercise the logic (source code) of the program.
Further reading: "The Art of Software Testing" by Glenford Myers
9.
What are the various black box test techniques?

Following are some of the black box test techniques:
(a)
Equivalence partitioning
(b)
Boundary-value analysis
(c)
Cause-effect graphing
(d)
Error guessing

(For more details, refer "The Art of Software Testing", by Glenford Myers)
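As a sketch of techniques (a) and (b), the following derives test inputs for a routine that accepts an integer in the range 1..100 (the range, function names and class representatives are illustrative assumptions, not from the book):

```python
def boundary_value_cases(lo, hi):
    """Boundary-value analysis: test just below, at, and just above each boundary."""
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

def equivalence_classes(lo, hi):
    """Equivalence partitioning: one representative per class
    (below range = invalid, in range = valid, above range = invalid)."""
    return {"invalid_low": lo - 10, "valid": (lo + hi) // 2, "invalid_high": hi + 10}

print(boundary_value_cases(1, 100))   # [0, 1, 2, 99, 100, 101]
print(equivalence_classes(1, 100))    # {'invalid_low': -9, 'valid': 50, 'invalid_high': 110}
```

Boundary-value analysis concentrates cases where defects cluster (the edges), while equivalence partitioning keeps the suite small by taking one representative per class.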
10.
What are the various white box test techniques?

Following are some of the white box test techniques:
(a)
Statement coverage
(b)
Decision coverage
(c)
Condition coverage
(d)
Decision/condition coverage
(e)
Multiple-condition coverage

(For more details, refer "The Art of Software Testing", by Glenford Myers)
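The difference between statement coverage (a) and decision coverage (b) can be seen on a small, invented function:

```python
def apply_discount(price, is_member):
    # One decision with two outcomes (True branch / False branch).
    if is_member:
        price = price * 0.9
    return price

# Statement coverage: this single test case executes every statement,
# because the True branch covers the assignment inside the if.
assert apply_discount(100, True) == 90.0

# Decision coverage additionally requires the False outcome of the decision:
assert apply_discount(100, False) == 100
```

A suite satisfying only statement coverage here would never exercise the case where no discount applies, which is exactly where a defect could hide.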
11.
What is the difference between unit test, module test, integration test, system test & acceptance test?

Unit test:
Unit testing is the process of taking a program module and running it in isolation from the rest of the software product by using prepared input and comparing the actual results with the results predicted by the specifications and design of the module.

Module test: Same as Unit test.
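As a sketch of the definition above, here is a unit test that runs a module in isolation with prepared inputs and compares actual results against the expected ones, using Python's built-in unittest framework (the leap_year module under test is a made-up example):

```python
import unittest

def leap_year(year):
    """Module under test (a made-up example): Gregorian leap-year rule."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

class TestLeapYear(unittest.TestCase):
    """Prepared inputs, with expected results taken from the specification."""

    def test_divisible_by_four(self):
        self.assertTrue(leap_year(2024))

    def test_century_is_not_leap(self):
        self.assertFalse(leap_year(1900))   # divisible by 100 but not by 400

    def test_four_hundredth_year_is_leap(self):
        self.assertTrue(leap_year(2000))
```

Run with `python -m unittest <filename>` to execute the module's tests in isolation from the rest of the product.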

Integration Test : 
While unit testing determines that each independent module is correctly implemented, integration testing determines that the interfaces between modules are also correct (whether parameters match on both sides as to type, permissible ranges, meaning, and utilization).

System Test: 
The primary objective of system testing is to determine whether specified functionality is indeed present in the software. Typical aspects that are tested include: Compatibility with the operational environment, Performance specifications, Reliability, Behavior under stress, Test on various hardware configurations (if supported).

Acceptance Test :
The customer is involved during the acceptance test. The purpose of the acceptance test is to prove that the contractual responsibilities of the software organization to the customer are fulfilled. It is the process of comparing the program to its initial requirements and the current needs of its end users.
12.
What is regression testing?

During the maintenance phase of a product, developers change parts of the code to fix defects. These changes may introduce defects elsewhere in the product (on average, it is said that for every 7 defects fixed, one new defect is introduced). Testing done on the product to ensure that these changes did not introduce any new defects is called "regression testing".

So, before shipping out any maintenance release, it is essential to conduct regression testing.
13.
What is the difference between debugging & testing?

The objective of testing is to find out in WHAT aspects the product is deviating from the expected behavior(implicit and explicit requirements). The output of testing activity is a defect report. The objective of debugging is to find out WHY the product is misbehaving. The output of the debugging activity is the knowledge required to fix the defect.
14.
What is an estimation process? what is its purpose? what are estimation Techniques?

Software estimation process is the use of a set of techniques and procedures by the organization to arrive at a software estimate. The inputs to the process are generally system requirements etc. The outputs from the process are the estimates of effort, cost, schedule and manpower loading.

The senior managers make strategic decisions like whether to proceed with the project, how much to bid etc. The project managers use estimates to plan, monitor and control the implementation of a project. The estimates and re-estimates are performed throughout the software development process.

The estimation can be performed by following techniques:
(a)
Lines of code along with work breakdown structure.
(b)
Function point method
(c)
Feature point method
(d)
Work breakdown structure
(e)
Wideband delphi technique
15.
What is WBS ? How is it used ?

WBS stands for Work Breakdown Structure. It is one of the important methods for software estimation. Here the project is split into sub-tasks successively, until the required granularity level is reached, giving a tree structure where the project is at the root and the tasks are at the branch/leaf levels. Efforts are estimated starting from the leaves and summed up successively to arrive at the project estimate. A tabular approach can be followed to document the estimate at root/branch level.

Estimation at leaf level can be done by any of the following approaches:
(a)
Estimating the activity in Simple/Average/Complex.
(b)
Estimating the activity in Lines of code (LOC).
(c)
Estimating the activity in person-days of effort (PD).
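The roll-up described above, estimating at the leaves and summing towards the root, can be sketched with a nested dictionary standing in for the WBS tree (the task names and person-day figures are invented):

```python
# Leaves carry person-day (PD) estimates; interior nodes are dicts of sub-tasks.
wbs = {
    "Requirements": {"Elicitation": 5, "RS document": 3},
    "Design": {"High-level design": 8, "Detailed design": 12},
    "Implementation": {"Module A": 15, "Module B": 10, "Integration": 6},
}

def estimate(node):
    """Sum leaf estimates up the tree to get the node's total effort."""
    if isinstance(node, dict):
        return sum(estimate(child) for child in node.values())
    return node

print(estimate(wbs))                    # 59 person-days for the whole project
print(estimate(wbs["Design"]))          # 20 person-days for the Design branch
```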
16.
What is wide band Delphi technique ?

It is an estimation technique which uses expert judgment. A team is formed with expertise in estimation techniques and domain knowledge of the application area. The team is given details of the phases/modules/activities of the project whose estimate is to be worked out, and all members have access to relevant documents like the SOW, RS, etc. Each member independently estimates the modules. Then the estimates are discussed: a member whose estimate diverges widely explains the reasons for it, a detailed discussion follows, and the estimate is modified accordingly. The emphasis is on team consensus. This is one of the most popular estimation techniques.
17.
What is deviation? What is escalation?

When the project deviates from the defined process or set norms, a deviation is said to have occurred.

When an issue is identified which would potentially affect the schedule or quality of the project, and the preventive action can be taken only by a higher authority, the issue qualifies for an escalation.
18.
What is Defect prevention Plan?

Defect prevention focuses on what in the process needs to be corrected to prevent defects from re-occurring. It involves analyzing defects that were encountered in the past and taking specific actions to prevent the occurrence of those types of defects in the future. The defects may have been identified on other projects, as well as in earlier stages or tasks of the current project. Defect prevention activities are also one mechanism for spreading lessons learnt from projects across the organisation.
19.
What is Defect Removal Efficiency(DRE) ?

It is a quality metric which measures the filtering ability of quality assurance and control activities. When the project is considered as a whole, DRE is defined as:

DRE = E / (E + D)

where E = number of errors found before delivery of the software to the end user, and
D = number of defects found after delivery.

DRE can also be used at the phase level, to assess the team's ability to find errors before they are passed on to the next software engineering task. When used in this context, DRE is defined as:

DRE(phase) = E(phase) / (E(phase) + E(defect))

where E(phase) = number of errors found during a software engineering activity, and
E(defect) = number of errors found during the next software engineering activity that are traceable to errors not discovered in the previous activity.

A quality objective for a software team is to achieve a DRE(phase) that approaches 1; that is, errors should be filtered out before they are passed to the next activity.
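Both forms of the formula reduce to the same simple ratio (a minimal sketch; the function and parameter names are ours):

```python
def dre(errors_before_delivery, defects_after_delivery):
    """Project-level DRE = E / (E + D)."""
    return errors_before_delivery / (errors_before_delivery + defects_after_delivery)

def dre_phase(errors_this_phase, escaped_to_next_phase):
    """Phase-level DRE = E(phase) / (E(phase) + E(defect))."""
    return errors_this_phase / (errors_this_phase + escaped_to_next_phase)

# 90 errors caught before delivery, 10 defects reported by end users:
print(dre(90, 10))          # 0.9
# Design phase caught 45 errors; 5 design errors surfaced during coding:
print(dre_phase(45, 5))     # 0.9
```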
20.
What is Configuration Management?

Configuration management is a planned activity dealing with identification of configuration items, control of changes to baselined items, recording and reporting of change processing, and baseline audits. These activities are described in the CM plan section of the plan document. Configuration management also includes identification of the development platform, operating system and software tools, along with version numbers.

The purpose of configuration management is to establish and maintain the traceability and integrity of the work products throughout the project's life cycle.
21.
What is a Configuration Item (CI)?

The work products that are placed under CM and treated as a single entity are referred to as a Configuration Item.
22.
What is Configuration Audits (Baseline Audit)?

Configuration audits (also known as baseline audits) are performed to:
- Ensure that Configuration Management activities are performed as given in the plan document.

- Bring out any issues related to Configuration Management so that necessary corrective actions can be taken.
At a minimum, configuration audits have to be performed at the end of each phase.

Examples of activities that can be carried out at the time of performing configuration audit are given below:

- Checking if the configuration items are baselined as specified in plan document. For illustration, if the configuration audit is performed during testing phase, it can be checked if the source code (implementation output) has been baselined.
 

- The time of baselining also can be checked during configuration audit. For example, it is necessary to baseline code (after unit testing) before test team begins testing it.
 

- Checking if baseline contains all the necessary configuration items. For example, design baseline should contain both the appropriate requirements document and design document. 

- Checking if the baselined items are tagged following the tagging notation adopted for the project.
- Person performing configuration audit can even check out a baselined item using the specified tag and verify that it is correct.
- At the time of release, it can be checked if the implementation output is consistent with work products which were used as the basis. For example, code has to be consistent with design document and requirements document. This can be done on a sample basis. 

- Checking if the software is installable at the time of release.
23.
How to record the results of Configuration Audit?

After performing configuration audit, identified issues have to be recorded. Optionally, activities performed during configuration audit, and procedure followed (whether all items were checked, how samples were chosen etc.) can also be recorded as a CM audit report. 

Issues found during a configuration audit have to be analysed and appropriate action items have to be identified. The closure of identified action items has to be tracked during project monitoring reviews. For example, if the issue identified is that a section of code checked during the configuration audit is not consistent with the design, it might be necessary to modify either the design or the code.
24.
What is Functional Configuration Audit?

A Functional Configuration Audit (FCA) verifies that the software, as produced, meets the requirements specification. In other words, the software will pass FCA if it works.

An FCA should examine a piece of software with the following criteria:
 

- Was the software tested?
 
- Do the test plans for the software cover all of the requirements in the software specification documents?
 
- Did all of the tests pass?
 

The objective of the FCA is to verify the configuration item's and system's performance against its approved configuration documentation. For example, FCA involves verifying the software binary against the approved Requirements Specification and Design. If test data is being used to facilitate FCA, then the test data should be obtained by testing the software package that will physically be released to the customer for acceptance. This means that the package should be built from the baselined configuration tree. In other words, one should ensure that the FCA is done on the version of the software that is being delivered to the customer.
25.
What is Physical Configuration Audit?

A Physical Configuration Audit (PCA) is the verification that the software has been built correctly. The PCA examines the software's design documents and verifies the design against the implementation (i.e., code). 

A PCA should examine a piece of software with the following criteria:

- Was the software designed correctly (in accordance with the contract)?
 
- Was the software design matched to the original requirements?
- Does the software (code) match the software design?
 
- Is the software design process auditable? Are any changes that may have been made to the specification traceable to appropriate changes in the design? Are these changes then traceable to the software?
26.
Explain Risk Management as a part of Software project management?

In the context of software engineering and management, we are concerned about what might cause a software project to go awry: how changes in customer requirements, development technology, attrition, the target environment and other entities connected to the project affect its timeliness and overall success. These are the risk factors. Though it is futile to try to eliminate risk, it is wise to try to minimize it. Risk is an inherent part of all software projects, therefore project risks must be analyzed and managed. The analysis of risks begins with identification and is followed by estimation and assessment. These activities define each risk, its likelihood of occurrence, and its projected impact. Once this information is known, risk management and monitoring activities can be conducted to help control the risks that do actually occur.
27.
What is "Phase Containment " Metric?

It gives a measure of review effectiveness. It is defined as follows:
Phase Containment = (No. of errors captured during the review / Total no. of errors & defects of that phase caught in the entire life cycle) * 100

e.g., if the number of design errors captured in the design phase is 50, and the number of design errors captured in the coding phase is 20, then phase containment of the design phase =

50/(50+20) * 100 = 71.4%

The organization norm for this metric is that it should be >85%. This metric is reported in the monthly PSR. Initially, when the review is done, it will be 100%. Subsequently, if defects of that phase are found in later phases, the phase containment value reduces. If the value falls below 85%, it shows that the review effectiveness of the phase is low.
28.
What are Software Metrics? What is the difference between measurement & metrics?

A metric is a combination of two or more measures (data items) used to manage processes, products or projects.

It is generally a ratio, used to determine the current status and to improve the process, project or product.
 

Any metrics program involves data collection. Raw data by itself does not give any meaningful information. For this it is necessary to combine with one or more other data items, to derive meaningful information.

Eg:
- KLOC developed / person-month
- defects / KLOC
- effort / phase

Note that KLOC is a raw data item. The ratio of total KLOC to time taken is a measure of productivity. This can be expressed as KLOC/person-month.

A question may arise: "Why metrics?". I would like to quote Drucker and Tom DeMarco here:

"What you cannot measure, you cannot manage"
and
"You cannot control what you cannot measure"

The idea of collecting metrics is to know where we stand today and to decide what actions need to be taken to improve further.
29.
What are software size measures?

Size measures are important in software engineering because the amount of effort required to do most tasks is directly related to the size of the program involved. Unfortunately, there is no generally accepted measure of program size that meets all the described criteria. This is not a serious problem as long as the measurement limitations are recognized and appropriately addressed.
One problem is lack of simplicity: there are no simple measures of software size because software size is not a simple subject. For example, one must consider new, changed, deleted, reused and modified code. There are also differences depending on whether one is dealing with system software, assembly code, object code, comments, data definition or screen presentations. Also, in addition to the products themselves, one must consider temporary patches, test programs, support programs, etc.
30.
What is 'Problem of translation ' in Software Engineering context?

The single major cause of software errors is mistakes in translating information. 

Software production, then, is simply a number of translation processes, translating the initial problem into various intermediate solutions (like requirements, design, etc.) until a detailed set of computer instructions is produced (the executable binary!).
 

Software errors are introduced whenever one fails to completely and accurately translate one representation of the problem or the solution into another more detailed representation. To ensure that all the items described in previous representation are translated to the later representation, we need traceability matrices and to ensure that this translation is correctly done, we need reviews and testing.
31.
What are some common misconceptions about the software process?

Some common misconceptions about software process are:
- we must start with firm requirements
- if it passes the test, it must be OK
- software quality can't be measured
- the problems are technical
- we need better people
- software management is different
From "Managing the software process"
32.
What are the basic principles for controlling the problems in software organizations?

Following are the basic principles :
(a)
Plan the work
(b)
Track and maintain the plan
(c)
Divide the work into independent parts
(d)
Precisely define the requirements for each part
(e)
Rigorously control the relationships among the parts
(f)
Treat software development as a learning process
(g)
Recognize what you don't know
(h)
When the gap between your knowledge and the task is severe, fix it before proceeding
(i)
Manage, audit and review the work to ensure it is done as planned
(j)
Commit to your work and work to meet your commitments
(k)
Refine the plan as your knowledge of the job improves
33.
What are the levels of Software process models?

The software process models can be defined at any of three levels. The U, or Universal, process model provides a high-level overview. The W, or Worldly, process model is the working level that is familiar to most programmers and managers. The A, or Atomic, process model provides more detailed refinements.
34.
What is Process Performance?

Process Performance is a measure of the actual results achieved by following a process, i.e. the actual results of the projects executed by following our organisational process. The process performance is measured during execution of the project and also at the end of the project. The actual performance results of the projects are used in two ways:
(a) 
To take corrective actions during the life cycle of the project and bring it inside the organisational norms and
(b) 
To compare the actual results with the organisation metrics norms and find the root cause, if it is beyond the metrics norms.

Based on the actual results of the project performances, the process capability is also revised.
35.
What is Project defined Software Process?

The project defined software process is the life cycle used for the project. Basically it is the project plan (plan documents) which describes the life cycle for the project. The project plan also identifies whether the project follows the organisation process as-is, or uses a life cycle tailored from the organisational process.

Some groups have come up with processes specific to their group, tailored from the organisational process. These processes are applicable to the projects executed in that group.
36.
What is Software Crisis?

The original impetus to develop software engineering concepts came from a realization that we (software developers the world over) did not know how to manage large software projects. This situation is popularly known as the "Software Crisis", and was characterized by late delivery of expensive, unsatisfactory and unmaintainable software systems. At the same time, the inability to complete current work on time and within budget meant that needed maintenance and new development efforts were piling up.

People have been worried about the software crisis for 25 years, but have successfully 'avoided' the actual clap of doom until now. We could do this because the average complexity of products/projects had not yet become unmanageable. But being exposed to a crisis that drags out for 25 years dulls our sense of danger. With the growing complexity of products in the software field, there is ample reason to demand careful engineering of software products. With the ever-increasing volume of literature on software engineering (project management, estimation techniques, risk management, code inspections, defect prevention, etc.), and growing awareness and acceptance of the need for software engineering among engineers, we are slowly inching towards the situation where the 'software crisis' can be declared dead.
37.
How can we achieve continuous improvement through Process?

For any improvement, first we have to identify the current strengths and weaknesses and the areas where improvements are possible. Based on these findings, there should be an action plan drawn up and implemented. Once this step of implementation is nearing completion, it should be time to get back to identification of strengths and weaknesses. This activity should be repeated continuously. Unless there is a defined process in place, implementing this cycle in a large organization is almost impossible.

Continuous process improvement is achieved through small improvements; in small steps.
 

Primary inputs for continuous improvements are suggestions and complaints.

The other inputs are audit and assessment findings. Analysis of collected metrics is another important input.
38.
What is 'Prototyping'? What are its advantages?

A 'prototype' is an incomplete product or subset of a product, built to simulate the essential aspects of the product. This technique is used when the customer is not very clear about the requirements. By looking at the prototype, customer would be better able to specify the requirements. Following are the advantages of prototyping:
(a)
Improved communication between customer and developer, since the communication will be based on the prototype built.
(b)
Helps in better requirements elicitation.
(c)
Increased ability on developer's part to satisfy product requirements
(d)
Rapid exploration of alternative solutions to complex problems.
(e)
A reduction in the number of changes to specifications, during the development of the product.

Since the main aim of a prototype is to explore possibilities, the code developed generally lacks robustness. So it is always recommended that, after the prototype is approved, the product be developed from scratch, without reusing the code developed during prototyping.

The main disadvantage of prototyping technique is the difficult-to-resist temptation to productize the prototype built, by using the same code.
 

This technique is the default technique used in development of MIS application packages.
39
What is function point analysis ? Explain with an example.

Function point analysis is a standard method for measuring software development from the customer's point of view. A function point (FP) measures software by quantifying the functionality provided to the user, based primarily on logical design. The objectives of FP counting are to:
(a)
Measure functionality that the user requests and receives.
(b)
Measure software development and maintenance required independent of technology used for implementation.
Function point --- An Example : 

Description               Low       Average    High      Total
-----------               ---       -------    ----      -----
External inputs           3 (x3)    9 (x4)     0 (x6)    45
External outputs          5 (x4)    9 (x5)     0 (x7)    65
External enquiries        0 (x3)    0 (x4)     0 (x6)    0
Internal logical files    11 (x7)   15 (x10)   0 (x15)   227
External interface files  44 (x5)   5 (x7)     0 (x10)   255
---------------------------------------------------------------
Total unadjusted function points = 592
Total degree of influence = 18 (calculated from the responses to 14 factors and their weightage)
Adjustment factor = 0.65 + 0.01 * 18 = 0.83
Final function point count = 592 * 0.83 = 491.36
This is one of the important estimation techniques for software.
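The counting in the table above can be reproduced with a short script (the weights are the simple/average/complex weights used in the example; the variable and function names are ours):

```python
# (low_weight, average_weight, high_weight) per component type
WEIGHTS = {
    "external_inputs": (3, 4, 6),
    "external_outputs": (4, 5, 7),
    "external_enquiries": (3, 4, 6),
    "internal_logical_files": (7, 10, 15),
    "external_interface_files": (5, 7, 10),
}

# (low_count, average_count, high_count) from the example above
COUNTS = {
    "external_inputs": (3, 9, 0),
    "external_outputs": (5, 9, 0),
    "external_enquiries": (0, 0, 0),
    "internal_logical_files": (11, 15, 0),
    "external_interface_files": (44, 5, 0),
}

def function_points(counts, degree_of_influence):
    """Return (unadjusted FP, adjusted FP) for the given component counts."""
    unadjusted = sum(
        c * w
        for name in counts
        for c, w in zip(counts[name], WEIGHTS[name])
    )
    adjustment = 0.65 + 0.01 * degree_of_influence
    return unadjusted, unadjusted * adjustment

ufp, fp = function_points(COUNTS, 18)
print(ufp)              # 592
print(round(fp, 2))     # 491.36
```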
40.
What is RCA?

Root-Cause Analysis is a structured meeting. Its objective is to examine defects/problems and generate suggestions that will prevent their occurrence in future. The methodology for RCA is the same for both organizational and project level defect prevention.

Organisational level RCA is coordinated by the MR, and the team for it is constituted by the MR. The implementation of preventive actions is monitored by the MR.

Project level RCA is done by the project team and is coordinated by project manager.

Project Level RCA:
 
The Project Manager selects candidates for Root Cause Analysis based on the triggers identified in the project plan. After identifying the Root-Causes the project will identify counter measures to eliminate the Root Causes.
41.
What is CASE?

CASE stands for Computer Aided Software Engineering. 
CASE envisages use of tools and methods in each phase of software development to increase productivity.

There is a proverb that the cobbler's children have no shoes. The situation in software development was no different. Even though the software industry helped automate other industries, there were very few tools available for software developers themselves.
 

CASE sought to address this by providing tools for each phase of software development.
42.
What is a traceability matrix?

Traceability matrix is an aid to trace requirements across different phases of development. 

For example, there is a need to trace a requirement to its design. Or a code segment back to its requirement through its design.
 

Traceability matrix helps in addressing questions like:
   -
What test cases are used to test a requirement?
   -
Where (in which module) can the design of this requirement be found?
   -
If a requirement changes, which portions of the design, which test cases, and which source files are to be modified?
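The questions above can be illustrated with a hypothetical traceability matrix held as a simple mapping; all identifiers (REQ-001, auth.py, TC-101, ...) are made up for the example.

```python
# A hypothetical traceability matrix: each requirement ID maps to its
# design location, source files, and test cases.
traceability = {
    "REQ-001": {
        "design": "Design doc section 3.2 (Login module)",
        "source": ["auth.py"],
        "tests": ["TC-101", "TC-102"],
    },
    "REQ-002": {
        "design": "Design doc section 4.1 (Reporting module)",
        "source": ["reports.py"],
        "tests": ["TC-201"],
    },
}

def impact_of_change(req_id):
    """If this requirement changes, what must be revisited?"""
    entry = traceability[req_id]
    return entry["design"], entry["source"], entry["tests"]

print(impact_of_change("REQ-001"))
```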
43.
What is Process Asset?

A Process Asset is a repository of information on our organisation's processes, standards and guidelines, best practices, lessons learnt, information on tools, tailored processes, model documents, organisational metrics norms and many more…

In SEI CMM terminology it is referred to as the organisation's software process assets, addressed in the Level 3 KPA (Organisation Process Definition).

Project teams use process asset data for planning, developing, tailoring, maintaining and implementing the project processes during project execution.

The process asset repository grows richer as new information is added when it evolves, and the information is made available in the respective sections.
44.
What is a Control chart?

Statistical techniques are used to analyse the performance of a process in a quantitative manner. One of the popular tools for statistical analysis is the control chart. A control chart aids in understanding the variations in the performance of a process over time. It is used to identify abnormal variation by distinguishing variations due to assignable causes from those due to chance causes.

A control chart consists of a central line (CL) and a pair of control limits, one above (UCL) and one below (LCL) the central line. Data points representing the state of the process are plotted on the chart, and can fall within or outside the control limits.

Data points falling within the control limits without any particular tendency are said to be due to chance causes and are attributed to variations inherent in the process. Such causes are unavoidable and occur inevitably.

Data points falling outside the control limits are said to be due to assignable causes. When a significant number of data points fall outside the control limits, the process is said to be out of control.
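A minimal sketch of these ideas, assuming the common 3-sigma rule for the control limits; the sample data (defect counts per build) is illustrative.

```python
# Compute CL, UCL, LCL for an individuals chart using the 3-sigma rule,
# and flag points outside the limits as candidates for assignable causes.
from statistics import mean, pstdev

def control_limits(data):
    cl = mean(data)
    sigma = pstdev(data)  # population standard deviation
    return cl - 3 * sigma, cl, cl + 3 * sigma

def out_of_control(data):
    lcl, _, ucl = control_limits(data)
    return [x for x in data if x < lcl or x > ucl]

samples = [12, 14, 11, 13, 12, 15, 13, 12]
print(control_limits(samples))
print(out_of_control(samples))  # [] -> all points within limits
```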
45.
What is "c" chart?

There are different types of control charts, such as the X-bar/R chart, p chart, u chart, c chart etc., which are used for different applications. The "c" chart is one such type: it is used to control and analyse a process by the number of defects found in the product. The control limits for a "c" chart are calculated based on the Poisson distribution.
46.
What is a Precontrol chart?

Precontrol charts are used to study and compare the characteristic of a parameter against its tolerance limits. In precontrol charts the upper and lower control limits are created from the tolerance limits (specification limits), and the centre of the specification becomes the centre line of the chart. (In contrast, for a "c" chart the control limits are derived from the actual data points.) Precontrol charts are useful for checking process centring within the tolerance limits, and they are simple to draw and use. We have used precontrol charts to study effort and schedule deviations.
47.
How to read and apply Precontrol charts for the projects?

Precontrol charts are used to study the effort and schedule slippages of executed projects. Compute the schedule and effort deviations (against the plan) for every milestone, for every client deliverable, or as planned in the project's plan document. Each such deviation is a data point.

If a data point falls outside the green zone, look for assignable causes and isolate the causes behind it. This can also be used to plan corrective and preventive actions so as to control effort and schedule deviations within the tolerance limits.
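A hypothetical zone classifier for such a chart, assuming the usual precontrol convention: the middle half of the tolerance band is green, the outer quarters are yellow, and anything beyond the specification limits is red. The tolerance limits and deviation values are illustrative.

```python
# Classify a data point into precontrol zones relative to the tolerance band.
def precontrol_zone(value, lower_spec, upper_spec):
    centre = (lower_spec + upper_spec) / 2
    quarter = (upper_spec - lower_spec) / 4
    if abs(value - centre) <= quarter:
        return "green"
    if lower_spec <= value <= upper_spec:
        return "yellow"
    return "red"

# Schedule deviation (%) per milestone, with a tolerance of -10% to +10%.
for deviation in [2.0, -8.5, 12.0]:
    print(deviation, precontrol_zone(deviation, -10.0, 10.0))
# 2.0 green / -8.5 yellow / 12.0 red
```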
48.
What is Earned Value in a software project context?

Earned Value is an objective measurement of how much work has been accomplished on a project.

Earned Value, Performance Measurement, Management by Objectives, and Cost/Schedule Control Systems are synonymous terms. The use of either a project process or a line-of-balance methodology for measuring accomplishment on a project is an earned value process.
 

Earned value improves on the commonly used spend-plan concept (budget versus actual incurred cost) by requiring the work in process to be quantified.

Using the Earned Value process, management can readily compare how much work has actually been completed against the amount of work planned to be accomplished. Earned Value requires the project manager to plan, budget and schedule the authorised work scope in a time-phased plan. The time-phased plan is the incremental "planned value", culminating in a performance measurement baseline. As work is accomplished, it is "earned" using the same selected budget terms. Earned value compared with planned value provides a measure of work accomplished against plan; a variance from the plan is noted as a schedule or cost deviation. Actual cost is compared with earned value to indicate an over- or under-run condition.

Planned value, earned value, and actual cost data provide an objective measurement of performance, enabling trend analysis and evaluation of the cost estimate at completion at multiple levels of the project.
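The comparisons described above reduce to the standard earned value formulas; a minimal sketch with illustrative figures (all monetary values are made up):

```python
# Standard earned value metrics from planned value (PV), earned value (EV),
# and actual cost (AC). BAC is the budget at completion.
def evm_metrics(pv, ev, ac, bac):
    sv = ev - pv            # schedule variance (negative = behind schedule)
    cv = ev - ac            # cost variance (negative = over budget)
    spi = ev / pv           # schedule performance index
    cpi = ev / ac           # cost performance index
    eac = bac / cpi         # estimate at completion, assuming CPI holds
    return {"SV": sv, "CV": cv, "SPI": spi, "CPI": cpi, "EAC": eac}

# Example: 400 planned, 360 earned, 450 actually spent, total budget 1000.
print(evm_metrics(pv=400, ev=360, ac=450, bac=1000))
# SV = -40 (behind schedule), CV = -90 (over budget), EAC = 1250
```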