Testing
Testing is the activity consisting of the cohesive collection of all tasks that are primarily performed to determine the quality of executable work products (e.g., models, software, and applications) by attempting to cause them to fail under controlled conditions so that any existing underlying defects may be identified, corrected, and avoided in the future.
The typical goals of testing are to:
- Determine the quality of the executable work product.
- Provide input regarding the readiness of the application
for launch.
- Help identify defects (via associated failures).
- Provide input to improve the development process.
- Provide input to improve the effectiveness of staff
training.
The typical objectives of testing are to:
- Cause failures in the executable work products under test
so that any underlying defects can be identified, removed, and
avoided.
- Help identify defects that would be more costly to identify
using other verification and validation techniques.
- Prevent the introduction of defects by providing known
hurdles for the application to pass (a.k.a., the “test
first” philosophy).
- Thereby increase the quality of executable work
products.
- Enable defect analysis so that the human errors causing
these defects can be avoided in the future (e.g., via process
iteration and staff training).
- Maximize the productivity of the testing teams (e.g., reuse
of test plans, test harnesses, and test suites of test
cases).
The testing activity typically includes several of the
following testing subactivities (i.e., kinds of testing):
- Small Component Testing is the testing of individual components or small clusters of components in relative isolation (illustrated in a sketch after this list).
- Integration Testing is the testing of a partially integrated application to identify defects involving the interaction of collaborating components:
  - Commercial Component Integration Testing is the integration testing of two or more commercial-off-the-shelf (COTS) software components to determine whether they fail to interoperate (i.e., whether they contain any interface defects).
  - Database Integration Testing is integration testing to determine if the application software components interface properly with the database(s); the idea is illustrated in a sketch after this list.
  - Hardware Integration Testing is the integration testing of two or more hardware components on a single platform to produce failures caused by interface defects.
  - Prototype Usability Testing is the usability testing of a user interface prototype.
  - Software Integration Testing is the integration testing of two or more software components on a single platform to produce failures caused by interface defects.
  - System Integration Testing is the integration testing of two or more system components. Specifically, system integration testing is the testing of software components that have been distributed across multiple platforms to produce failures caused by system integration defects (i.e., defects involving distribution and back-office integration).
- System Testing is the testing of the integrated, blackbox application against its requirements during the construction phase:
  - Availability Testing is the system testing of an integrated system against its operational availability requirements.
  - Configuration Testing is the system testing of different variations of the application against its configurability requirements.
  - Contention Testing is the system testing of an integrated application that attempts to cause failures involving actual or simulated concurrency.
  - Functional Testing is the system testing of an integrated, blackbox application against its operational (i.e., functional) requirements.
  - Internationalization Testing is the system testing of an application against its internationalization requirements.
  - Load Testing is the system testing of an application that attempts to cause failures involving how its performance varies under normal conditions of utilization (e.g., as the load increases and becomes heavy).
  - Operations Manual Testing is the system testing of an integrated, blackbox [partial] application against the procedures in the operations manual to determine whether operators can operate it using those procedures.
  - Performance Testing is the system testing of an application against its performance requirements under normal operating circumstances in order to identify inefficiencies and bottlenecks.
  - Portability Testing is the system testing of an integrated [partial] system against its portability requirements.
  - Reliability Testing is the system testing of an application against its reliability requirements.
  - Robustness Testing is the system testing of an application that attempts to cause failures involving how it behaves under invalid conditions (e.g., unavailability of dependent applications, hardware failure, and invalid input such as the entry of more than the maximum amount of data in a field); a sketch after this list illustrates the idea.
  - Security Testing is the testing of the [integrated] application against its security requirements and the implementation of its security mechanisms.
  - Stress Testing is the system testing of an application that attempts to cause failures involving how its performance varies under extreme but valid conditions (e.g., extreme utilization, insufficient memory, inadequate hardware, and dependency on over-utilized shared resources).
  - System Usability Testing is the system testing of an integrated, blackbox application against its usability requirements.
- Launch Testing is the testing of the completed system in the production environment(s) during the delivery phase:
  - Acceptance Testing is the formal customer-observed testing of an application in its production environment to determine if it is acceptable to its customer.
  - Alpha Testing is the launch testing consisting of the development organization’s initial internal dry runs of the application’s acceptance tests in the production environment.
  - Beta Testing is the launch testing of the application in the production environment by a select few users prior to acceptance testing and the release of the system to the entire user community.
- Usage Testing is the testing of the accepted production system in the production environment(s) during the usage phase:
  - Operational Testing is the usage testing of the launched application in its production environment(s) to determine if it meets the true needs of its stakeholders (primarily its users).
  - Security Testing is the testing of the [launched] application against its security requirements and the implementation of its security mechanisms.
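As an illustration of the small component level, here is a minimal sketch using Python’s standard unittest module. The `discount_price` component and its expected behavior are hypothetical examples, not part of the method described above.

```python
import unittest

def discount_price(price, percent):
    """Hypothetical component under test: apply a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (100 - percent) / 100, 2)

class DiscountPriceTest(unittest.TestCase):
    """Small component test: exercises a single component in isolation."""

    def test_typical_discount(self):
        self.assertEqual(discount_price(100.0, 25), 75.0)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(discount_price(19.99, 0), 19.99)

if __name__ == "__main__":
    unittest.main()
```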
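Similarly, a database integration test can be sketched with Python’s built-in sqlite3 module standing in for the production database. The `orders` schema and the `save_order` component are hypothetical.

```python
import sqlite3
import unittest

def save_order(conn, customer, amount):
    """Hypothetical application component that writes to the database."""
    conn.execute(
        "INSERT INTO orders (customer, amount) VALUES (?, ?)",
        (customer, amount),
    )
    conn.commit()

class DatabaseIntegrationTest(unittest.TestCase):
    """Database integration test: checks the component/database interface."""

    def setUp(self):
        # An in-memory database stands in for the real production database.
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute(
            "CREATE TABLE orders (customer TEXT NOT NULL, amount REAL NOT NULL)"
        )

    def tearDown(self):
        self.conn.close()

    def test_saved_order_can_be_read_back(self):
        save_order(self.conn, "alice", 42.0)
        rows = self.conn.execute("SELECT customer, amount FROM orders").fetchall()
        self.assertEqual(rows, [("alice", 42.0)])

if __name__ == "__main__":
    unittest.main()
```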
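Finally, a robustness test deliberately feeds the application invalid input to try to cause failures. Here `parse_quantity` is a hypothetical input-handling component with an assumed valid range of 1 to 999.

```python
import unittest

def parse_quantity(text):
    """Hypothetical input handler: parse a bounded, positive quantity field."""
    value = int(text)  # raises ValueError on non-numeric input
    if not 1 <= value <= 999:
        raise ValueError("quantity out of range")
    return value

class RobustnessTest(unittest.TestCase):
    """Robustness test: attempts to cause failures under invalid conditions."""

    def test_rejects_non_numeric_input(self):
        with self.assertRaises(ValueError):
            parse_quantity("abc")

    def test_rejects_value_above_maximum(self):
        # More than the maximum amount of data allowed in the field.
        with self.assertRaises(ValueError):
            parse_quantity("1000")

if __name__ == "__main__":
    unittest.main()
```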
The testing activity typically involves producers performing the following testing tasks:
- Test Planning is the task of planning the testing activity that will take place on a project.
- Test Reuse is the task of reusing reusable test work products on the project.
- Test Design is the task of designing the test work products.
- Test Implementation is the task of implementing the test work products.
- Test Execution is the task of running the test scripts and executing the test suites of test cases (see the sketch after this list).
- Test Reporting is the task of reporting the results of test execution to the relevant stakeholders.
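A minimal sketch of the execution and reporting tasks, again using Python’s unittest module; the suite contents and the report format are hypothetical examples.

```python
import unittest

# Test implementation: a suite of test cases (contents are hypothetical).
class SmokeTest(unittest.TestCase):
    def test_addition(self):
        self.assertEqual(1 + 1, 2)

def run_and_report():
    """Test execution and reporting: run the suite and summarize results."""
    suite = unittest.TestLoader().loadTestsFromTestCase(SmokeTest)
    result = unittest.TextTestRunner(verbosity=2).run(suite)
    # A simple report for stakeholders: counts of tests, failures, and errors.
    print(f"ran={result.testsRun} failures={len(result.failures)} "
          f"errors={len(result.errors)}")
    return result.wasSuccessful()

if __name__ == "__main__":
    run_and_report()
```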
Typical guidelines for and observations about testing include:
- Do not skip testing at one level of scope in the hope that later testing at a higher level will identify the same defects and save time and effort. The testing that occurs at one level makes the testing at higher levels more efficient and effective.
- Tests can also have defects, causing:
  - False negative results, in which the test fails to report failures that actually occur.
  - False positive results, in which the test reports failures that have not occurred (a sketch following these notes illustrates one common cause).
- There is controversy as to whether the prime goal of testing is to discover existing defects by causing failures or to prevent those defects from being introduced in the first place. The answer seems to depend on the relative importance given to the testing tasks. If test execution and reporting are considered most important, then testing can only help discover existing defects, because the defects will already have been introduced by the time test execution and reporting occur. On the other hand, if test planning and design are considered most important, then testing can help prevent defects from being introduced, provided these tasks occur early and precede or coincide with application/component requirements, design, and implementation. The latter approach is typically called “Test First” (see the sketch following these notes).
- A good test is one that has a high probability of:
  - Causing failures due to as yet undiscovered defects.
  - Preventing defects from being introduced by providing known hurdles to be passed.
- A highly successful test is one that causes one or more failures due to one or more as yet undiscovered defects and that provides enough information to localize these defects.
- “Test early and test often. If it is worth creating,
it is worth testing.” Scott Ambler
- Problems associated with testing usually fall into one of the following categories:
  - Inadequate amount of testing:
    - Inadequate schedule allocated to testing.
    - Regression testing not automated in an iterative, incremental development cycle.
    - Testing put off until too late in the project schedule.
  - Test completion criteria not defined.
  - Anti-testing message given by management (e.g., “I don't care how you do it, but the system has to be done on time and within budget.”).
  - Testers inadequately trained in testing theory and practice.
  - Use of ineffectual testing techniques.
  - Amount and type of testing not based on risk reduction.
- “Too little testing is a crime; too much testing is
a sin.” William Perry
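To illustrate the “Test First” approach mentioned above, the sketch below writes the test before (above) the component it exercises, so the test serves as a known hurdle the implementation must pass. The `is_leap_year` component is a hypothetical example.

```python
import unittest

class LeapYearTest(unittest.TestCase):
    """Written first, as a known hurdle the implementation must pass."""

    def test_century_years_need_divisibility_by_400(self):
        self.assertTrue(is_leap_year(2000))
        self.assertFalse(is_leap_year(1900))

    def test_ordinary_years(self):
        self.assertTrue(is_leap_year(2024))
        self.assertFalse(is_leap_year(2023))

# The implementation is then written (or revised) until the tests pass.
def is_leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

if __name__ == "__main__":
    unittest.main()
```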
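The false positive case can likewise be sketched. The test below contains a defect of its own: its verdict depends on wall-clock time and machine load rather than on the behavior under test, so it can report a failure that has not actually occurred. The threshold and workload here are arbitrary illustrations.

```python
import time
import unittest

class FlakyTimingTest(unittest.TestCase):
    """A test with a defect: it may report a failure that is not the
    application's fault (a false positive) whenever the machine is slow."""

    def test_completes_quickly(self):
        start = time.perf_counter()
        sum(range(1_000_000))  # stand-in for the operation under test
        # Asserting on elapsed wall-clock time ties the verdict to load.
        self.assertLess(time.perf_counter() - start, 0.05)

if __name__ == "__main__":
    unittest.main()
```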