* MORE on testing...
* concepts and terms!
* reliability - the likelihood that the software runs without failing;
      more defects -> more chances of failure -> lower reliability
* reliability is one of that group of aspects of quality --
      and one frequent quality goal of software is:
      * have as few defects as possible in the delivered software
* faults and failure
* failure - a software failure occurs if the behavior of the
software is different from that expected/specified
* fault - a potential cause of software failure
* sometimes considered synonyms:
fault == bug == defect
* remember that the definition of a defect can be
      environment- and project-specific
* failure implies the presence of defects
...a defect has the potential to cause failure
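* (a tiny Python sketch to make fault vs. failure concrete --
      max_of_first_n and its off-by-one fault are invented for illustration:)

          # hypothetical SUT: should return the largest of the first n items
          def max_of_first_n(items, n):
              best = items[0]
              for i in range(1, n - 1):   # FAULT: off-by-one; should be range(1, n)
                  if items[i] > best:
                      best = items[i]
              return best

          # a fault is only a POTENTIAL cause of failure:
          print(max_of_first_n([9, 3, 4], 3))   # 9 -- fault present, but no failure
          print(max_of_first_n([3, 4, 9], 3))   # 4 -- FAILURE: expected 9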
* role of testing
* *reviews* tend to be human processes -- they can NOT catch all
      of the defects
* so, there WILL be defects at a variety of levels --
      requirements defects, design defects, coding defects
* TESTING, then, is another means for identifying defects
* SUT - software under test - the software currently being tested
* during testing, SUT is executed with a set of test cases
* if there is a failure during testing,
that implies defects/faults/bugs are present
* what if no failure occurs during testing?
* our confidence grows in the software, BUT this does NOT
imply that no defects are present!!!
* we can't (correctly) say "Our tests succeeded, so our software
has no defects!"
* odd but true: the ideal testing attitude is to want to
TRY to break the software, to try as hard as possible to
cause failures during testing...!
* test oracle - how one knows whether a test case has succeeded
      or a failure has occurred;
(test oracle can be a human, can be checklist, might be
automated...)
* test case - a set of test inputs and execution conditions
      designed to exercise the SUT in a particular manner
* test case SHOULD ALSO SPECIFY the expected output/behavior!!
(test oracle uses this to detect failure)
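* (a minimal Python sketch -- here math.isqrt stands in as the SUT,
      and the assert plays the role of an automated test oracle:)

          import math

          # one test case: the test input (and execution conditions --
          # none needed here), PLUS the expected output
          test_input = 144
          expected = 12

          actual = math.isqrt(test_input)   # execute the SUT

          # the oracle: compare actual behavior against expected behavior
          assert actual == expected, f"FAILURE: expected {expected}, got {actual}"
          print("test case passed")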
* test suite - group of related test cases generally executed together
* test harness - or test framework -
* during testing, for each test case in a test suite,
      conditions have to be set, the SUT has to be called with
      particular inputs, and outputs/behaviors have to be checked
      against what is expected to declare whether the test passed or
      failed, and then tear-down has to happen
* a test harness or test framework might be used to automate
that kind of testing process
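* (a minimal sketch using Python's unittest framework -- the
      FakeDatabase class is invented for illustration; setUp/tearDown
      and assertEqual are real unittest features:)

          import unittest

          class FakeDatabase:                       # stand-in SUT for this sketch
              def __init__(self):
                  self.rows = []
              def insert(self, row):
                  self.rows.append(row)
              def count(self):
                  return len(self.rows)

          class TestFakeDatabase(unittest.TestCase):
              def setUp(self):                      # set conditions before each test
                  self.db = FakeDatabase()
              def test_insert_increases_count(self):
                  self.db.insert("a")               # call SUT with particular inputs
                  self.assertEqual(self.db.count(), 1)   # check expected behavior
              def tearDown(self):                   # tear down after each test
                  self.db = None

          if __name__ == "__main__":
              unittest.main()                       # the harness runs the suite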
* levels of testing:
* nature of defects varies for different injection stages
* one level of testing is generally NOT able to find
      all of the defects --
      different levels of testing have a better chance of
      uncovering different kinds of defects
* user needs? acceptance testing especially useful here
requirement specs? system testing especially useful here
design? integration testing especially useful here
code? unit testing especially useful here
* unit testing tests individual modules separately
* integration testing focuses on the INTERACTION of modules in a
      subsystem
* unit-tested modules are combined to form subsystems
* test cases are used to "exercise" the interaction of
modules in different ways
* may be omitted if the system is not too large
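* (a rough sketch -- parse_scores and summarize are invented
      "unit-tested modules"; the test exercises their interaction:)

          def parse_scores(text):                  # module 1
              return [int(s) for s in text.split(",")]

          def summarize(scores):                   # module 2
              return f"n={len(scores)}, max={max(scores)}"

          # an integration test exercises the hand-off BETWEEN the modules
          def test_parse_then_summarize():
              assert summarize(parse_scores("3,9,4")) == "n=3, max=9"

          test_parse_then_summarize()
          print("integration test passed")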
* system testing is testing the entire software system
* focus is often: does the software implement the
requirements?
* sometimes also viewed as a validation exercise for
the requirements;
* acceptance testing - focus: does the software satisfy user
needs?
* performance testing - will need tools/scaffolding to measure
performance;
* stress testing - load the system to peak/extreme conditions,
somehow measure how it performs
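* (to make those last two concrete: a very rough Python sketch of
      performance-measurement scaffolding -- the workload and the 0.5s
      threshold are invented for illustration:)

          import time

          def workload():                          # stand-in for the SUT operation
              return sum(i * i for i in range(1_000_000))

          start = time.perf_counter()
          workload()
          elapsed = time.perf_counter() - start

          print(f"elapsed: {elapsed:.3f}s")
          assert elapsed < 0.5, "performance test FAILED: too slow"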
* regression testing - ideally, want to rerun all tests every
time the software is changed;
(or, the highest-priority tests need to be rerun, at least,
if complete retesting is not feasible)
* remember: testing only reveals the presence of defects --
(can not PROVE there are no defects;
a failure implies there ARE defects,
but lack of failures in a test suite does not PROVE lack of defects)
* testing process at a high level might include:
test planning,
test case design,
test execution
* let's talk about test case design a little bit:
* several approaches to this:
* black box testing
* white box testing
* (and yeah, there's stuff in between too --
e.g., grey-box testing...!)
^ these approaches are complementary -- ideally you'll determine
      the mix that will work well for a test suite
* black box testing: treat the SUT as a "black box" --
* specification for the black box is given;
      the expected behavior of the system is used to
      design test cases --
      internal code structure is NOT used, here, for
      test case design
* focuses on functionality
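* (sketch: suppose the spec says classify(n) returns "neg", "zero",
      or "pos" -- the test cases below are designed purely from that
      spec's behavior classes and boundaries, never from the code;
      classify itself is invented for illustration:)

          # test inputs chosen from the SPEC: one per behavior class + boundaries
          black_box_cases = [
              (-100, "neg"),    # typical negative
              (-1,   "neg"),    # boundary
              (0,    "zero"),   # boundary
              (1,    "pos"),    # boundary
              (100,  "pos"),    # typical positive
          ]

          def classify(n):      # SUT -- its internals were NOT consulted above
              return "neg" if n < 0 else ("zero" if n == 0 else "pos")

          for test_input, expected in black_box_cases:
              assert classify(test_input) == expected
          print("all black-box cases passed")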
* white box testing: NOW you are using the internal code structure
for test case design;
* aim is to exercise different program structures with the
intent of uncovering errors
* desired coverage criteria are used, here, for
      test case design
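* (sketch: here the test cases are chosen by looking AT the code,
      aiming to take each branch of this invented function at least
      once -- i.e., a branch-coverage criterion:)

          def discount_percent(total, is_member):   # SUT, with branches to cover
              percent = 0
              if total > 100:        # branch 1
                  percent = 10
              if is_member:          # branch 2
                  percent += 5
              return percent

          # chosen so each branch is both taken and not taken across the suite
          assert discount_percent(200, False) == 10   # branch 1 taken, 2 not
          assert discount_percent(50,  True)  == 5    # branch 1 not, 2 taken
          print("branch-coverage cases passed")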
* more on these after fall break!