CS 435 - Week 14 Lecture 1

Schedule changes!!
*   CS 435 - Final Exam Review Suggestions handout
    posted by TONIGHT 

*   CS 435 - Homework 8 posted by TONIGHT
    due by 11:59 pm next Tuesday, MAY 6

*   ETS CS MFT in BSS 416 on THURSDAY!!!!!!!!!!!!!!!!!!!
    *-----------------------------
    * SHOW UP and TAKE the ETS CS MFT, or you can't pass CS 435
    *-----------------------------

    *******************************
    *   20 "bonus" points on a HOMEWORK
        IF you are IN BSS 416 by 2:57 pm <-- by my cell phone's clock
    *******************************

*   TUESDAY, MAY 6 - still MMM Ch 19 reading quiz,
    and additional review for Final (yes, with clicker
    questions)
    *   Homework 8 problem solutions posted right after
        11:59 pm...

*   Final exam on THURSDAY, MAY 8 - 3-4:50
*   Project presentation on TUESDAY, MAY 13, 3-4:50, SH 002

    *   all project pieces for final iteration
        must be pushed to the GitHub team repo by
        11:59 pm on TUESDAY, MAY 13

***********************
Jalote Ch. 8 - Testing
***********************

*  reliability: the probability that the software
   operates without failure

   *   more defects -> more chances of failure ->
          lower reliability

   *   so, if quality is a goal,
       we'd LIKE as few defects as possible in the
       delivered software

*   failure: a software failure occurs if the
    behavior of the software is different from
    what is expected/specified

*   fault: cause of software failure
    sometimes synonymous: fault == bug == defect

*   a failure implies the presence of defects
*   a defect has the POTENTIAL to cause failure

    *   or...
        a failure implies the presence of one or more faults --
        but a fault may not necessarily lead to a failure.
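
*   (a tiny Python sketch to make this concrete -- my hypothetical
    function, NOT from Jalote: the fault is always present in the code,
    but a failure is only observed for certain inputs)

        def absolute(x):
            """Intended behavior: return |x|."""
            if x < -1:          # FAULT: the condition should be x < 0
                return -x
            return x

        print(absolute(5))      # 5  -- correct; the fault causes no failure
        print(absolute(-3))     # 3  -- correct; still no failure observed
        print(absolute(-1))     # -1 -- FAILURE: expected 1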

*   we've talked about a number of means for
    attacking software defects --
    *   we can try to design in ways to prevent
        their injection,
    *   we can use processes like code review
        to find them,
    *   testing is YET ANOTHER means of trying
        to identify defects.

*   Testing is NOT exact, and NOT absolute --
    *   if a test results in a failure, there IS at least one fault --

    *   but if it doesn't, that cannot prove that there are NO faults.

    *   (testing can increase one's confidence in the absence of faults,
        but it cannot prove the absence of faults)

*   SUT - software under test
    *   during testing, SUT is executed with
        a set of test cases

        failure during testing implies
        that defects are present

    *   IMPORTANT!!!!!!!!
        If you run a set of test cases,
        and don't observe a failure,
        that can be said to increase our
        confidence in the software --
        but you can NOT say "defects are ABSENT"

*   test case - a set of test inputs and execution
    conditions designed to exercise SUT in a 
    particular manner
    *   A TEST CASE NEEDS TO SPECIFY THE EXPECTED
        OUTCOME

    *   (and someone or some program needs to compare a test case's
        actual results to the expected results -- sometimes this
        someone/something is called a test oracle)
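
    *   (a minimal sketch -- my example, not the book's -- of test cases
        that pair inputs with EXPECTED outcomes, plus a trivial oracle
        that compares actual results to expected results)

            def absolute(x):                # SUT: the faulty function above
                if x < -1:                  # FAULT: should be x < 0
                    return -x
                return x

            # each test case: (test input, expected outcome)
            test_cases = [(5, 5), (-3, 3), (-1, 1), (0, 0)]

            for test_input, expected in test_cases:
                actual = absolute(test_input)       # execute the SUT
                verdict = "pass" if actual == expected else "FAIL"  # oracle
                print(f"absolute({test_input}): expected {expected}, "
                      f"got {actual} -> {verdict}")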

*   test suite: group of related test cases generally
    executed together

*   test harness/test framework - these automate some part of the
    testing process
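
*   (a short sketch of these two ideas using Python's unittest module as
    the test harness/framework -- the SUT is still my hypothetical
    absolute() function, and the TestCase class holds a suite of related
    test cases that the framework runs together)

        import unittest

        def absolute(x):        # SUT, with the same fault as before
            if x < -1:          # FAULT: should be x < 0
                return -x
            return x

        class AbsoluteTests(unittest.TestCase):
            """A test suite: related test cases, executed together."""

            def test_positive_input(self):
                self.assertEqual(absolute(5), 5)

            def test_negative_input(self):
                self.assertEqual(absolute(-3), 3)

            def test_boundary_input(self):
                self.assertEqual(absolute(-1), 1)   # exposes the fault

        if __name__ == "__main__":
            unittest.main()     # the harness runs the cases and reports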

levels of testing:
*   unit testing - code-level
    *   testing a particular code module
    *   focus - defects injected during coding
    *   frequently done by the programmer

*   integration testing - testing the design
    *   focus - defects in module interfaces and
        in the interactions among modules

*   system testing - testing the system against the
       requirements specifications

*   acceptance testing - testing the system against
       the users' needs

*   Testing process (at a very high level)
    *   test planning
    *   test case design
    *   test execution

*   two classic categories/approaches to
    test case design:
    black box - functional 
    white box - structural - "clear box"

    *   NOT one-or-the-other --
        ideally, they complement each other,
        and it's good to include some of both

BLACK BOX (functional)
*   SUT is to be treated as a black box --
    test cases are determined solely from
    the specification

    *   examples:
    *   equivalence class partitioning
        *   divide the input space into equivalence
            classes -- if the SUT works for 1
            test case from such a class, it should
            work for all of them

    *   boundary value analysis
        *   recognizing that faults often occur
            on or near the boundaries of equivalence
            classes

        *   here, the recommendation is to test AT the
            boundary, AND a little above,
            AND a little below
            (a sketch of both techniques follows)
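
    *   (a sketch of both techniques -- my hypothetical SUT, not the
        book's, which accepts exam scores in the range 0..100)

            def is_valid_score(score):
                # SUT: spec says valid iff 0 <= score <= 100
                return 0 <= score <= 100

            # equivalence class partitioning: one representative per
            # class -- if the SUT works for one member, it should work
            # for all of them
            equivalence_cases = [
                (-40, False),   # class 1: score < 0          (invalid)
                (55,  True),    # class 2: 0 <= score <= 100  (valid)
                (200, False),   # class 3: score > 100        (invalid)
            ]

            # boundary value analysis: test AT each boundary, a little
            # below, and a little above
            boundary_cases = [
                (-1, False), (0, True),   (1, True),     # lower boundary
                (99, True),  (100, True), (101, False),  # upper boundary
            ]

            for score, expected in equivalence_cases + boundary_cases:
                actual = is_valid_score(score)
                print(f"is_valid_score({score}) = {actual}, expected {expected}")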

*   (also discussed white-box testing -- structural!
    *   including the difference between statement coverage
        and branch coverage -- these are NOT the same!
        see the sketch below)
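
*   (a last sketch -- my hypothetical function -- showing why statement
    coverage and branch coverage are NOT the same)

        def apply_discount(price, is_member):
            if is_member:               # branch: True / False
                price = price * 0.9
            return price

        # this ONE test executes every statement -> 100% statement coverage
        assert apply_discount(100, True) == 90.0

        # ...but the False branch of the if was never taken; branch
        # coverage also requires a test where the condition is false
        assert apply_discount(100, False) == 100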