Introduction

I describe some of the testing techniques I encountered.


Some of these techniques were for hardware testing, but might give ideas for software testing.

Back-to-Back Testing

Mitel built PBXs.[1]  


At the very end of testing, we took two "finished" PBXs and pointed them at each other.  


The PBXs would self-test by calling each other many times.


One should be able to do this with a class.  Make two instances, then have the instances talk to one another.
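

A minimal sketch of that idea in Python, with an invented Phone class standing in for a PBX (place_call(), answer(), and the echo check are all made up for illustration):

    import random

    class Phone:
        # Toy stand-in for a PBX: knows its peer and echoes calls back.
        def __init__(self, name):
            self.name = name
            self.peer = None
            self.calls_answered = 0

        def connect(self, peer):
            self.peer = peer

        def place_call(self, digits):
            # "Dial" the peer and return whatever it sends back.
            return self.peer.answer(digits)

        def answer(self, digits):
            self.calls_answered += 1
            return digits  # Echo, so the caller can verify the call.

    def back_to_back_test(rounds=10000):
        a, b = Phone("A"), Phone("B")
        a.connect(b)
        b.connect(a)
        for _ in range(rounds):
            caller = random.choice([a, b])
            digits = "".join(random.choice("0123456789") for _ in range(7))
            assert caller.place_call(digits) == digits, "call corrupted"
        print(f"A answered {a.calls_answered}, B answered {b.calls_answered}")

    back_to_back_test()

The point is that neither instance is a mock; each side exercises the other's real code.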


If this sounds hard and undoable, your organization needs a Test Engineer: someone whose job it is to kibitz with designers on how to make their classes more testable.


If this becomes a UX issue, look at SikuliX, http://sikulix.com/.

Regression Testing (HP Trace Analyzer)

We used regression testing on hardware.


CI tools do this now.[2]


For hardware, we took a "golden" known-to-be-working board and used an HP Trace Analyzer on it.  It recorded repeatable signatures (collapsed down to some kind of hash value) for certain input streams.


Boards/circuits-under-test were deemed "good" if they generated the same signatures (for the same inputs).  If the signatures didn't match, the circuit-under-test was put aside for further testing.
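

In software terms, this is golden-signature testing with hashes.  A minimal sketch, where golden_unit() stands in for the known-good board and unit_under_test() for the board being checked (both invented here):

    import hashlib

    def signature(outputs):
        # Collapse an output stream down to a short, repeatable hash.
        h = hashlib.sha256()
        for value in outputs:
            h.update(repr(value).encode())
        return h.hexdigest()[:16]

    def golden_unit(stream):
        # The known-to-be-working implementation.
        return [2 * x + 1 for x in stream]

    def unit_under_test(stream):
        # The candidate; swap the real thing in here.
        return [2 * x + 1 for x in stream]

    streams = {"ramp": list(range(100)), "spikes": [0, 9, 0, 9] * 25}

    # Record signatures from the golden unit once...
    golden_sigs = {name: signature(golden_unit(s)) for name, s in streams.items()}

    # ...then judge each unit-under-test against them.
    for name, s in streams.items():
        if signature(unit_under_test(s)) == golden_sigs[name]:
            print(name, "good")
        else:
            print(name, "set aside for further testing")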

Coverage Testing

When alpha-testing software, we used coverage testing.


We created an input dataset that hit every piece of code at least once.


This got rid of blunders and typos.  Designers also discovered dead code with this test.[3] 


Using SCLs makes this option particularly attractive.  Since the code is generated, we can modify the transpilers to insert anything into the generated code and have it automagically appear.
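

Off-the-shelf tools (e.g., coverage.py for Python) do this for hand-written code.  A toy sketch of the transpiler version: a pretend transpile() that prefixes every generated statement with a coverage probe, then reports probes that were never hit (transpile() is invented; a real SCL transpiler would emit the probes while generating code):

    hits = set()

    def hit(probe_id):
        hits.add(probe_id)

    def transpile(statements):
        # Pretend transpiler: prefix every generated statement with a probe.
        return "\n".join(
            f"hit({i}); {stmt}" for i, stmt in enumerate(statements)
        )

    source = transpile([
        "x = 1",
        "y = x + 1",
        "print('y =', y)",
    ])

    exec(source)  # Run the generated code; the probes record what executed.

    missed = set(range(3)) - hits
    print(f"coverage: {len(hits)}/3, missed: {missed or 'none'}")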


NorTel

NorTel used to have a policy about bug fixes.


Bug fixes did not remove code; they added edge-case checks, specialized to find and shunt only the conditions that made the bugs appear.  The fixes went into the shunt.  The original code was mostly left alone.
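

A sketch of what a shunt looks like in code; route_call(), the bug number, and the trigger condition are all invented:

    def route_call(digits, trunk_busy):
        # --- shunt: fix for bug #1234 (hypothetical) ----------------
        # Calls to the operator ("0") while the trunk was busy used to
        # be dropped.  Catch exactly that condition and re-queue it.
        if digits == "0" and trunk_busy:
            return "requeue"
        # --- original code, left alone -------------------------------
        if trunk_busy:
            return "busy-tone"
        return "connect"

The fix never disturbs the well-tested main path; it only intercepts the one condition known to misbehave.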


Telecoms like NorTel, Mitel, and Bell used to pride themselves on four-nines guarantees: they guaranteed that their systems would have uptimes of 99.99%, which works out to roughly 53 minutes of downtime per year.

Banking Y2K

We worked on the Y2K problem at some big banks.


They had a policy that they would not test during the work week.


Testing could only be done on Saturdays, leaving Sundays for reverting to the previous code if problems occurred.


Furthermore, one weekend was reserved for month-end consolidation.


Another weekend was reserved for "maintenance" upgrades.


I forget what the third weekend was reserved for, but we were left with one weekend every month (the Saturday for new code, the Sunday for reverting if needed).


Then, there was year-end.


Data spread was uneven because month-ends came at different times.  We couldn't just generate fake data in a repetitive manner; we had to fake the calendar and all of the month-ends.
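

A sketch of what faking the calendar involves: walk every day from 1996 through 1999 and emit a transaction volume, with a burst at each month-end and a bigger one at year-end (all volumes here are invented):

    from datetime import date, timedelta

    def is_month_end(d):
        return (d + timedelta(days=1)).month != d.month

    def fake_volumes(start=date(1996, 1, 1), end=date(1999, 12, 31)):
        d = start
        while d <= end:
            volume = 1000                 # ordinary daily load
            if is_month_end(d):
                volume += 9000            # month-end consolidation burst
                if d.month == 12:
                    volume += 40000       # year-end is bigger still
            yield d, volume
            d += timedelta(days=1)

    for day, vol in fake_volumes():
        if vol > 1000:
            print(day, vol)               # show only the bursts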


Generating the fake data needed as much compute-power as was used for live banking.


Starting in, say, 1996, the Banks needed to generate four years' worth of fake data, but could only test it on one day of every month.


Most of the code was written in COBOL.  Some in assembler.  Not all source code could be found.


Some of the "date"-affected code had names like "Nancy", "Jane", etc.[4]


The Banks were set up to touch maybe 5% of their code per year.[5]


The Y2K problem affected something like 30% of the code.


The testing problem turned out to be harder than the actual fixing problem.


We started by trying to auto-fix the code, but that approach touched too much of it.  In the end, we just generated reports and let the maintainers fix the code.  Their wetware made the effort possible: they could tell which things really needed to be fixed and which were red herrings.
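

A sketch of the report-generating side: scan source files for identifiers that smell like dates and print one line per suspect for a maintainer to triage.  The patterns here are invented, and the real scanners had to understand COBOL and assembler, not Python:

    import re
    import sys

    SUSPECT = re.compile(
        r"\b\w*(DATE|YY|YEAR|MMDD|JULIAN)\w*\b", re.IGNORECASE
    )

    def report(path):
        with open(path, errors="replace") as f:
            for lineno, line in enumerate(f, 1):
                for m in SUSPECT.finditer(line):
                    print(f"{path}:{lineno}: suspect identifier {m.group(0)!r}")

    for path in sys.argv[1:]:
        report(path)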




[1] Fancy computer-controlled telephone switches.  Telephone lines were mostly analogue at the time.

[2] This is not a new idea.

[3] Testers would come back to designers and ask how to push the code so that it hit certain routines.  After scratching their heads, designers would - sometimes - realize that certain code was unreachable and would lance such unreachable code from the codebase.  It was always a surprise when dead code was found - it looked useful, but couldn't be reached and the compilers couldn't detect the problem.

[4] Named after the original programmer's dates (the romantic kind, not the calendar kind).  The names were suggestive enough to raise the eyebrows of the auto-detection software.

[5] Only about 200 programmers were allocated to the job of fixing bugs.