On the Saturday following the panel discussion, we received feedback from James Bach by email, with the friendly note that he was willing to share it with all attendees of the Dutch Testing Day. The email is reproduced below.


From: James Bach [e-mail address hidden]
Sent: Saturday, October 09, 2004 1:53 PM
To: 'Henk van Dam (Collis)'; [e-mail address hidden]; 'Chris Schotanus'; 'Harry Vroom'
CC: [e-mail address hidden]
Subject: RE: Your panel participation on the 10th Dutch Testing Day

Hi Guys,

The panel was an interesting experience for me, and frustrating, too, since I have long and extremely detailed answers to your questions and challenges, yet had no significant opportunity to share them.

So, I want to make sure you have access to some of those details.

I and my colleagues, mainly Dr. Cem Kaner, have been studying and experimenting seriously with exploratory testing as a disciplined process since 1990 or so. We are serious about our studies, which I am confident you will discover if you look closely at what we are doing. We also invite criticism, although you must be careful to criticise what we are actually saying and doing, rather than what you might imagine we are saying and doing before reading or encountering our work.

The following is a link to a formalized and focused version of exploratory testing that I developed for Microsoft. It was designed as part of (one third of) the Microsoft logo program, and later was adopted by the Microsoft compatibility testing group as the means by which they quickly retest some 1500 applications whenever a new build of Windows comes out. This procedure cannot be followed properly without training. A three-day training process certifies (or flunks) testers to follow the process. The process was created because Microsoft discovered that it was not remotely feasible to use scripted means alone to perform the testing.
<http://www.satisfice.com/tools/procedure.pdf>

This is an especially important example because it illustrates, implicitly, the difference between merely having your "brain turned on" (a necessary but insufficient condition for excellent testing) and doing systematic and defensible exploratory testing.

DOING VS. DOING WELL

One of the confusions during the panel was the distinction between doing ET and doing it well. This is an important distinction.

When someone says "I don't do ET" then I like to stop them and remind them what ET *is* and ask if they genuinely are not allowing themselves to evolve their test ideas (to do new test design) throughout the course of the project. That is ET.

Being able to label ET and talk about it is an important first step. Part of that is to stop confusing it with "testing without a plan" or "testing unsystematically". I am constantly planning while I test. I am testing systematically. Among other things, I systematically move through my 34 guideword heuristics (that big long mnemonic).

I could point to incompetent testers writing absurd scripts, but that would not itself be an indictment of scripted testing. Similarly it is no indictment of ET to cite examples of obviously untrained people playing around at random. However, I suspect that untrained exploratory testers find far more problems than untrained scripted testers do.

Still, doing ET is not the same as doing it well. I am not really an advocate of ET, so much as I am an advocate for testing done well. The fact is there is no research that justifies the belief that any testing is better just because it is "scripted." Scripted testing is not necessarily more repeatable, nor does it necessarily cover the product better. These are simply unexamined myths. I encourage you to begin examining them. Part of examining them is to look at whether people who follow scripts are actually following them, and what those scripts cost to create and maintain, and how that compares to the value of additional testing that can be done when not encumbered by scripts. There is no research to support the scripted testing mythology, but there sure are a lot of textbooks that treat it as self-evidently good.

I can certainly cite research supporting *my* point of view. My first source is The Sciences of the Artificial. Herbert Simon basically devoted his career to reworking our view of design and decision-making, and most of his work directly applies to exploratory testing, as does the work of another recent Nobel laureate, Daniel Kahneman, whose research into natural reasoning processes provides part of the syllabus of exploratory testing (methods of de-biasing).

Here is a selection of papers that I have found useful to ground my approach:

Andre, M., Borgquist, L., & Molstad, S. (2003). Use of Rules of Thumb in the Consultation in General Practice-- An Act of Balance Between the Individual and the General Perspective. Family Practice, 20(5), 514-519. [This paper discusses the essentially exploratory and heuristic processes doctors use in general practice.]

Folkes, V. S. (1988). The Availability Heuristic and Perceived Risk. Journal of Consumer Research, 15(1). [Experiments that examine biases in risk perception.]

Heineman-Pieper, J., Tyson, K., & Heineman Pieper, M. (2002). Doing Good Science Without Sacrificing Good Values: Why the Heuristic Paradigm is the Best Choice for Social Work. Families in Society, 83(1), 15-28. [Discusses the "heuristic paradigm", a "metatheory of research that starts from the realization that there are no privileged realities or ways of knowing, and therefore that there is no way to include all relevant information in data gathering and analysis." This relates to the basis of my testing methodology.]

Hinds, P. J. (1999). The Curse of Expertise: The Effects of Expertise and Debiasing Methods on Predictions of Novice Performance. Journal of Experimental Psychology: Applied, 5(2), 205-221. [Why testers have difficulty understanding difficulties users go through]

Newell, A., & Simon, H. A. (1976). Computer Science as Empirical Inquiry: Symbols and Search. Communications of the ACM, 19(3), 113-126.

Quigley, E. J., & Debons, A. (1999). Interrogative Theory of Information and Knowledge. Paper presented at the SIGCPR '99. [theory of questioning]

Sriraman, B. (2004). Discovering a Mathematical Principle: The Case of Matt. Mathematics in School, 33(2), 25-31. [An example of how a model is induced in a mind from examples]

Stacy, W., & MacMillan, J. (1995). Cognitive Bias in Software Engineering. Communications of the ACM, 38(6), 57-63.

How many more would you like me to cite?

REGARDING METHODS

The issue of methods is important, because ANY method of testing can-- in principle-- be performed in an exploratory fashion or a scripted fashion. Therefore, focusing on methods is a distraction from the real issue. What is the real issue? Skills. It isn't just minds turned on, but *trained* minds turned on.

Skills are not methods. A couple of weeks ago I played about one hundred games of chess with my father over a two-week period. I lost the first ten games or so, and then I started winning some. Overall I won about half the games we played. We agreed that my chess skills improved with practice, even though I'm not aware of any new or different method in my play.

There *are* some methods that are especially a part of excellent exploratory testing. The heuristic test model is one (those 34 letters). Another is forward-backward thinking (see chapter 2 of Daniel Solow's book How to Read and Do Proofs). Another is "Plunge in and Quit" which is a way of tackling a problem when you have no plan. The HICCUPP oracle heuristic helps people test without a spec. There are numerous "quick test" methods including shoe tests, playing writer sez, doing complexity and variability tours, invoking sample data, input constraint attacks, using "crazy configs."

But I would say that the methods pale next to the skills. Just as chess methods are paltry compared to chess skills.

Ed made an interesting statement when he said "it's just science." Yes, it is ordinary scientific thinking applied to software testing. What's new about that? Not a whole lot since Socrates, Descartes, Hume, and Popper. But I wager that few people with scientific degrees are actually able to think scientifically about software quality on demand. I base my wager on my extensive experience teaching people how to test. Most people are pretty bad at it, even if they are quite experienced and otherwise well educated. I base my wager, as well, on the prevalence of unscientific and, in my opinion, quite absurd claims about how testing should be done. For example, the claim that a test is not valid unless it has a specific pre-scripted expected result is just absurd. I can sit any of you down in front of a product and you will easily find valid problems in it even though I give you no specification and you can have no specific expectations about it before operating it.

The important innovation in the discipline of ET is that we have discovered how to identify and transfer the constituent skills of testing. This makes ET manageable and controllable. Are there systematic methods/skills of improving testing skills? Yes. Here are some ideas from an article I wrote that is not yet on my website:

The outer trappings, inputs and outputs of exploratory testing are worth looking at, but it is the inner structure of ET that matters most: the part that occurs inside the mind of the tester. That's where ET succeeds or fails; where the excellent explorer is distinguished from the amateur. This is a complex subject. Here are some of its key elements:

Test Design: An exploratory tester is first and foremost a test designer. Anyone can design a test accidentally. The excellent exploratory tester is able to craft tests that systematically explore the product. Test design is a big subject, of course, but one way to approach it is to consider it a questioning process. To design a test is to craft a question for a product that will reveal vital information.

To get better at this: Go to a feature (something reasonably complex, like the table formatting feature of your favorite word processor) and ask thirty questions about it that you can answer, in whole or part, by performing some test activity, by which I mean some test, set of tests, or task that creates tests. Identify that activity along with each question. If you can't find thirty questions that are substantially different from each other, then perform a few tests and try again. Notice how what you experience with the product gives you more questions.

Another aspect of test design is making models. Each model suggests different tests. There are lots of books on modeling (you might try a book on UML, for instance). Pick a kind of model, such as a flowchart, data flow diagram, truth table, or state diagram, and create that kind of model representing a feature you are testing. When you can make such models on napkins or whiteboards in two minutes or less, confidently and without hesitation, you will find that you also are more confident at designing tests without hesitation.
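To make the model-to-tests step concrete, here is a minimal sketch in Python, assuming a purely hypothetical state model of a document editor's save behaviour (the states, actions, and the walks() helper are illustrative inventions, not taken from any real product). Each generated walk is a candidate test sequence:

# Minimal sketch: a hypothetical state model of a document editor's
# "dirty/saved" behaviour, walked to enumerate short test sequences.
# The states and transitions are illustrative assumptions, not a real spec.

MODEL = {
    "clean":  {"edit": "dirty", "save": "clean", "close": "closed"},
    "dirty":  {"edit": "dirty", "save": "clean", "close": "prompt"},
    "prompt": {"save": "clean", "discard": "closed"},
    "closed": {},
}

def walks(model, start, depth):
    """Yield every action sequence of the given length from the start state."""
    def step(state, path):
        if len(path) == depth:
            yield path
            return
        for action, nxt in model[state].items():
            yield from step(nxt, path + [action])
    yield from step(start, [])

if __name__ == "__main__":
    # Each walk is a test idea: perform the actions, then check the title
    # bar, any prompt that appears, and the file on disk.
    for seq in walks(MODEL, "clean", 3):
        print(" -> ".join(seq))

Walking the model by hand on a napkin gives the same benefit; the point is that once the model exists, the tests fall out of it.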

Careful Observation: Excellent exploratory testers are more careful observers than novices, or for that matter, experienced scripted testers. The scripted tester need only observe what the script tells him to observe. The exploratory tester must watch for anything unusual, mysterious or otherwise relevant to the testing. Exploratory testers must also be careful to distinguish observation from inference, even under pressure, lest they allow preconceived assumptions to blind them to important tests or product behavior.

To get better at this: Try watching another tester test something you've already tested, and notice what they see that you didn't. Notice how they see things that you don't, and vice versa. Ask yourself why you didn't see everything. Another thing you can do is to videotape the screen while you test, or use a product like Spector that takes screen shots every second. Periodically review the last fifteen minutes of your testing, and see if you notice anything new.
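If you have no tool like Spector at hand, a small script does the screenshot-a-second job. Here is a minimal sketch, assuming the Pillow package is installed (ImageGrab.grab() works on Windows and macOS; the folder name is an arbitrary choice):

# Minimal sketch: capture roughly one screenshot per second while you
# test, so you can review the last fifteen minutes afterwards.
# Assumes Pillow is installed; stop it with Ctrl+C.

import time
from pathlib import Path

from PIL import ImageGrab

OUT_DIR = Path("test_session_shots")   # arbitrary folder name
OUT_DIR.mkdir(exist_ok=True)

shot = 0
try:
    while True:
        img = ImageGrab.grab()                      # capture the full screen
        img.save(OUT_DIR / f"shot_{shot:05d}.png")  # numbered for easy review
        shot += 1
        time.sleep(1)
except KeyboardInterrupt:
    print(f"Saved {shot} screenshots to {OUT_DIR}/")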

Or try this: describe a screen in writing to someone else and have them draw the screen from your description. Continue until you can draw each other's screens. Ideally, do this with multiple people, so that you aren't merely getting better at speaking to one person.

To distinguish observation from inference, make some observations about a product, write them down, and then ask yourself, for each one, did you actually see that, or are you merely inferring it? For instance, when I load a file in Microsoft Word, I might be tempted to say that I witnessed the file loading, but I didn't really. The truth is I saw certain things, such as the appearance of words on the screen that I recall being in that file, and I take those things to be evidence that the file was properly loaded. In fact, the file may not have loaded correctly at all. It might be corrupted in some way I have not yet detected.

Another way to explore observation and inference is to watch stage magic. Even better, learn to perform stage magic. Every magic trick works in part by exploiting mistakes we make when we draw inferences from observations. By being fooled by a magic trick, then learning how it works, I get insight into how I might be fooled by software.

Critical Thinking: Excellent exploratory testers are able to review and explain their logic, looking for errors in their own thinking. This is especially important when reporting the status of a session of exploratory tests, or investigating a defect.

To get better at this: Pick a test that you recently performed. Ask what question was at the root of that test. What was it really trying to discover? Then think of a way that you could get a test result that pointed you in one direction (e.g. program broken in a certain way) when reality is in the opposite direction (e.g. program not broken, what you're seeing is the side effect of an option setting elsewhere in the program, or a configuration problem). Is it possible for the test to appear to fail even though the product works perfectly? Is it possible for the product to be deeply broken even though the test appeared to pass? I can think of three major ways this could happen: inadequate coverage, inadequate oracle, or tester error. Inadequate coverage means that your test doesn't touch enough of the product to fulfill its goal. (Maybe you have tested printing, but not enough different printing situations to justify confidence that the print function works.) Inadequate oracle means you used a weak method of determining whether a bug is present, and that led either to reporting something that isn't a problem or failing to notice something that is a problem. (Maybe you printed something to a file, and you verified that the file was created, but you didn't check the contents of the file.) Tester error means that your test design was fine, but you simply didn't notice something that happened, or used the wrong data, failed to set up the system properly for testing, etc. (Maybe you saw that the print-out looks correct, but it later turned out that you were looking at the results of a different test).
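To make the "inadequate oracle" case concrete, here is a minimal sketch in Python; print_report_to_file() stands in for whatever the product under test actually does, and the expected lines are invented for the example. Checking only that the output file exists is a weak oracle; checking its contents is stronger, though still far from complete:

# Minimal sketch of a weak versus a stronger oracle for a hypothetical
# "print report to file" feature.

from pathlib import Path

def weak_oracle(path: Path) -> bool:
    # Only checks that the file was created; it would happily "pass"
    # an empty or corrupted report.
    return path.exists()

def stronger_oracle(path: Path, expected_lines: list[str]) -> bool:
    # Also checks that the expected content is present. Still incomplete:
    # it says nothing about formatting, encoding, or extra garbage text.
    if not path.exists():
        return False
    text = path.read_text(encoding="utf-8", errors="replace")
    return all(line in text for line in expected_lines)

if __name__ == "__main__":
    report = Path("report.txt")
    # print_report_to_file(report)   # hypothetical call to the product under test
    print("file exists:", weak_oracle(report))
    print("content ok: ", stronger_oracle(report, ["Quarterly totals", "Grand total"]))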

Since testing is basically an infinite process, all real life testing involves compromises. Thus, you should be able to find many ways your tests could be fooled. The idea is to maintain awareness about the limitations of your testing. For a typical complex product, it takes lots of different tests to answer any given question with high confidence.

Diverse Ideas: Excellent exploratory testers produce more and better ideas than novices. They may make use of heuristics to accomplish this. Heuristics are mental devices such as guidelines, generic checklists, mnemonics, or rules of thumb. The Satisfice Heuristic Test Strategy Model (http://www.satisfice.com/tools/satisfice-tsm-4p.pdf) is an example of a set of heuristics for rapid generation of diverse ideas. James Whittaker and Alan Jorgensen's "17 attacks" is another (see How to Break Software).

To get better at this: Practice using the Heuristic Test Strategy Model. Try it out on a feature of some product you want to test. Go down the lists of ideas in the model, and for each one think of a way to test that feature in some way related to that idea. Novices often have a lot of trouble doing this. I think that's because the lists work mainly by pattern matching on past experience. Testers see something in the strategy model that triggers the memory of a kind of testing or a kind of bug, and then they apply that memory to the thing they are testing today. The ideas in the model overlap, but they each bring something unique, too.

Another exercise I recommend is to write down, off the top of your head, twenty different ways to test a product. You must be able to say how each idea is unique among the other ideas. Because I have memorized the heuristic test strategy model, when I am asked this question, I can list thirty-three different ways to test. I say to myself "CIDTESTDSFDPOCRUSPICSTMPLFDSFSCURA" and then expand each letter. The second letter stands for information, which represents the idea "find every source of information I can about this feature and compare them to each other and to the product, looking for inconsistencies." The "O" stands for operations, which represents the idea "discover the environment in which the product will be used, and reproduce that environment as close as I can for testing."
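One way to practice with such a mnemonic is to keep it as a plain checklist you walk through against today's feature. The sketch below fills in only the two expansions given above (information and operations); the remaining letters belong to the Heuristic Test Strategy Model linked earlier and are deliberately left out here:

# Minimal sketch: guidewords kept as data, walked against a feature.
# Only the two expansions given in the e-mail are filled in.

GUIDEWORDS = {
    "I": ("Information",
          "Find every source of information about this feature and compare "
          "them to each other and to the product, looking for inconsistencies."),
    "O": ("Operations",
          "Discover the environment in which the product will be used, and "
          "reproduce that environment as closely as you can for testing."),
    # ... remaining letters omitted; see the Heuristic Test Strategy Model.
}

def walk_checklist(feature: str) -> None:
    """Print one prompt per guideword for the given feature."""
    for letter, (name, prompt) in GUIDEWORDS.items():
        print(f"[{letter}] {name} / {feature}: {prompt}")

if __name__ == "__main__":
    walk_checklist("table formatting")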

Rich Resources: Excellent exploratory testers build a deep inventory of tools, information sources, test data, and friends to draw upon. While testing, they remain alert for opportunities to apply those resources to the testing at hand.

To get better at this: Go to a shareware site, such as Download.Com and review the utilities section. Think about how you might use each utility as a test tool. Visit the web sites related to each technology you are testing and look for tutorials or white papers. Make lots of friends, so you can call upon them to help you when you need a skill they have.

Self-Management: Excellent exploratory testers manage the value of their own time. They must be able to tell the difference between a dead end and a promising lead. They must be able to relate their work to their mission and choose among the many possible tasks to be done.

To get better at this: Set yourself a charter to test something for an hour. The charter could be a single sentence like "test error handling in the report generator". Set an alarm to go off every fifteen minutes. Each time the alarm goes off, say out loud why you are doing whatever you are doing at that exact moment. Justify it. Say specifically how it relates to your charter. If it is off-charter, say why you broke away from the charter and whether that was a well-made decision.
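As a minimal sketch of that alarm, assuming nothing more than a console and Python's standard library (the charter string is just the example from above, and the interval and session length are the ones suggested):

# Minimal sketch: a console timer that interrupts every fifteen minutes
# and asks you to justify, out loud, what you are doing right now.

import time

CHARTER = "test error handling in the report generator"   # example charter from above
INTERVAL_MINUTES = 15
SESSION_MINUTES = 60

def run_session() -> None:
    print(f"Charter: {CHARTER}")
    for n in range(1, SESSION_MINUTES // INTERVAL_MINUTES + 1):
        time.sleep(INTERVAL_MINUTES * 60)
        print(f"\a[{n * INTERVAL_MINUTES} min] Say out loud what you are doing "
              "right now and how it relates to the charter.")
    print("Session over: report what you tested, what you found, and what got in the way.")

if __name__ == "__main__":
    run_session()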

Rapid Learning: Excellent exploratory testers climb learning curves more quickly than most. Intelligence helps, of course, but this, too, is a matter of skill and practice. It's also a matter of confidence: having faith that no matter how complex and difficult a technology looks at first, you will be able to learn what you need to know to test it.

To get better at this: Go to a bookstore. Pick a computer book at random. Flip through it in five minutes or less, then close the book and answer these questions: what does this technology do, why would anyone care, how does it work, and what's an example of it in action? If you can't answer any of those questions, then open the book again and find the answer.

Status Reporting: Tap an excellent exploratory tester on the shoulder at any time and ask, "what is your status?" The tester will be able to tell you what was tested, what test techniques and data were used, what mechanisms were used to detect problems if they occurred, what risks the tests were intended to explore, and how that related to the mission of testing.

To get better at this: Do a thirty-minute testing drill. Pick a feature and test it. At the end of exactly thirty minutes, stop. Then, without the use of notes, say out loud what you tested, what oracles you used (mechanisms or principles for recognizing a problem if it occurred), what problems you found, and what obstacles you faced. In other words, make a test report. As a variation, give yourself ten minutes to write down the report.

 
