This is an unordered repository of a few of James Bach's (and one of Jonathan's) conference presentations. Introductions written by James Bach.
You may be an experienced tester, but being an expert tester is more than that. In this presentation, I lay out some of the issues, as I see them.
First presented: STAR Conference
There are no best practices in the world of software development. This is easy to prove. So, every time you say "best practices" in a non-ironic and non-critical context, your credibility drops a little bit. Don't do that.
There are good practices in context, and in very limited and strictly specified contexts you might be able to make a best practice claim if you have a theorem to back it up. But you can't claim to be a responsible methodologist and say to someone else, whose context you have not investigated, "here is a best practice." I realize a lot of people do it. And there is no law against sloppy speech, but this kind of sloppy speech gives process improvement a bad name.
I've been told by one industry guru that he simply means "damn good idea" when he talks about best practices. A moment of reflection should suffice to realize that saying "damn good idea" has all the disadvantages of "best practice," except that the use of the word "damn" indicates a reckless and emotional attitude, which is a bit more honest, at least.
Remember what Mark Twain said about hyperbole: "Substitute 'damn' every time you're inclined to write 'very'; your editor will delete it and the writing will be just as it should be."
First presented: British Computer Society SIGST, 12/03
A lot of test automation is a quagmire. To fix the quagmire, get agile. Here's one vision of how to use an agile toolsmith (or toolsmiths), paired with testers, to get real test automation results really fast.
First presented: STAR '00 West, 10/00
This talk is about how to explain software testing to influential people who aren't testers.
First presented: STAR '00 West, 10/00
This presentation was given by Jonathan Bach. It covers a new method of measuring exploratory testing (sometimes called ad hoc testing). Jonathan and I invented this method, then Jon tried it on a real project for four months. We're still experimenting with it.
First presented: STAR '00 West, 10/00
I've been talking about exploratory testing for years. In this talk, I present my best ideas about the topic, as of October 2000.
First presented: 3rd Software Practitioners Conference, 3/96
A hero in the mythic sense is someone who accepts the call to adventure, leaves the world of the known, discovers something valuable in the world of the unknown, and returns with that prize (Joseph Campbell's "hero's journey"). In the software world, I think a hero is someone who, for the general good, takes initiative to solve ambiguous problems. The reason why heroism is so important is that no software project (or any other intellectual endeavor) can do without it, yet no manager can compel it. Heroic acts are always optional. In concrete terms: just because your process says that a particular review should be done does not mean it will be done well, unless the people involved choose to do it well.
This talk covers some of the implications of heroism for software quality assurance processes.
First presented: STAR '99 East, 5/99
This talk presents a device I used at Aurigin Systems, Inc. to explain test cycle status to management and development. It's based, in turn, on earlier incarnations I developed at Borland. I have since introduced this method to several other companies, and seen variations of it in use in still other places.
Dashboards are a widespread tool in project management. What I present here is a particular formulation of a testing dashboard that meets a particular set of needs. Chief among them: the need to educate project staff and management on the basic structure and dynamics of a test process; the need to communicate actionable information, not merely a bunch of numbers.
A practical technique, such as a dashboard, is shaped in response to a variety of forces. Here's a puzzle for you. Email me if you think you know the answer: The original dashboard used disks of red, green, or yellow, drawn on the whiteboard, to express quality status. Why did we switch to using red frowny faces, green happy faces, or yellow straight faces? (Hint: it has nothing to do with colorblindness, although that's also a good reason.)
First presented: CSEE&T, 03/00
This presentation is a response to the notion that our field is "mature" enough to define and license itself. I don't think so. I think we don't know who we are and we aren't talking to each other. As a field, we're still in our early teens.
First presented: Client Site, 11/98
This presentation is actually a 2.5 hour class in creating a test strategy. I use a commercial application, called DecideRight, as an exercise. DecideRight is a product that helps you make decisions by analyzing factors and options you supply to it. It's interesting enough, yet simple enough, to make it a good classroom example.
Test strategy is one of the top ten most basic ideas in software testing: how to cover an application and assess quality. Yet it's usually given short shrift in test plans that otherwise wax loquacious about staffing, schedule, and equipment needs. The ability to design and articulate test strategies is a skill that must be developed, rather than an innate skill that all humans have (e.g., the ability to build bad software).
First presented: ICTCS '97, 6/97
This presentation is my attack on the dangerously simplistic way that some GUI test tool companies peddle their wares. They lead their clients down a red carpet to waste and frustration by making reckless claims about the wonders of test automation, while downplaying all the problems associated with it. They give test automation a bad name.
I believe in responsible and useful test automation, so this presentation (and its companion article, not yet online) debunks the common "rah rah" surrounding the subject in the hope that you won't be played for a chump.
One example: Contrary to the implication of typical marketing literature for GUI test tools, automated testing is not the same as automated manual testing. What observant humans do when they go through a test process is in no way duplicated or replaced by test automation, because automation cannot be aware of all the hundreds of different failure modes that a human can spot easily. I have to explicitly program automation to look for suspicious flickers and performance problems, but with humans I can say "be alert for anything strange." To paraphrase Yogi Berra, people can observe lots of problems just by looking. My automation typically finds few problems (I like automation for reasons other than finding problems, but the fact remains that somehow we need to find those problems, and that somehow is generally not through GUI test automation).
First presented: PC Database Summit 8/95
This presentation presents five basic ideas, not commonly understood, that all developers need to understand if they are going to work productively with testers, or otherwise produce consistently good products.
First presented: STAR Conference, 2008
Our craft is obsessed with test cases. I think it's time that we push back. Testing is the activity that testers do. It is not a bunch of files, documents, or any other artifacts. When we think of testing only in terms of test cases, we cheapen our craft.
First presented: STAR East Conference, 2007
This presentation covers how I would really, truly, and actually fake a test project, if I wanted to look productive while doing the minimum amount of useful work. I'm not recommending that anyone do this, unless they've been kidnapped by terrorists and forced to test a doomsday device. However, the point is, I would fake testing mainly by following the advice that many consultants give about how to do testing well!
First presented: PNSQC Conference, 2009
People keep talking about rigor. Rigorous engineering is supposed to be a good thing. But I think rigor is terribly misunderstood. Rigor is meaningless unless you actually know what is right and wrong. Rigor is what comes after research and exploration of many possible ways of working. I propose that many of us would be better off applying rigor to our learning processes, instead.
First presented: SAST 15th Anniversary Conference, 2010
This is my review of the last ten years of the testing field and my look forward to the next ten.
First presented: Let's Test, 2013
This is my detailed explanation and analysis of Context-Driven Testing principles.