Tuesday, March 17, 2015

When there's testers and testers

Last week, I delivered a talk at Scan Agile Conference (you can see it here, though the timing of the slides against the video isn't well synced). From the discussions that followed the talk, I learned something.

There's testers and testers

At an agile conference, I talk mostly to an audience who are not testers. Most of them, though, have experience of testers. Experience of testers who are as far from what I describe as testing as one can imagine.

The majority of experiences of "testers" people describe are very script and plan oriented. These testers may create the scripts themselves or expect the answers to be handed to them as ready-made scripts. They have a list of "test cases" to run and an idea of expected results to monitor. There's a small degree of exploration in the sense of pointing out problems other than what the script mentions, should they run into them. And some even take steps away from the script, finding a little more. But essentially, scripts still drive them.

When these types of "testers" enter agile and its expectation of fast feedback, the experience is that they struggle. They always run out of time, starting their pile of scripts from the start, middle or end. Their ways of choosing what to run first may seem disconnected from what is actually happening, mostly because those who look at their work, "the developers", feel these people never come and ask what is going on, what was changed. They may or may not have other means of knowing how to adjust, but the feeling is they don't. They just keep repeating the same manual checks. And they deliver feedback too slowly to be useful at the agile pace.

And then there's the other kind of testers. Testers I like to call exploratory testers. Testers who put together testing and checking in a smart package at the level of the team, and are able to deliver useful feedback and compensate for the technical practices a team may lack, enabling delivering quickly, even continuously, without automation.

What should we call ourselves then?

Very few testers I personally know are truly in the first category. I think a lot of the negative feelings towards these "testers" come from the fact that organizations still don't understand how to enable good testing, and culturally drive great people into stupid scripting and avoiding communication. When you've absorbed a culture like that for decades, changing may require effort some people can no longer muster. But a lot of the "testers" I know are just waiting for permission to do great work.

There are forces driving "testers" into dumbing down their work: the ISTQB certification scheme in particular. Old-fashioned ways of managing. Ideas of test cases, of testing as the creation of artifacts instead of seeing it as the performance it is.

To distinguish how testers like me (liberated from old-style management, as I can explain why and how things should be done) are different from "testers", I've chosen to call myself a skilled exploratory tester.

Renaming the others to checkers?

I talked about the idea that my community (context-driven) separates checking from testing, where checking is fact-checking with programmable decision rules. That leads to the idea that a lot of the "testers" people seem to have met are manual checkers.

However, when you already have an idea in your mind of what testing is, me changing your word to checking just won't work. Instead, offering a new concept for what I'm adding to the common world view seems to help more with communication, in my experience.

Deprecation of the term Exploratory testing

I love the article James Bach and Michael Bolton just posted on Exploratory Testing 3.0, where in the context of Rapid Software Testing terminology, exploratory testing is just testing. But instead of trying to transform the words people around me use into this coherent set from a testing point of view, I'm still inclined towards going with Exploratory Testing 2.0. There are more non-testers than testers around software, and I find fighting for the testing craft's chosen vocabulary a battle I might not want to spend time on.

I would use this:
"We now recognize that by “exploratory testing”, we had been trying to refer to rich, competent testing that is self-directed." - James Bach and Michael Bolton
I can't (perhaps yet, perhaps ever) define all testing as exploratory. I still see the pace of adapting as a major difference. The more we have scripted, the more likely we seem to rely on those scripts when working under the schedule pressure of releasing.

I see too much ISTQBish non-exploratory testing all around me: creation and maintenance of artifacts that don't drive focus in a positive manner (opportunity cost...). And I see a lot of smart automated checks around me. The latter make the former obsolete, I hope. But there's still the exploratory approach of learning while working on the software, and that needs to stay. Mixing that up with what people think they know as testing done by "testers" does not seem to help me with what I'm trying to get across.

So what?

I started this post by saying I learned something. I learned two things:

  • There's more to what I do as a tester than "finding unknown unknowns". I model the system and communicate with the developers (even spying through version control) to adapt and redesign all my tests every time I test, so I can provide useful results quickly. With this, we are able to have a team without great agile technical practices and still safely deliver continuously (daily) to production. 
  • While what I do is really just skilled testing, the majority of people have a different idea. Giving what I do a different name helps them see the difference from what they've come to know as "testers'" work.