Saturday, December 24, 2016

Throw one away to learn

I sit in front of a computer, testing Alerts. It's a feature where something happens on the client side and triggers a dialog for the user, but presenting the dialog to the user is not enough: there's also an admin somewhere, supporting the user, who needs to know and understand what went on without showing up behind the user's back just to understand. I keep triggering the same thing, and as I move back and forth between my points of observation, I make notes of things I can change. I test the same thing to learn around it, to keep all my mental energies focused on learning.

Around the same time, my test automation engineer colleague creates a script that does pretty much the same thing without human interference. Afterwards she has only this one script, I have 200 tests, and I call her test shallow, offending her. It's a good start, but there's more to testing of the feature than the basic flow.
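To make "shallow" concrete, her check might look something like the sketch below. This is a hypothetical Python/Selenium example: the URLs, element ids and the admin page text are made up for illustration, not our real system. It verifies the basic flow, that triggering the alert shows the dialog and that the admin side records it, and nothing more.

    # Hypothetical check of the basic Alerts flow: trigger, dialog shown, admin notified.
    # URLs, element ids and the expected admin text are illustrative placeholders.
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    def test_alert_basic_flow():
        driver = webdriver.Firefox()
        try:
            driver.get("https://app.example.com")
            driver.find_element(By.ID, "trigger-alert").click()

            # The user sees the dialog...
            dialog = driver.find_element(By.ID, "alert-dialog")
            assert dialog.is_displayed()

            # ...and the admin view knows about it without anyone looking over the user's shoulder.
            driver.get("https://admin.example.com/alerts")
            assert "alert shown" in driver.page_source
        finally:
            driver.quit()

One script like this keeps the happy path covered on every run; my 200 variations around the same feature are where the new learning happens.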

The main purpose of the testing I do (exploratory testing) is learning. The main purpose of the testing my test automation engineer colleague does is long-term execution of checks on previous learning. I believe that while keeping track of previous learning is relevant, the new learning is more relevant. Bug "regression" does not just happen. It happens for a reason, and I love the idea of building models that make my developers think I have magical superpowers for finding problems they (or I) couldn't imagine before.

Last night, I listened to a great podcast interview with Angie Jones on Hanselminutes, and I was triggered by Scott Hanselman repeating the idea that you should automate everything you do twice. With that mindset, me repeating the same thing a hundred times to feel bored enough to pinpoint the little differences I want to pay attention to would never get done. Angie saying "we can automate that" and facing resistance from people like me has so many more layers than the immediate fear of being replaced. It has taken me a lot of time to learn that you can automate that and I can still do it manually for a very different purpose, even with people around me saying I don't need to do it manually when there is automation around.

My rule is: automate when you are no longer learning. Or learn while automating, in whatever frame you need for the learning. My biggest concern with test automation engineers is where their learning focuses: on the automation toolset over the product, and the gap in quality information that can create if the two are not balanced.

This morning, I continued by listening to Brian Marick on useful experimentation, with two programming examples. He talked about the Mikado Method, where you throw away code while refactoring with tests, with the idea of learning by doing before doing the final version. And he talked about Corey's technique of rewriting the code for a three-day-sized user story until, through learning from the previous rounds, the code writing fits into one day of coding. Both methods emphasize learning while coding and making sure the learning that happens ends up in the end result that remains.

I can immediately see that this style of coding has a strong parallel to exploratory testing. In exploratory testing, you throw your tests away only to do them again, to use the learning you acquired through doing the same test before, except that it is not the same test once the learning comes into play. Exploratory testing is founded on deliberate learning, planning to throw one (or more) away while we learn. So before suggesting we should "automate that", consider whether the thing you think of automating is the thing I think I'm doing.

I have three framing suggestions. For the people who propose "we can automate that": make it visible that you understand you can automate some part of what we are doing, and show appreciation for the rest of it too. For the people who react negatively to "we can automate that": consider welcoming the automation and still doing the same thing you were about to do. For the people who actually think they are doing the same thing by hand without understanding what they are learning in the manual process: learn more about learning while you're testing. The keyword for that is exploratory testing.