Monday, October 16, 2017

Innocent until proven guilty

I read a post, Four Episodes of Sexism at Tech Community Events, and How I Came Out of the (Eventually) Positive, and while all the accounts are all too familiar, there is one aspect that I feel strongly about. Story #3 recounts:
It takes me two years to muster the confidence to go to another tech event.
The lesson here is that it is ok to remove yourself from situations where you don't feel comfortable. For many people, not showing up at all is a very real option, because someone can make us feel uncomfortable in ways that matter.

I hate the ways people report being made to feel uncomfortable. And I particularly hate it when someone's report of being made uncomfortable gets dismissed or belittled by conference organizers because of a belief that the "offenses" are universally comparable. That alleged perpetrators are always innocent until proven guilty. This idea is what, in word-against-word situations of unequal power, allows the bad behaviors to continue.

There will not be clear-cut rules of what you can and cannot do at conferences to keep the space safe. Generally speaking, it is usually better to err on the side of safety. So if you meet someone you like beyond professional interests at a professional conference, not expressing that interest is the safe side.

Some years ago, I was at a conference where someone left halfway through because of someone else's bad behavior. I have no clue what the bad behavior was, and yet I side with the victim. For me, it is better to err on the side of safety again, and in a professional context reports like this don't get made lightly. Making false claims is not a common way of getting rid of people, even if "innocent until proven guilty" gets recounted as though it were.

We will need to figure out good ways of mediating issues. Should a sexist remark cost you a place in the conference you've paid for - I think yes. Should a private conversation that others overhear and are made uncomfortable by cost you a place in the conference you've paid for - I think yes. On some occasions, an apology could be enough of a remediation, but sometimes protecting the person who was made to feel unsafe takes priority, and people misbehaving don't get automatic access to knowing who to get back at for potential retaliation. It's a hard balance.

The shit people say leaves its marks. I try not to actively think of my experiences, even to forget them. I don't want to remember how often saying no to a personal advance has meant losing access to a professional resource. I don't want to remember how I've been advised on what clothing to wear while speaking in public. I don't want to remember how my mistakes have been attributed to the whole of my gender. There's just so much I don't want to remember.

Consequences of bad behaviors can be severe. Maybe you get kicked out of a 2000-euro conference. Maybe you get fired from your job. Maybe you get publicly shamed. Maybe you lose a bunch of customers.

Maybe you learn and change. And if you do, hopefully people acknowledge the growth and change.

If you don't learn and change, perhaps excluding one person so as not to exclude others is the right thing to do.

In professional settings we don't usually address litigation, just the consequences of actions and actions to create safer spaces. Maybe that means taking the person who steps forward feeling offended seriously, even when there is no proof of guilt.

I don't want people reporting years of mustering the confidence to join the communities again. And even worse, many people reporting that they never joined the communities again, leaving the whole industry. I find myself regularly on the verge of that. Choosing self-protection. Choosing the right to feel comfortable instead of being continuously attacked. And I'm a fighter.

Saturday, October 14, 2017

Caring for Credit

The last three years have been a major source of introspection, as I've taken on the journey of becoming (more) comfortable with pairing and mobbing. Every time someone hints that I want to pair to "avoid doing my own work", I flinch. I hear the remark echoing in my head, emphasizing my own uncertainties and experiences. Yet I know we do better together, and fixing problems as they are born is better than fixing them afterwards.

A big part of the way I feel is the way I was taught to feel while at university. As one of the 2% women minority studying computer science, I still remember the internal dialogue I had to go through. The damage done by the few individuals telling me that I probably "smiled my way through programming classes", making me avoid group work and feel the need to prove that my contribution in a group was anything more than just smiling. And I remember how those experiences reinforced the individual contributor in me. Being a woman was different, and I took every precaution I could to be awesome as much on my own as I could. If I ever did not care for doing more than others, that would immediately backfire. And even if I did care, I could still hear the remarks. I cared then, I still do. And I wish I wouldn't.

My professional tester identity is a way of caring for credit. It says which, of all the things I do, is so special that I would like it to be separately identified. It isn't keeping me in a box that makes me do just testing, but it says that that is where I shine. That is where most of my proud moments come from. Yet it says that I'm somehow a service provider, probably not in the limelight of getting credit, and I often go back to liking the phrase:
The best ideas win when you care about the work over the credit.
I want to create awesome software, and be recognized for my contributions to it. Yet I know that my need for recognition is a way of not letting the best ideas win - and so is anyone else's need for recognition.

As a woman, the need for attribution can get particularly powerful. If you're asked about the great names in software, most people don't start listing women - even if that has recently changed in my bubble, which talks about awesome people like Ada Lovelace, Margaret Hamilton, and Grace Hopper. And looking a little beyond into science, listing women becomes even less of a thing.

The one man we generally tend to think of first in science is Einstein. Recently I learned that he had a wife who was also a physicist and contributed significantly to his groundbreaking science. He did not bring her significant contributions to the attention of the general public. Meanwhile, Marie Curie is another name we'd recognize, and the recognition is tied to her because her (male) colleagues actively attributed the work to her.

Things worth mentioning are usually a result of group work, yet we tend to attribute them to individuals. When we eat a delicious cake, we can't say if it was great because of the sugar, the eggs or the butter. All were needed for the cake to become what it is. Yet in creating software products, we tend to have one ingredient (programming) take all the credit. Does attribution matter then? 

It matters when someone touts "no women of merit" just for not recognizing the merited women around them. It matters when people's contributions are assessed. Reading recent research saying that women researchers must publish without men to get attribution, and thus tenure, made me realize how much of the world I was taught in at school still exists.

People are inherently attribution-seeking - we wish to be remembered, to leave our mark, to make a difference. A great example of this is the consideration of why there are no baby dinosaurs - leading to the realization that 5/12 of the identified species are actually just adolescent versions of their adult counterparts.

From all of my talks, the bit that always goes viral is an adaptation of James Bach's saying:
I've lived this for years and years, and built a whole story of illusions I've broken, driving my tester identity through illusion identification. Yet, I will always be the person popularizing past sayings. 

Caring for credit, in my experience, does more harm than good. But that is what humanity is built around. Take this as a call to actively share the credit, even when you feel a big part of it should belong to you. We build stuff together. And we're better off together.

Friday, October 13, 2017

What Does a Product Owner Do?

As an awesome tester, I find myself very often pairing with a product owner on figuring out how to fix our ways of working so that we have the best chances of success as we discover the features while delivering them. My experience has been that while a lot of the automation-focused people pair testers up with developers, pairing on risk and feedback with the product owner can be just as (if not more) relevant.

Over the years, I've spent my fair share of time shaping up my skills of seeing illusions from a business perspective, and dispelling them hopefully before they do too much damage. Learning to write and speak persuasively is part of that. I've read tons of business books and articles, and find that the lessons learned from those are core to what I still do as a tester.

I find that a good high-level outline of the skill areas I've worked on is available in the Complete Product Owner poster. Everything a product owner needs to know is useful in providing testing services.
Being a Product Owner sure is a lot of work! - William Gill

In preparation for a "No Product Owner" experiment, I made a list of some of my expectations of what a product owner might do (together with the team).

What Does a Product Owner Do?
  • has and conveys a product vision
  • maintains (creates and grooms) a product backlog - the visible (short) and the invisible (all wishes)
  • represents a solid understanding of the user, the market, the competition and future trends
  • allows finishing things already started, at least for a pre-agreed time-box
  • splits large stories to smaller value deliveries
  • prepares stories to be development-ready (discovery work)
  • communicates the product status and future to other stakeholders
  • engages real customers and acts as their proxy
  • calculates ROI before and after delivery to learn about business priorities
  • accepts or rejects the work done
  • maintains conceptual and technical integrity of the product 
  • defines story acceptance criteria
  • does release planning
  • shares insights about the product to stakeholders through e.g. demos 
  • identifies and coordinates the pieces of overall customer value coming from other teams
  • ensures delivery of product (software + documentation + services) 
  • responds to stakeholder feedback on wishes of future improvements and necessary fixes
  • explains the product to stakeholders and drives improvement of documentation that enables support to deal with questions
These people are busy, and can use help. How much of your testing is making the life of a product owner easier? 

Thursday, October 12, 2017

Run the code samples in docs

I was preparing for a session on exploratory testing at a conference, wanting to make a point of how testing an API is just like testing a text field. Even the IDE you use gives you keyword recognition based on just a few letters, and whatever values you pass in, it is a guided activity. The thinking around those fields is what matters. And as I was preparing, I went to my favorite test API and was delighted to notice that since the pain of the public testing sessions, there was now some basic getting-started documentation with examples.

I copy-pasted an example around approving arrays into the IDE, and things did not go as I would have expected. The compiler was giving me errors, and I could barely extend my energy to see what the error messages were about. There were simple errors:
  • a line of code was missing semicolon
  • a variable a was introduced, yet where it was used it had been renamed to x
As a proper tester, I was more happy than sad with the lack of quality in the documentation, and caused a bit of pain to the poor developer by asking them not to fix it for a few hours, so that I could have other testers also see how easy finding problems in a system is when documentation is part of your system. I think the example worked nicely to encourage anyone to explore an API together with its documentation.

The cause of the problem I saw was that the code sample had never really been executed. And even if it had been executed once, over time it could break with changes, as it wasn't executed together with the code.

A lot of times, we think of (unit) tests as executable documentation. They stay true to the functionality if we keep making them pass as we change the software. Tests work well to document drivers. But for documenting frameworks, you need examples of how the framework calls you. It makes sense to write the examples so that you can run them - whether they are tests or another form of documentation.
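
If I were to sketch the idea, it might look something like this, assuming JUnit 5 and a made-up class called Greeting standing in for whatever the documentation describes (none of this is the actual API from my story): the exact lines you would paste into the getting-started guide live inside a test, so a missing semicolon or a renamed variable fails the build instead of failing the first reader.

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import org.junit.jupiter.api.Test;

    // A sketch of keeping a documentation example executable.
    // Greeting is a made-up class standing in for whatever the docs describe.
    class GettingStartedExampleTest {

        static class Greeting {
            String forName(String name) {
                return "Hello, " + name + "!";
            }
        }

        @Test
        void gettingStartedExampleStaysRunnable() {
            // These are the exact lines published in the getting-started guide.
            Greeting greeting = new Greeting();
            String message = greeting.forName("world");

            // The assertion keeps the documented output honest as the code changes.
            assertEquals("Hello, world!", message);
        }
    }

Whether the snippet lives in a test like this or gets extracted from the docs by tooling, the point is the same: the sample is compiled and executed together with the code, so it can't silently rot.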

Documentation is part of your API. Most of us like to start with an example. And most of us will choose something else, if possible, when your documentation isn't helpful. Keep it running.

Calling bs on the pipeline problem

Yesterday was Women in Tech day in Finland. After the appropriate celebrations and talks, today my feeds are filled with articles and comments around the pipeline problem. I feel exhausted.

The article I read around getting girls into the industry quotes 23% women in the industry now. Yet look at the numbers in relevant business and technical leadership positions. One woman among 5-6 people in the leadership groups isn't 23%. No women as head technical architects is even further from 23%. And don't start telling me that there are no women of merit. Of course there are. You might just not be paying attention.

In the last week, I've personally been challenged with two things that eat away at my focus on being amazing and technical.

First of all, I was dodging a "promotion" into a dead-end middle management position. How would that ever make me the head technical architect I aspire to be? Yes, women with emotional intelligence make strong managers. But we also make excellent technical leaders.

Second, I was involved in a case in the community where harassment got someone fired. It has been extremely energy-draining, even though I was just in a support role.

Maybe having to deal with so much of this extra emotional labor is what makes some people again think less of my merits. And I'm getting tired of it.

We talk of the pipeline problem, of how little girls don't take an interest in computers and programming. If they look ahead to their role models, they see women fighting for their advancement and mere existence. The pipeline leaks, and almost everyone who is in it is regularly considering an exit, just to get rid of the attitudes that the ones with more privilege don't have to deal with.

How about improving things for future generations by focusing on the current one, so that we can honestly tell our little girls that this is an industry worth going for? It is the industry of the future, and we're not going to leave it, but a little bit of support for the underdogs would be nice.

When I do keynotes at conferences, I get the question "are you here to watch your kids while your husband speaks" from the other female keynoter's husband. I get the question "you're one of the organizers?" when most of the organizers are women. And yet in the same places I get men telling me that there is no difference in how we are treated.

Just pick 50% women of potential into the relevant groups we all want to reach. Those 50% of women are not going to be worse than the men you're selecting now. Those positions help them realize their full potential. And showing that this industry is more equal might just help with the beginning of the pipeline too. Because then the little girls won't only have a dad who makes sure they get interested in math and STEM; they'll have a mom who could be more, too.

Sunday, October 1, 2017

Machine Learning knows it better: I’m a 29-year-old man

Machine Learning (ML) and Artificial Intelligence (AI) are the hit theme in testing conferences now. And there are no entry criteria on who gets to speak on how awful or great that is for the future of testing, the existence of testers, and the test automation we are about to create. I’m no different. I have absolutely no credentials other than fascination to speak on it.

The better talks (to me - always the relative rule) are the ones where we look into how to decompose the testing problems around these systems. If the input to the algorithm making decisions changes over time, the ideas we have on deterministic testing with the same inputs just won’t make sense unless we control the input data. And if we do, we are crippling what the system could have learned, and allowing it to provide worse results. So we have a moving target on the system level.

My fascination with these systems has led me to play with some that are available.

I spent a few hours on the Microsoft Azure Cognitive Services API for sentiment analysis, inspired by a talk. As usual, I had people work with me on the problem of testing, and was fascinated by how differently people modeled the problem. I had programmers who pretty much refused to spend time testing without getting a spec of the algorithm they could test against. I had testers who quickly built models of independent and dependent variables in input and output, and dug in deeper to see if their hypotheses would hold, designing one test after another to understand the system. Seeing people work to figure out whether the teaching data set is fixed or grows through use was fascinating. And I had testers who couldn’t care less about how it was built but focused on whether it would be useful and valuable given different data.
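
A minimal sketch of what that modelling can look like, under assumptions of mine: the scorer here is a stand-in lambda so the sketch runs on its own, and each pair of inputs varies a single independent variable (negation, intensity, typos, emoticons, language) while we watch what the dependent variable, the score, does. None of this is the actual Azure API.

    import java.util.List;
    import java.util.function.Function;

    // A sketch of hypothesis-driven exploration of a sentiment-scoring service.
    // The scorer below is a stand-in; in a real session it would call the service under test.
    public class SentimentExploration {

        public static void main(String[] args) {
            // Fake scorer so the sketch runs stand-alone: anything with "not" scores low.
            Function<String, Double> score =
                    text -> text.toLowerCase().contains("not") ? 0.2 : 0.8;

            // Each pair varies one independent variable at a time.
            List<List<String>> hypotheses = List.of(
                    List.of("I love this product", "I do not love this product"),   // negation
                    List.of("This is good", "This is very very good"),              // intensity
                    List.of("This is great", "This is grea"),                       // typos
                    List.of("Fine :)", "Fine :("),                                  // emoticons
                    List.of("The service was excellent", "Palvelu oli erinomaista") // language
            );

            for (List<String> pair : hypotheses) {
                for (String input : pair) {
                    System.out.printf("%.2f  %s%n", score.apply(input), input);
                }
                System.out.println();
            }
        }
    }

The numbers a stand-in produces are meaningless; the value is in the list of hypotheses and in checking whether the score moves the way you predicted once the real service is behind the call.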

I also invested a moment of my time to learn that I’m a 29-year-old man based on my twitter feed. This result was from the University of Cambridge Psychometric Centre’s service http://applymagicsauce.com. The result is obviously off, and just a reminder that “6 million volunteers” isn’t enough to provide an unbiased data set for a system like this to learn from. And 45 peer-reviewed scientific articles add only a “how interesting” to the reliability of the results.

My concern with ML/AI isn’t whether it will replace testers. Just for the argument’s sake, I believe it will. Since I started mob testing with groups, my perspective on how well testers actually perform in the large has taken a steep dip, and I base my hope for testers’ future usefulness on their interest in learning, not on what they perform today. The “higher function thinking” in testing exists, but is rarer than the official propaganda suggests. And the basic things won’t be that impossible to automate.

My concern with ML/AI is that people suck. By this I mean that people do bad things given the opportunity. And with ML/AI systems, as the data learned over time changes the system, we can corrupt it both in the creation of the original model and in using it while it is learning. We need to fix people first.

People with time and skills sometimes use their time on awful problem domains. There must be a better reason than “we could do this” to create a system that quite reliably guesses people’s sexual orientation from pictures, opening a huge abuse vector.

The people who get to feed data into the learning systems take joy in making the systems misbehave, and not in the ways we would think testing does. Given a learning chat bot, people first teach it to swear and spout racist and sexist slurs, and force the creators to shut it down.

Even if the data feeding was openly available for manipulation, these systems tend to multiply our invisible biases. The problems I find fascinating are focused around how we in the tech industry will first learn about the biases (creating systems in diverse groups, hint hint), fix them in ourselves to a reasonable extent, and then turn the learnings into systems that are actually useful without significant abuse vectors.


So if there is a reason why the tester role remains, it is that figuring out these problems requires brain capacity. We could start by figuring out minorities and representation, kindness and consideration without ML/AI. That will make us better equipped for the problems the new systems introduce.

Saturday, September 30, 2017

Time to update threat modeling

Working in a security company, there is an activity we try to do routinely, at least when anyone hints at not having done it. That activity is security threat modeling. Getting smart people together, supported by a heuristic (STRIDE), we explore what could possibly go wrong security-wise with the design we’ve made by now. The purpose of threat modeling is to learn, and to change when we learn. And for someone trying to drive forward better quality, there’s not a more powerful argument than connecting a bug somehow to security.

Heuristics are used to keep the discussion both focused and multifaceted. People would easily focus on one type of problem, and heuristics like STRIDE help with thinking from a few more perspectives. It’s far from complete, but it has been a good-enough basic approach to get started with. The acronym opens up to the words Spoofing, Tampering, Repudiation, Information disclosure, Denial of service and Elevation of privilege.

Security is about not letting people do bad stuff with systems, hopefully while allowing the right people to do what they were trying to achieve with the use of the system. All of the perspectives easily map to the idea of the users and attackers. 

But with many modern systems, there is one often-dismissed theme I would bundle with security, while I realize that for me as a tester it has long been a separate concern. That is abuse. I’m exploring extending STRIDE with an A.
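
As a concrete way of carrying that extra letter into a session, here is a small sketch (my own wording and naming, not an established taxonomy) of STRIDE plus Abuse as a checklist of prompts:

    import java.util.EnumSet;

    // A sketch of STRIDE extended with Abuse, usable as a threat-modeling checklist.
    // The prompt texts are my own wording, not an official taxonomy.
    public enum ThreatCategory {
        SPOOFING("Can someone pretend to be a user or a component they are not?"),
        TAMPERING("Can data or code be modified without us noticing?"),
        REPUDIATION("Can someone deny an action because we cannot prove it happened?"),
        INFORMATION_DISCLOSURE("Can data leak to someone who should not see it?"),
        DENIAL_OF_SERVICE("Can the system be made unavailable to legitimate users?"),
        ELEVATION_OF_PRIVILEGE("Can someone gain rights they were never granted?"),
        ABUSE("Can the feature, used as designed, be turned against a person or a group?");

        private final String prompt;

        ThreatCategory(String prompt) {
            this.prompt = prompt;
        }

        public static void main(String[] args) {
            // Walk the full checklist in a threat modeling session.
            for (ThreatCategory category : EnumSet.allOf(ThreatCategory.class)) {
                System.out.println(category + ": " + category.prompt);
            }
        }
    }

The value is not in the code but in making the A show up on the same list as the rest, so the question gets asked every time.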

Abuse vectors are often unique. They are ideas of how we could unintentionally open up ways of targeting misbehavior against a group through the use of our systems. And thinking of the abuse threats is getting increasingly important.

Let’s explore a few ideas around this.

A prominent tech woman ended up being targeted with abuse on GitHub. At a time when GitHub allowed people to be added to projects without their consent, someone thought it was fun to add her to projects she would by no means associate with. All those projects then ended up being part of her profile. We would want to make sure our features don’t enable high-visibility cyberbullying.

A group of people built a learning bot, which turned into a monster in a matter of hours. We would want to make sure that with the learning systems, we can control the quality of the data the system learns from.

Face recognition software was built, and it did not recognize the faces of people of color. The sample set the system was built on did not do a good job of being representative, and the biases of the creators got coded into the functionality. We would want to make sure we don’t make a group of people invisible with systems intended for wide use.

A phone had a feature of facial recognition for logging in. It worked perfectly even for images of the owner’s face. We would want to make sure that if we use faces as credentials, gaining access to our personal data is not one picture away.


Abuse as an attack vector is connected with STRIDE, but different. And we need to start actively thinking of it as we create more sophisticated systems that include our assumptions of privilege and biases.