Monday, May 30, 2022

The fundamentals have gone for a toss

 26 Fundamental Mantras for Good Testing

  1. Why are you testing - know it
  2. If you don't know it, ask
  3. If you are not sure about it, confirm it
  4. Test it out before believing anything
  5. Listen to everyone, but take the final decision yourself
  6. Save as soon as you see it
  7. Try for autosave wherever possible
  8. Take backups
  9. Pay attention to the context
  10. Pin codes can have letters, names can have special characters. Study the domain well
  11. Document well, read documentation well
  12. Write clearly, think deeply, read widely
  13. Use tools wherever they help
  14. Know the limitations of tools
  15. Have a large network of friends and fieldstones
  16. Learn to connect the dots across fields
  17. If you can model well, you can test well
  18. Know mnemonics and heuristics
  19. Pay attention to keywords - always, never, must, should, obvious
  20. Learn Safety Language
  21. Keep collecting fieldstones
  22. Start recording, then start testing. Never waste time unless it is part of a test
  23. There is no one good way of testing
  24. Testers get bored easily doing mundane work. Add variety to your questions, ideas, and routines
  25. The community has already done a lot of work. Learn to search well
  26. Organize well
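Mantra 10 can be made concrete with a toy sketch (the validator and samples below are hypothetical, not from any real system): a "digits only" rule that holds for Indian PIN codes breaks for other countries' postal codes.

```python
# Hypothetical naive validator illustrating mantra 10: domain
# assumptions that look "obvious" locally fail elsewhere.
def looks_like_postcode_naive(code: str) -> bool:
    # Naive domain assumption: postal codes are always digits.
    return code.replace(" ", "").isdigit()

assert looks_like_postcode_naive("600001")        # Indian PIN code: passes
assert not looks_like_postcode_naive("SW1A 1AA")  # UK postcode: wrongly rejected
assert not looks_like_postcode_naive("K1A 0B1")   # Canadian code: wrongly rejected
```

Studying the domain means questioning exactly these kinds of baked-in rules before trusting them.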


Monday, May 9, 2022

Can you convert art to algorithms?

In a series of experiments, I tried my hand at a toy piano. 

The first song was easy - Hot Cross Buns... Moganesh and I mastered it in a few minutes. Then I wanted to try "Happy Birthday to You". It was a little tougher, and Abi took up the challenge and learned it soon enough. Moganesh and I were happy with the Hot Cross Buns tune (lazy guys).

Then I realized that many more such songs can be played on the piano. So the hunt began for the next song to learn. Remember that it is a toy piano, not the actual piano shown in the YouTube videos.

Think of this toy piano as a model of the actual piano; the keys had to be mapped accordingly. The fun of learning from the YouTube video started. Copying every step, I set up the configuration. Instead of notes, the video used numbers, and I had to press the key corresponding to each number.

Here is the image highlighting the order of keys. There were multiple lines, and each line had its own order of keys, so you see two strips of paper one above the other. Later, I realized that a few extra keys had to be pressed, and I attached an extra bit of paper to accommodate them.

With all that done, the output was decent, if not great.
Can you guess the song?

After this experiment, I couldn't help thinking how testing is often treated like the above experiment.
If you can run a set of test cases, is that testing? Yes, it helps you confirm a few things, just as the strips helped me play a tune to an extent.

Can someone who finds a few bugs or automates a few test cases claim to be a tester?
Testing is much more than test cases or automation. If you saw what I did, I too played a tune within minutes, but ask me anything about the tune or change a few parameters, and I will be exposed. The same is true of many testers who claim to know testing just because they saw early success and can replicate certain steps again and again.

Testing is much more than bug hunting or automation. It reminds me of this post I wrote a few years ago:

If you want to sharpen your skills in testing software and your mindset, I am happy to engage you with exercises and feedback. Email me at 

Till then, it is fun to play around, but if you want to master something, work hard.


Wednesday, May 4, 2022

Testing Mistakes that might be hard to spot

Hundreds of test cases but no understanding of the business use case
Unless you test the main use case, your hundreds of test cases add no value. Do not go by numbers alone. Ask what those test cases cover. Is there a test case for every variation of test data, hence the inflated number? Even with the variation in test data, is it within the same equivalence class? Ask deeper questions.
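The equivalence-class point can be sketched in code (the function, its name, and the fee thresholds below are made up for illustration): many concrete inputs can exercise only one behaviour.

```python
# A minimal sketch of equivalence partitioning with a hypothetical fee rule.
def shipping_fee(weight_kg: float) -> int:
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    if weight_kg <= 5:
        return 50   # light-parcel class
    return 120      # heavy-parcel class

# Five "test cases" that all land in the same partition inflate the
# count without adding coverage: one behaviour, however many inputs.
same_class = [shipping_fee(w) for w in (1, 2, 3, 4, 4.9)]
assert len(set(same_class)) == 1

# One representative per partition (plus the invalid class) tests more
# with fewer cases.
assert shipping_fee(3) == 50
assert shipping_fee(10) == 120
try:
    shipping_fee(0)
except ValueError:
    pass
```

Counting partitions covered, rather than test cases written, is the deeper question the paragraph above asks for.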

Automating every day but no one uses it
"We must be doing well because our automation percentage is increasing." Do not fall for this trap. Again, ask the questions: why are we automating, who is using it, and how frequently? 
Check out the costs of automation.

Attributing a missed bug to a lack of test cases
Many teams add a test case as soon as a bug is missed. If your testing relies 100% on test cases (which it cannot, even if you claim otherwise), the general tendency is to add a test case as soon as a bug is missed. How about asking these questions instead:
- Was it expected?
- Was it a known bug?
- Was it a result of the strategy used?
- What else could be missed?
- How will we capture those bugs?
- Why did these bugs come in the first place?
- Was there a possibility of catching them earlier - What would be the trade-off?

We will test everything every time
We can get into this situation if we don't understand the overall application model in depth. Agreed, there might be cases where every case is critical and must be thoroughly tested every time. Other than that, why not optimize: go through the impact analysis and analyze better?
Have you heard of the RCRCRC mnemonic by Karen N. Johnson? There are more here:

One-dimensional coverage
Quality is multi-dimensional, so it makes sense to think about all perspectives and stakeholders. One shouldn't need a separate nudge to cover performance, security, accessibility, learnability, usability, compatibility, and so on along with functionality. 

Incorrect usage of testing techniques
When was the last time you consciously thought about a testing technique while testing? Many are not even aware of testing techniques, let alone use them. Without knowing the techniques, incorrect usage, or failure to use them at all, is hard to spot. 
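As one example of conscious technique use, here is a sketch of boundary value analysis applied to a hypothetical age gate (the 18 and 65 limits are invented for illustration):

```python
# Boundary value analysis sketch: off-by-one bugs live at the edges,
# so test each boundary and its neighbours, not arbitrary mid-range values.
def is_working_age(age: int) -> bool:
    return 18 <= age <= 65

cases = {17: False, 18: True, 19: True, 64: True, 65: True, 66: False}
for age, expected in cases.items():
    assert is_working_age(age) == expected, f"boundary failure at {age}"
```

A tester who knows the technique picks these six values deliberately; one who doesn't may test 30, 40, and 50 and miss both edges.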

How many of the above are you guilty of committing or ignoring?
