Thursday, October 7, 2021

The game of right questions

"You always say that testing is all about asking questions.
How do I know if I am asking the right questions?"

Many testers have asked me this question, and it came up again recently in my 1-1 coaching. I thought it was time to write a detailed answer.

Testing is all about asking questions about the product and evaluating the answers to see whether there is a problem. Testing is also a sampling activity: since time and effort are limited, it pays to ask the right questions early. The problem comes when we need to figure out which questions are the right ones. It is similar to asking how to find the critical bugs before the minor ones. The answer to both questions is the same: CONTEXT.

The more we know about the context, the more pointed and useful our questions and test ideas can be. Let us work through an example.

Let us consider two systems at work:

A web application that acts as a dashboard for the temple administration to view the darshan slots and the count of devotees.
A mobile application for the devotees to book slots for the darshan.

Though we can start from anywhere, I would like to focus on the following questions for a start:

Mission: 
Why are we testing? 
Is it a new release or an upgrade? 
Based on the answers, your questions should change. 
If it is a new release, focus more on what the customer has been promised. 
If it is an upgrade, the things that have already been released must not break at any cost. 
The focus shifts drastically with just one answer. 

Deadline: 
How much time do we have for the release? 
How much time do we have for the testing? 
Is the development complete or still in progress? 

Users/Customers: 
Who are the existing customers? 
Who will be our target customers? 
In this case, will the temple board be our first customer, or will individual temples manage the website on their own? 
Will the devotees be mostly senior citizens, or will agents use the app on their behalf to book the slots?

Key Features: 
Is there an existing list of features, documented in any form (Design docs, Code, Requirement documents)? 
Can we use the application to learn about the features? 
Are there test IDs available?

Then start thinking about the four components of a test: configuration, operation, observation, and evaluation. (Short sketches after the Configuration and Evaluation lists below make these concrete.)

Configuration
Which platforms will the applications support?
Which OS versions are currently supported? 
Which ones are supposed to be supported?
How much load can we expect? 
Which other components are in play?
How are the website and the mobile app connected? 
How frequently, or how soon, is the data synced?
Are there specific times when the load of booking slots will be high?
Are there backup systems?
Where is the data saved? For how long?
How will we know if any of the systems go down? Is the scenario handled?
How is the temple onboarded?
How are the users onboarded?
Which other systems will take the load (SMS, notifications)?
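
Answers to these configuration questions can be captured as data and turned into a concrete test matrix. Here is a minimal Python sketch; every platform, OS version, and network condition below is a made-up example, not a real requirement of this product:

    # Turning configuration answers into a test matrix.
    # All values here are hypothetical examples.
    platforms = ["android", "ios"]              # Which platforms are supported?
    os_versions = {"android": ["12", "13"],     # Which OS versions?
                   "ios": ["15", "16"]}
    networks = ["wifi", "3g"]                   # Devotees may be on slow networks.

    matrix = [
        (platform, version, network)
        for platform in platforms
        for version in os_versions[platform]
        for network in networks
    ]

    for combo in matrix:
        print(combo)  # each tuple is one configuration worth a test session

Even this toy matrix has eight combinations; the context (usage data, risk) tells us which of them deserve the most attention.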

Operation:
How would one book the slot?
How many slots can be booked in one shot?
How quickly can the slots be booked?
How long is the session active?
Are there booking modes other than online? (SMS?)
What about cancellations?
Is there a hold period?

Observation:
What happens in the UI before, during, and after booking?
What gets printed in the logs?
What gets stored temporarily and permanently in the DB?
Are there other mediums of communication (SMS, notifications, emails)?
Are there options to download tickets (thereby invoking other systems and applications)?
What error messages and information messages appear throughout the booking process?

Evaluation:
What is the oracle here - the requirement document? the product owner? the product? similar products? user expectations?
How do we know that whatever we see on the dashboard is right?
Should we trust the website, the app, or something else?
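
Pulling the four components together: here is a minimal sketch of one automated check against the booking system. The BookingClient class below is a hypothetical in-memory stand-in, not the real product's API; its fields, messages, and rules are assumptions for illustration only.

    # One check, structured around the four components of a test.
    # BookingClient is a hypothetical stand-in for the real system.
    class BookingClient:
        def __init__(self, slots_available):
            self.slots_available = slots_available
            self.log = []  # stands in for server-side logs

        def book_slot(self, devotee_id, slot_time):
            if self.slots_available <= 0:
                self.log.append(f"DENIED {devotee_id} {slot_time}")
                return {"status": "denied"}
            self.slots_available -= 1
            self.log.append(f"BOOKED {devotee_id} {slot_time}")
            return {"status": "confirmed", "slot": slot_time}

    # Configuration: a fresh system with exactly one slot left.
    client = BookingClient(slots_available=1)

    # Operation: book that last slot.
    result = client.book_slot(devotee_id="D-001", slot_time="06:00")

    # Observation: the response, the remaining count, and the log.
    print(result, client.slots_available, client.log)

    # Evaluation: compare observations against the oracle -
    # here, the stated rule that a slot, once taken, is gone.
    assert result["status"] == "confirmed"
    assert client.slots_available == 0
    assert client.book_slot("D-002", "06:00")["status"] == "denied"

Notice how each component forces a different question: what state do we start from, what do we do, what do we look at, and against what do we judge it.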

Now, which of these questions are important and which are not?
We don't know until we ask them and evaluate the answers. 
Good testers do not just run through a fixed list of questions; they modify the list on the fly based on the answers to previous questions and information gained along the way. 

What do you think? How do you win the game of right questions?
The more we know about the application and its context, the more quickly the right questions appear.

Meanwhile, join me at TestFlix - https://www.thetesttribe.com/testflix-2021/ - by registering with your teams.


Sunday, May 2, 2021

A growing problem to be solved soon


Photo by Alex Knight on Unsplash

Somehow the current batch of testers is more inclined towards automation: they call themselves SDETs / automation engineers and look down on the role of the functional tester. I am not here to stir up another debate about which term is the right one to use. My concern lies elsewhere: the understanding of "testing" as a craft and the focus on "testing skills". 

Some of these questions get no good answers from this batch of testers: 
  • Why do we test? ("We test to automate" - yes, I was shocked to hear this answer.) 

  • How do you test this product? ("I will automate using..." - automation is one of the activities, chosen based on the context. Don't approach testing with tools; use tools if they help you in testing.) 

  • What kind of bugs have you found? (UI, usability...) 

  • When was the last time you designed your tests? ("Oh, we received them from the manual or functional team and we went ahead and automated them.") 

Garbage In - Garbage Out

All I can say is: garbage in, garbage out! I am not against automation. Automation is very useful; it helps you achieve things that you might not be able to do without tools. 

Without the right test ideas, what are you going to automate? 
Won't the checks be incorrect or far too shallow? Think about it. 
Are the checks even valuable? Are they going to answer the questions that matter? 

So, without strong test ideas, I don't understand how you can add value as a tester!
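
To make this concrete, here is a small Python sketch contrasting a shallow check with one driven by a stronger test idea. The book() function is a toy stand-in for the real booking flow; its fields and messages are invented for illustration:

    # Garbage in, garbage out: the same feature, two test ideas.
    # book() is a toy stand-in for the real booking flow.
    def book(slots, devotee_id):
        if not slots:
            return {"ok": False, "message": "No slots left"}
        slot = slots.pop(0)
        return {"ok": True, "message": "Booking successful", "slot": slot}

    slots = ["06:00"]  # exactly one slot left - a boundary worth probing

    # Shallow test idea: just look for the success banner.
    result = book(slots, "D-001")
    assert result["message"] == "Booking successful"

    # Stronger test idea: "Can two devotees ever hold the same slot?"
    # Exercise the boundary and check the state, not just the banner.
    second = book(slots, "D-002")
    assert second["ok"] is False  # the last slot must not be double-booked
    assert slots == []            # and the inventory must reflect it

Both blocks are automation; only the second encodes a test idea that answers a question that matters. The tool is the same - the thinking is not.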

Systematic Product Modelling 
Have you seen testers who say they have no idea about a feature because they have not tested it, even though it is part of the very product they are testing? On top of that, the two features may even interact with each other. Why would one not want to learn about the whole product? (A small sketch after the list below shows one lightweight way to start.) 

  • Do we even spend enough time, and use the right techniques, to model the whole application? 

  • Do we think across layers - UI, API, DB?

  • Do we think about the different users - the first-timer, the legacy user, the one who has just started, the one who has not updated?

  • Do we understand how the data flows across features and how it gets transformed?

  • Do we know the defaults?

  • Do we know how the product is marketed?

  • Do we know the complaints the users raise to our customer support teams?

  • Without modelling the overall application, I wonder what use the automation skill is.

  • Isn't it a very narrow-minded approach?

  • Won't the power of automation be better used if you know exactly where to apply it, rather than applying it at every opportunity? 
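
One lightweight way to start modelling the product is to write down which data each feature reads and writes, and derive which features interact from that. A minimal Python sketch - all feature names and data items below are made up for illustration:

    # A toy product model: features mapped to the data they read and write.
    # Every name here is hypothetical.
    FEATURES = {
        "slot_booking":    {"writes": {"bookings"}, "reads": {"slot_inventory"}},
        "cancellation":    {"writes": {"bookings", "slot_inventory"}, "reads": {"bookings"}},
        "admin_dashboard": {"writes": set(), "reads": {"bookings", "slot_inventory"}},
        "notifications":   {"writes": set(), "reads": {"bookings"}},
    }

    def impacted_by(feature):
        # Which other features consume data this feature writes?
        written = FEATURES[feature]["writes"]
        return [name for name, f in FEATURES.items()
                if name != feature and (f["reads"] & written)]

    # If cancellation changes, these features deserve attention too:
    print(impacted_by("cancellation"))
    # -> ['slot_booking', 'admin_dashboard', 'notifications']

Even a model this crude answers the question above: two features you have "never tested" may share data with the one you test every day.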

Who also encourages them?
Think of all the questions that are asked in interviews. 
  • How much do we focus on the fundamentals? 

  • Do we jump straight to the tools?

  • Do we fill up positions even when we know the candidate is not a well-rounded tester?

  • Do we give higher emphasis to automation skills?

Check out the job descriptions that are posted as well: no mention of the core testing skills, and the tools form the majority of the JD. We can't blame the testers when the JDs look like that.

Soon, we will reach a state where testers have no idea what testing is - the bigger picture, the multi-dimensional nature of quality, and how one can add value. We should move away from being robots with tools, ready to automate anything and everything without understanding testing.

The next time you have a chance to influence any of the above three factors - deciding what to automate, knowledge of the product, interviewing the testers - dive in and help the industry move ahead!
