Wednesday, August 11, 2010

No testcases for the scenario. What do I do?

One of my seniors is a big fan of Silk, Selenium, LoadRunner, TestComplete and any other commercial tool available in the market. He is very good at automating a few tests, depending on the product. My team lead had no tasks for me, so he asked me to learn Selenium from the senior. My senior demonstrated some basic Selenium commands and asked me to play with it for a few hours. Now the next question was: 'Which application to use?'

There were two choices presented to me:
a. An application yet to be released to the market and still being tested.
b. An application released to the market after being tested by the team reporting to the senior.

I chose the second application. My senior told me that I wouldn't find any bugs, as it had been tested by his team. I took this as a challenge and explored the application.
In twenty-five minutes, I found two inconsistencies, both of high severity. With a big smile on my face, I called my senior and showed him the two issues. He immediately called the tester who had tested the feature.

When asked how the issues had been missed, the tester's answer shocked me: 'We do not have test cases for those scenarios.'

A few strange lessons I learnt:

1. Some testers fail to test beyond the testcases. Should I call that testing?
2. Testcases give a false sense of security to some management.
3. Forcing a tester to learn an automation tool might not be a good idea in the long run (the small sketch after this list shows the kind of scripted check I have in mind).
4. A new pair of eyes - a new test idea - might lead to the discovery of a new issue.
5. It's not a good idea to criticize or interrogate your team member in public.
6. I took it as a challenge to find bugs. Did the challenge attitude help me find those issues, or did it have no effect?
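
For anyone who has not played with a tool like Selenium, here is a rough idea of what a basic scripted check can look like. This is only an illustrative sketch in Python with Selenium WebDriver; the URL, element IDs and expected text are made up, not the actual application or the commands from the story above. It also makes the point behind lessons 2 and 3: a scripted check verifies only what it was told to verify.

# Illustrative only: a minimal Selenium WebDriver check in Python.
# The URL, element IDs and expected text are invented for this sketch.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()
try:
    driver.get("http://example.com/login")  # hypothetical application
    driver.find_element(By.ID, "username").send_keys("demo_user")
    driver.find_element(By.ID, "password").send_keys("demo_pass")
    driver.find_element(By.ID, "login").click()

    # The check passes as long as the expected text appears.
    # Anything the script was never told to look at - an inconsistent
    # label, a layout glitch, odd behaviour on unusual input - goes unnoticed.
    assert "Welcome" in driver.page_source, "expected welcome text not found"
    print("Scripted check passed")
finally:
    driver.quit()

A check like this is handy for repeating dull steps, but it only answers the questions someone thought to write down.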

What would be your answers to the question:
'Why did you miss those issues?'

Did I tell you that the tester added the two test ideas as testcases later?

12 comments:

Anne-Marie said...

Hi Ajay,

I think one of the bigger challenges we software testers face is that we often fail to recognise that our internal software testing model is flawed.

For example, if our model is based on 'testing to a specification' this can bias us to perform confirmatory exercises as opposed to testing the unknown.

I think that's why I'm enjoying running these IM coaching sessions.
By setting a testing challenge, it's an opportunity to examine a tester's internal testing model.
Becoming aware of how we approach our testing, and our flaws, helps us become better testers.

Anne-Marie
http://mavericktester.com
Skype: charretts

Michel Kraaaij said...

Hey Ajay,

Great to hear some daily stories! Keep on posting.

"Forcing a tester to learn any automation tool might not be a good idea in the long run."

I have to say... it depends. If the emphasis is on "forcing", then I agree. You can't force someone to learn; it has to happen willingly. If the emphasis is on learning automation tools... well, they might come in handy, save you a lot of time checking dull stuff, and get you faster to the part you really like to test.

"Why did you miss those issues?"

To me this sounds like a typical QA question. A tester can't assure quality. You can't test everything. And besides, just as a developer can't deliver flawless code, a tester can't find every bug. But he/she will surely put in their best effort.

Joe said...


It's an interesting question, and may deserve a thoughtful answer, or may not.

On the one hand, a bug that escapes into production deserves a review to find out what should be done differently next time, whether there are similar bugs that might have escaped, whether a different approach to testing is warranted, and so on.

But often, the question is asked with overtones of "This is evidence that you testers aren't doing a good job." Whenever I hear that coming through, I usually respond with something like "Well, as hard as we try, the developers are sometimes better at creating lots of bugs, than we are at finding all of them."

Ajoy Kumar Singha said...

I agree, Ajay. There are a few people who are obsessed with test cases. They do not understand that test cases alone cannot find defects. Although a good tester should be able to write good and complete test cases, that does not mean the test cases will find all the defects. Good testing requires skills beyond test cases.

A nice post by you.

~Ajoy Kumar Singha
http://ajoysingha.info
http://ajoysingha.blogspot.com

Ajay Balamurugadas said...

@Anne-Marie:
I agree. Most bugs are missed because they are not discovered by a particular model. I feel 'the more diverse our thought process, the stronger our model'.

The IM sessions are a good way to inspect our model. Interacting with other people and other software helps us widen our thought process and, indirectly, our model.

Thanks for the comment.
Regards,
Ajay

Ajay Balamurugadas said...

@Michel,

>> Great to hear some daily stories! Keep on posting.
Thanks. I'm practicing for the BlogSTAR competition ;)

I'm not against automation. If our testing model is flawed, automation might just help us do the wrong things quicker.

>> "Why did you miss those issues?"
I crack a joke here: The programmer did not tell me where he hid them.

Regards,
Ajay

Ajay Balamurugadas said...

@Joe:

Thanks for your comment.
I liked this: "Well, as hard as we try, the developers are sometimes better at creating lots of bugs, than we are at finding all of them."

Regards,
Ajay

AJ said...

1. Some testers fail to test beyond the testcases. Should I call that testing?
Yeah, but this is where an experienced tester and a novice differ. In certain cases it's not possible to cover all scenarios in the test suite, and a good round of exploratory testing makes up for the scenarios missing from the test case doc.
2. Testcases give a false sense of security to some management.
Absolutely true. Management in most cases feels that test cases are the ultimate benchmark and that the number of failed test cases gives them an indication of the project status, which is ridiculous to say the least.
3. Forcing a tester to learn any automation tool might not be a good idea in the long run.
Not so sure about it.
4. A new pair of eyes - A new test idea - might lead to discovery of a new issue.
Spot on. I have experienced this and also implemented it in a couple of projects, and it has helped unearth new issues. We need to make sure we don't castigate the initial tester just because he didn't find those issues. :)
5. Its not a good idea to criticize or interrogate your team member in public.
True. Not just that person's morale, but also your reputation and the trust placed in you by the team members take a beating.
6. I took it as a challenge to find bugs. Did the challenge attitude help me find those issues or the challenge had no effect?
Challenge attitude definitely helps, but sometimes it also leads to false positives.

DiscoveredTester said...

Good post. Most of the comments have already covered some of what I was thinking, so I won't reiterate those. One thing that I think can be a problem is the time allotted for testing. Ever been in a situation where the development schedule slipped, yet testing was still expected to finish on time?

Having deadlines is normally a good thing, but if something earlier in the queue slips, should we still artificially hold to that schedule, or adjust it to ensure there is adequate time to properly test the software? That's a question I've pondered recently, and it may not be one that everyone answers the same way.

johnson said...

Whenever you develop an application, you have to consider some test cases to make your application bug-free.
-quiz software

Ajay Balamurugadas said...

@Veretax,

Thanks.
I agree that a slip in the queue affects the later stages (usually testing). We must try to rearrange the schedule.

Most of the time, it is not possible. I have faced such situations. What we have achieved is to inform management that the delay in the schedule is because of the earlier slip and not because of testing.

Regards,
Ajay

Unknown said...

Ajay,
I would like to comment on one of your points, "Testcases give a false sense of security to some management".
Well-written test cases at least assure me of what has been tested.
I am a big fan of exploratory testing; many of the crash defects I have found came from exploratory testing. But I equally focus on writing quality test cases: if those test cases are executed with the intention of finding defects, I know which features and functionality have been tested.
We can have a chat sometime about this.

Sandeep