Exploratory testing is any testing to the extent that the tester actively controls the design of the tests as those tests are performed and uses information gained while testing to design new and better tests.
So my question is:
We follow a restricted tour in Scripted Testing: our next test is not based on the result or information gained from the previous test. Are we following a strict roadmap, or is the touring merely restricted?
Isn't touring and modeling a one-time activity in Scripted Testing? A tour may be done to gain information to write the scripts.
In Exploratory Testing, isn't Touring and Modeling a continuous activity to gain more valuable information?
Are "Touring and Modeling" both in a loop in Exploratory Testing?
Final point: "Touring and Modeling" is restricted in Scripted Testing and "Touring and Modeling" is a continuous activity in Exploratory Testing.
What do you think?
Feel free to correct me, comment, discuss, question, argue and finally tour the model I have in my mind.
Parimala Shankaraiah and I were ready to start the session. Yes, two facilitators to organize a testing session with nearly 25 registered testers.
And James Bach was on Skype, waiting for questions from the testers.
WOW... All set and what's this... I can't find Parimala online.
Beep Beep: Message on Mobile: Powercut at my place - Parimala
SOS sent to Sharath Byregowda, and at 2.25pm IST, a power cut at my place too. Oops... Both Parimala and I rushed to the nearest cyber cafes near our homes.
Both of us logged in just in time from two different locations. I started adding the new members to the chat while Parimala took over the task of sending the email to those present on time.
Once the group chat was initiated, we exchanged roles. I contacted James on Twitter.
So with James on standby and Parimala on chat clarifying questions from the testers, I was emailing every tester who had not yet received the details of the WT 17 session.
Half an hour later, everything seemed settled.
Slowly, questions started cropping up in the discussion. Initially, questions were redirected to James on Twitter.
Later, James's email and Skype IDs were given to the testers for direct interaction. Thanks, James, for your precious time.
Literally, the two hours just flew by. An email thanking the testers and highlighting the deliverables was drafted and kept ready: while the testers were busy testing, Parimala drafted the email and I added the new email IDs to the list.
At 5pm IST, we sent out the email.
Testing was stopped at 5.30pm IST, and reports had been trickling in since 5.15pm.
Surprise element: there was a last-minute change in the specifications, and those who asked James about the specification got more details.
Only the testers who asked questions got to know more information...
Questioning: a very important skill for a tester.
Finally, all the reports came in by 5.45pm.
Parimala and I logged out to have our lunch :)
Thank you, Weekend Testers. See you in the next WT session.
Rapid testing is a complete methodology designed for today's testing, in which we're dealing with complex products, constant change, and turbulent schedules. It's an approach to testing that begins with developing personal skills and extends to the ultimate mission of software testing: lighting the way of the project by evaluating the product. The approach is consistent with, and a follow-on to, many of the concepts and principles introduced in the book Lessons Learned in Software Testing: A Context-Driven Approach by Kaner, Bach, and Pettichord. In this interactive workshop, Michael Bolton, co-author (with James Bach) of the Rapid Software Testing course, introduces testers, managers, developers, and any other interested parties to the philosophy and practice of Rapid Software Testing, through lecture, stories, discussions, and "minds-on" exercises that simulate important aspects of real software testing problems.
1. Every project teaches you something new if you are ready to learn. I tested a web-based application for the first time and was introduced to terms like IISRESET, HOSTS, PROXY SERVER, RAD GRID, AJAX, HTTPS, DOMAIN NAME, DATABASE SERVER.
2. A tester's role is to meet the mission. I found 73 bugs in 6 hours and had to close 69 bugs because they were not at all important from the customer's perspective.
Previous Project: Found 710 issues and 685 were fixed. Everyone appreciated me. This project: Found 73 issues and 4 were fixed. Everyone asked me to concentrate on features which the customer would use.
3. Test on the expected environment. We tested for two weeks on an environment which was not the environment at the customer end. Now I wonder: why did we waste those two weeks?
4. Test the environment first. I trusted a technical person to set up the environment, and the environment was wrong. The first build went to the customer and failed miserably. Testing the environment revealed that a simple mistake meant we had tested on the wrong environment for two months.
5. Work as one team towards one goal. It is good to interact with programmers, product managers, tech support, sales managers, the QA head and the Development Head. Everyone has something new (read: different) to contribute.
WOW, I'm very happy that after three years of testing in office, this is my first product release to market.
Mission: LISTEN to the lecture and prepare notes from the lecture. Deliverable: the notes must be exported to a PDF file using the Notalon application.
The best note-taker would be judged on the following parameters: 1. Content (lecture content) 2. Easy-to-read notes 3. Good usage of Notalon features.
We started the testing session at 3pm sharp. The video being a short one of six minutes, the testers could play it again and again.
Some testers were not clear about the mission, and they questioned till they got enough information to meet it. It was good to see testers question: questioning is a very important skill, and such exercises help testers improve it.
While some testers had questions related to the mission, some had questions totally irrelevant to it. Focusing on the mission is important, and sometimes testers can get distracted by other interesting aspects of the testing activity. Once all the participants had emailed their reports, the entire list of reports was sent to everyone.
About the session, Sushant summed it up in one sentence: “Testers had to take help of all the senses”.
Poulami started the discussion. She was very new to these kinds of exercises. The entire mission of listening to a lecture and taking notes with the help of a new application was in itself a big challenge for her. She took up this challenge and enjoyed multitasking.
She felt such exercises would help hone her multitasking skills. One more important point highlighted by Poulami was her increased concentration level while note taking. When asked if she would have concentrated so much had she been listening to the lecture alone, she replied in the negative. She learnt the importance of being detail-oriented while at the same time looking at the bigger picture.
Sushant was one of those testers who thoroughly enjoyed this session. He felt that such exercises would help testers improve their listening skills and their ability to comprehend a lecture. He was happy that such exercises break the monotony of office activity. He highlighted the importance of filtering out the most important points in a lecture and reducing it to a couple of statements.
Next, Karan described his experience. As he was not clear about the mission, he questioned to get a clear idea of it. He gave more importance to listening to the lecture and just jotted down what was present in the slides. According to him, these kinds of exercises help beat the boredom. An interesting point in his description was that this exercise increased his self-confidence.
It is always good to know that such testing sessions help testers who are so busy doing testing that they do not learn anything new. Karan felt that the biggest learning for him was to be smart in doing things and not get afraid by the mission or the application.
Bhargavi was next and shared an interesting point. She was trying to understand the relation between the three links: Application, Lecture and the Reference. The only challenge for her was to manage the entire activity and being late to the session did not help her cause. She promised to try this exercise again and share her experiences.
Regarding the mission for this session, she liked it and was of the opinion that such exercises would definitely make a difference in the careers of testers. Her learning was to join the session at the right time :) She appreciated the ability to learn and use the Notalon application quickly.
Most of the testers promised to use the Notalon application instead of Notepad. The feature to export to PDF is cool. We also discussed whether anyone had used the Preferences menu to change the PDF settings.
It was my turn to share my experiences. I felt very happy completing this whole exercise. I made use of the borders and fonts feature to improve the overall look of the pdf document. My approach was to pay attention to the video and simultaneously take notes in one shot without pausing the video.
Once I finished one round of video, I played it again and again to hunt for missing ideas/words. Playing the video five times helped me to frame my notes better. Please find my test report here.
I felt that if we could make our own notes for all such testing videos, we could learn a lot more than by just listening to them. Bhargavi and Karan agreed on that point.
And regarding best note taker of the day, Sushant won the title with his excellent summary of the entire lecture in addition to detailed notes.
After a happy learning session, everyone is looking forward to WT12. Meet you next weekend :)
Date and Time: 03rd October 2009, 3pm - 5pm IST Application: Converber v 2.1.0
This session was different just like every other session. The mission was unique as the testers were not supposed to hunt for bugs.
FLASH NEWS: DO NOT HUNT FOR BUGS!!! How often do you see that?
Then what was this session about?
Following details were given to testers:
Context: A client wants Beta Testers for testing Converber. You have to prove to them that you are the best Beta Tester for them. How will you be judged? You have to give a list of Test Ideas/Scenarios you would cover if selected as the Beta Tester.
Based on the list of Test Ideas/Scenarios, you'd be selected/rejected as the Beta Tester.
Most of the testers were surprised with the mission and set out to achieve the mission at 3pm IST sharp.
It was a different experience for me too. I was busy browsing through different articles on Exploratory Testing Approach to identify most of the quality criteria.
Twenty minutes had already passed and I had not even launched the application.
I found some testers interested in finding bugs in some modules of the application. Being the facilitator, I reminded them of the mission.
It was challenging for most of the testers. While some found it hard to put ideas on paper, some could not resist the urge to hunt bugs.
This particular session went so quickly that we realized that it was already 4pm IST. We stopped generating test ideas!!!
One of the challenges faced by Tejas was not to start testing as soon as an idea popped up in his mind. Dhanasekar echoed Tejas's concern - he too started testing as soon as he came up with any idea.
Rajesh wanted to clarify what a test idea meant and how it is different from a test case. I felt it was too late for this question, as the testing session was already over. Rajesh learnt that he could have gathered valuable information had he raised it at the start of the session.
Dhanasekar and Tejas were of the opinion that there are too many terminologies to add to the confusion.
We started off with Dhanasekar sharing his experience, challenges and learning from this session. He found it easier to hunt for bugs than to document test ideas and promised to work on that aspect. The major challenge he had to tackle was not to test the test ideas he generated. Being unclear about the mission did not help his cause either.
He got diverted on finding a crash and started investigating it. He realized the importance of questioning, which could have saved him a lot of time.
His biggest learning was to
"FOCUS ON THE MISSION"
and he was of the opinion that this exercise would help him present his test ideas better.
Rajesh started off by sharing his experience. His favorite subject being Maths, he loved to test this application as it involved a lot of mathematical calculations. His limited knowledge about different conversions forced him to experiment with only those units which he was comfortable with. As he did not ask questions to clarify what a test idea was, he was hunting for bugs along with the task of generating the test ideas.
His biggest learning for the day was
"THE IMPORTANCE OF QUESTIONING"
He had read about questioning the stakeholders for more information and today was his practical experience of questioning the stakeholders.
Dhanasekar added a valuable point: it is difficult to generate test ideas just by looking at the GUI. I'd say that's another trap: the mission did not specify that the application should not be used. Questioning can help us clear traps.
The general challenge most of the testers faced was highlighted by Sushant:
...even though we may not want to hunt for bugs, but eyes find them out...
Sathish re-framed the mission statement:
"The mission is to find ways to identify the bugs"
Vivek was next to present his experiences. He faced difficulty in defining the test scenarios and decided to give a broad idea of his scenarios. When I shared the Heuristic Test Strategy Model link with him, he was happy that being a part of Weekend Testing increases his knowledge base.
Sushant tested the application keeping in mind the age group of the audience. He has a habit of testing any application from user-perspective. As he was exploring the application, he found some issues which he could not ignore. He also highlighted how being in an informal environment helped him think and test better.
He was confident that such Weekend sessions would prepare him for the tough environment at office.
Satish brought with him a different perspective: he searched for failures in the previous releases and modified his test scenarios based on them. Satish concentrated only on the generation of test ideas, and as part of it, he learnt the application.
The biggest challenge was his lack of knowledge of the categories in the application. He had never attempted a beta test, and this entire exercise itself proved to be the greatest learning. He stressed the fact that participating in this exercise increased his confidence.
As testers, we have to concentrate on the mission and not on hunting bugs. Many testers find it difficult :)
Tejas had a major challenge: an unclear requirement. Even he fell into the trap of not questioning.
He highlighted the importance of questioning.
One more important point which came up in this discussion was that it is better to fail in front of friends than in front of stakeholders.
He promised to give more attention to recording test ideas in a systematic way.
Next, I had to present my learning and experiences.
I listed the two links which helped me and the quality criteria I concentrated on. We had a discussion on the difference between Claims Testing and Acceptance Testing. My learning was to improve my knowledge on the different quality criteria used to test any application.
We had a further discussion on each other's lists of test ideas. Every tester had to justify his list of test ideas and why he should be selected as the BETA TESTER.
The test reports were really interesting and covered a lot of different ideas.
Ajay: "I have taken care of more quality criteria : Functional, Usability, System Configuration, Data, Domain, Performance, Claims Testing and Operating System and hence increased coverage." Vivek: "I can cover different versions of different OS. Installation and Functional testing would mean good coverage" Tejas: "I can discover most of the functional bugs" Satish: "Functional, Usability and Scenario Testing were my main focus areas." Rajesh: "I concentrated on Functionality, Usability, Domain, Installation, Upgrade, Performance and Claims Testing"
Finally after the poll, Rajesh was selected as the BETA TESTER.
"It is more important to meet the mission than knowing the terminologies" Even though Rajesh did not understand what a test idea meant, what mattered was his list of ideas to have increased coverage.
This session was lively with interesting mission, discussions, learning and polling.
Thanks to all the testers. See you all in WT Session No. 11
This was the first session where I was not moderating and only testing the product. I was happy that I could dedicate more time for testing.
We tested from 3pm to 4pm and started the discussion session at 4pm sharp.
Poulami started off the discussion. This being her first experience with BWT, she used an Exploratory Testing approach to guide her. She wanted to get a feel of the product before concentrating on issues in the application.
She found the "Layers" feature interesting enough to continue her focused testing on the Layers and Filters features. Happy with her first BWT experience, she promised to attend more sessions before passing any feedback to the team.
Poulami found the application very user-friendly, though the Auto-Crop feature was not working.
Rajesh was next to describe his experiences. He was more interested in the Sign Up feature of the product. Having created an email address with a username 132 characters long, he was unable to log in. Though the email address was created successfully, an error message greeted him on login.
Rajesh and I had a discussion about an error message popping up on the screen if a webcam was not connected. We were not sure if Flash or the SplashUp application generated this error. While I felt that the error was application-specific, Rajesh was of the opinion that it was similar to Flash's generic messages.
I was happy that Rajesh enjoyed testing the application. He also felt that this was a good application to test.
Once Rajesh was done with his description, Amit took over. Amit was frustrated with the application not being user-friendly. The absence of help files and the lack of support for other image formats posed a serious question regarding the scope of the application.
One of the highlights of Amit's description was the bug he discovered: moving the error message out of the visible window area made it disappear. He felt that such bugs were common in similar applications and made him wonder if the application was really tested before release.
Someone had to cool Amit's frustration with the product, and Dhanasekar took centre stage. Like Poulami, he too was a first-timer at BWT. He had no experience of testing any imaging software and hence concentrated on the different file types for the application.
One of the bugs found by Dhanasekar was the "Improper handling of unsupported file formats".
This made me wonder how different people look at the same application in different ways, and how the thought process of each individual varied under the same circumstances.
The only concern he expressed was the lack of prior knowledge of the product being tested, though providing that would defeat BWT's purpose of letting testers test with little information about the product. The thrill of testing an application one knows nothing about is quite different from testing a known application.
There is less chance of becoming biased if one does not know much about an application. Amit was also of the opinion that exploring a product without much information is good, as testers get to learn a lot of new things.
What followed next interested me: Suja's description of her testing approach. After the initial "get to know the product" session, Suja divided her tests into "Happy Testing" and "Negative Cases".
I feel this is a very narrow way of modelling the application. It was good to see other testers actively participating in the discussion. Even Suja wanted the application to have more documentation to help the user. The experience with BWT was good and she was happy.
Gunjan was next, and her previous experience in testing imaging software helped her. Using an Exploratory approach, she went on different tours of the product. She found some bugs with the Zoom and Filters features. Her logical approach to testing the application was a different experience compared to the last BWT session she attended.
Her only concern was that it took some time to get to know some of the features.
Next was my turn. Only testing and no moderating was in itself a different experience for me. This application had lots of bugs; if one is bug-hungry, I'd recommend it.
One of the strange bugs I discovered was a way to make the menu bar disappear. I also learnt about a lot of different bugs.
The purpose of BWT is achieved if a tester goes back with some learning. :)
Amit asked a very important question: how many of us had tried using the application with only the keyboard? I replied in the negative: if that failed, it would be a usability issue, and the mission was to find functionality issues.
Karan's summary was rocking. He had typed everything in Notepad and just pasted it all at once on his turn. Following an Exploratory approach to some extent, he felt the application was not user-friendly. He was confident that, given time, this application could be developed into a full-fledged one.
Parimala, the moderator for the session, was the last one to present. Lack of dedicated time for testing was her main concern. The software was new to her, but being a curious tester, she explored and learnt most of it quickly. She tested the Tools section of the application as long as time permitted.
Overall, the session was good, coupled with strange bugs and discussions about them. The only concern: it went fast, and the discussions were not full-fledged.
We will improve on this next time. Thanks to all the testers; I learnt about some new bugs. Interested in joining us for the next session? Email firstname.lastname@example.org
See you all in BWT 10. Till then, ENJOY TESTING :)
Update: Please find the Test Report shared at Scribd.
20th September 2009, 9pm -11pm IST would be etched in the minds of six testers who got together online to test the ‘Areca Backup’ application. This session marked the eighth session of BWT. It was exactly two months since the concept of ‘Bangalore Weekend Testers’ originated.
In their own words… “Areca-Backup is a file backup software that supports incremental, image and delta backup on local drives or FTP servers. Areca-Backup also allows you to browse your backups and navigate among different version of the files contained in your archives.”
All the testers were geared up for the testing session. The application had been downloaded.
What next? What about the MISSION?
The mission for this session was as special as the session.
The following mission was given to the testers: choose one quality criterion out of Installability / Usability / Performance / Reliability / Compatibility / Testability, and stick to that criterion for the entire session while testing the ‘Areca Backup’ application.
Special THANKS to Pradeep who suggested this mission.
Each tester was very enthusiastic on hearing the mission and started their journey of exploring the product in order to find valuable information. The testing session started at 09.03pm and lasted till 10.03pm IST. We had the Discussion session soon after the testing session.
Each tester was supposed to reveal the mission they chose and the approach they followed during the testing session. They also had to highlight any challenges they faced and any specific learning from this session. The individual tester’s experience of the BWT 8 session was the icing on the cake :)
Sudhakar, the first tester to send in his Test Report, started off the discussion session. He was very clear in his mission, “Find issues”, and he chose Usability as his quality criterion. One interesting thing about his whole approach to testing this application was his expectation: “The application should guide the user”
With focus on data validations, Sudhakar was frustrated by the not-so-good validation implemented. One major challenge, apart from the poor validation, was the time taken to understand the application. Not understanding the product fully prevented him from exploring it to a greater degree. Finally, Sudhakar felt that other than the lack of time, the overall experience of participating in this session was good.
We moved on to Vasupratha’s experience. Vasupratha echoed Sudhakar’s concern about the lack of time. Usability was the quality criterion once again. Vasupratha felt that additional time for testing would have helped in better exploration of the product.
Next was Parimala’s turn. A different quality criterion, Installation, was chosen. The mission set by her was straightforward: “To test the installability of Areca 7.1.5”. Following an Exploratory approach, Parimala gave a lot of valuable information in terms of bugs. As the number of installation steps was minimal, Parimala did not face a lot of challenges. At the same time, a particular intermittent bug was playing hide and seek with her. :)
Parimala learnt some new scenarios to test once she took up Installation as her criterion. The new learning (new scenarios) helped her do a round of Scenario Testing. With this being a good experience, she wanted to do some functional testing in the near future. :)
Gunjan was ready to share her experiences. Her mission was decided more by circumstances than by choice. Usability was her first choice, but when launched, the application gave an error about a missing .dll file. So Gunjan shifted her focus from Usability to Installability, as she had to uninstall and reinstall the application.
With an exploratory approach to her rescue, Gunjan delved deep into issues in installation and uninstallation. Some interesting issues greeted Gunjan even though a System Restore was tried to get the application working. The help file was one of the sources of information from which she tried out scenarios. Her biggest learning was to ensure the system is in the correct condition before testing any application. Gunjan, a first-timer at BWT, enjoyed herself and found it interesting to think “OUT OF THE BOX”. This was the first time she had tested any software outside her office work.
Now it was Bhargavi’s turn. Bhargavi’s mission focused on finding problems with Performance as the quality criterion. Following an Exploratory approach helped Bhargavi face many challenges. The major challenge was the difficulty in understanding the features and knowing where and how to start modeling the application. Some bugs pertaining to other quality criteria slowed down Bhargavi’s progress.
She had her share of learning too. As she took up the “Performance” quality criterion, which she hadn’t tested for before, she learnt new ideas to test. This boosted her confidence. Bhargavi enjoyed testing the product from a different perspective: focusing on only one quality criterion. Her tests forced 100% CPU usage as well as low disk space.
The mission taught Bhargavi, who habitually concentrated on all quality criteria, to concentrate on one particular criterion. Parimala added a point about how testers find interesting and intriguing issues when focus is on a small part of the application.
Finally, it was my turn. I chose “Performance” as the quality criterion for a simple reason: I had never tested for performance alone before. I too followed an Exploratory approach, with a toolkit consisting of Process Explorer, Windows Task Manager, MS Excel, Notepad, WebEx Recorder and the Date and Time Properties window.
The biggest challenge for me was to learn the product quickly. The help file helped me to some extent. Once I understood how to back up, I started with a 23GB folder, and that was my biggest mistake of the day. :(
Expecting the backup software to handle 23GB of data within 15 minutes was very foolish on my part. So I spent fifteen minutes watching the progress bar of the backup process.
On trying a 4MB file, the backup completed within a matter of seconds. I glanced through the report generated after the backup. A bug in the report took away ten precious minutes.
My biggest learning from this exercise was to prepare test data while the system is being modeled. Also, having unrealistic goals (read: a 23GB folder) does not help the cause.
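In hindsight, preparing graded test data up front is only a few lines of Python. Here is a minimal sketch, assuming files of random bytes are good enough for timing a backup; the folder names and sizes are my own illustration, not part of the session:

```python
import os

CHUNK_MB = 1024 * 1024  # one megabyte per write

def make_test_file(path, size_mb):
    """Create a file of size_mb megabytes of random bytes.

    Random data does not compress well, so backup timings stay honest.
    """
    with open(path, "wb") as f:
        for _ in range(size_mb):
            f.write(os.urandom(CHUNK_MB))

# Graded source folders, ready before the testing session starts.
for size in (30, 60, 90):
    folder = f"backup_src_{size}mb"
    os.makedirs(folder, exist_ok=True)
    make_test_file(os.path.join(folder, "data.bin"), size)
```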
Later, I tried 30MB, 60MB and 90MB folders to monitor the performance, but it was almost the end of the testing session. Bharath highlighted the value www.testersdesk.com added in testing the Performance criterion. The experience was good, as it marked the successful completion of two months of Weekend Testing.
Every BWT session gave me a different experience. Right from BWT 1, where Parimala and I tested www.vischeck.com, to BWT 8, every session has been a unique, thought-provoking learning experience.
I’d like to thank all the testers, Pradeep and the BWT members: Manoj, Parimala and Sharath for their continuous support and hard work.
Looking forward to BWT 9: A new product, new testers and a new experience. See you there. :)
This is a post highlighting the conversation I had with one of my programmers.
Programmer (P): Hi Ajay! I need your help in reproducing the defect #abcdef
Defect #abcdef: Step 1: ... Step 2: Enter a value of 50 in the text field. Step 3: ...
Ajay (A): Sure, How can I help you?
P: I'm unable to reproduce the defect.
A: Which OS are you trying it on? I had logged it on Win 2003.
P: Yes, I know that. It would be easier if I could reproduce it on Win XP.
A: (he he, smiles) OK, I'll reproduce the defect first on Win 2003 and then we can try on Win XP too.
P: OK, great.
A: Step 1, Step 2, Step 3 and here it is, REPRODUCIBLE!!!
P: OK, let me try it. Step 1, Step 2, Step 3 and... ??? Where's the defect?
A: Oh!!! Let me try again. Step 1, Step 2, Step 3 and again REPRODUCIBLE!!!
P: (Smiles) Step 1, Step 2, Step 3: NOT REPRODUCIBLE.
Silence for a few seconds.
A: (thinking about what could be different) Hmmm, maybe the speed with which I execute the steps is different from yours.
P: Maybe. Very little chance of that happening.
A: OK, let me try a few consecutive times. Steps 1, 2, 3: REPRODUCIBLE. Steps 1, 2, 3: REPRODUCIBLE.
P: Oh!!! How do YOU reproduce it, man!!! See, Steps 1, 2, 3: NOT REPRODUCIBLE.
P: Maybe you are pressing 'Enter' after entering the number.
A: Hmmm, I'm not pressing the Enter key.
P: Then maybe a single click causes this problem. Or maybe a double click to select the field before entering the number.
A: (thinking: WOW, so many factors!!! Let him go on)
P: Or this might happen if you select the text by 'Click and Drag' using the mouse.
P: OK, I'll look into this issue. Thanks for your time.
---------- END OF CONVERSATION ----------
This particular conversation refreshed my memory of how many different factors affect a single entry in a text field.
> Operating System
> Response time
> If the focus is on the field or not
> After entering the number, did the user press Enter or Tab?
> Did the user double-click on the field to select the default text?
> Did the user delete the text before typing in the new text?
> Did he press 'Delete' or 'Backspace'?
I'm sure there are many more factors related to the single entry in the text field.
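These factors multiply fast. As a rough illustration, here is a minimal Python sketch that enumerates the combinations; the factor names and values are my own, purely illustrative:

```python
import itertools

# Factors that might influence a single entry in a text field,
# with a few illustrative values each (names and values are made up).
factors = {
    "os": ["Win 2003", "Win XP"],
    "focus": ["field focused", "field not focused"],
    "commit key": ["Enter", "Tab", "none"],
    "selection": ["double-click", "click and drag", "single click"],
    "clearing": ["Delete", "Backspace", "not cleared"],
}

# Every combination of factor values is a distinct test condition.
combinations = list(itertools.product(*factors.values()))
print(len(combinations), "conditions from just", len(factors), "factors")  # 108

# A few sample conditions, to show what a single test actually pins down.
for combo in combinations[:3]:
    print(dict(zip(factors.keys(), combo)))
```

Five factors with two or three values each already yield 108 distinct conditions, which is one reason "reproducible on my machine" and "not reproducible" can both be true at once.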
The point I want to highlight here is "How useful is it to have a conversation with a programmer about the product?"
In my case, it was useful. Have you experienced such an interaction? Do let me know.
Please feel free to share such experiences (good and bad).
How many times have you enjoyed a software while testing it?
One such occasion was in the Bangalore Weekend Testers Session held on Saturday, the 5th of September 2009 3pm - 5pm IST.
Twelve testers agreed to test TuxPaint, a free, award-winning drawing program for children ages 3 to 12. It combines an easy-to-use interface, fun sound effects, and an encouraging cartoon mascot who guides children as they use the program.
WOW!!! The kid in every tester came out and literally played with the software, so much so that 60 bugs surfaced, which the testers at work carefully noted down.
Amazing to know that if you enjoy doing something, you can excel at it too!!!
Please do treat yourself to this software, available at www.tuxpaint.org. If you want to know the issues beforehand, check out the list of bugs found by us here.
Thanks to Bill Kendrick who permitted us to test and publish the report.
I'm sorry if you missed this week's session too, but you have a chance next week. If you want to register for next week, email email@example.com and watch this space.
I’m very happy to have participated in five consecutive BWT Sessions. Thanks to all the members for their active participation.
The 5th BWT Session was on 30th August 2009 from 9.30pm to 11.30pm IST.
After a hectic day of writing my MS exams, it was time for the BWT Session.
I logged in at 8.30pm IST to find Amit and Anup online. Slowly as the clock ticked 9.15pm IST, members started joining.
Finally at 9.30pm IST, we were a group of seven testers ready to prove a point.
“Every product has bugs even if it’s a Google Product”
Application to Test: Google Calendar Testing Session: 9.30pm to 10.30pm IST Discussion Session: 10.30pm to 11.30pm IST.
From the learning perspective, it was challenging for me, even though I could find bugs in the product.
After the testing session, the discussions were good, heated and interesting. Members discussed their plans of attack, their learning experiences and their feel of the product, rated the product, and finally submitted their reports.
Some of the questions out of the discussion: 1. Why do some people assume that if it’s a Google product, it should not have bugs? 2. Should a product encourage easy learning? Is it an issue if it doesn’t? 3. Should products be compared during testing? To what extent must that comparison be done?
Expecting all the questions here???
No way am I going to list out all the questions, answers, comments and opinions here. If you are interested in joining us, email firstname.lastname@example.org
As promised, the test report is an improvement. You can check that out for yourself. We followed a common template this time.
The discussion was better than last time. Instead of telling the bugs in a round-robin manner, we discussed what we felt, what we tested, why we tested it, and what we learnt. Problems faced, challenges, tools used, questions and ideas were exchanged. And we had a lot of fun discussing.
Some of the questions which came up in the discussion: 1. Should a tester learn the product to find bugs? Follow this question on Test Republic here. 2. How working without pressure brings out the best in some testers? 3. Should we test the full application bit by bit or any one feature fully?
Will it work? Will it be good? Will it be enjoyable?
Can we manage? Will everyone benefit out of it? Will everyone have fun out of it? Will it be a learning experience? Will everyone agree to our motto? Will there be heated arguments? Will everyone come on time?
How many bugs will we find? How long will the session be?
How will we coordinate? Do we need more than one software?
What if we face any distractions? What if there is a power cut?
Ufff, all these questions were answered once the Bangalore Weekend Testing Session started. Everyone came on time and wow what a session we had!!!!
Date: 15th August 2009 Session started at 9.30pm IST and ended at 11.30pm IST.
Testing session: 9.30pm to 10.30pm Discussion Session: 10.30pm to 11.30pm
Every member participated actively, and bugs flowed (literally), so much so that the discussion session was extended from 10.30pm-11.00pm to 10.30pm-11.30pm IST. :)
I'll not hide the report anymore. Please find the report shared at Scribd.
Most Important: We promise to improve our bug reporting skills along with suitable screenshots.
About the website: TinyURL is a web service that provides short aliases for redirection of long URLs. Kevin Gilbertson, a web developer, launched the service in January 2002 so that he would be able to link directly to newsgroup postings which frequently had long and cumbersome addresses.
Our Testing Session highlights: the entire session was conducted over Group Chat on Gmail. We started our search for software to test around 1145hrs on Aug 08th 2009. The search continued for an hour, with Eyeos, Barcode4J and Piwik grabbing our attention.
A lack of prerequisites for the above software forced us to test TinyURL.
Bugs were communicated to the group as and when they were found. The tests were based on learning from the tests conducted by the other testers.
Sharath's tests focussed on the security issues. We wished we had a proxy network set up; he'd have loved to test on one.
Manoj's tests focussed on usability and the Custom Alias feature. He highlighted the disadvantages of using the Custom Alias feature.
I focussed on the general functionality of the website.
It was fun testing, coupled with good learning for me. Though the testing session lasted only a little over an hour, the lessons gained will last a long time. Thanks to Sharath and Manoj for their determination and passion at odd hours on a Sunday.
This is an article to highlight the importance of TestersDesk in my test data generation activity. As a tester, I have to generate a lot of test data and also take care of the Pesticide Paradox.
One of the most useful tools, irrespective of the type of application I test, is the Quantified String Generator.
I use it quite often to check the boundary conditions of most of the fields in the application. This way, consistency within the application is checked. I am able to hit the critical condition of most of my defects using the Quantified String Generator.
Apart from the commonly used string generators, I also use ‘Test Data Generation Toolkit’ regularly.
Each of the tools in the toolkit has its own importance: ‘Size based File Generator’, ‘Password Test Generator’, ‘Person Name Generator’, ‘Email address generator’, ‘Date-Time Stamp generator’ and finally ‘OS Information Scripts’.
There was a time when we were struggling and wasting a lot of time generating the test data.
Also, we were pretty sure that the test data was very limited. The Pesticide Paradox was also creeping in.
But now that TestersDesk has come up with such a wonderful initiative for test data generation, we have a lot of fun using the generated test data and our testing efficiency has increased.
With TestersDesk, test data generation takes less than two minutes.
I happily recommend TestersDesk to every tester and I can say with full confidence that TestersDesk will save your time, increase your testing efficiency and is easy to use. With the seed value feature, you can be sure that the same test data is being used.
If you do need a different set, changing the seed value would do the trick.
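The seed idea is easy to demonstrate in a few lines. Here is a minimal Python sketch of seeded, repeatable string generation; this is my own illustration of the concept, not TestersDesk's actual implementation, and the boundary lengths are assumptions for a field documented to accept 1-50 characters:

```python
import random
import string

def quantified_strings(seed, lengths, alphabet=string.ascii_letters + string.digits):
    """Generate one random string per requested length.

    The same seed always yields the same strings, so a failing
    test can be re-run later with identical test data.
    """
    rng = random.Random(seed)  # local RNG; leaves global random state alone
    return {n: "".join(rng.choice(alphabet) for _ in range(n)) for n in lengths}

# Boundary lengths around a hypothetical 1-50 character limit.
data = quantified_strings(seed=42, lengths=[0, 1, 2, 49, 50, 51])
for length, value in data.items():
    print(length, repr(value))

# Same seed, same data: the run is fully reproducible.
assert data == quantified_strings(seed=42, lengths=[0, 1, 2, 49, 50, 51])
```

Changing `seed=42` to any other value gives a fresh set, which is exactly the trick described above.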
Thank you TestersDesk. We look forward to more such useful features.
I had to check a fix provided by a programmer to solve an issue.
I had to replace two files: one an .exe and the other an .ini file.
Once I had replaced the files, changed their contents to suit my test and gone through the steps to reproduce the issue, the issue was still reproducible.
So I sent an email to the programmer saying that the issue was not resolved and the fix did not solve it. He wanted to have a look at my machine and try the scenario once.
Before handing over control of the machine to the programmer, I copied the two files which were part of the test. This way, I could compare the files after the programmer had completed his test.
The programmer was in Germany and was accessing the machine. I could not see what actions he was carrying out on the machine.
After ten minutes, I got a reply to that email: the issue is solved; please re-test to confirm.
I was surprised, and along with my manager, I wanted to carry out the test again. Before conducting the test, I wanted to compare the current file (after the programmer conducted his test) with the file I had used in the initial test.
And surprisingly, the contents were slightly different.
The initial file had 'Timer: 5:00:00 PM' and the new file had 'Time: 5:00:00 PM'.
The programmer had also asked me to include an additional step so that the issue would be resolved.
The additional step was to exit the service and run the .exe so that it would re-read the contents of the modified .ini file. I had not run the .exe after modifying the .ini file.
I replied that the contents had been changed and that the additional step had been missed. These were the two reasons why I was able to reproduce the issue and he wasn't.
1. It is always safe to have a backup of files from before and after conducting the test (a minimal sketch of this follows the list). Again, it depends on what value this step adds to the overall mission. In my case, if I had not compared with the original file, there was very little chance that this issue would have been fixed before the customer found it.
2. Programmers have a natural tendency to fix issues on the fly. It is in their nature to fix issues, and they do not always consider it important to inform the tester about the changes. In my case, it turned out to be 100% true: the programmer never informed me about the change of text from 'Timer' to 'Time'.
3. I should have paid a bit more attention and concluded that the .exe had to be run to re-read from the .ini file. This taught me to relate the interaction between files and the application to the tests I conduct using those files.
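As a small companion to lesson 1, here is a minimal Python sketch of the backup-and-compare step. The file names are hypothetical stand-ins for the .exe and .ini from this story:

```python
import hashlib
import shutil
from pathlib import Path

def snapshot(files, dest_dir):
    """Copy the files under test into dest_dir and record their SHA-256 hashes."""
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    hashes = {}
    for f in map(Path, files):
        shutil.copy2(f, dest / f.name)           # keep a byte-for-byte backup
        hashes[f.name] = hashlib.sha256(f.read_bytes()).hexdigest()
    return hashes

FILES = ["service.exe", "config.ini"]  # hypothetical names for the two files

# Before handing the machine over: snapshot the files under test.
before = snapshot(FILES, "backup_before")

# ... the programmer works on the machine remotely ...

# Afterwards: re-hash and flag anything that changed behind our back.
after = snapshot(FILES, "backup_after")
for name in before:
    if before[name] != after[name]:
        print(f"{name} was modified during the remote session")
```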
Simple things like having a backup of important files helped me find an important issue before it knocked the customer's door :)
Last week, I was lucky to have captured this image live.
This is about a traffic signal: Not the movie, this is about the signalling device used to control traffic. On one of the main roads near my residence, the traffic signal displayed one of the rare-to-be-seen visuals.
As seen in the image, this seems to be a signal to guide the pedestrian.
I would expect only one image to be displayed at this traffic signal at a time:
'Red Pedestrian', meaning: do not cross now, or
'Green Pedestrian', meaning: it is safe to cross now.
I was waiting at the other end of the traffic light, waiting for it to turn 'Green'.
And I was surprised at what I saw for the next 128 seconds: both the 'Red Pedestrian' and the 'Green Pedestrian' images were displayed simultaneously, next to each other.
Then it turned to the 'Green' image for a few seconds and alternated between the 'Green' and 'Red' images at a set time interval. I was not able to observe that 'two images at once' visual again. :( [Not reproducible?] Even after a week of observation, including watching the signal continuously for 11 hours (10am - 9pm), the same scenario was not displayed.
My question to all the readers and their friends: How costly is it to miss this bug?
First of all, do you agree that it (the display of two images simultaneously) is a bug? Was this race condition not covered during testing? Was this risk too costly to be avoided in the first place? Was this risk covered and yet the bug not fixed?