Wednesday 20 June 2012

Testware

Testware



"Testware" is a term used to describe all of the materials used to perform a test. Testware includes test plans, test cases, test scripts, and any other items needed to design and perform a test.  
Designing tests effectively, maintaining the test documentation, and keeping track of all the test documentation (testware) is all major challenges in the testing effort.
Generally speaking, Testware a sub-set of software with a special purpose, i.e. for software testing, especially for software testing automation
Testware: - Testware is produced by both verification and validation testing methods.
Testware includes test cases, test plan, test report and etc. Like software, testware should be placed under the control of a configuration management system, saved, faithfully maintained.



Saturday 16 June 2012

Test suite & Test Log


Test suite

The most common term for a collection of test cases is a test suite. The test suite often also contains more detailed instructions or goals for each collection of test cases, and it typically contains a section where the tester identifies the system configuration used during testing. A group of test cases may also contain prerequisite states or steps and descriptions of the tests that follow. Collections of test cases are sometimes incorrectly termed a test plan, a test script, or even a test scenario.
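
As a concrete illustration, here is a minimal sketch of a test suite built with Python's standard unittest framework. The LoginTest class and its test methods are hypothetical placeholders, not part of any real project.

import unittest

# Hypothetical test cases, for illustration only.
class LoginTest(unittest.TestCase):
    def test_valid_credentials(self):
        self.assertTrue(True)  # stand-in for a real assertion

    def test_invalid_password(self):
        self.assertTrue(True)  # stand-in for a real assertion

def build_suite():
    # A test suite groups related test cases so they can be run together.
    suite = unittest.TestSuite()
    suite.addTest(LoginTest("test_valid_credentials"))
    suite.addTest(LoginTest("test_invalid_password"))
    return suite

if __name__ == "__main__":
    unittest.TextTestRunner(verbosity=2).run(build_suite())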

Test Log

A test log provides a chronological record of all relevant details about the execution of the test cases.

The following table shows a template for a test log:


Column | What to enter
Requirement Number Tested | Use the requirement number included in the Requirements Traceability Matrix
Test Case Number | Specify the unique test number assigned to the test case
Test Case Description | Provide a brief description of the functionality the case will test
Date Tested | mm/dd/yy
Test Stage Tested | Unit, Functional, Integration, System, Interface, Performance, Regression, Acceptance, Pilot
Pass (P) | P
Fail (F) | F
TOTAL (Pass/Fail) | Totals of the Pass and Fail columns (0 and 0 while the log is empty)

Add more rows as needed, and remove the guidance text prior to use.
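
A minimal sketch of recording rows in this log format from a script, assuming a CSV file as the storage; the file name and sample values are hypothetical:

import csv
import os
from datetime import date

FIELDS = ["Requirement Number Tested", "Test Case Number",
          "Test Case Description", "Date Tested", "Test Stage Tested",
          "Pass (P)", "Fail (F)"]

def log_result(path, req_no, tc_no, description, stage, passed):
    new_file = not os.path.exists(path)
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        if new_file:
            writer.writerow(FIELDS)  # write the header row once
        # Append one chronological row matching the template above.
        writer.writerow([req_no, tc_no, description,
                         date.today().strftime("%m/%d/%y"), stage,
                         "P" if passed else "", "" if passed else "F"])

log_result("test_log.csv", "REQ-101", "TC-001",
           "Verify login with valid user", "Functional", True)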

Thursday 7 June 2012

Test cases for a pen


Test cases for a pen
  • To check the pen type
  • To check whether the pen cap is present or not
  • To check whether the pen is filled with ink or not
  • To check whether the pen writes or not
  • To check the ink color, i.e. black or blue
  • To check the pen color
  • To check whether the pen can write on all types of paper or not
  • To check the ink capacity of the pen
  • To check whether the pen body is made of fibre or plastic

Wednesday 6 June 2012

Test case for withdraw module in banking project


 Test case for withdraw module in banking project

Step 1: When the balance in the account is nil, try to withdraw some amount (amount > 0); it should display the message "Insufficient funds in account".
Step 2: When the account has some balance, try to withdraw an amount greater than the balance; it should display "Insufficient funds in account".
Step 3: When the account has some balance, enter an amount less than or equal to the balance; the correct amount should be withdrawn from the account.
Step 4: When the account has some balance, enter the amount as 0; it should display a message that the withdrawal amount should be greater than 0 and in multiples of hundreds (this varies depending on the requirements documents).
In the case of a mandatory minimum balance in the account:
Step 5: When the account has some balance, try to withdraw the whole amount; it should display the message "Minimum balance should be maintained".
Step 6: When the account balance equals the minimum balance, try to withdraw any amount; it should display the message "Minimum balance should be maintained".
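
A minimal sketch of how the first few steps might be automated with unittest; the Account class and its messages are assumptions for illustration, not the actual banking module:

import unittest

# Hypothetical Account class, assumed for illustration only.
class Account:
    def __init__(self, balance=0, minimum=0):
        self.balance, self.minimum = balance, minimum

    def withdraw(self, amount):
        if amount <= 0:
            raise ValueError("Withdrawal amount should be > 0")
        if amount > self.balance:
            raise ValueError("Insufficient funds in account")
        if self.balance - amount < self.minimum:
            raise ValueError("Minimum balance should be maintained")
        self.balance -= amount

class WithdrawTest(unittest.TestCase):
    def test_step1_nil_balance(self):
        with self.assertRaises(ValueError):
            Account(balance=0).withdraw(100)

    def test_step2_amount_exceeds_balance(self):
        with self.assertRaises(ValueError):
            Account(balance=500).withdraw(600)

    def test_step3_valid_withdrawal(self):
        acc = Account(balance=500)
        acc.withdraw(200)
        self.assertEqual(acc.balance, 300)

if __name__ == "__main__":
    unittest.main()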

Positive and Negative test cases


 Positive and Negative test cases

Positive testing = (not showing an error when not supposed to) + (showing an error when supposed to). If either of the situations in parentheses happens, you have a positive test in terms of its result, which is not what the test was hoping to find: the application did what it was supposed to do. Here the tester tends to enter only valid values, according to the requirements.

Negative testing = (showing an error when not supposed to) + (not showing an error when supposed to). (Usually these situations crop up during boundary testing or cause-effect testing.) If either of the situations in parentheses happens, you have a negative test in terms of its result, again not what the test was hoping to find: the application did what it was not supposed to do. The tester tends to enter invalid values, which may crash the application.

For example, in a registration form the Name field should accept only alphabets. For positive testing, the tester enters only alphabets; the application should run properly and accept the input. For negative testing, the tester enters numbers and special characters in the same field; if the application properly rejects them, the negative test is successful.
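
A minimal sketch of the registration-form example, assuming a hypothetical validate_name function that accepts alphabets (and spaces) only:

import re
import unittest

# Hypothetical validator for the Name field: alphabets and spaces only.
def validate_name(name):
    return bool(re.fullmatch(r"[A-Za-z ]+", name))

class NameFieldTest(unittest.TestCase):
    def test_positive_alphabets_accepted(self):
        self.assertTrue(validate_name("John Smith"))

    def test_negative_digits_rejected(self):
        self.assertFalse(validate_name("John123"))

    def test_negative_special_characters_rejected(self):
        self.assertFalse(validate_name("John@Smith"))

if __name__ == "__main__":
    unittest.main()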

Tuesday 5 June 2012

Test cases for coffee machine


Test cases for coffee machine
1. Plug in the power cable and press the on button; the indicator bulb should glow, indicating the machine is on.
2. Whether there are three different buttons: Red, Blue and Green.
3. Whether Red indicates Coffee.
4. Whether Blue indicates Tea.
5. Whether Green indicates Milk.
6. Whether each button produces the correct output (Coffee, Tea or Milk).
7. Whether the desired output (Coffee, Tea or Milk) is hot or not.
8. Whether the quantity exceeds the specified limit of a cup.
9. Whether the power is off (including the power indicator) when the off button is pressed.
10. Verify the output without Coffee Mix, Milk or Tea Mix in the machine.

Test cases for One Rupee Coin Box (Telephone Box)



Test cases for One Rupee Coin Box (Telephone Box)

Positive test cases:


TC1: Pick up the handset.
Expected: Should display the message "Insert one rupee coin".
TC2: Insert the coin.
Expected: Should display the message "Dial the number".
TC3: When you get a busy tone, hang up the receiver.
Expected: The inserted one rupee coin comes out of the exit door.
TC4: Finish the conversation and hang up the receiver.
Expected: The inserted coin should not come out.
TC5: During the conversation, in the case of a local call (assume the duration is 60 seconds), when 45 seconds are completed.
Expected: It should prompt you, with beeps, to insert another coin to continue.
TC6: In the above scenario, if another coin is inserted.
Expected: 60 seconds will be added to the counter.
TC7: In the TC5 scenario, if you don't insert one more coin.
Expected: The call gets ended.
TC8: Pick up the receiver, insert a one rupee coin, and dial the number after hearing the dial tone. Assume the call got connected and you are getting the ring tone; immediately end the call.
Expected: The inserted one rupee coin comes out of the exit door.

Error guessing & Error seeding


Error guessing & Error seeding

Error guessing is a test case design technique where the tester has to guess what faults might occur and design tests to expose them.

Error seeding is the process of intentionally adding known faults to a program in order to monitor the rate of their detection and removal, and to estimate the number of faults remaining in the program.
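
Seeding figures are often plugged into the classic estimate: if S faults are seeded, s of them are found, and n indigenous (real) faults are found in the same period, the total number of indigenous faults is estimated as n * S / s. A minimal sketch with hypothetical numbers:

# Sketch of the classic fault-seeding estimate.
def estimate_remaining_faults(seeded, seeded_found, indigenous_found):
    if seeded_found == 0:
        raise ValueError("No seeded faults found yet; cannot estimate.")
    estimated_total = indigenous_found * seeded / seeded_found
    return estimated_total - indigenous_found  # faults still undetected

# Hypothetical: 20 faults seeded, 15 of them found, 45 real faults found.
# Estimated total real faults: 45 * 20 / 15 = 60, so about 15 remain.
print(estimate_remaining_faults(seeded=20, seeded_found=15, indigenous_found=45))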

Monday 4 June 2012

Web Testing


Web Testing
Web testing: Web testing is testing of the usability, functionality, security, consistency and performance of websites. It is unique because the number of users a website can have can never be predicted, and difficult because of the technical complexities of a website and the various types of browsers. Assuring website quality requires conducting tests automatically and repetitively, and it is a challenge in software quality assurance.

Importance of Web Testing: It ensures quality. It is quality that makes the user come back, and quality that gives one website an edge over another. The quality of a website directly reflects on the quality of the company and its products. Poor quality will cost a lot in poor customer relations, lost corporate image, and even lost sales revenue. Unhappy users are sure to quickly depart to a different site.

When and where should Web testing be performed: Web testing is performed before going live, in the few months just before the website is launched. Certain automated web testing scripts should be run periodically to verify consistency, and testing should also be performed during every major change. Care should be taken that the testing activity does not impact users' performance; hence, ideally, web testing should be done on a test website (test bed). All development changes should be made on the test bed, and once the test bed has been thoroughly tested, they should be migrated to the live environment. This also ensures that there are no under-construction pages in the live environment.

Web testing of B2B (Business to Business) sites: Testing should emphasize features, time to fetch information, business processes, search facilities, etc. Aesthetics, looks and so on should be taken up for testing toward the end of the testing phase; in other words, their priority is low. The emphasis is on checking whether the user understands the process and the way the website has been mapped to the business process. Special emphasis is placed on user access, confidentiality of information, authorization, order cancellation, order amendment, etc. Payment methods need thorough security testing.

Web testing of B2C (Business to Consumer) sites: Aesthetics (look and feel), usability, content and the like are a definite priority. These sites require more emphasis on user friendliness, navigation, search facilities, predictability in terms of content distribution, etc. Features that attract and retain visitors, such as chat, news, newsletters, free email, message boards, forums and online help, need testing depending upon the target group, who could be anybody: children, adults, professionals, women, etc. Help available on products and pages needs thorough content and usability testing. The correctness and completeness of disclaimers, terms and conditions, etc. should also be checked. Special emphasis should be laid on accepting payments through credit card, handling of vendor-paid parcels (VPP), etc.

What to test in Web Testing:
Functionality and Content: Do all the critical functions, especially connections to legacy systems/databases, work? Does the content of critical pages match what is supposed to be there? Does all dynamically generated content work properly? Do key phrases persist in highly changeable pages? Do critical pages maintain quality content from version to version?

Usability and Navigation: How well do all of the parts of the website hold together? Are all links inside and outside the website working? Do all of the images work? Are there parts of the website that are not connected? Is the structure simple for the user to understand?

Regression and Accuracy: How does one change to the website affect other parts? Are today's downloaded copies of the pages the same as yesterday's? Is the data presented to the user accurate enough? How much has the website changed since the last upgrade? How have the changed parts been highlighted?

Performance/Stress: Does the website server respond to the browser request within certain performance parameters? What is the end-to-end response time after SUBMIT? Are there parts of the site that are so slow the user may discontinue working? Is the Browser --> Web --> Website --> Web --> Browser connection quick enough? How does the performance vary by time of day, and by load and usage?

Security/Integrity: How are access rights being handled? How secure is the data input by users? How secure is the website content itself? How are financial transactions, if any, handled? How does the website encrypt data?
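
As one small illustration of an automated check of this kind, here is a sketch that verifies a page responds successfully within a time budget, using only Python's standard library; the URL and threshold are placeholders:

import time
import urllib.request

def check_page(url, max_seconds=2.0):
    # Fetch the page and time the end-to-end request.
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=10) as response:
        status = response.status
        body = response.read()
    elapsed = time.perf_counter() - start
    return status == 200 and len(body) > 0 and elapsed <= max_seconds

# Placeholder URL; point this at the page under test.
print(check_page("https://example.com/"))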

Friday 1 June 2012

Traceability Matrix



Traceability Matrix
A traceability matrix is a powerful tool, and it can be of use to many people regardless of the audience. It clears confusion, settles disputes, shows the coverage of requirements in specs, code, tests, etc., and exposes gaps. It shows real project progress and is a great tool in managing change. It can be used to assist with project management, to establish design/development/test priorities, to identify risk areas, to determine what (if any) third-party technologies are needed, to determine the tools needed for design, development and testing, and so on.

NOTES!
This is just an illustration in an MS Word document. Ideally one would assemble such a matrix in a spreadsheet or database to allow for querying. Custom views can be created to show only those columns that fit the specific needs of the user.
Simply stated, as in real-life application of such a matrix, the white space represents work to be completed.
The examples within are at varying levels of requirement decomposition. This is intentional: it shows the need for more work (decomposition). It also demonstrates a common challenge to test engineering, namely that some of the requirements are untestable. Ideally, those requirements would be decomposed to a state of testability.
Columns can be modified, added, or deleted/hidden to fit specific purposes.
The following ten columns will be in the traceability matrix:
1. Requirement ID (the requirement ID provided in the SRS document)
2. Requirement (requirement description)
3. High Level Design (document reference)
4. Implementation Design (implemented or not)
5. Source Code (component/class/program name)
6. User Documentation (preparation)
7. Unit Test Case ID (unit test case IDs)
8. Integration Test Case ID (integration test case IDs)
9. System Test Case ID (system test case IDs)
10. Release / Build Number (build release number)
It gives the coverage of test cases at the different levels of testing, as sketched in the example below.
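
A minimal sketch of assembling such a matrix programmatically so that it can be queried; the row values are hypothetical:

import csv

COLUMNS = ["Requirement ID", "Requirement", "High Level Design",
           "Implementation Design", "Source Code", "User Documentation",
           "Unit Test Case ID", "Integration Test Case ID",
           "System Test Case ID", "Release / Build Number"]

rows = [
    # Hypothetical rows; empty cells are the white space (work to be done).
    ["REQ-001", "User can log in", "HLD-2.1", "Implemented", "login.py",
     "UG-4.3", "UT-017", "IT-005", "ST-002", "1.4"],
    ["REQ-002", "User can reset password", "HLD-2.2", "", "", "",
     "", "", "", ""],
]

with open("traceability_matrix.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(COLUMNS)
    writer.writerows(rows)

# Query example: requirements with no system test case yet (coverage gaps).
print("No system test coverage:", [r[0] for r in rows if not r[8]])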

Defect age & Build Interval Period


Defect age


Defect age is the time gap between when a bug is raised and when it is resolved. Defect age analysis suggests how quickly defects are resolved, by category. Defect age reports are a type of defect distribution report that shows how long a defect has been in a particular state, such as Open.

Build Interval Period:
The time gap between two consecutive build versions is called the build interval period.
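
A minimal sketch of computing defect age in days; the dates are hypothetical:

from datetime import date

# Defect age = resolution date minus the date the bug was raised.
def defect_age_days(raised, resolved=None):
    return ((resolved or date.today()) - raised).days

# Hypothetical defect raised on 1 June 2012, resolved on 7 June 2012.
print(defect_age_days(date(2012, 6, 1), date(2012, 6, 7)))  # prints 6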


BVA & ECP


Boundary Value Analysis (BVA): BVA differs from equivalence partitioning in that it focuses on "corner cases", values at or just outside the range defined by the specification. This means that if a function expects all values in the range of -100 to +1000, test inputs would include -101 and +1001. BVA attempts to derive such boundary values and is often used as a technique for stress, load or volume testing. This type of validation is usually performed after positive functional validation has completed successfully, using the requirements specifications and user documentation.

Equivalence Partitioning: An approach where classes of inputs are categorized for product or function validation. This usually does not cover combinations of inputs, but rather a single representative value per class. For example, for a given function there may be several classes of input that may be used for positive testing. If a function expects an integer and receives an integer as input, this would be considered a positive test assertion. On the other hand, if a character or any input class other than an integer is provided, this would be considered a negative test assertion or condition.

Example:
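A minimal sketch in Python, assuming a hypothetical is_valid function whose accepted range is the -100 to +1000 case described above:

import unittest

# Hypothetical function under test: accepts integers in [-100, 1000].
def is_valid(value):
    return isinstance(value, int) and -100 <= value <= 1000

class BvaEcpTest(unittest.TestCase):
    def test_boundary_values(self):
        # BVA: values at and just outside each boundary.
        for value, expected in [(-101, False), (-100, True),
                                (1000, True), (1001, False)]:
            self.assertEqual(is_valid(value), expected)

    def test_equivalence_classes(self):
        # ECP: one representative value per class of input.
        self.assertTrue(is_valid(500))     # valid integer class
        self.assertFalse(is_valid(5000))   # out-of-range integer class
        self.assertFalse(is_valid("abc"))  # non-integer class

if __name__ == "__main__":
    unittest.main()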