Wednesday 25 July 2012

API Testing

An API (Application Programming Interface) is a collection of software functions and procedures, called API calls, that can be executed by other software applications.

API testing is mostly used for systems that have a collection of APIs that need to be tested. The system could be system software, application software or a library. API testing is different from other testing types, as a GUI is rarely involved. Even though no GUI is involved, you still need to set up the initial environment, invoke the API with the required set of parameters, and then analyze the result. Setting up the initial environment becomes complex precisely because there is no GUI; in the case of an API, you need some other way to make sure the system is ready for testing.

Test Cases for API Testing:
Test cases for API testing are based on the output of the API:
•Return value based on input condition
Relatively simple to test, as the input can be defined and the results validated. Example: it is very easy to write test cases for an API like int add(int a, int b). You can pass different combinations of a and b and validate the results against known values (see the sketch after this list).
•Does not return anything
The behavior of the API on the system has to be checked when there is no return value.
Example: a test case for a delete(ListElement) function will probably require validating the size of the list or the absence of the element in the list.
•Trigger some other API/event/interrupt
If the output of an API triggers some event or raises an interrupt, then those events and interrupt listeners should be tracked. The test suite should call the appropriate API, and assertions should be made on the interrupts and listeners.
•Update data structure
This category is similar to the category that does not return anything. Updating a data structure has some effect on the system, and that effect should be validated.
•Modify certain resources
If the API call modifies some resource, for example updates a database, changes the registry, or kills some process, then it should be validated by accessing the respective resource.
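To make the first two categories concrete, here is a minimal sketch in Java; the add method, the list, and the assertions are hypothetical examples invented for illustration, not part of any particular product.

import java.util.ArrayList;
import java.util.List;

public class ApiTests {

    // Hypothetical API under test: returns a value based on its inputs.
    static int add(int a, int b) {
        return a + b;
    }

    public static void main(String[] args) {
        // Category 1: return value based on input condition.
        // Pass different combinations and validate against known results.
        assert add(2, 3) == 5;
        assert add(-1, 1) == 0;
        assert add(0, 0) == 0;

        // Category 2: no return value; validate the API's effect on the system.
        List<String> list = new ArrayList<>();
        list.add("a");
        list.add("b");
        list.remove("a");            // the delete(ListElement) call
        assert list.size() == 1;     // the list shrank by one
        assert !list.contains("a");  // the element is really gone

        System.out.println("All checks passed.");
    }
}

Run it with java -ea ApiTests so that the assert statements are enabled.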

API Testing Approach
An approach to testing a product that contains an API:
Step I: Understand that API testing is a testing activity that requires some coding and is usually beyond the scope of what developers are expected to do. The testing team should own this activity.
Step II: Traditional testing techniques such as equivalence classes and boundary analysis are also applicable to API testing, so even if you are not too comfortable with coding, you can still design good API tests.
Step III: It is almost impossible to test all the scenarios that are possible with your API. Hence, focus on the most likely scenarios, and also apply techniques like Soap Opera Testing and Forced Error Testing with different data types and sizes to maximize test coverage. The main challenges of API testing can be divided into the following categories.

[Figure: API testing in automation]




Wednesday 18 July 2012

How to write the Test Cases

Level 1: At this level, you write the basic test cases from the available specification and user documentation.
Level 2: This is the practical stage, in which writing test cases depends on the actual functional and system flow of the application.
Level 3: This is the stage in which you group some test cases and write a test procedure. A test procedure is nothing but a group of small test cases, a maximum of about 10.
Level 4: Automation of the project. This minimizes human interaction with the system, so QA can focus on testing the currently updated functionality rather than remaining busy with regression testing.

Thursday 12 July 2012

Tasks of the test leader and tester


Test leader tasks may include:

• Coordinate the test strategy and plan with project managers and others.
• Write or review a test strategy for the project, and test policy for the organization.
• Contribute the testing perspective to other project activities, such as integration planning.
• Plan the tests – considering the context and understanding the test objectives and risks – including selecting test approaches, estimating the time, effort and cost of testing, acquiring resources, defining test levels and cycles, and planning incident management.
• Initiate the specification, preparation, implementation and execution of tests, monitor the test results and check the exit criteria.
• Adapt planning based on test results and progress (sometimes documented in status reports) and take any action necessary to compensate for problems.
• Set up adequate configuration management of testware for traceability.
• Introduce suitable metrics for measuring test progress and evaluating the quality of the testing and the product.
• Decide what should be automated, to what degree, and how.
• Select tools to support testing and organize any training in tool use for testers.
• Decide about the implementation of the test environment.
• Write test summary reports based on the information gathered during testing.

Tester tasks may include:

• Review and contribute to test plans.
• Analyze, review and assess user requirements, specifications and models for testability.
• Create test specifications.
• Set up the test environment (often coordinating with system administration and network management).
• Prepare and acquire test data.
• Implement tests on all test levels, execute and log the tests, evaluate the results and document the deviations from expected results.
• Use test administration or management tools and test monitoring tools as required.
• Automate tests (may be supported by a developer or a test automation expert).
• Measure performance of components and systems (if applicable).
• Review tests developed by others.

Defect submission process


Testing Process


Levels of documents prepared at project testing



Project & Product

Examples:

Project: Any company or bank website, etc.

In the above examples, the sites are based on their owners' own requirements. For example, take the Kotak Bank and HDFC Bank sites: they look different and their functionality also differs. Only that particular bank gives the requirements to the website development company, and the site is maintained only by that bank.

Product: Printers, scanners, mobile phones, etc.

In the above examples, the products are used by many clients. For example, take mobile phones: every mobile manufacturing company first launches its basic models, takes requirements from clients (people) in the form of feedback, and prepares the next phones according to the clients' requirements; the cost also differs according to the requirements.

Wednesday 20 June 2012

Testware


"Testware" is a term used to describe all of the materials used to perform a test. Testware includes test plans, test cases, test scripts, and any other items needed to design and perform a test.  
Designing tests effectively, maintaining the test documentation, and keeping track of all the test documentation (testware) is all major challenges in the testing effort.
Generally speaking, Testware a sub-set of software with a special purpose, i.e. for software testing, especially for software testing automation
Testware: - Testware is produced by both verification and validation testing methods.
Testware includes test cases, test plan, test report and etc. Like software, testware should be placed under the control of a configuration management system, saved, faithfully maintained.



Saturday 16 June 2012

Test suite & Test Log


Test suite

The most common term for a collection of test cases is a test suite. The test suite often also contains more detailed instructions or goals for each collection of test cases, and it definitely contains a section where the tester identifies the system configuration used during testing. A group of test cases may also contain prerequisite states or steps, and descriptions of the following tests. Collections of test cases are sometimes incorrectly termed a test plan, a test script, or even a test scenario.

Test Log

A test log provides a chronological record of all relevant details about the execution of the test cases.

A test log template contains the following columns:


• Requirement Number Tested: use the requirement number included in the Requirements Traceability Matrix.
• Test Case Number: specify the unique test number assigned to the test case.
• Test Case Description: provide a brief description of the functionality the case will test.
• Date Tested: mm/dd/yy.
• Test Stage Tested: Unit, Functional, Integration, System, Interface, Performance, Regression, Acceptance or Pilot.
• Pass (P) / Fail (F): record P or F for each test case.

The final row of the log, TOTAL (Pass/Fail), totals the number of passing and failing test cases. Add more rows as needed.

Thursday 7 June 2012

Test cases for a pen


• To check the pen type
• To check whether the pen cap is present or not
• To check whether the pen is filled with ink or not
• To check whether the pen writes or not
• To check the ink color, i.e. black or blue
• To check the pen color
• To check whether the pen can write on all types of paper or not
• To check the ink capacity of the pen
• To check whether the pen body is made of fiber or plastic

Wednesday 6 June 2012

Test case for withdraw module in banking project



Step 1: When the balance in the account is nil, try to withdraw some amount (amount > 0); it should display the message "Insufficient funds in account".
Step 2: When the account has some balance, try to withdraw an amount greater than the balance; it should display "Insufficient funds in account".
Step 3: When the account has some balance, enter an amount less than or equal to the balance; the correct amount should be withdrawn from the account.
Step 4: When the account has some balance, enter the amount as 0; it should display a message that the withdrawal amount should be > 0 (and, depending on the requirements document, in multiples of hundreds).
In case a minimum balance is mandatory for the account:
Step 5: When the account has some balance, try to withdraw the whole amount; it should display the message "Minimum balance should be maintained".
Step 6: When the account balance equals the minimum balance, try to withdraw any amount; it should display the message "Minimum balance should be maintained".
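As a hypothetical sketch of the rules above, here is how a withdraw routine might enforce them in Java. The Account class, the messages, and the minimum-balance figure are assumptions made for illustration, not taken from any real requirements document.

public class Account {
    private double balance;
    private static final double MIN_BALANCE = 500.0; // assumed; varies per requirements doc

    public Account(double openingBalance) {
        this.balance = openingBalance;
    }

    // Returns a message describing the outcome, mirroring the steps above.
    public String withdraw(double amount) {
        if (amount <= 0) {
            return "Withdrawal amount should be > 0";      // Step 4
        }
        if (amount > balance) {
            return "Insufficient funds in account";        // Steps 1 and 2
        }
        if (balance - amount < MIN_BALANCE) {
            return "Minimum balance should be maintained"; // Steps 5 and 6
        }
        balance -= amount;                                 // Step 3
        return "Withdrawn: " + amount;
    }

    public static void main(String[] args) {
        Account acc = new Account(1000.0);
        System.out.println(acc.withdraw(0));      // Step 4
        System.out.println(acc.withdraw(2000.0)); // Step 2
        System.out.println(acc.withdraw(1000.0)); // Step 5
        System.out.println(acc.withdraw(300.0));  // Step 3
    }
}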

Positive and Negative test cases



Positive Testing = (not showing an error when not supposed to) + (showing an error when supposed to). If either of the situations in parentheses happens, you have a positive test in terms of its result - not what the test was hoping to find. The application did what it was supposed to do. Here the user tends to enter only valid values, according to the requirements.

Negative Testing = (showing an error when not supposed to) + (not showing an error when supposed to). (Usually these situations crop up during boundary testing or cause-effect testing.) Here, if either of the situations in parentheses happens, you have a negative test in terms of its result - again, not what the test was hoping to find. The application did what it was not supposed to do. The user tends to enter invalid values, which may crash the application.

For example, in a registration form, the user should be allowed to enter only alphabets in the Name field. For positive testing, the tester enters only alphabets; the application should run properly and accept them. For negative testing, the tester enters numbers and special characters into the same field; if the application handles these correctly (rejects them), the negative test is successful.
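A minimal sketch of both directions in Java, assuming a hypothetical isValidName check that accepts alphabets only:

public class NameFieldTests {

    // Hypothetical validation rule: the Name field accepts alphabets only.
    static boolean isValidName(String name) {
        return name != null && name.matches("[A-Za-z]+");
    }

    public static void main(String[] args) {
        // Positive tests: valid input should be accepted.
        assert isValidName("Asha");
        assert isValidName("Ravi");

        // Negative tests: invalid input should be rejected.
        assert !isValidName("Ravi123"); // digits
        assert !isValidName("R@vi");    // special characters
        assert !isValidName("");        // empty input

        System.out.println("All name-field checks passed."); // run with: java -ea NameFieldTests
    }
}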

Tuesday 5 June 2012

Test cases for coffee machine


1. Plug in the power cable and press the on button. The indicator bulb should glow, indicating the machine is on.
2. Whether there are three different buttons: Red, Blue and Green.
3. Whether Red indicates coffee.
4. Whether Blue indicates tea.
5. Whether Green indicates milk.
6. Whether each button produces the correct output (coffee, tea or milk).
7. Whether the desired output is hot or not (coffee, tea or milk).
8. Whether the quantity exceeds the specified limit of a cup.
9. Whether the power goes off (including the power indicator) when the off button is pressed.
10. Verify the output without coffee mix, milk or tea mix in the machine.

Test cases for a one-rupee coin box (telephone box)




Positive test cases:


TC1: Pick up the handset.
Expected: Should display the message "Insert one rupee coin".
TC2: Insert the coin.
Expected: Should display the message "Dial the number".
TC3: When you get a busy tone, hang up the receiver.
Expected: The inserted one rupee coin comes out of the exit door.
TC4: Finish the conversation and hang up the receiver.
Expected: The inserted coin should not come out.
TC5: During the conversation, in the case of a local call (assume the duration is 60 sec), when 45 seconds are completed.
Expected: It should prompt you, by giving beeps, to insert another coin to continue.
TC6: In the above scenario, if another coin is inserted.
Expected: 60 sec will be added to the counter.
TC7: In the TC5 scenario, if you don't insert one more coin.
Expected: The call gets ended.
TC8: Pick up the receiver. Insert a one rupee coin; dial the number after hearing the dial tone. Assume it got connected and you are getting the ring tone. Immediately end the call.
Expected: The inserted one rupee coin comes out of the exit door.

Error guessing & Error seeding



Error guessing is a test case design technique in which the tester guesses what faults might occur and designs tests to represent them.

Error seeding is the process of intentionally adding known faults to a program in order to monitor the rate of their detection and removal, and also to estimate the number of faults remaining in the program.
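A worked example with made-up numbers: if 10 faults are seeded and testing finds 8 of the seeded faults along with 40 real (unseeded) faults, the seeded-fault detection rate is 8/10 = 80%. The total number of real faults can then be estimated as 40 / 0.8 = 50, i.e. an estimated 50 - 40 = 10 real faults remain in the program.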

Monday 4 June 2012

Web Testing


Web testing: Web testing is the testing of the usability, functionality, security, consistency and performance of websites. It is unique because the number of users a website can have can never be predicted, and it is difficult because of the technical complexities of a website and the various types of browsers. Assuring website quality requires conducting tests automatically and repetitively, and it is a challenge in software quality assurance.

Importance of Web Testing: It ensures quality, and it is quality that makes the user come back. It is quality that gives one website an edge over another. The quality of a website directly reflects on the quality of the company and its products. Poor quality will cost a lot in poor customer relations, lost corporate image, and even lost sales revenue. Unhappy users are sure to quickly depart to a different site.

When and where should Web testing be performed: Web testing is performed before going live, in the few months just before the website is launched. Certain automatic web testing scripts should be run periodically to verify consistency. It should also be performed during every major change. Care should be taken that the testing activity does not impact users' performance; hence, ideally, web testing should be done on a test website (test bed). All development changes should be made on the test bed, and once the test bed has been thoroughly tested, they should be migrated to the live environment. This also ensures that there are no under-construction pages in the live environment.

Web testing of B2B (Business to Business) sites: Testing should emphasize features, time to fetch information, business processes, search facilities, etc. Aesthetics, looks, etc. should be taken up for testing toward the end of the testing phase; in other words, their priority is low. The emphasis is on checking whether the user understands the process and the way the website maps to the business process. Special emphasis is placed on user access, confidentiality of information, authorization, order cancellation, order amendment, etc. Payment methods need thorough security testing.

Web testing of B2C (Business to Consumer) sites: Aesthetics (look and feel), usability, content, etc. are a definite priority. This requires more emphasis on user friendliness, navigation, search facilities, predictability in terms of content distribution, etc. Features that attract and retain visitors, like chats, news, newsletters, free email, message boards, forums, online help, etc., need testing depending on the target group, who could be anybody: children, adults, professionals, women, etc. Help available on products and pages needs thorough content and usability testing. The correctness and completeness of disclaimers, terms and conditions, etc. should also be checked. Special emphasis should be laid on accepting payments through credit cards, handling of vendor-paid parcels (VPP), etc.

What to test in Web Testing:
Functionality and Content: Does all the critical functionality, especially connections to legacy systems/databases, work? Does the content of critical pages match what is supposed to be there? Does all dynamically generated content work properly? Do key phrases exist consistently in highly changeable pages? Do critical pages maintain quality content from version to version?

Usability and Navigation: How well do all the parts of the website hold together? Are all links inside and outside the website working? Do all the images work? Are there parts of the website that are not connected? Is the structure simple for the user to understand?

Regression and Accuracy: How does one change to the website affect other parts? Are today's copies of the downloaded pages the same as yesterday's? Is the data presented to the user accurate enough? How much has the website changed since the last upgrade? How have the changed parts been highlighted?

Performance/Stress: Does the website server respond to the browser request within certain performance parameters? What is the end-to-end response time after SUBMIT? Are there parts of the site so slow that the user may stop working? Is the Browser -> Web -> Website -> Web -> Browser connection quick enough? How does the performance vary by time of day, by load and by usage?
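As a minimal sketch of an automated response-time check, assuming Java 11+ and a placeholder URL and threshold:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ResponseTimeCheck {
    public static void main(String[] args) throws Exception {
        String url = "https://example.com/"; // placeholder target page
        long thresholdMillis = 2000;         // assumed performance parameter

        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();

        // Time one end-to-end request/response round trip.
        long start = System.nanoTime();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        long elapsedMillis = (System.nanoTime() - start) / 1_000_000;

        System.out.println("Status: " + response.statusCode()
                + ", response time: " + elapsedMillis + " ms");
        if (elapsedMillis > thresholdMillis) {
            System.out.println("WARNING: slower than " + thresholdMillis + " ms");
        }
    }
}

In practice such a script would be run periodically and at different times of day to see how performance varies by load and usage.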

Security/Integrity: How are access rights being handled? How secure is the data input by the users? How secure is the website content itself? How are financial transactions, if any, handled? How does the website encrypt data?

Friday 1 June 2012

Traceability Matrix



A traceability matrix is a powerful tool, and it can be of use to many, regardless of the audience. It clears confusion, settles disputes, shows the coverage of requirements in specs, code, tests, etc., and exposes gaps. It shows real project progress, is a great tool for managing change, and assists with project management. It is used to establish design/development/test priorities, to identify risk areas, to determine what (if any) third-party technologies are needed, and to determine the tools needed for design, development and testing.

NOTES!
This is just an illustration in an MS-Word document. Ideally one would assemble such a matrix in a spreadsheet or database to allow for querying. Custom views can be created to show only those columns that fit the specific needs of the user.
Simply stated, as in real-life applications of such a matrix, the white space represents work to be completed.
Examples within are at varying levels of requirement decomposition. This is intentional. It shows the need for more work - decomposition. It also demonstrates a common challenge to test engineering. The challenge is that some of the requirements are untestable. Ideally, those requirements would be decomposed to a state of testability.
Columns can be modified, added, or deleted/hidden to fit specific purposes.
The following ten columns will be in the traceability matrix:
1. Requirement ID (the requirement ID provided in the SRS document)
2. Requirements (requirement description)
3. High Level Design (document reference)
4. Implementation Design (implemented or not)
5. Source Code (component/class/program name)
6. User Documentation (preparation)
7. Unit Test Case ID (unit test case IDs)
8. Integration Test Case ID (integration test case IDs)
9. System Test Case ID (system test case IDs)
10. Release / Build Number (build release number)
It will give the coverage of test cases at the different levels of testing.
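As a made-up illustration, one filled-in row of such a matrix might read: REQ-017 | "User can reset the password" | HLD section 3.2 | Implemented | PasswordService.java | User Guide ch. 4 | UT-045 | IT-017 | ST-009 | Build 1.4.2. All the IDs and names here are hypothetical.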

Defect age & Build Interval Period


Defect age


Defect age is nothing but the time gap between when a bug is raised and when it is resolved. Defect age analysis suggests how quickly defects are resolved, by category. Defect age reports are a type of defect distribution report that shows how long a defect has been in a particular state, such as Open.
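For example (with made-up dates), a defect raised on 1 June and resolved on 5 June has a defect age of four days.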

Build Interval Period:
The time gap between the two consecutive build versions is called Build Interval Period.


BVA & ECP


Boundary Value Analysis (BVA): BVA is different from equivalence partitioning in that it focuses on "corner cases", i.e. values at or just outside the range defined by the specification. This means that if a function expects all values in the range of -100 to +1000, test inputs would include -101 and +1001. BVA attempts to derive these boundary values, and it is often used as a technique for stress, load or volume testing. This type of validation is usually performed after positive functional validation has completed successfully, using requirements specifications and user documentation.

Equivalence Partitioning: An approach where classes of inputs are categorized for product or function validation. This usually does not include combinations of inputs, but rather a single representative value per class. For example, for a given function there may be several classes of input that may be used for positive testing. If a function expects an integer and receives an integer as input, this would be considered a positive test assertion. On the other hand, if a character or any input class other than an integer is provided, this would be considered a negative test assertion or condition.

Example:
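A minimal sketch in Java, assuming a hypothetical function that accepts values in the range -100 to +1000:

public class BvaEcpExample {

    // Hypothetical function under test: valid for values in [-100, 1000].
    static boolean accept(int value) {
        return value >= -100 && value <= 1000;
    }

    public static void main(String[] args) {
        // Boundary Value Analysis: test at and just outside each boundary.
        assert accept(-100);   // lower boundary, valid
        assert !accept(-101);  // just below the lower boundary, invalid
        assert accept(1000);   // upper boundary, valid
        assert !accept(1001);  // just above the upper boundary, invalid

        // Equivalence Partitioning: one representative value per class.
        assert accept(500);    // valid class: any value inside the range
        assert !accept(-5000); // invalid class: far below the range
        assert !accept(9999);  // invalid class: far above the range

        System.out.println("All BVA/ECP checks passed."); // run with: java -ea BvaEcpExample
    }
}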


Tuesday 29 May 2012

Software Development Models



1. V-model (Sequential Development Model)
Although variants of the V-model exist, a common type of V-model uses four test levels, corresponding to the four development levels.

The four levels used in this syllabus are:
• Component (unit) testing
• Integration testing
• System testing
• Acceptance testing

In practice, a V-model may have more, fewer or different levels of development and testing, depending on the project and the software product. For example, there may be component integration testing after component testing, and system integration testing after system testing.

Software work products (such as business scenarios or use cases, requirements specifications, design documents and code) produced during development are often the basis of testing in one or more test levels. References for generic work products include Capability Maturity Model Integration (CMMI) or ‘Software life cycle processes’ (IEEE/IEC 12207). Verification and validation (and early test design) can be carried out during the development of the software work products.



2. Iterative-incremental Development Models
Iterative-incremental development is the process of establishing requirements, designing, building and testing a system in a series of short development cycles. Examples are: prototyping, Rapid Application Development (RAD), Rational Unified Process (RUP) and agile development models. A system that is produced using these models may be tested at several test levels during each iteration. An increment, added to others developed previously, forms a growing partial system, which should also be tested. Regression testing is increasingly important on all iterations after the first one. Verification and validation can be carried out on each increment.

3. Testing within a Life Cycle Model
In any life cycle model, there are several characteristics of good testing:
• For every development activity there is a corresponding testing activity
• Each test level has test objectives specific to that level
• The analysis and design of tests for a given test level should begin during the corresponding development activity
• Testers should be involved in reviewing documents as soon as drafts are available in the development life cycle

Test levels can be combined or reorganized depending on the nature of the project or the system architecture. For example, for the integration of a Commercial Off-The-Shelf (COTS) software product into a system, the purchaser may perform integration testing at the system level (e.g.,
integration to the infrastructure and other systems, or system deployment) and acceptance testing
(functional and/or non-functional, and user and/or operational testing).