Sunday 19 November 2017

Types of Testing


❖  Sanity Testing:
Validating the major functionality of the application is called Sanity Testing.
As soon as the application is released into the testing environment, we need to validate whether the main functionality of the application is working fine or not, because if the major functionality is not working we cannot conduct complete testing on the application.
    In this case we need to report the defects to the development team immediately with urgent severity and reject the build. The development team also needs to fix those defects immediately.

Factors that can cause sanity testing to fail:
→ Environmental issues
→ Compatibility issues

❖  Smoke Testing:
Validating the major functionality of the application by the development team, in the development environment, is called Smoke Testing.

❖  Usability (or) GUI (or) User Interface Testing:
Verifying the look and feel of the application.
Verifying the user-friendliness of the application in terms of logos, colors, fonts and alignment.

❖  Functional Testing:
Validating the overall functionality of the application, along with the major functionalities, with respect to the client's business requirements.

❖  Security Testing:
Validating the security of the application in terms of authentication & authorization.

Authentication: Verifying whether the system accepts valid users and rejects invalid ones.
Authorization: Verifying whether the system provides the right information to the right person.

We give more preference to security testing for web-based applications.
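
A minimal sketch of how authentication and authorization checks could be scripted for a hypothetical web application; the URL, endpoints and credentials below are assumptions for illustration only, not real project values.

    # Sketch: authentication & authorization checks for a hypothetical web app.
    # The URL, endpoints and credentials are assumptions.
    import requests

    BASE_URL = "http://example-app.local"   # hypothetical application URL

    def check_authentication():
        # Authentication: a valid user is accepted, an invalid user is rejected.
        ok = requests.post(BASE_URL + "/login", data={"uid": "valid_user", "pwd": "valid_pwd"})
        bad = requests.post(BASE_URL + "/login", data={"uid": "invalid_user", "pwd": "wrong_pwd"})
        assert ok.status_code == 200, "valid user was not accepted"
        assert bad.status_code in (401, 403), "invalid user was not rejected"

    def check_authorization():
        # Authorization: a normal user must not see admin-only information.
        with requests.Session() as user:
            user.post(BASE_URL + "/login", data={"uid": "valid_user", "pwd": "valid_pwd"})
            resp = user.get(BASE_URL + "/admin/reports")
            assert resp.status_code in (401, 403), "normal user could open the admin page"

    check_authentication()
    check_authorization()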

❖  Database Testing:
Validating the database of the application is called database testing. Whatever we perform in the front end should reflect in the back-end database.

For performing DB testing, most projects use TOAD or Query Analyzer.
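
The same check that is normally run manually in TOAD or Query Analyzer can also be scripted. Below is a minimal sketch, assuming a hypothetical customers table; the table, columns and database file are assumptions (sqlite3 is used only as a stand-in for the project's actual database driver).

    # Sketch: after creating a customer through the front end, confirm the
    # same record exists in the back-end database. Names are assumptions.
    import sqlite3   # stand-in for the project's actual database driver

    def customer_exists_in_db(customer_id):
        conn = sqlite3.connect("app.db")          # hypothetical database
        try:
            cur = conn.execute(
                "SELECT name, status FROM customers WHERE customer_id = ?",
                (customer_id,))
            return cur.fetchone() is not None      # front-end action reflected in back end
        finally:
            conn.close()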

❖  Compatibility Testing:
Performing testing on the application in different browsers is called compatibility testing or browser compatibility testing.
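
One common way to automate this is to run the same steps in each browser, for example with Selenium WebDriver. A minimal sketch follows; the URL and expected page title are assumptions, and the matching browser drivers must be installed.

    # Sketch: run the same basic check in multiple browsers with Selenium.
    # URL and expected title are assumptions.
    from selenium import webdriver

    def check_login_page(driver):
        driver.get("http://example-app.local/login")   # hypothetical URL
        assert "Login" in driver.title                  # same expectation in every browser
        driver.quit()

    for make_driver in (webdriver.Chrome, webdriver.Firefox, webdriver.Edge):
        check_login_page(make_driver())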

❖  Configuration Testing:
Performing testing on the application with different configurations (hardware, operating system, etc.) is called configuration testing.

❖  Re-testing:
Testing conducted on a fixed defect is called re-testing.

❖  Regression testing:
Testing conducted on the fixed build or modified application is called regression testing.

(At the time of execution, if we identify any defects we need to report them to the development team, and once the development team has fixed a defect we need to perform re-testing on the fixed defect.)

(Once the fixed defect is working fine, we need to think about the impact of that fix: while fixing it, the developer might have changed other functionality. That is the reason we again perform testing on the modified application.)

❖  Test data:
A data value which we use to perform testing on the application is called test data.
In most projects the test data is provided by the client, and in some projects we need to create our own test data to perform testing.
Some projects maintain a separate team called the Test Data Management team.

❖  Positive test data:
A positive data value which we use to perform testing on the application is called positive test data.
EX:- Valid UID, Valid PWD

❖  Negative test data:
A negative data value which we use to perform testing on the application is called negative test data.
EX:- Invalid UID, Invalid PWD

❖  Positive testing:
Performing testing on the application with positive test data is called positive testing.

❖  Negative testing:
Performing testing on the application with negative test data is called negative testing.
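
Both kinds of testing can be expressed as one parameterized check. Below is a minimal sketch using pytest against a hypothetical login() function; the module name, credentials and expected results are assumptions.

    # Sketch: positive and negative testing of a hypothetical login() function.
    import pytest
    from myapp.auth import login   # hypothetical module under test

    @pytest.mark.parametrize("uid, pwd, expected", [
        ("valid_user",   "valid_pwd", True),    # positive test data
        ("invalid_user", "valid_pwd", False),   # negative test data: invalid UID
        ("valid_user",   "wrong_pwd", False),   # negative test data: invalid PWD
        ("",             "",          False),   # negative test data: blank input
    ])
    def test_login(uid, pwd, expected):
        assert login(uid, pwd) is expected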

❖  Alpha Testing:
Performing testing on the application by the client directly. We perform this testing in projects because, after completion of testing in the testing environment, the client performs testing in the UAT environment.

❖  Beta Testing:
Performing testing on the application by client-like people (end users). We perform this testing for products because we are not developing the product for a specific client, so client-like people perform testing on the product.

❖  Performance testing:
Validating the performance of the application in terms of load and stress.

Load testing: Validating the performance of the application with respect to the client's expected number of users.
EX: As per the client requirement the application should work fine with 5000 users. After completion of development we need to validate the performance of the application with those 5000 users.

Stress Testing: Validating the performance of the application by increasing the client's expected users to the peak level and identifying the "break point" or "knee point".
EX: As per the client requirement the application should work fine with 5000 users; in stress testing we validate the performance by increasing the users from 5000 to 10000 and from 10000 to 30000. At some point the application will hang.

Note: The break point or knee point information needs to be provided to the client for appropriate use of the application.
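
Dedicated load-testing tools are normally used for this, but the basic idea can be sketched in a few lines: fire increasing numbers of requests and watch response times and failures grow. The URL, user counts and thread pool size below are assumptions for illustration only.

    # Sketch: very simplified load/stress loop. Real projects use dedicated
    # load-testing tools; URL and counts here are assumptions.
    import time
    import requests
    from concurrent.futures import ThreadPoolExecutor

    URL = "http://example-app.local/home"   # hypothetical page under load

    def one_user():
        start = time.time()
        try:
            requests.get(URL, timeout=10)
            return time.time() - start       # response time for one virtual user
        except requests.RequestException:
            return None                      # request failed: candidate break point

    for users in (5000, 10000, 30000):                     # ramp past the expected 5000 users
        with ThreadPoolExecutor(max_workers=200) as pool:  # 200 threads send the requests
            results = list(pool.map(lambda _: one_user(), range(users)))
        failures = results.count(None)
        print(users, "requests ->", failures, "failures")  # failures rising sharply = knee point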

❖  Recovery Testing:
In this we verify how much time the application takes to come back from an abnormal state to a normal state.
EX: Sometimes the application will hang or crash; in that case we need to check how much time the application takes to come back from down status to up status.
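
One way to measure this is to poll the application until it responds again and record the elapsed time. A minimal sketch follows; the URL, polling interval and time limit are assumptions.

    # Sketch: measure how long the application takes to come back up.
    # URL and polling values are assumptions.
    import time
    import requests

    def measure_recovery_time(url="http://example-app.local", poll_every=5, limit=600):
        start = time.time()
        while time.time() - start < limit:
            try:
                if requests.get(url, timeout=5).status_code == 200:
                    return time.time() - start    # seconds from down status to up status
            except requests.RequestException:
                pass                               # still down, keep polling
            time.sleep(poll_every)
        return None                                # did not recover within the limit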

❖  Random (or) Monkey (or) Gorilla Testing:
Performing testing on the application randomly.

EX: If we have 20 functionalities in an application, we normally test them sequentially. But in random testing we validate functionalities randomly, like 1, 5, 8, 12, etc.

We can prefer this testing if we don't have enough time to complete the execution of the test cases, but in real time this approach is not recommended.
       
If we are not able to complete the testing in time, we can follow two approaches:
1.      Postpone the release
2.      Partial release

Postpone the release:
If we are not able to complete the testing in the project, we can postpone the release to the mid release as per the client's confirmation.

Partial release:
If we are not able to complete the testing, we can release the completed modules now and release the incomplete modules in the next release.

❖  Exploratory Testing:
Learning the domain knowledge of the application while performing testing on it. This is helpful when we move from one domain to another domain.
EX: Healthcare domain to telecom domain.

❖  Parallel Testing:
Performing testing on the application by comparing it with similar applications available in the market.

This testing is usually not possible in projects because over RDP we are able to access only the applications related to our project. We cannot access other applications, so parallel testing is not possible.

❖  Static testing:
Performing testing on the application without doing any action is called static testing.

❖  Dynamic testing:
Performing testing on the application by doing some action is called dynamic testing.

❖  Mutation Testing:
Performing testing on the application by changing the source code of the application is called mutation testing.

❖  URL Testing:
Verifying whether we are able to navigate to the application when we enter its URL (one form of security testing).
