Monday, 24 February 2014

Manual Testing Interview Q & A - part 2

• What is software 'quality'?

Quality software is reasonably bug-free, delivered on time and within budget, meets requirements and/or expectations, and is maintainable. However, quality is obviously a subjective term. It will depend on who the 'customer' is and their overall influence in the scheme of things. A wide-angle view of the 'customers' of a software development project might include end-users, customer acceptance testers, customer contract officers, customer management, the development organization's management/accountants/testers/salespeople, future software maintenance engineers, stockholders, magazine columnists, etc. Each type of 'customer' will have their own slant on 'quality' - the accounting department might define quality in terms of profits while an end-user might define quality as user-friendly and bug-free.

• Black box testing - not based on any knowledge of internal design or code. Tests are based on requirements and functionality.

• White box testing - based on knowledge of the internal logic of an application's code. Tests are based on coverage of code statements, branches, paths, conditions.

• unit testing - the most 'micro' scale of testing; to test particular functions or code modules. Typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code. Not always easily done unless the application has a well-designed architecture with tight code; may require developing test driver modules or test harnesses.
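For illustration, here is what a minimal unit test might look like in Python's built-in unittest framework (the function and values are made up for the example):

    import unittest

    def apply_discount(price, percent):
        """Return price reduced by the given percentage."""
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return price * (100 - percent) / 100

    class ApplyDiscountTest(unittest.TestCase):
        def test_typical_discount(self):
            self.assertEqual(apply_discount(200, 25), 150)

        def test_invalid_percent_rejected(self):
            with self.assertRaises(ValueError):
                apply_discount(200, 150)

    if __name__ == "__main__":
        unittest.main()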

• incremental integration testing - continuous testing of an application as new functionality is added; requires that various aspects of an application's functionality be independent enough to work separately before all parts of the program are completed, or that test drivers be developed as needed; done by programmers or by testers.

• integration testing - testing of combined parts of an application to determine if they function together correctly. The 'parts' can be code modules, individual applications, client and server applications on a network, etc. This type of testing is especially relevant to client/server and distributed systems.

• functional testing - black-box type testing geared to functional requirements of an application; this type of testing should be done by testers. This doesn't mean that the programmers shouldn't check that their code works before releasing it (which of course applies to any stage of testing.)

• system testing - black-box type testing that is based on overall requirements specifications; covers all combined parts of a system.

• end-to-end testing - similar to system testing; the 'macro' end of the test scale; involves testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.

• sanity testing or smoke testing - typically an initial testing effort to determine if a new software version is performing well enough to accept it for a major testing effort. For example, if the new software is crashing systems every 5 minutes, bogging down systems to a crawl, or corrupting databases, the software may not be in a 'sane' enough condition to warrant further testing in its current state.

• regression testing - re-testing after fixes or modifications of the software or its environment. It can be difficult to determine how much re-testing is needed, especially near the end of the development cycle. Automated testing tools can be especially useful for this type of testing.

• acceptance testing - final testing based on specifications of the end-user or customer, or based on use by end-users/customers over some limited period of time.

• load testing - testing an application under heavy loads, such as testing of a web site under a range of loads to determine at what point the system's response time degrades or fails.
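As a rough sketch of the idea, a tester might script increasing levels of concurrency and watch response times degrade. This minimal Python example uses only the standard library; the URL is a placeholder for the site under test:

    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    URL = "http://localhost:8080/"  # placeholder: the site under test

    def timed_request(_):
        start = time.time()
        urllib.request.urlopen(URL, timeout=30).read()
        return time.time() - start

    # Ramp up the number of concurrent users and report the average response time.
    for users in (1, 5, 10, 25, 50):
        with ThreadPoolExecutor(max_workers=users) as pool:
            timings = list(pool.map(timed_request, range(users)))
        print("%3d concurrent users: avg %.3fs" % (users, sum(timings) / len(timings)))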

• stress testing - term often used interchangeably with 'load' and 'performance' testing. Also used to describe such tests as system functional testing while under unusually heavy loads, heavy repetition of certain actions or inputs, input of large numerical values, large complex queries to a database system, etc.

• performance testing - term often used interchangeably with 'stress' and 'load' testing. Ideally 'performance' testing (and any other 'type' of testing) is defined in requirements documentation or QA or Test Plans.

• usability testing - testing for 'user-friendliness'. Clearly this is subjective, and will depend on the targeted end-user or customer. User interviews, surveys, video recording of user sessions, and other techniques can be used. Programmers and testers are usually not appropriate as usability testers.

• install/uninstall testing - testing of full, partial, or upgrade install/uninstall processes.

• recovery testing - testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.

• security testing - testing how well the system protects against unauthorized internal or external access, willful damage, etc; may require sophisticated testing techniques.

• compatibility testing - testing how well software performs in a particular hardware/software/operating system/network/etc. environment.

• exploratory testing - often taken to mean a creative, informal software test that is not based on formal test plans or test cases; testers may be learning the software as they test it.

• ad-hoc testing - similar to exploratory testing, but often taken to mean that the testers have significant understanding of the software before testing it.

• user acceptance testing - determining if software is satisfactory to an end-user or customer.

• comparison testing - comparing software weaknesses and strengths to competing products.

• alpha testing - testing of an application when development is nearing completion; minor design changes may still be made as a result of such testing. Typically done by end-users or others, not by programmers or testers.

• beta testing - testing when development and testing are essentially completed and final bugs and problems need to be found before final release. Typically done by end-users or others, not by programmers or testers.

• mutation testing - a method for determining if a set of test data or test cases is useful, by deliberately introducing various code changes ('bugs') and retesting with the original test data/cases to determine if the 'bugs' are detected. Proper implementation requires large computational resources.
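The idea can be demonstrated by hand, although real mutation testing tools automate the code changes. A small Python illustration (the function is made up):

    def total_price(price, tax):         # original code
        return price + tax

    def total_price_mutant(price, tax):  # mutant: '+' deliberately changed to '-'
        return price - tax

    # Weak test data: a tax of 0 hides the mutation, so the mutant survives.
    assert total_price(100, 0) == 100
    assert total_price_mutant(100, 0) == 100   # mutant not detected -> weak tests

    # Stronger test data kills the mutant (it would return 95, not 105),
    # which is evidence that this test case is actually useful.
    assert total_price(100, 5) == 105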

What are 5 common problems in the software development process?

• poor requirements - if requirements are unclear, incomplete, too general, or not testable, there will be problems.

• unrealistic schedule - if too much work is crammed in too little time, problems are inevitable.

• inadequate testing - no one will know whether or not the program is any good until the customer complains or systems crash.

• featuritis - requests to pile on new features after development is underway; extremely common.

• miscommunication - if developers don't know what's needed or customers have erroneous expectations, problems are guaranteed.

What are 5 common solutions to software development problems?

• solid requirements - clear, complete, detailed, cohesive, attainable, testable requirements that are agreed to by all players. Use prototypes to help nail down requirements.

• realistic schedules - allow adequate time for planning, design, testing, bug fixing, re-testing, changes, and documentation; personnel should be able to complete the project without burning out.

• adequate testing - start testing early on, re-test after fixes or changes, plan for adequate time for testing and bug-fixing.

• stick to initial requirements as much as possible - be prepared to defend against changes and additions once development has begun, and be prepared to explain consequences. If changes are necessary, they should be adequately reflected in related schedule changes. If possible, use rapid prototyping during the design phase so that customers can see what to expect. This will provide them a higher comfort level with their requirements decisions and minimize changes later on.

• communication - require walkthroughs and inspections when appropriate; make extensive use of group communication tools - e-mail, groupware, networked bug-tracking tools and change management tools, intranet capabilities, etc.; ensure that documentation is available and up-to-date - preferably electronic, not paper; promote teamwork and cooperation; use prototypes early on so that customers' expectations are clarified.

Sunday, 23 February 2014

10 Tips to Survive and Progress in the Field of Software Testing

These tips will not only help you survive but also advance in your software testing career. Make sure you follow them:

Tip #1) Written communication – I have said this repeatedly on many occasions: keep everything in written communication. No verbal communication, please. This applies to all instructions or tasks given to you by your superiors. No matter how friendly your lead or manager is, keep things in emails or documents.

Tip #2) Try to automate daily routine tasks – Save time and energy by automating daily routine tasks, no matter how small those tasks are.
E.g. If you deploy daily project builds manually, write a batch script to perform the task in one click.
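For instance, a deployment step like that might be scripted in a few lines of Python; every path below is hypothetical and would need to be adjusted for a real project:

    """One-click deployment of the daily build (all paths are hypothetical)."""
    import shutil
    import subprocess
    from datetime import date

    BUILD = r"\\buildserver\drops\myapp-latest.zip"   # hypothetical network drop
    DEPLOY_DIR = r"C:\testenv\myapp"                  # hypothetical test machine path

    shutil.rmtree(DEPLOY_DIR, ignore_errors=True)     # clean out the old build
    shutil.unpack_archive(BUILD, DEPLOY_DIR)          # unzip the new one
    subprocess.run([DEPLOY_DIR + r"\install.bat"], check=True)
    print("Deployed build of %s to %s" % (date.today(), DEPLOY_DIR))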


Tip #3) 360 degree testing approach – To hunt down software defects, think from all perspectives. Find all possible information related to the application under test, beyond your SRS documents. Use this information to understand the project completely and apply this knowledge while testing.
E.g. If you are testing a partner website's integration with your application, make sure to understand the partner's business fully before you start testing.

Tip #4) Continuous learning – Never stop learning. Explore better ways to test applications. Learn new automation tools like Selenium, QTP, or any performance testing tool. Nowadays performance testing is a hot career destination for software testers! Have this skill under your belt.
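To give a flavour of such tools, here is a minimal Selenium sketch in Python. It assumes the selenium package and a matching browser driver are installed; the URL and element names are placeholders, not a real application:

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()                    # or Firefox(), Edge(), ...
    try:
        driver.get("http://localhost:8080/login")  # placeholder URL
        driver.find_element(By.NAME, "username").send_keys("testuser")
        driver.find_element(By.NAME, "password").send_keys("secret")
        driver.find_element(By.ID, "login-button").click()
        assert "Dashboard" in driver.title         # checkpoint: did login succeed?
    finally:
        driver.quit()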

Tip #5) Admit mistakes, but be confident about whatever tasks you did – and avoid making the same mistake again. This is the best way to learn and adapt to new things.

Tip #6) Get involved from the beginning – Ask your lead or manager to get you (the QAs) involved in design discussions/meetings from the beginning. This is especially applicable for small teams without a QA lead or manager.

Tip #7) Keep notes on everything – Keep notes of the new things you learn on the project each day. These could be simple commands to be executed to complete a certain task, or complex testing steps, so that you don't need to ask fellow testers or developers the same things again and again.

Tip #8) Improve your communication and interpersonal skills – Very important for continued career growth at all stages.

Tip #9) Make sure you get noticed at work – Sometimes your lead may not present a true picture of you to your manager or company management. In such cases you should continuously watch for moments where you can show your performance to top management.
Warning – Don't play politics at work. If your lead or manager is kind enough to communicate your skills and progress to your manager or top management, there is no need to follow this tip.

Tip #10) Software testing is fun, enjoy it – Stay calm, be focused, follow all processes and enjoy testing. See how interesting software testing is. I must say it's addictive for some people.

Manual Testing Interview Q & A - Part 1

1. What is Acceptance Testing?
Testing conducted to enable a user/customer to determine whether to accept a software product. Normally performed to validate the software meets a set of agreed acceptance criteria.

2. What is Accessibility Testing?
Verifying that a product is accessible to people with disabilities (deaf, blind, mentally disabled, etc.).

3. What is Ad Hoc Testing?
A testing phase where the tester tries to 'break' the system by randomly trying the system's functionality. Can include negative testing as well. See also Monkey Testing.

4. What is Agile Testing?
Testing practice for projects using agile methodologies, treating development as the customer of testing and emphasizing a test-first design paradigm. See also Test Driven Development.

5. What is Application Binary Interface (ABI)?
A specification defining requirements for portability of applications in binary form across different system platforms and environments.

6. What is Application Programming Interface (API)?
A formalized set of software calls and routines that can be referenced by an application program in order to access supporting system or network services.

7. What is Automated Software Quality (ASQ)?
The use of software tools, such as automated testing tools, to improve software quality.

8. What is Automated Testing?
Testing employing software tools which execute tests without manual intervention. Can be applied in GUI, performance, API, etc. testing. The use of software to control the execution of tests, the comparison of actual outcomes to predicted outcomes, the setting up of test preconditions, and other test control and test reporting functions.

9. What is Backus-Naur Form?
A metalanguage used to formally describe the syntax of a language.

10. What is Basic Block?
A sequence of one or more consecutive, executable statements containing no branches.

11. What is Basis Path Testing?
A white box test case design technique that uses the algorithmic flow of the program to design tests.

12. What is Basis Set?
The set of tests derived using basis path testing.

13. What is Baseline?
The point at which some deliverable produced during the software engineering process is put under formal change control.

14. What will you do during your first day on the job?
What would you like to be doing five years from now?

15. What is Beta Testing?
Testing of a prerelease of a software product conducted by customers.

16. What is Binary Portability Testing?
Testing an executable application for portability across system platforms and environments, usually for conformance to an ABI specification.

17. What is Black Box Testing?
Testing based on an analysis of the specification of a piece of software without reference to its internal workings. The goal is to test how well the component conforms to the published requirements for the component.

18. What is Bottom Up Testing?
An approach to integration testing where the lowest level components are tested first, then used to facilitate the testing of higher level components. The process is repeated until the component at the top of the hierarchy is tested.

19. What is Boundary Testing?
Tests which focus on the boundary or limit conditions of the software being tested. (Some of these tests are stress tests.)

20. What is Bug?
A fault in a program, which causes the program to perform in an unintended or unanticipated manner.

20. What is Defect?
If the software misses some feature or function that is present in the requirements, it is called a defect.

21. What is Boundary Value Analysis?
BVA is similar to Equivalence Partitioning but focuses on "corner cases", i.e. values at or just outside the range defined by the specification. This means that if a function expects all values in the range of negative 100 to positive 1000, test inputs would include negative 101 and positive 1001.
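Boundary selection is easy to mechanize. A tiny Python sketch that derives the BVA inputs for the range in the example above:

    def boundary_values(low, high):
        """Return BVA test inputs for a field accepting low..high inclusive."""
        return [low - 1, low, low + 1, high - 1, high, high + 1]

    # For the -100..1000 range from the example above:
    print(boundary_values(-100, 1000))   # [-101, -100, -99, 999, 1000, 1001]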

22. What is Branch Testing?
Testing in which all branches in the program source code are tested at least once.

23. What is Breadth Testing?
A test suite that exercises the full functionality of a product but does not test features in detail.

24. What is CAST?
Computer Aided Software Testing.

25. What is Capture/Replay Tool?
A test tool that records test input as it is sent to the software under test. The input cases stored can then be used to reproduce the test at a later time. Most commonly applied to GUI test tools.

26. What is CMM?
The Capability Maturity Model for Software (CMM or SW-CMM) is a model for judging the maturity of the software processes of an organization and for identifying the key practices that are required to increase the maturity of these processes.


27. What is Cause Effect Graph?
A graphical representation of inputs and their associated output effects, which can be used to design test cases.

28. What is Code Complete?
Phase of development where functionality is implemented in entirety; bug fixes are all that are left. All functions found in the Functional Specifications have been implemented.

29. What is Code Coverage?
An analysis method that determines which parts of the software have been executed (covered) by the test case suite and which parts have not been executed and therefore may require additional attention.

30. What is Code Inspection?
A formal testing technique where the programmer reviews source code with a group who ask questions analyzing the program logic, analyzing the code with respect to a checklist of historically common programming errors, and analyzing its compliance with coding standards.

31. What is Code Walkthrough?
A formal testing technique where source code is traced by a group with a small set of test cases, while the state of program variables is manually monitored, to analyze the programmer's logic and assumptions.

32. What is Coding?
The generation of source code.

33. What is Compatibility Testing?
Testing whether software is compatible with other elements of a system with which it should operate, e.g. browsers, Operating Systems, or hardware.

34. What is Component?
A minimal software item for which a separate specification is available.

35. What is Component Testing?
Testing of individual software components (Unit Testing).


36. What is Concurrency Testing?
Multi-user testing geared towards determining the effects of accessing the same application code, module or database records. Identifies and measures the level of locking, deadlocking and use of single-threaded code and locking semaphores.

37. What is Conformance Testing?
The process of testing that an implementation conforms to the specification on which it is based. Usually applied to testing conformance to a formal standard.

38. What is Context Driven Testing?
The context-driven school of software testing is a flavor of Agile Testing that advocates continuous and creative evaluation of testing opportunities in light of the potential information revealed and the value of that information to the organization right now.

39. What is Conversion Testing?
Testing of programs or procedures used to convert data from existing systems for use in replacement systems.

40. What is Cyclomatic Complexity?
A measure of the logical complexity of an algorithm, used in white-box testing.
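For a single structured function, cyclomatic complexity works out to the number of decision points plus one (the general control-flow-graph formula is V(G) = E - N + 2). A tiny illustration:

    def classify(n):
        if n < 0:          # decision point 1
            return "negative"
        while n > 10:      # decision point 2
            n -= 10
        return "small"

    # Two decision points => V(G) = 2 + 1 = 3, so a basis set for this
    # function needs at least 3 independent paths to be exercised.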

41. What is Data Dictionary?
A database that contains definitions of all data items defined during analysis.

42. What is Data Flow Diagram?
A modeling notation that represents a functional decomposition of a system.

43. What is Data Driven Testing?
Testing in which the action of a test case is parameterized by externally defined data values, maintained as a file or spreadsheet. A common technique in Automated Testing.
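A minimal sketch of the technique in Python: in a real project the table would live in a CSV file or spreadsheet, but it is inlined here so the example runs as-is:

    import csv
    import io
    import unittest

    # In a real project this table would be an external file or spreadsheet;
    # it is inlined here so the sketch is self-contained.
    TEST_DATA = "input,expected\nhello,HELLO\nTest,TEST\n"

    def to_upper(s):
        return s.upper()   # the (trivial) function under test

    class DataDrivenTest(unittest.TestCase):
        def test_from_table(self):
            for row in csv.DictReader(io.StringIO(TEST_DATA)):
                with self.subTest(row=row):
                    self.assertEqual(to_upper(row["input"]), row["expected"])

    if __name__ == "__main__":
        unittest.main()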

44. What is Debugging?
The process of finding and removing the causes of software failures.

45. What is Defect?
Nonconformance to requirements or functional/program specification.

46. What is Dependency Testing?
Examines an application's requirements for pre-existing software, initial states and configuration in order to maintain proper functionality.

47. What is Depth Testing?
A test that exercises a feature of a product in full detail.

48. What is Dynamic Testing?
Testing software through executing it. See also Static Testing.

49. What is Emulator?
A device, computer program, or system that accepts the same inputs and produces the same outputs as a given system.

50. What is Endurance Testing?
Checks for memory leaks or other problems that may occur with prolonged execution

51. What is End-to-End testing?
Testing a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.

52. What is Equivalence Class?
A portion of a component's input or output domains for which the component's behaviour is assumed to be the same from the component's specification.

53. What is Equivalence Partitioning?
A test case design technique for a component in which test cases are designed to execute representatives from equivalence classes.

54. What is Exhaustive Testing?
Testing which covers all combinations of input values and preconditions for an element of the software under test.

55. What is Functional Decomposition?
A technique used during planning, analysis and design; creates a functional hierarchy for the software.

54. What is Functional Specification?
A document that describes in detail the characteristics of the product with regard to its intended features.

55. What is Functional Testing?
Testing the features and operational behavior of a product to ensure they correspond to its specifications. Testing that ignores the internal mechanism of a system or component and focuses solely on the outputs generated in response to selected inputs and execution conditions. Also known as Black Box Testing.

56. What is Glass Box Testing?
A synonym for White Box Testing.

57. What is Gorilla Testing?
Testing one particular module or piece of functionality heavily.

58. What is Gray Box Testing?
A combination of Black Box and White Box testing methodologies: testing a piece of software against its specification but using some knowledge of its internal workings.

59. What is High Order Tests?
Black-box tests conducted once the software has been integrated.

60. What is Independent Test Group (ITG)?
A group of people whose primary responsibility is software testing.

61. What is Inspection?
A group review quality improvement process for written material. It consists of two aspects: product (document itself) improvement and process improvement (of both document production and inspection).

62. What is Integration Testing?
Testing of combined parts of an application to determine if they function together correctly. Usually performed after unit and functional testing. This type of testing is especially relevant to client/server and distributed systems.

63. What is Installation Testing?
Testing of full, partial, or upgrade install/uninstall processes to confirm that the application under test installs correctly and is operational afterwards.

64. What is Load Testing?
See Performance Testing.

65. What is Localization Testing?
Testing that verifies software which has been adapted for a specific locality works as intended there.

66. What is Loop Testing?
A white box testing technique that exercises program loops.

67. What is Metric?
A standard of measurement. Software metrics are the statistics describing the structure or content of a program. A metric should be a real objective measurement of something such as number of bugs per lines of code.

68. What is Monkey Testing?
Testing a system or an application on the fly, i.e. just a few tests here and there to ensure the system or application does not crash.

69. What is Negative Testing?
Testing aimed at showing software does not work. Also known as "test to fail". See also Positive Testing.

70. What is Path Testing?
Testing in which all paths in the program source code are tested at least once.

71. What is Performance Testing?
Testing conducted to evaluate the compliance of a system or component with specified performance requirements. Often this is performed using an automated test tool to simulate a large number of users. Also known as "Load Testing".

72. What is Positive Testing?
Testing aimed at showing software works. Also known as "test to pass". See also Negative Testing.

73. What is Quality Assurance?
All those planned or systematic actions necessary to provide adequate confidence that a product or service is of the type and quality needed and expected by the customer.

74. What is Quality Audit?
A systematic and independent examination to determine whether quality activities and related results comply with planned arrangements and whether these arrangements are implemented effectively and are suitable to achieve objectives.

75. What is Quality Circle?
A group of individuals with related interests that meet at regular intervals to consider problems or other matters related to the quality of outputs of a process and to the correction of problems or to the improvement of quality.

76. What is Quality Control?
The operational techniques and the activities used to fulfill and verify requirements of quality.

77. What is Quality Management?
That aspect of the overall management function that determines and implements the quality policy.

78. What is Quality Policy?
The overall intentions and direction of an organization as regards quality as formally expressed by top management.

79. What is Quality System?
The organizational structure, responsibilities, procedures, processes, and resources for implementing quality management.

80. What is Race Condition?
A cause of concurrency problems: multiple accesses to a shared resource, at least one of which is a write, with no mechanism used by either to moderate simultaneous access.
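The classic demonstration is two threads doing an unprotected read-modify-write on a shared counter. In this Python sketch the final count can come up short because increments are lost (how often that shows up depends on the interpreter, but the code is incorrect either way):

    import threading

    counter = 0

    def increment(times):
        global counter
        for _ in range(times):
            counter += 1   # read-modify-write is not atomic: a race condition

    threads = [threading.Thread(target=increment, args=(100000,)) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    print(counter)   # expected 200000; lost updates can make it smaller.
                     # Guarding the increment with a threading.Lock() fixes it.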

81. What is Ramp Testing?
Continuously raising an input signal until the system breaks down.

82. What is Recovery Testing?
Confirms that the program recovers from expected or unexpected events without loss of data or functionality. Events can include shortage of disk space, unexpected loss of communication, or power out conditions

83. What is Regression Testing?
Retesting a previously tested program following modification to ensure that faults have not been introduced or uncovered as a result of the changes made.

84. What is Release Candidate?
A pre-release version, which contains the desired functionality of the final version, but which needs to be tested for bugs (which ideally should be removed before the final version is released).

85. What is Sanity Testing?
Brief test of major functional elements of a piece of software to determine if it is basically operational. See also Smoke Testing.

86. What is Scalability Testing?
Performance testing focused on ensuring the application under test gracefully handles increases in work load.

87. What is Security Testing?
Testing which confirms that the program can restrict access to authorized personnel and that the authorized personnel can access the functions available to their security level.

88. What is Smoke Testing?
A quick-and-dirty test that the major functions of a piece of software work. Originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch on fire.

89. What is Soak Testing?
Running a system at high load for a prolonged period of time. For example, running several times more transactions in an entire day (or night) than would be expected in a busy day, to identify any performance problems that appear after a large number of transactions have been executed.

90. What is Software Requirements Specification?
A deliverable that describes all data, functional and behavioral requirements, all constraints, and all validation requirements for software.

91. What is Software Testing?
A set of activities conducted with the intent of finding errors in software.

92. What is Static Analysis?
Analysis of a program carried out without executing the program.

93. What is Static Analyzer?
A tool that carries out static analysis.

94. What is Static Testing?
Analysis of a program carried out without executing the program.

95. What is Storage Testing?
Testing that verifies the program under test stores data files in the correct directories and that it reserves sufficient space to prevent unexpected termination resulting from lack of space. This is external storage as opposed to internal storage.

96. What is Stress Testing?
Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements to determine the load under which it fails and how. Often this is performance testing using a very high level of simulated load.

97. What is Structural Testing?
Testing based on an analysis of internal workings and structure of a piece of software. See also White Box Testing.

98. What is System Testing?
Testing that attempts to discover defects that are properties of the entire system rather than of its individual components.

99. What is Testability?
The degree to which a system or component facilitates the establishment of test criteria and the performance of tests to determine whether those criteria have been met.

100. What is Testing?
The process of exercising software to verify that it satisfies specified requirements and to detect errors. The process of analyzing a software item to detect the differences between existing and required conditions (that is, bugs), and to evaluate the features of the software item (Ref. IEEE Std 829). The process of operating a system or component under specified conditions, observing or recording the results, and making an evaluation of some aspect of the system or component.

What is Test Automation?
It is the same as Automated Testing.

101. What is Test Bed?
An execution environment configured for testing. May consist of specific hardware, OS, network topology, configuration of the product under test, other application or system software, etc. The Test Plan for a project should enumerate the test bed(s) to be used.

102. What is Test Case?
Test Case is a commonly used term for a specific test. This is usually the smallest unit of testing. A Test Case will consist of information such as requirements tested, test steps, verification steps, prerequisites, outputs, test environment, etc. A set of inputs, execution preconditions, and expected outcomes developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement.

What is Test Driven Development?
Testing methodology associated with Agile Programming in which every chunk of code is covered by unit tests, which must all pass all the time, in an effort to eliminate unit-level and regression bugs during development. Practitioners of TDD write a lot of tests, often roughly as many lines of test code as there are of production code.
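As an illustration of the test-first rhythm (names made up for the example), the test is written before the function and fails until just enough code is written to pass:

    import unittest

    class SlugifyTest(unittest.TestCase):        # step 1: write the test first
        def test_spaces_become_hyphens(self):
            self.assertEqual(slugify("Hello World"), "hello-world")

    def slugify(title):                          # step 2: just enough code to pass
        return title.lower().replace(" ", "-")

    if __name__ == "__main__":
        unittest.main()                          # step 3: refactor, keep tests green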

103. What is Test Driver?
A program or test tool used to execute tests. Also known as a Test Harness.

104. What is Test Environment?
The hardware and software environment in which tests will be run, and any other software with which the software under test interacts when under test including stubs and test drivers.

105. What is Test First Design?
Test-first design is one of the mandatory practices of Extreme Programming (XP). It requires that programmers do not write any production code until they have first written a unit test.

106. What is Test Harness?
A program or test tool used to execute tests. Also known as a Test Driver.

107. What is Test Plan?
A document describing the scope, approach, resources, and schedule of intended testing activities. It identifies test items, the features to be tested, the testing tasks, who will do each task, and any risks requiring contingency planning.

108. What is Test Procedure?
A document providing detailed instructions for the execution of one or more test cases.

109. What is Test Script?
Commonly used to refer to the instructions for a particular test that will be carried out by an automated test tool.

110. What is Test Specification?
A document specifying the test approach for a software feature or combination of features and the inputs, predicted results and execution conditions for the associated tests.

111. What is Test Suite?
A collection of tests used to validate the behavior of a product. The scope of a Test Suite varies from organization to organization. There may be several Test Suites for a particular product for example. In most cases however a Test Suite is a high level concept, grouping together hundreds or thousands of tests related by what they are intended to test.

112. What are Test Tools?
Computer programs used in the testing of a system, a component of the system, or its documentation.

113. What is Thread Testing?
A variation of top-down testing where the progressive integration of components follows the implementation of subsets of the requirements, as opposed to the integration of components by successively lower levels.

114. What is Top Down Testing?
An approach to integration testing where the component at the top of the component hierarchy is tested first, with lower level components being simulated by stubs. Tested components are then used to test lower level components. The process is repeated until the lowest level components have been tested.
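As a sketch of how a stub stands in for an unbuilt lower-level component, here is a Python example using unittest.mock; the component names are illustrative:

    from unittest.mock import Mock

    def checkout_total(cart, tax_service):
        """Top-level component under test; tax_service is a lower-level part."""
        subtotal = sum(cart)
        return subtotal + tax_service.tax_for(subtotal)

    # The real tax service is not built yet, so a stub stands in for it:
    stub_tax_service = Mock()
    stub_tax_service.tax_for.return_value = 5.0

    assert checkout_total([10.0, 15.0], stub_tax_service) == 30.0
    stub_tax_service.tax_for.assert_called_once_with(25.0)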

115. What is Total Quality Management?
A company commitment to develop a process that achieves high quality products and customer satisfaction.

116. What is Traceability Matrix?
A document showing the relationship between Test Requirements and Test Cases.

117. What is Usability Testing?
Testing the ease with which users can learn and use a product.

118. What is Use Case?
The specification of tests that are conducted from the end-user perspective. Use cases tend to focus on operating software as an end-user would conduct their day-to-day activities.

119. What is Unit Testing?
Testing of individual software components.

120. How do companies expect defect reporting to be communicated by the tester to the development team? Can an Excel sheet template be used for defect reporting? If so, what are the common fields to be included? Who assigns the priority and severity of a defect?
To report bugs in Excel, use columns such as:
S.No. | Module | Screen/Section | Issue Detail | Severity | Priority | Issue Status
This is how to report bugs in an Excel sheet; also set filters on the column attributes.
But most companies use a shared (e.g. SharePoint-based) defect management system for reporting bugs. In this approach, when a project comes in for testing, module-wise details of the project are inserted into the defect management system. It contains the following fields:
1. Date
2. Issue brief
3. Issue description (used by the developer to regenerate the issue)
4. Issue status (active, resolved, on hold, suspended, not able to regenerate)
5. Assigned to (names of members allocated to the project)
6. Priority (high, medium, low)
7. Severity (major, medium, low)

121. How do you plan test automation?
1. Prepare the automation test plan
2. Identify the scenarios
3. Record the scenarios
4. Enhance the scripts by inserting checkpoints and conditional loops
5. Incorporate an error handler (see the sketch after this list)
6. Debug the script
7. Fix the issues
8. Rerun the script and report the results
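A hedged sketch of what steps 4 and 5 can look like in a Python-based script; the login and report actions are placeholder stand-ins for recorded scenario steps:

    import logging

    logging.basicConfig(level=logging.INFO)

    # Placeholder actions standing in for recorded scenario steps:
    def login(user, password):
        return {"title": "Welcome, " + user}

    def open_report(session, name):
        return {"ok": True}          # pretend every report opens correctly

    def run_scenario():
        try:
            page = login("testuser", "secret")
            assert "Welcome" in page["title"], "checkpoint: login failed"
            for report in ("daily", "weekly", "monthly"):   # loop over test data
                result = open_report(page, report)
                assert result["ok"], "checkpoint: %s report failed" % report
        except AssertionError as err:    # error handler: log, then re-raise
            logging.error("Scenario failed: %s", err)
            raise

    run_scenario()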

122. Does automation replace manual testing?
There can be some functionality which cannot be tested with an automated tool, so we may have to test it manually. Therefore manual testing can never be replaced. (We can write scripts for negative testing as well, but it is a hectic task.) When we talk about a real environment, we do negative testing manually.

123. How will you choose a tool for test automation?
Choosing a tool depends on many things:
1. The application to be tested
2. The test environment
3. Scope and limitations of the tool
4. Features of the tool
5. Cost of the tool
6. Whether the tool is compatible with your application, i.e. the tool should be able to interact with your application
7. Ease of use

124. How you will evaluate the tool for test automation?
We need to concentrate on the features of the tool and how they could be beneficial for our project. The additional new features and enhancements of existing features will also help.

125. How you will describe testing activities?
Testing activities start from the elaboration phase. The various testing activities are: preparing the test plan, preparing test cases, executing the test cases, logging bugs, validating bugs and taking appropriate action on them, and automating the test cases.

126. What testing activities you may want to automate?
Automate all the high-priority test cases which need to be executed as part of regression testing for each build cycle.

127. Describe common problems of test automation.
The common problems are:
1. Maintenance of old scripts when there is a feature change or enhancement
2. A change in the technology of the application will affect the old scripts

129. What are memory leaks and buffer overflows?
A memory leak means incomplete deallocation: memory is allocated but never freed. These are bugs that happen very often. A buffer overflow means data sent as input to the server overflows the boundaries of the input area, causing the server to misbehave. Buffer overflows can be exploited to attack a system.
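Buffer overflows belong to lower-level languages like C, but the leak pattern can be sketched even in garbage-collected Python: memory that is mistakenly kept reachable can never be reclaimed (names are illustrative):

    _cache = []   # module-level list: it only ever grows

    def handle_request(payload):
        _cache.append(payload)   # the 'leak': every request stays reachable,
        return len(_cache)       # so its memory can never be reclaimed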

130. What are the major differences between stress testing, load testing and volume testing?
Stress testing means steadily increasing the load, up to and beyond the expected limits, and checking performance at each level. Load testing means applying the expected load at once and checking performance at that level. Volume testing means checking the system's behaviour while it processes large volumes of data.

Tips To Face Your Interviews

Going into a job interview can be a little bit daunting. The more you want to make a good impression and convey an accurate representation of yourself, the more nerve-wracking the situation can be! You want to come off as relaxed, cool and collected and also professional, friendly and intelligent all in one sit-down meeting. While this can be a bit anxiety-inducing, rest assured that there are some great ways to make sure you stay on top of your game and ace your next interview. Career Mind Mastery is a great course with lots of helpful tips and tricks to get you properly prepped for your next interview.

Some people will tell you to come off friendly and jovial while other advice suggests that you keep cool and never crack a smile. How do you know the best way to act in an interview? The bottom line is that you want to be aware of the situation and ready to adapt to the signals and cues you get from your potential employer. Go on as many interviews as you can – even the bad ones help you learn about yourself and adapt more readily to different situations. Practice makes perfect – very few people are naturally great at interviewing, so unless you are one of those people, don't turn down chances to keep getting better! In addition to practice, though, we've put together some helpful tips to keep in mind on your next interview.

1. Stay relaxed:
Even if this is your dream job, you want to stay calm and keep your cool. Take a few deep breaths before going in and remember to breathe in and out in between questions and responses. You can even try some relaxation techniques before you go into the interview – relaxing your muscles and reminding your mind to stay calm. These can be very helpful in any situation that could raise your heartbeat a bit. Try doing something you enjoy right before the interview so your mind is in a relaxed state – this could be taking a walk, talking to a friend, playing the guitar, etc.

2. Do your homework:
If you walk into the interview with a lot of background knowledge about the company and the person who is interviewing you, you’ll run a much smaller risk of being thrown off-guard. You will be able to answer questions with more confidence and also to ask more thoughtful and intelligent questions about the position. Your interviewer will notice your preparedness and will be impressed with it. Especially if they are interviewing a number of candidates for the job, your interest and engagement with the company prior to hire will be a key factor in giving you a boost over your competition. Find Me a Job is a great course that is full of interview tips and other lessons on researching job tracks and the companies behind them.

3. Tell the truth:
It’s always tempting to want to stand out or embellish your work history to make it sound a little more impressive, but this rarely ever works in your favor. If you think that you have gaps in your work history or skill set and want to appear more appealing, stretching the truth isn’t the answer. Be straightforward and tell your interviewer that you always wanted the chance to learn HTML or PowerPoint and would be excited to do so, but aren’t currently familiar with the programs. They will respect your honesty and appreciate your willingness to learn. And if the lack of skill costs you the job, it wasn’t the right position for you anyway! Consider our course, Your Job Search Is a Spiritual Journey, which will keep you steady on the path to not just finding a job, but finding the exact right job for you.

4. Pay attention to body language:
Body language is one of the key factors in acing an interview. Make sure that you aren’t giving off negative signals with your body language – having your arms crossed over your chest or touching your face and neck are usually signs of disinterest or nervousness and they have been shown to turn people off. Try to keep your body language warm and engaged. Lean forward to express your interest, smile and maintain eye contact. These will help you to appear attentive and interested and also friendly.  The Secrets of Body Language will help you key into some of the more important cues you are giving and how to read other people’s body language more effectively.

Thursday, 20 February 2014

Performance Testing on Mobile Applications


Performance Testing Mobile Applications before Release Can Save Thousands and Your Company’s Reputation.


Most companies today recognize the importance of mobile applications and are quickly jumping on the mobile bandwagon. Some are so eager to release mobile apps that they are skipping a crucial step — proper testing. Everyone knows how frustrating it is to download apps that don’t live up to their promises. The more entrenched in mobility we become, the less patience we have for slow, buggy apps.
If you look at any mobile app store – Apple or Android – there are hundreds of apps that have two-star ratings because they launch and crash right away or take forever to load. If enterprises allowed the time for proper testing, they’d find the bugs that are preventing apps from performing as planned. Letting these bugs sneak into enterprise apps can lead to lost time and development dollars, decreased employee efficiency, and a poor reputation for quality.

Why are so many companies missing the mark? Far too many are skipping the testing process and relying on the crowd to give them feedback. In fact, according to a recent survey from the Software Development Times, 42% of companies are not testing mobile apps before release. Of those who are taking the time to test, only 35.8% use an official test team while the majority relies on developers themselves.

 When you look at the number of apps that are uninstalled after their first attempted use, and combine that with the very real risks that under-performing and unsecured apps can have for enterprise users – like banks and financial institutions – it’s clear the industry as a whole needs to be dealing with these issues before release and much earlier in the app development life cycle.

Performance Testing Mobile Applications is Essential to Stay Relevant:

 Unlike web and desktop applications, mobile applications need to be updated frequently and maintained in order to function correctly. Although there are updates and changes to web and desktop computing platforms, they pale in comparison to the changes in the mobile market. All of these changes lead to more possibilities for app performance issues.

Operating systems for mobile devices are updated one to three times per year. Combine that number with the multitude of phones being released each quarter. It’s no wonder that many companies opt to skip rigorous testing and hope that the market will help develop the app.

The problem is multiplied for organizations that embrace “bring your own device” (BYOD). Apps that are essential for work need to function on a variety of devices and BYOD means an influx of device types and operating systems.

 Errors are Easy to Miss, But Fixing Them Later Can Cost More:

On top of the complications from the number of devices on the market and the constantly updated operating systems, it can be easy to miss a defect during the development process. If a problem slips through during development without performance testing mobile applications, it can be very costly for your company, both financially and in terms of negativity related to the corporate brand.

 For example, if the login functionality of a website doesn’t work or has a timeout problem due to poor performance, that defect could really cost you. There’s going to be a lot of rework after the app has reached the market which can break trust between your company and the users. This applies whether you’re developing for a consumer or business user – if your app isn’t functioning, you’re going to lose your audience.

There’s a saying that goes like this, “it costs $1 to fix a bug in development, $10 to fix it in testing and $100 to fix it in production.” Performance testing mobile applications to ensure performance is in line with plans can save thousands by stopping problems and alerting developers well before the application is used in the market.

If organizations do not have the time or resources to do a full-on performance test, it is essential that the testing team at the very least uses transaction timers within a functional test. These timers can alert the tester that something may have slowed down by providing a benchmark for subsequent test runs. Knowing how your app's performance is affected by application, operating system, or hardware changes can help save money by preventing issues from making it through to production.

Long story short, more testing can lead to better functioning apps, which leads to fewer problems later on. Developers need to be careful during the development process because there is a lot that can be missed. Automating mobile app testing and including performance benchmarks can help catch and fix problems well before an app is deployed and preserve a company’s reputation.

Wednesday, 19 February 2014

Define Capability Maturity Model Integration ( CMMI )

CMMI stands for Capability Maturity Model Integration. It is a process improvement approach that provides companies with the essential elements of effective process. CMMI can serve as a good guide for process improvement across a project, organization or division.
CMMI was formed by integrating multiple previous CMM processes.

Below are the areas which CMMI addresses as a result of integrating the earlier CMM processes:

Systems engineering: - This covers development of total systems. Systems engineers concentrate on converting customer needs into product solutions and supporting them throughout the product life cycle.

Software Engineering: - Software engineers concentrate on the application of systematic, disciplined, and quantifiable approaches to the development, operation, and maintenance of software.

Integrated Product and Process Development: - Integrated Product and Process Development (IPPD) is a systematic approach that achieves a timely collaboration of relevant stakeholders throughout the life of the product to better satisfy customer needs, expectations, and requirements. This section mostly concentrates on the integration part of the project for different processes. For instance, it’s possible that your project is using the services of some third-party component. In such a situation, integration is a big task in itself, and if approached in a systematic manner it can be handled with ease.

Software Acquisition: - Organizations often have to acquire products from other organizations. Acquisition is itself a big step for any organization, and if not handled in a proper manner it is simply calling for disaster.

Tuesday, 18 February 2014

Top 10 reasons why Testing for Mobile Applications is different.

We have witnessed the transition from desktop to web, and we are witnessing another transition from web to mobile. I have been thinking about a blog series on testing mobile applications for a while, and this is the first post in the series. In the coming few weeks, I will try to cover various topics / products / approaches related to testing mobile applications. I will focus on Android to start with and then move on to other platforms.

Before I delve deeper into the subject, it is important to understand how testing mobile applications is different from testing browser / desktop applications. If we understand the distinctions and challenges of testing mobile apps, it will be a bit easier to tackle them.

1. Supported platforms & devices - you have more combinations to test :

Desktop apps were usually targeted for specific platforms and it was relatively easy to access those platforms. Web based applications made it a bit more challenging by adding another dimension - browsers.

Mobile applications take the complexity of supported platforms to the next level by adding devices. Ensuring that mobile apps work on all types of devices (smartphones, tablets and phablets) supplied by major brands (various models from Samsung, Sony, Nokia, HTC, Apple, etc.) and on all the platforms (iOS, Android, Windows, BlackBerry, etc.) is challenging. On top of that, new devices are hitting the market so often that it becomes impossible to cover all the major devices.

In the mobile world, it is important to create something on the lines of graded browser support used by Yahoo to ensure that major platforms are covered.

2. Adaptability & Limited space - Screen size is changing constantly:

Pretty much all the major players are changing the screen sizes of their phones, tablets and phablets, either to figure out what works or in response to the competition. How applications adapt themselves to various screen sizes, layouts and configurations is a challenging question.

Apart from adaptability to different screen sizes, mobile applications have to deal with limited screen space. Limited screen size means that the user cannot be given 30 different options on a single screen. Usability, a consistent experience, on-screen help, and the inability to use search or other applications easily all pose different challenges, and as testers we need to think beyond what is developed and always think of who will use it and in what circumstances.

3. Complex user interaction - More than one way to do everything:

User interaction in desktop and browser based applications was pretty much limited to mouse and keyboard. Mobile applications, on the other hand, are trying to make user interaction as fluid as possible. We have touch screens, and with new phones from Samsung you can just wave your hand to give commands. Siri is becoming more and more advanced and gives us a glimpse of a future where voice commands may become part of every application. Devices are smart enough to understand complex gestures, eye movement, direction, tilt, movement, acceleration, GPS coordinates, surroundings, sound and so on. As testers, we need to ensure that the application works as expected when the user interacts with the app in these different ways.

4. Application Type - HTML5, Native or Hybrid?

In the desktop and browser world, applications were straightforward: they were either desktop or web based applications. However, with the adoption and support of HTML5, application types are merging. On any mobile device, it is not difficult to find HTML5 applications, native applications and hybrid applications. Testing a hybrid application is different from testing a native application, and it is important to understand that difference.

5. Dependency on emulator / simulator - Get devices

For desktops and browsers, developers always had access to the platforms or browsers they were targeting with their applications. Also, virtualization has become more or less commonplace and can be trusted for desktops and browsers.

Mobile development, on the other hand, relies on emulators and simulators. However, they are still not a true representation of the devices. It is also not possible to replicate advanced user interaction on these simulators. As testers, we have to be aware of the capabilities and limitations of these emulators / simulators and figure out what can be tested reliably on them and what cannot.

6. Security & Privacy - You can’t touch me but I can.

Though most mobile applications live in their own sandbox, many platform features are accessible to them. For example, data such as the phone book, pictures and videos is accessible to many other applications. This is all personal user data, and any defect involving even unintentional misuse of this data can jeopardize trust in the application.

In the mobile world, it is important to ensure that applications are secure from intruders, and it is equally important to ensure that applications are not intruding on or accessing data unintentionally.

7. Dependency on Network / carrier - more variations

In the desktop and web world, most users were either on a LAN or on wireless. These networks were not perfectly predictable, but compared to mobile networks they were very predictable. Many connected mobile applications rely on the network: how an application responds to 3G, 4G, weak signals, no signal, strong signals, switching from cellular to wireless and vice-versa, or a user moving at different speeds can all affect how the application behaves. It is often not possible to create or simulate such real-life situations for mobile applications.

Apart from the variation in signal strength and type, mobile apps can respond differently on different carriers. As a tester, it is important to understand whether there are any differences and whether the application works on all the major carriers.

8. Installation, removal and upgrade - Would you come back?

Mobile apps are installed, removed and updated more frequently than desktop applications. Also, the underlying OS and platform are updated more frequently. As an app developer or tester in the mobile world, you have to be on top of what changes are coming in the next revision of the OS / platform and how they might affect your application.

Usually, for most applications, user data is stored on servers and not on the devices. That makes installation a bit tricky: what if the user has multiple devices, what if those devices have different versions of the application, and so on.

Things like backward compatibility, simultaneous support for multiple versions, data preservation, restoring state and data, and the ability to install / upgrade multiple times are all important checks for mobile application testing.

9. Session Management & Interruptions - Who’s calling?

Handling interruptions is a way of life for mobile applications. Apps and users are constantly interrupted by calls, SMS, push notifications and so on. How applications handle these interruptions and how they maintain their state are important, but it is also important to see how many interruptions the application itself generates and what triggers them.

As a tester, it is important to ensure that the application behaves properly when it is interrupted, and it is equally important to ensure that the application does not interrupt unnecessarily and works within the boundaries defined by the platform or the user.

10. Mobile specific Non-functional testing - and you thought it’s over.

Mobile applications add many more dimensions to non-functional testing. Old-school performance of the application is the obvious one, but there are many other factors which should be considered as well.

How much data does your application consume? How much would that data usage cost the user? How much battery do the applications consume? Does the app behave differently in high-battery and low-battery conditions? How much space does it occupy? How much of a trail does it leave, and how does it clear those trails / logs? These are all important non-functional factors which should be considered as part of a mobile testing strategy.

Web Application Testing Checklist

Testing a web application is certainly different from testing a desktop or any other application. Within web applications, there are certain standards which are followed in almost all applications. Having these standards makes life easier for us, because they can be converted into a checklist, and the application can be tested easily against that checklist.

 1. FUNCTIONALITY

1.1 LINKS

1.1.1 Check that each link takes you to the page it says it will.
1.1.2 Ensure there are no orphan pages (pages that no other page links to).
1.1.3 Check all of your links to other websites.
1.1.4 Are all referenced websites and email addresses hyperlinked?
1.1.5 If pages have been removed from the site, set up a custom 404 page that redirects visitors to the home page (or a search page) when they try to access a page that no longer exists.
1.1.6 Check all mailto links and whether the mail reaches its destination.
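Many of the link checks above are easy to automate. The sketch below (Python, assuming the requests and beautifulsoup4 packages; the URL is a placeholder) crawls one page and flags links that do not respond with a success status:

    from urllib.parse import urljoin

    import requests
    from bs4 import BeautifulSoup

    BASE_URL = "https://example.com"  # placeholder site under test

    page = requests.get(BASE_URL, timeout=10)
    soup = BeautifulSoup(page.text, "html.parser")

    for anchor in soup.find_all("a", href=True):
        href = urljoin(BASE_URL, anchor["href"])
        if href.startswith("mailto:"):
            continue  # mailto links need a manual or SMTP-level check
        # Some servers reject HEAD; a real checker would fall back to GET.
        status = requests.head(href, allow_redirects=True, timeout=10).status_code
        if status >= 400:
            print(f"BROKEN ({status}): {href}")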

1.2 FORMS

1.2.1 Acceptance of invalid input
1.2.2 Optional versus mandatory fields
1.2.3 Input longer than the field allows
1.2.4 Radio buttons
1.2.5 Default values on page load/reload (e.g. the terms-and-conditions checkbox should be unchecked by default)
1.2.6 Can command buttons be used for hyperlinks and 'Continue' links?
1.2.7 Is all the data inside combo/list boxes arranged in chronological order?
1.2.8 Are all the parts of a table or form present and correctly laid out? Can you confirm that selected text is in the "right" place?
1.2.9 Does a scrollbar appear if required?

1.3 DATA VERIFICATION AND VALIDATION

1.3.1 Is the Privacy Policy clearly defined and available for user access?
1.3.2 The system should never behave unpredictably when invalid data is entered.
1.3.3 Check what happens if a user deletes cookies while on the site.
1.3.4 Check what happens if a user deletes cookies after visiting a site.

2. APPLICATION SPECIFIC FUNCTIONAL REQUIREMENTS

2.1 DATA INTEGRATION

2.1.1 Check the maximum field lengths to ensure that no characters are truncated.
2.1.2 If numeric fields accept negative values, can these be stored correctly in the database, and does it make sense for the field to accept negative numbers?
2.1.3 If a particular set of data is saved to the database, check that each value gets saved fully; beware of truncation of strings and rounding of numeric values.

2.2 DATE FIELD CHECKS

2.2.1 Assure that leap years are validated correctly and do not cause errors/miscalculations.
2.2.2 Assure that Feb. 28 and 29 are validated correctly, and that invalid dates such as Feb. 30 are rejected without errors/miscalculations.
2.2.3 Is the copyright notice updated for all sites, including Yahoo co-branded sites?

2.3 NUMERIC FIELDS

2.3.1 Assure that the lowest and highest values are handled correctly.
2.3.2 Assure that numeric fields with a blank in position 1 are processed or reported as an error.
2.3.3 Assure that fields with a blank in the last position are processed or reported as an error.
2.3.4 Assure that both + and - values are correctly processed.
2.3.5 Assure that division by zero does not occur.
2.3.6 Include the value zero in all calculations.
2.3.7 Assure that upper and lower values in ranges are handled correctly (using BVA).

2.4 ALPHANUMERIC FIELD CHECKS

2.4.1 Use blank and non-blank data.
2.4.2 Include the lowest and highest values.
2.4.3 Include invalid characters and symbols.
2.4.4 Include valid characters.
2.4.5 Include data items with the first position blank.
2.4.6 Include data items with the last position blank.
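Checks like those in 2.3 and 2.4 lend themselves to data-driven tests: one small validator, many inputs. A minimal Python sketch (validate_amount is a hypothetical field validator, not from any real application):

    import unittest


    def validate_amount(text):
        """Hypothetical validator: True only if text is a clean number."""
        if text != text.strip():
            return False  # blank in first or last position -> error
        try:
            float(text)
            return True
        except ValueError:
            return False


    class NumericFieldChecks(unittest.TestCase):
        def test_field_checks(self):
            cases = [
                ("0", True),     # include the value zero (2.3.6)
                ("+42", True),   # plus sign processed (2.3.4)
                ("-42", True),   # minus sign processed (2.3.4)
                (" 42", False),  # blank in position 1 (2.3.2)
                ("42 ", False),  # blank in last position (2.3.3)
                ("abc", False),  # invalid characters (2.4.3)
            ]
            for text, expected in cases:
                with self.subTest(text=text):
                    self.assertEqual(validate_amount(text), expected)


    if __name__ == "__main__":
        unittest.main()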

3. INTERFACE AND ERROR HANDLING

3.1 SERVER INTERFACE

3.1.1 Verify that communication happens correctly: web server to application server, application server to database server, and vice versa.
3.1.2 Compatibility of server software, hardware and network connections.

3.2 EXTERNAL INTERFACE

3.2.1 Have all supported browsers been tested?
3.2.2 Have all error conditions related to external interfaces been tested when external application is unavailable or server inaccessible?

3.3 INTERNAL INTERFACE

3.3.1 If the site uses plug-ins, can the site still be used without them?
3.3.2 Can all linked documents be supported/opened on all platforms (e.g. can a Microsoft Word file be opened on Solaris)?
3.3.3 Are failures handled if there are errors during download?
3.3.4 Can users use copy/paste functionality? Is it allowed in password/CVV/credit card number fields (it usually should not be)?
3.3.5 Are you able to submit unencrypted form data?

3.4 ERROR HANDLING AND RECOVERY

3.4.1 If the system does crash, are the restart and recovery mechanisms efficient and reliable?
3.4.2 If we leave the site in the middle of a task, does the task cancel?
3.4.3 If we lose our Internet connection, does the transaction cancel?
3.4.4 Does our solution handle browser crashes?
3.4.5 Does our solution handle network failures between the web site and the application servers?
3.4.6 Have you implemented intelligent error handling (e.g. for disabled cookies)?

4. COMPATIBILITY

4.1 BROWSERS

4.1.1 Is the HTML version being used compatible with the appropriate browser versions?
4.1.2 Do images display correctly in the browsers under test?
4.1.3 Verify that the fonts are usable in all the browsers under test.
4.1.4 Are Java code/scripts usable by the browsers under test?
4.1.5 Have you tested animated GIFs across browsers?

4.2 VIDEO SETTINGS

4.2.1 Screen resolution (check that text and graphic alignment still work and fonts are readable) at e.g. 1024x768, 800x600 and 640x480 pixels.
4.2.2 Colour depth (256 colours, 16-bit, 32-bit).

4.3 CONNECTION SPEED

4.3.1 Does the site load quickly enough in the viewer's browser, e.g. within 7 seconds?
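A crude version of this check can be scripted; the sketch below (Python, placeholder URL) just times a single request against the 7-second threshold. Real projects would use a proper performance tool instead.

    import time

    import requests

    start = time.monotonic()
    requests.get("https://example.com", timeout=30)  # placeholder URL
    elapsed = time.monotonic() - start
    assert elapsed <= 7, f"Page took {elapsed:.1f}s to load (limit: 7s)"
    print(f"Loaded in {elapsed:.1f}s")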

Entry and Exit criteria in a project?

Entry and exit criteria are a must for the success of any project. If you do not know where to start and where to finish, then your goals are not clear. By defining entry and exit criteria you define your boundaries. For instance, you can define as entry criteria that the customer should provide the requirement document or acceptance plan; if these entry criteria are not met, you will not start the project. On the other end, you can also define exit criteria for your project. For instance, one common exit criterion in all projects is that the customer has successfully executed the whole acceptance test plan.


Validation and Verification:

The difference between validation and verification is that in validation we actually execute the application, while in verification we review the work-products without actually running the application.

Verification is basically of two main types:

1) Walk-through  2) Inspection

A walk-through is an informal way of verification. For instance, you can call a colleague and do an informal walk-through to check whether the documentation and coding are proper.

Inspection is a formal, official procedure. For instance, your organization can have an official body which approves design documents for any project. Every project in the organization needs to go through an inspection which reviews its design documents. In case there are issues in the design documents, the project gets an NC (non-conformance) list, and you cannot proceed ahead without clearing the NCs raised by the inspection team.

Six Sigma Concepts

Sigma is a statistical measure of variation in a process. We say a process has achieved Six Sigma if its quality is 3.4 DPMO (defects per million opportunities). Six Sigma is a problem-solving methodology that can be applied to a process to eliminate the root cause of defects and the costs associated with them.
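DPMO itself is simple arithmetic; a quick illustration in Python, with invented figures:

    # DPMO = defects / (units * opportunities per unit) * 1,000,000
    defects = 17
    units = 5000
    opportunities_per_unit = 10
    dpmo = defects / (units * opportunities_per_unit) * 1_000_000
    print(dpmo)  # 340.0 -- still far from the Six Sigma level of 3.4 DPMO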

Design process in Six Sigma:

The main focus of Six Sigma is on reducing defects and variation in processes. DMAIC and DMADV are the models used in most Six Sigma initiatives: DMADV is the model for designing new processes, while DMAIC is for improving existing ones.

The DMADV model has the following five steps:

· Define: - Determine the project goals and the requirements of customers (external and internal).
· Measure: - Assess customer needs and specifications.
· Analyze: - Examine process options to meet customer requirements.
· Design: - Develop the process to meet the customer requirements.
· Verify: - Check the design to ensure that it meets customer requirements.

The DMAIC model has the following five steps:

· Define the projects, the goals, and the deliverables to customers (internal and external). Describe and quantify both the defect and the expected improvement.
· Measure the current performance of the process. Validate data to make sure it is credible and set the baselines.
· Analyze and determine the root cause(s) of the defects. Narrow the causal factors to the vital few.
· Improve the process to eliminate defects. Optimize the vital few and their interrelationships.
· Control the performance of the process. Lock down the gains.


Monday, 17 February 2014

Testing Techniques

Boundary value analysis:

In projects there can be scenarios where we need to do boundary value testing. For instance, let's say that in a bank application you can withdraw a maximum of 25000 and a minimum of 100. In boundary value testing we test only at and around the exact boundaries rather than in the middle: just below the minimum, at the boundaries, and just above the maximum. This covers all the scenarios. The figure below shows BV testing for the bank application described above: TC1 and TC2 are sufficient to test all conditions for the bank, while TC3 and TC4 are just duplicate/redundant test cases which do not really add any value to the testing. So by applying proper BV fundamentals we can avoid duplicate test cases that add no value.
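As a minimal sketch of the idea in Python (is_valid_withdrawal is an invented implementation of the rule, not real bank code), the useful test values sit at and just outside the boundaries:

    def is_valid_withdrawal(amount):
        return 100 <= amount <= 25000  # the bank rule: min 100, max 25000


    # Test at and just outside each boundary instead of arbitrary middle values.
    assert not is_valid_withdrawal(99)     # just below the minimum -> invalid
    assert is_valid_withdrawal(100)        # at the minimum -> valid
    assert is_valid_withdrawal(25000)      # at the maximum -> valid
    assert not is_valid_withdrawal(25001)  # just above the maximum -> invalid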




Equivalence partitioning:

In equivalence partitioning we identify inputs which are treated by the system in the same way and produce the same results. You can see from the figure below that TC1 and TC2 both give the same result (Result1), and TC3 and TC4 both give the same result (Result2). In short, we have two redundant test cases. By applying equivalence partitioning we minimize the redundant test cases.




You can apply the following test to see whether inputs form an equivalence class or not:

· All the test cases should test the same thing.
· They should produce the same results.
· If one test case catches a bug, then the others should also catch it.
· If one of them does not catch the defect, then the others should also not catch it.

The figure below shows how equivalence partitioning works. We have a scenario in which valid values lie between 20 and 2000; any values above 2000 or below 20 are invalid. In this scenario the tester has made four test cases:

· Check below 20 (TC1)
· Check above 2000 (TC2)
· Check equal to 30 (TC3)
· Check equal to 1000 (TC4)


Test cases 3 and 4 give the same output, so they lie in the same partition; in short, we are doing redundant testing. Both TC3 and TC4 fall in one equivalence partition, so we can prepare a single test case testing one value between the boundaries, thus eliminating redundant testing in the project.
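A minimal Python sketch of the same 20-2000 scenario (is_valid is an invented implementation): one representative value per partition is enough, so a single in-range value replaces TC3 and TC4.

    def is_valid(value):
        return 20 <= value <= 2000  # valid partition: 20..2000


    partitions = {
        "below range (invalid)": (19, False),    # TC1-style case
        "inside range (valid)": (1000, True),    # one value instead of TC3 and TC4
        "above range (invalid)": (2001, False),  # TC2-style case
    }

    for name, (value, expected) in partitions.items():
        assert is_valid(value) == expected, name
        print(f"{name}: {value} -> {is_valid(value)}")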

Mobile Localization Testing Concepts

Localization testing checks how well the build has been localized into a particular target language. For example, suppose a web/mobile application is built in English and the company wants to offer the same application in other languages like Hindi, Chinese or Japanese, instead of creating a different application for each language or region. At this juncture localization testing comes into play: the developers make the same application run in different languages. The localization is, however, exceedingly difficult to verify without knowing the native language of the region. And if the product is not globalized enough to support a given language, the application stays regional and can only run and gain popularity in its original target area. Localization testing requires both the source and the target language versions of the product, installed on the environment that a typical user would use. Therefore attention must be paid to the correct version of the operating system, language, regional settings and more.

Many people get confused between localization testing and linguistic testing. Localization testing focuses only on the following parameters:

1) Correct functionality
2) Appearance
3) Completeness of the localized product.


Linguistic testing, on the other hand, ensures that the correct language rules are being used and focuses on correct in-context linguistic usage.

Testing teams should pay attention to the tiniest details and differences. They should look for truncations, misalignments and untranslated strings. Testers should also note that a localization bug differs from a core product bug: a localization bug is specific to the language under test, or possibly to a group of languages. For example, most of you will have noticed websites offering country-specific versions where the content of the website is the same but the language changes. Hence it is enough for the tester to regress once or twice, but the result should be verified by a technical writer or language expert; a tester can generally only compare the text and observe truncation and overlap issues.
The three stages of localization are given below:

Stage 1: The product is not translated, but it must work on a foreign-language operating system.

Stage 2: Only the product user interface, release notes and installation guide are translated into the target language.

Stage 3: The product user interface, online help and printed documentation are all translated.


The following needs to be considered in localization testing:

• Upper- and lower-case conversions (general rules for any language)
• Video content (if implemented)
• Keyboard support (predefined keys for app handling)
• Things that are often altered during localization, such as the user interface and content files
• Operating system (REX, Android, iPhone OS)
• Text filters, if applicable (searching based on the alphabet)
• Hot keys (check before releasing the build)
• Spelling rules (general rules for any language)
• Sorting rules (general rules for searching and sorting)
• Date formats and currencies of the country, such as the dollar sign, pound and other special characters
• Rulers and measurements (if implemented)
• Memory availability
• Compliance with local laws and regulations in the local market
• Help and About (how to install, play, mouse/key functions etc.)

Security testing Concepts and main things to test in Security Testing?

Whenever we develop any application, security testing should be a top priority, especially for finance-domain and banking applications. The following terms are used most often in security testing:

 - Password cracking
 - Vulnerability
 - URL manipulation
 - SQL injection
 - Cross Site Scripting
 - Spoofing

Below are a few things to concentrate on while doing security testing:

 - Authentication validations and password protection
 - Direct URLs should not work without logging in to the application
 - HTTP and HTTPS validations
 - Protocol and IP configuration validations
 - Memory leaks
 - Configuration of the application on servers
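As one concrete example of the direct-URL check above, the sketch below (Python, placeholder URL) requests a protected page without any session and expects a redirect to login or an explicit 401/403, never the protected content itself:

    import requests

    PROTECTED_URL = "https://example.com/account/settings"  # placeholder

    resp = requests.get(PROTECTED_URL, allow_redirects=False, timeout=10)

    # Without a login we expect a redirect to the login page (301/302) or an
    # explicit 401/403 -- anything but the protected page itself.
    assert resp.status_code in (301, 302, 401, 403), (
        f"Protected page served without login (HTTP {resp.status_code})"
    )
    print("direct URL correctly refused:", resp.status_code)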

Sunday, 16 February 2014

5 Basic Interview Tips To Get Job

Nowadays most people face a lot of problems clearing interviews to get a good job. Some people who have a lot of technical skills still fail during the interview just because of basic skills, so here I am trying to give some basic interview tips. Try to follow these tips during your interview and get placed in a good organization.
 
1. Always be confident
2. Act as if you are what you believe
3. Research the company before going to the interview
4. Be prepared on what is in your CV or resume
5. Show interest and ask for the job
 

Be confident: The first impression is always the strongest, so be confident but not over-confident. When you meet the interviewer, shake hands with confidence. During the interview do not look here and there; maintain eye contact and show enthusiasm for what the interviewer is telling you and what he is going to ask. If you have problems pronouncing certain words, it is better not to use them in an interview. None of these things comes in a single day, so practice in front of a mirror.

Act as if you are what you believe: When you start the interview, feel as if you already have the job or are already doing it. Think about how you would react in the situations the interviewer describes, what the responsibilities of the job would be, and what you would do if a problematic situation arose.

Research the company before going to the interview: Always research the company you are interviewing with. It is very important to be able to answer when the interviewer asks what you know about the company, so be prepared to speak about it for at least two minutes.

Be prepared on what is in your CV or resume: You should always know what is written in your CV, verify your resume two or three times before leaving for the interview, and be able to answer any question about what you wrote in it.

Show interest and ask for the job: Ask for the job, justify why they should hire you for the position, and show your capabilities and why you in particular are suitable for the vacancy.


Be careful about the following things while giving an interview:

-Slouching in a chair
-Crossing your arms
-Playing with your hair or jewelry
-Leaning back in chair

Software testing Interview questions for freshers - Part 2

What is Endurance Testing?

Endurance testing: in this testing we test the application's behaviour under load and stress applied over a long duration of time. The goals of this testing are:

    To determine how the application responds to high load and stress conditions in real scenarios.
    To ensure that response times under high load and stress conditions are within the user's response-time requirements.
    To check for memory leaks or other problems that may occur with prolonged execution.

What is End-to-End testing?

In End-to-End testing we take the application from the start of the development cycle to the end of the development cycle; simply put, it comes into play from taking the requirements from the customer until the final delivery of the application. The purposes of End-to-End testing are:

    To validate the software requirements and check that the application is integrated with its external interfaces.
    To test the application in a real-world environment scenario.
    To test the interaction between the application and the database.
    It is executed after functional and system testing.
    End-to-End testing is also called chain testing.

What is Gorilla Testing?

Gorilla testing is a technique that involves testing a particular module or component's functionality extensively, with various ranges of valid and invalid inputs. In Gorilla testing, predefined test cases and test data are not required: it uses random data and test cases to perform the testing. The purpose of Gorilla testing is to examine the capability of a single module's functionality by applying heavy load and stress to it, and to determine how much load and stress it can tolerate without crashing.

Why do we need Localization Testing?

Localization testing mainly deals with the functionality and GUI of the application. The purposes of localization testing are the following:

    To deal with the internationalization and localization aspects of the software.
    To evaluate how successfully the product is translated into a specific language.
    To adapt the GUI of the application to a particular region's language and interface.

What is Metric?

Metric is a standard of measurement. Software metrics use statistical methods for explaining the structure of the application. A software metric tells us measurable things like the number of bugs per line of code. We can take the help of software metrics to make decisions regarding the application's development. Test metrics are derived from raw test data, because what cannot be measured cannot be managed. Software metrics also help the project management team to manage the project, for example the schedule for development of each phase.

Explain Monkey testing.

Monkey testing is a type of black box testing used mostly at the unit level. In monkey testing the tester enters data in any format and checks that the software does not crash. In this testing we use smart monkeys and dumb monkeys.

    Smart monkeys are used for load and stress testing; they help in finding bugs but are very expensive to develop.
    Dumb monkeys are important for basic testing; they help in finding bugs of high severity and are less expensive to develop than smart monkeys.

Example: symbols are entered in a phone number field.
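A dumb monkey can be a few lines of code. The sketch below (Python; parse_phone is an invented unit under test) fires random printable junk at the function and only checks that nothing crashes:

    import random
    import string


    def parse_phone(text):
        """Invented unit under test: keep digits, drop everything else."""
        return "".join(ch for ch in text if ch.isdigit())


    random.seed(42)  # a reproducible monkey
    for _ in range(10_000):
        junk = "".join(random.choice(string.printable) for _ in range(20))
        parse_phone(junk)  # pass criterion: no exception is raised
    print("survived 10,000 random inputs")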

What is Negative Testing?

Negative testing is performed to find the situations in which the software crashes. It is a negative approach: the tester puts in effort to find the negative aspects of the application. Negative testing ensures that the application can handle invalid input, incorrect data and incorrect user responses. For example, when the user enters alphabetical data in a numeric field, an error message should be displayed saying "Incorrect data type, please enter a number".
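A sketch of that exact example (validate_numeric_field is an invented helper): the negative test feeds alphabetic data and insists on the error message rather than a crash.

    def validate_numeric_field(text):
        """Invented helper: numeric fields reject anything but digits."""
        if not text.isdigit():
            return "Incorrect data type, please enter a number"
        return "OK"


    assert validate_numeric_field("abc") == "Incorrect data type, please enter a number"
    assert validate_numeric_field("123") == "OK"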

What is Path Testing?

Path testing is testing in which the tester ensures that every path of the application is executed at least once: all paths in the program source code are tested at least once. The tester can use a control flow graph to perform this type of testing.

What is Performance Testing?

Performance testing is focused on verifying the system's performance requirements, like response time, transactional throughput and number of concurrent users. It is used to accurately measure the end-to-end performance of a system, and it identifies the loopholes in the architectural design, which helps to tune the application.

It includes the following:

    Emulating 'n' users interacting with the system using minimal hardware.
    Measuring end-user response time.
    Repeating the load consistently.
    Monitoring the system components under controlled load.
    Providing robust analysis and reporting engines. (A toy load sketch follows this list.)
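As a toy illustration of emulating 'n' users and measuring response times (Python; placeholder URL and user count; real projects use dedicated tools such as JMeter or LoadRunner):

    import time
    from concurrent.futures import ThreadPoolExecutor

    import requests

    URL = "https://example.com"  # placeholder
    USERS = 20                   # the 'n' concurrent users


    def one_request(_):
        start = time.monotonic()
        requests.get(URL, timeout=30)
        return time.monotonic() - start


    with ThreadPoolExecutor(max_workers=USERS) as pool:
        times = list(pool.map(one_request, range(USERS)))

    print(f"avg {sum(times)/len(times):.2f}s, worst {max(times):.2f}s")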

What is the difference between baseline and benchmark testing?

The difference between baseline and benchmark testing are:

    Baseline testing is the process of running a set of tests to capture performance information, whereas benchmarking is the process of comparing application performance against an industry standard set by some other organization.
    Baseline testing uses the collected information to make changes in the application and improve its performance and capabilities, whereas benchmark information tells us where our application stands with respect to others.
    A baseline compares the present performance of the application with its own previous performance, whereas a benchmark compares our application's performance with other companies' applications.

What is test driver and test stub?

    A stub is called from the software component to be tested; it is used in the top-down approach.
    A driver calls the component to be tested; it is used in the bottom-up approach.
    Both test stubs and test drivers are dummy software components.

We need test stubs and test drivers for the following reasons (a short sketch follows this list):

    Suppose we want to test the interface between modules A and B, but we have developed only module A. We cannot test module A on its own, but if a dummy module standing in for B (a stub) is prepared, we can use it to test module A.
    Now suppose module B is ready but the module that calls it is not, so module B cannot send or receive data directly. In such cases we have to transfer data to it through some external component; the external component that calls the module under test is called a driver.
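A minimal Python sketch of both dummy components (module names and data are invented): the stub stands in for unfinished module B so that module A can be tested, and the driver is the throwaway code that calls the module under test and checks the result.

    def module_b_stub(order_id):
        """Stub: replaces unfinished module B, returns canned data."""
        return {"order_id": order_id, "status": "SHIPPED"}


    def module_a(order_id, lookup=module_b_stub):
        """Module under test: formats whatever the lookup component returns."""
        record = lookup(order_id)
        return f"Order {record['order_id']}: {record['status']}"


    def driver():
        """Driver: exercises module A and verifies the output."""
        assert module_a(7) == "Order 7: SHIPPED"
        print("module A works against the stub")


    driver()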

What is Agile Testing?

Agile testing means quickly validating the client's requirements and making the application with a high-quality user interface. When a build is released to the testing team, testing of the application starts immediately to find bugs. As testers we need to focus on the customer and end-user requirements. We put in the effort to deliver a quality product in spite of short time frames, which helps reduce the cost of development; test feedback is implemented in the code, which avoids defects coming from the end user.

Explain bug life cycle.

Bug Life Cycle:

    When a tester finds a bug, the bug is assigned the status NEW or OPEN.
    The bug is assigned to the development project manager, who analyzes it and checks whether it is a valid defect. If it is not valid, the bug is rejected and its status becomes REJECTED.
    If it is valid, the defect is next checked for scope. When a bug is not part of the current release, such defects are POSTPONED.
    Now the tester checks whether a similar defect was raised earlier. If yes, the defect is assigned the status DUPLICATE.
    When the bug is assigned to a developer, it is given the status IN-PROGRESS.
    Once the bug is fixed, the defect is assigned the status FIXED.
    Next the tester re-tests the code. If the test case passes, the defect is CLOSED.
    If the test case fails again, the bug is RE-OPENED and assigned back to the developer. That is the whole bug life cycle (a small sketch of these transitions follows).
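The life cycle above is really a small state machine; writing the allowed transitions down in code (statuses taken from the list above; the exact set varies by tool) makes illegal moves easy to spot:

    ALLOWED = {
        "NEW": {"REJECTED", "POSTPONED", "DUPLICATE", "IN-PROGRESS"},
        "IN-PROGRESS": {"FIXED"},
        "FIXED": {"CLOSED", "RE-OPENED"},
        "RE-OPENED": {"IN-PROGRESS"},
    }


    def move(status, new_status):
        if new_status not in ALLOWED.get(status, set()):
            raise ValueError(f"illegal transition {status} -> {new_status}")
        return new_status


    s = move("NEW", "IN-PROGRESS")
    s = move(s, "FIXED")
    s = move(s, "CLOSED")
    print("bug reached", s)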

What is Matching Defects?

Matching defects helps us avoid logging the same defect twice in the application. While using QC (Quality Center), every time we log a bug, QC saves a list of keywords from the Summary and Description fields of the bug. When we search for similar defects in QC, keywords in these fields are matched against defects that were logged previously. Keywords must be more than two characters long, and they are not case sensitive. We have two methods to search for similar defects:

    Finding similar defects: compares a selected defect with all other existing defects in the project.
    Finding similar text: compares a specific text string against all other existing defects in the project.

What is Recovery Testing?

Recovery testing is done to check how fast and how well the application can recover from any type of crash or hardware failure. The type and extent of recovery is specified in the requirement specifications. Recovery testing enables the customer to avoid the inconveniences generally associated with loss of data and application performance. We can perform regular recovery testing, taking backups of all necessary and important data beforehand.

What is Test Case?

A test case is a set of conditions which is used by the tester to perform testing of the application, to make sure the application is working as per the requirements of the user.

    A test case contains information like test steps, verification steps, prerequisites, outputs, test environment, etc.
    The process of developing test cases can also help us find issues in the requirements and design of the application.

In Test First Design what step you will follow to add new functionality into the project?

When we have to add new functionality to our project, we perform the following steps (a sketch follows this list):

    Quickly add a developer test: create a test which ensures that the newly added functionality will not crash the project.
    Run your tests: execute the test to ensure that the new functionality does not crash the application.
    Update your production code: update the code with just enough functionality that it passes the new test, for example adding an error message to a field that can take only numeric data.
    Run your test suite again: if a test fails, change the code and re-test the application.
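A minimal Python sketch of those steps in order (numeric_field_error is an invented example mirroring the error-message case above): the test exists first, then just enough production code to make it pass.

    import unittest


    def numeric_field_error(text):
        """Step 3: production code written only after the test below existed."""
        if not text.isdigit():
            return "please enter a number"
        return ""


    class NewFieldRuleTest(unittest.TestCase):
        def test_rejects_non_numeric_input(self):  # step 1: the test comes first
            self.assertEqual(numeric_field_error("abc"), "please enter a number")

        def test_accepts_numeric_input(self):
            self.assertEqual(numeric_field_error("123"), "")


    if __name__ == "__main__":
        unittest.main()  # steps 2 and 4: run the suite before and after coding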

What is Validation and Verification?

Verification: the process of evaluating the work-products of a development phase to determine whether they fulfil the specified requirements for that phase.
Validation: the process of evaluating software during or at the end of the development process to determine whether it satisfies the specified requirements.

Difference between Verification and Validation:

    Verification is static testing, whereas validation is dynamic testing.
    Verification takes place before validation.
    Verification evaluates plans, documents, requirements and specifications, whereas validation evaluates the product itself.
    Verification inputs are checklists, issue lists, walkthroughs and inspections, whereas validation involves testing the actual product.
    Verification's output is a set of documents, plans, specifications and requirement documents, whereas in validation the actual product is the output.

What are different approaches to do Integration Testing?

Integration testing is black box testing. It focuses on the interfaces between units, to ensure that the units work together to complete a specific task. The purpose of integration testing is to confirm that the different components of the application interact with each other correctly. Integration testing is considered complete when actual results and expected results are the same. There are mainly three approaches to integration testing:

    Top-down approach: tests the components by integrating them from top to bottom.
    Bottom-up approach: integration takes place from the bottom of the control flow up to the higher-level components.
    Big bang approach: all the different modules are joined together to form a complete system, and then testing is performed on the whole.

Can you explain the elementary process?

Software applications are made up of several elementary processes. There are two types of elementary processes:

    Dynamic elementary process: involves moving data from one location to another; the locations can be within the application or outside it.
    Static elementary process: involves maintaining the data of the application.

Explain the PDCA cycle.

Software testing is an important part of the software development process. Normal software development has four important steps, the PDCA (Plan, Do, Check, Act) cycle. The four steps are discussed below:

    Plan: define the goal and the plan for achieving that goal.
    Do: execute the strategy that was laid out in the Plan phase.
    Check: make sure that everything is going according to the plan and that we get the expected results.
    Act: act on the issues found during the Check phase.

What are the categories of defects?

There are three main categories of defects:

    Wrong: the requirements have been implemented incorrectly in the application.
    Missing: a requirement given by the customer that the application fails to meet.
    Extra: a requirement incorporated into the product that was not given by the end customer. This is always a variance from the specification, but may be an attribute desired by the users of the product.

What are different types of verifications?

Verification is a static type of software testing which starts in the earlier phases of software development. In this approach we do not execute the software, which is why it counts as static testing; the product is evaluated by going through the code and documents. The types of verification are:

    Walkthrough: walkthroughs are an informal technique, in which the development lead organizes a meeting with team members to get feedback regarding the software. They can be used to improve software quality. Walkthroughs are unplanned in the SDLC cycle.
    Inspection: an inspection is a formal check of a software product, done thoroughly with the intention of finding defects and ensuring that the software meets the user requirements.

Which test cases are written first: white boxes or black boxes?

Generally, black box test cases are written first and white box test cases later. To write black box test cases we need the requirement documents and the design or project plan, and all these documents are easily available in the earlier phases of development. A black box test case does not need the structural design of the application, but white box testing does; the structural design becomes clearer in the later parts of the project, mostly during implementation. For black box testing you only need to analyze the application from the functional perspective, which is easily available from a simple requirement document.

What is difference between latent and masked defect?

The difference between latent and masked defect are:

    A latent defect is an existing defect that has not yet caused a failure because the conditions required to invoke the defect have not been met.
    A masked defect is an existing defect that has not yet caused a failure because another defect has prevented the part of the code where it is present from being executed.

What is coverage and what are the different types of coverage techniques?

Coverage is a measurement used in software testing to describe the degree to which the source code is tested. There are three basic types of coverage techniques:

    Statement coverage: This coverage ensures that each line of source code has been executed and tested.
    Decision coverage: This coverage ensures that every decision (true/false) in the source code has been executed and tested.
    Path coverage: In this coverage we ensure that every possible route through a given part of code is executed and tested.
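A tiny invented Python function shows why decision coverage is stronger than statement coverage: a single call executes every line, yet never exercises the false outcome of the decision.

    def f(x):
        result = x
        if x > 0:
            result = x * 2
        return result


    f(5)   # every statement runs: 100% statement coverage, but the decision
           # 'x > 0' has only been seen as True
    f(-5)  # adding this exercises the False outcome -> full decision coverage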

Explain the concept of defect cascading?

Defect cascading is when one defect is caused by another: one defect invokes another defect in the application. When a defect is present at some stage but is not identified, it propagates to later phases without getting noticed, which results in an increase in the number of defects.

What are the basic elements of defect report format?

The basic elements of Defect Report Format are:

1. Project name
2. Module name
3. Defect detected on
4. Defect detected by
5. Defect id
6. Defect name
7. Snapshot of the defect (if the defect is in a non-reproducible environment)
8. Priority, severity, status
9. Defect resolved by
10. Defect resolved on.

What is destructive testing, and what are its benefits?

Destructive testing includes methods where material is broken down in order to evaluate its mechanical properties, such as strength, toughness and hardness; for example, checking whether the quality of a weld is good enough to withstand extreme pressure, and verifying the properties of a material.

Benefits of Destructive Testing (DT)

    Verifies properties of a material
    Determines quality of welds
    Helps you to reduce failures, accidents and costs
    Ensures compliance with regulations

What is Use Case Testing?

Use case: a use case is a description of the process performed by an end user for a particular task. A use case contains the sequence of steps performed by the end user to complete a specific task, or a step-by-step process describing how the application and the end user interact with each other. A use case is written from the user's point of view.

Use case testing: use case testing uses these use cases to evaluate the application, so that the tester can examine all the functionalities of the application. Use case testing covers the whole application.