Friday, 18 April 2014

Testing Strategies and Tactics for Mobile Applications

Testing. No one really wants to do it. It’s expensive. It’s time consuming. But unfortunately, it’s needed to ensure that your consumers have a positive experience when they use your mobile applications. And it’s vital that you make sure that the experience is a great one for every consumer every time they use your application, starting with that very first time. Fail to do a good job of testing and your customer will end up doing it for you—and unlike your testing team, your customers don’t have the tools or the time to report back problems. And keep in mind that your customers don’t want to be treated like guinea pigs. When they find a fault, they simply never come back, and you’ll never hear a word from them.

The goal of your testing efforts is not to find errors. Perhaps your developer has actually done a great job and did not make any mistakes. Instead, your goal in testing should be to understand the quality of your offering. Does it work? Does it function as expected? Will it meet the needs of your users, so that they come back again and again?

But when it comes to testing mobile applications there are unique challenges. The challenges of mobile testing present you with tradeoffs that you need to consider and choices that you need to make about the mix of different techniques and methods that you will use in testing. Each testing choice you consider will have pros and cons associated with it, and you will probably find that no one testing choice will be completely satisfying. Rather, you will need to consider a testing strategy that combines different testing options that together provide you with the best overall testing result that balances the tradeoff between cost, quality, and time-to-market.

In this document, we examine the various testing options for mobile applications while explaining the factors that you will need to consider in determining your testing strategy. Finally, we make some recommendations on how you can combine the various testing options to find the testing strategy that fits your mobile application.

Mobile Testing Challenges for Native Applications

To many, “mobile apps” have become synonymous with native applications (and hybrid applications). Commonly downloaded from an app store, they offer the user a unique experience that maximizes the capabilities of the device and operating system for which they are developed. The app download is often controlled by the gate-keeping app store, with mechanisms in place to charge potential consumers. This simple and proven monetization model has made native apps very popular in the development community. Beyond their acceptance in the consumer market, they also allow enterprises to deliver productivity tools to an increasingly mobile workforce.

While native applications can provide a rich experience to the user—and possibly a lucrative one for the developer—they also add some complexity to the lives of those tasked with testing them.
Testing needs to ascertain that the app can be successfully downloaded to the device, executed on the device, and can interact with the supporting back-end content infrastructure. When updates are made, you need to be sure that the application can be pushed out to and accepted by the end user. There’s also a common misperception that successful testing of app functionality on one device provides assurance across all other devices running the same operating system.

Native applications are closely tied to the hardware and operating systems for which they are written. To meet the challenge of testing native mobile applications, it’s essential to test on the physical devices supported by your application. You’ll also want to ensure backward compatibility with each older generation of the devices you’re expected to support. Owning and maintaining a version of each device can be expensive and burdensome. Consistency in executing your test plan is also a problem when you’re limited to manual testing from a proverbial closet of mobile devices. Finally, you’ll want to be sure that when issues are found with your native apps, they can be quickly captured and shared with others.

Mobile Testing Challenges for Web Applications

Like the Web itself, a mobile Web application is viewable by users around the world. Even if you’re initially targeting only users in a single country or on a single network, it helps to understand the global dynamic.

When we test mobile Web applications we encounter several challenges presented by the nature of the global, mobile Web. As we understand the nature of each challenge, we can explore different technology options to manage issues and mitigate risk. Coming up with the right solutions for your situation requires assessing the advantages and disadvantages inherent in each of the testing options available to you and determining the technology that best suits your testing requirements. These mobile testing challenges include devices, network, and scripting.

Devices: The Biggest Mobile Testing Challenge


The mobile devices used by consumers create the most obvious challenge to mobile Web testing. There are potentially tens of thousands of different client devices that could be used to access your mobile app or website, and they must therefore all be considered when testing your mobile applications. This number can be reduced to an extent, but each time you reduce the number of device types that you test against, you are taking a chance that your application might not work on a device, locking out a number of potential customers.
To handle the device challenge, you have three options: You can test exclusively using real devices, you can test exclusively with emulated devices, or you can use a combination of each.

Real devices have the advantage of having all of the limitations and quirks present in the actual client hardware and firmware combination in the hands of your target consumers. However testing with real devices can be expensive, depending on how you go about it. They are expensive to buy—and forget about the advertised prices, for those are the operator-subsidized prices that come only with a contract that has its own cost implications. You might be able to get a manufacturer or network operator to loan you devices for testing, but you need to join a waiting list and convince the hundreds of manufacturers and hundreds of mobile network operators that you should be a priority. Airtime and subscription costs also need to be paid. And finally, testing with real devices can be disorganized and labor intensive if the testing environment is not conducive to creating, collecting and reproducing results in a consistent manner.

Emulated devices, on the other hand, are relatively easier to manage. You can switch device types by simply loading a new device profile, and instantly you have a new device that presents itself to your mobile Web application in the same way that the real device would. And because the emulators run on more powerful PCs and servers and were designed with testing in mind, they are typically fully instrumented to capture detailed diagnostics about the protocols that go back and forth between client and server at the various levels of the stack.

When you encounter an application fault, you will have the information to isolate and thus correct the problem. Emulated devices are thus cost effective, because a single platform with frequent updates of device profiles can be used to test every device on the market both today and tomorrow.

The big disadvantage of emulated devices is that they lack the quirks, faults and characteristics that only the real device can provide. An emulated device may not give the pixel-perfect rendering that you’re assured of with a real device. And while the processing power of your local PC can be an advantage, it will also hide any issues that you may have with the responsiveness of your Web application. Finally, an emulated device is not sensitive to the ambient conditions that can affect the behavior of a real device. In the majority of cases this is a good thing; however, if you want to know how well a device performs in an exact location, such as a crowded stadium, a real device is your better bet.

Fortunately you’re not limited to an either/or selection when determining the right device solution for your mobile testing needs. A third approach is to select a mix of both emulation and real device testing. Start testing in an emulated environment to take advantage of the speed and device diversity that an emulator can provide. Emulated device testing early in the development cycle can help you achieve these goals at a relatively low cost; early in the cycle you don’t need the pixel-perfect rendering afforded by an actual device. The risk of not having the nth degree of certitude is easily outweighed by the benefits gained by increasing the number of test cases and device types covered in the test suite. Add real devices into your test plan later in the development cycle so you can validate that the applications are functioning as expected and certify that all development requirements and objectives have been met.

Network: A Regional Challenge

There are well over 400 mobile network operators in the world.

Each mobile operator may support multiple network technologies, including LTE, CDMA and GSM, and some use less common or local networking standards such as iDEN, FOMA and TD-SCDMA. Each network has a unique combination of network infrastructure that tunnels the packet-based protocols used by mobile networks into the TCP/IP protocols used by the mobile Web, and each operator has implemented tunneling systems from different vendors that behave slightly differently. Lastly, most network operators have inserted mobile Web proxies (that is, gateways) that dictate how, when, and if you are able to connect to a particular site. When a network operator implements a mobile Web proxy, it can restrict the flow of information that travels between your server and the test client. Some proxies limit the sites that can be accessed via a phone to only those approved by the operator, in what is often referred to as a “walled garden.” Other proxies might use “transcoding” in an attempt to scale down fixed Web content to better fit onto mobile phones, thus expanding the number of mobile sites that can be seen; unfortunately, they might also “transcode” your made-for-mobile application. Finally, some proxies strip vital information from the HTTP headers that your application might depend on to provide functionality or device adaptation. As you can see, the network challenge has a lot of complications to it.

It’s not possible to discuss the network challenge without discussing location. It’s a simple fact that to test the full network stack on a particular operator’s network infrastructure, you must be connected to the target network. The challenge is made more difficult by the fact that radio signals on cellular networks do not carry far, so you must be within range of a cell connected to the operator’s core network to run your test. Thus, if you want to test against SFR, you must be in France, and if you want to test against China Mobile, you must be in China.

Obviously, traveling to every network operator that you need to support can be very expensive, and there are obvious cost tradeoffs to be considered.

There are different ways of dealing with the network challenge. We can bypass the lower layers of the network and simply test over the Internet or LAN, or we can use the real network by using either phones or modems.

Network Bypass

When you bypass the network’s lower layers, you use TCP/IP to connect directly to the server and you ignore the GPRS tunneling systems used by network operators. Since most real devices are not capable of doing this, you will need to use a device emulator to perform the bypass. Not all device emulators support this feature, and you may want to look for a device emulator that can perform network bypass by using the Internet. Some device emulators also have the ability to access the operator’s proxy (but only if it is exposed to the Internet) to allow a more realistic test. Even if the operator’s Web proxy is available to only its customers, there are test proxies on the Internet that can be used. Even if you don’t have a test proxy, you will still be able to test directly against your origin Web server.

An advantage of bypassing the network is that you will not need to use and thus pay for airtime. And because you are using a device emulator, you again benefit from having a fully instrumented stack.

The disadvantage of network bypass is that we often cannot emulate the effects and timing of the network and the various network elements, such as proxies. And when you use this technique, you can’t use real devices, so you don’t see the quirks and limitations that real consumers will see.

Real Networks


Naturally, it is possible to test against real networks. One method is to use real devices at the target location, though you will face many of the problems already discussed. Alternatively, many device emulators support modems that allow you to use your emulated devices on the local network—but again there is the cost of traveling into range of the network. But there is another option.

One piece of useful test equipment is a real device in the cloud. This type of testing solution consists of a physical handset mounted in a remote box with a remote control unit and a remote antenna. The remote control unit is physically connected to the device’s screen and keypad control circuits and is capable of pressing keys and collecting screen images. Exposed to the Internet, this solution lets a user on a local PC or Web client control a device with their mouse and keyboard, and thereby see what is happening remotely on the screen. These devices provide an elegant solution that can be connected to either live networks or simulated networks, although most rely on live networks.

Remote real device solutions often have the ability to record a test for subsequent replay, a capability that can be useful for regression testing.

Real devices in the cloud reduce the cost of travel to foreign networks, but they can still be expensive because the cost of the device is amplified by the cost of the remote control hardware, remote control software, and local software. Because there are so many different makes and models of devices, it is often too expensive to buy a remote real device solution for every device that you need to test against. Fortunately, most of the companies that make this type of equipment offer the ability to “rent” testing time on a resource that is shared with others and is managed for you. You simply need to open an account, and you then buy testing time with a given make and model of device when and where you need it.

Scripting: The Repeatability Challenge

Our last mobile testing challenge is what we call scripting: the method used to actually execute the test script. Script execution can be either manual or automated. You either write the scripts down in a document or spreadsheet, which is then used by a test engineer who manually enters keystrokes, or you run automated scripts that in turn invoke the keystrokes and record the results.
Because there are so many different devices with different menu structures and keystroke options, automated scripting needs to be abstracted away from the device to be of any real use. Consider a script that follows strict keystrokes on an Apple iPhone. This script would not have any chance of working on a Nokia N70, because the user interfaces are completely different. Fortunately, most automated testing software provides high-level scripting functions such as “goto URL” or “send SMS”, which are not dependent on the particular menu structure of the target device. Most device emulators are capable of automating test execution using a higher-level, abstracted scripting language that is not device dependent.
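
To make this concrete, a device-independent script might look something like the sketch below. The class and method names (MobileTestSession, goToUrl, sendSms) are invented for illustration and do not belong to any particular tool; real automation products expose their own equivalents of such high-level calls.

// Hypothetical, device-independent test script abstraction (illustrative names only).
public class AbstractedScriptExample {

    // A tool-agnostic session interface: high-level actions, no device keystrokes.
    interface MobileTestSession {
        void goToUrl(String url);
        void sendSms(String number, String text);
        String pageTitle();
    }

    // The same smoke test can run against any device the tool knows how to drive.
    static void runSmokeTest(MobileTestSession session) {
        session.goToUrl("http://m.example.com/login");
        session.sendSms("+15550100", "test message");
        if (!"Login".equals(session.pageTitle())) {
            throw new AssertionError("Unexpected page title");
        }
    }

    public static void main(String[] args) {
        // A trivial fake session, so the sketch is runnable without any real tool.
        MobileTestSession fake = new MobileTestSession() {
            public void goToUrl(String url) { System.out.println("GOTO " + url); }
            public void sendSms(String number, String text) { System.out.println("SMS to " + number + ": " + text); }
            public String pageTitle() { return "Login"; }
        };
        runSmokeTest(fake);
    }
}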

When you use automated scripting, the cost of setting up the script will typically be higher than the cost of a single manual execution of a test script. But if it is a test script that you run on a periodic basis, each subsequent run saves you time and effort, and if you run the script enough you will eventually recover the cost of the initial scripting.

Finally, many automated scripting tools have a special ability to “spider” or “crawl” a mobile Web site. This is a special capability that can test an entire site with a single command. Although this capability will not be able to perform complex transactions, it is a quick test to set up that will walk your mobile Web site looking at every page for errors and device inconsistencies and is a very powerful and cost-effective tool.
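
A minimal sketch of the idea behind such a crawler is shown below. It is not the implementation of any specific tool: it assumes plain HTTP pages, a hypothetical starting URL, and it only checks HTTP status codes rather than rendering or device inconsistencies.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.Set;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Minimal illustration of "spidering" a mobile site: follow same-site links
// and report any page that does not return HTTP 200.
public class SiteCrawlerSketch {
    private static final Pattern LINK = Pattern.compile("href=\"(http[^\"]+)\"");

    public static void main(String[] args) throws Exception {
        String start = "http://m.example.com/";       // hypothetical starting URL
        Set<String> visited = new HashSet<>();
        Deque<String> queue = new ArrayDeque<>();
        queue.add(start);
        while (!queue.isEmpty() && visited.size() < 100) {   // hard limit for the sketch
            String url = queue.poll();
            if (!visited.add(url) || !url.startsWith(start)) continue;   // skip repeats and off-site links
            HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
            int status = conn.getResponseCode();
            if (status != 200) {
                System.out.println("Problem page: " + url + " -> HTTP " + status);
                continue;
            }
            StringBuilder body = new StringBuilder();
            try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
                String line;
                while ((line = in.readLine()) != null) body.append(line);
            }
            Matcher m = LINK.matcher(body);
            while (m.find()) queue.add(m.group(1));   // enqueue discovered links
        }
    }
}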

Recommendations
Hopefully, you now understand a lot more about the challenges associated with mobile testing of native and web applications. But what do you do with this information? What should be your testing strategy for mobile application testing?

First, it is not a matter of choosing one tool or technique; there are simply too many compromises that must be made. Most likely you will need to use a combination of testing tools and techniques to meet your quality requirements. But generally you can narrow your choices down based on the following recommendations:

    Invest in a device emulator. Emulated devices are very cost effective because they allow you to do a lot of testing quickly and efficiently, performing the bulk of your work in a well-instrumented test environment. Look for an emulator that offers the network options you will need, such as bypassing the network or using the live network via modems, and the diagnostics you will need to isolate problems. Make sure that your emulated device solution includes a high-level scripting language so you can replay your test cases over and over. Finally, look for an emulated device solution that allows you to change device profiles quickly.
    Take advantage of remote real devices in the cloud. Having an account with a vendor that lets you access remote real devices at any time is very handy. You never know when you might need to test on a remote live network with a device that you might not have. It’s a great solution to have in your bag of tricks.
    Automate wherever possible. Emulators and remote, real-device solutions that support script & playback functionality are time-savers that can allow you to execute more test cases with a higher degree of consistency. Clearly, a solution that integrates real and emulated devices is ideal.

Techniques to make exploratory testing even more efficient

Exploratory testing is a hugely efficient test technique in many situations. When it is extremely important to get things right, you want to test in every possible way, and then also test by exploration. When the tests have to be fast and it takes too long to write down in advance what is to be tested, exploratory testing comes in handy. Even in most other situations, it is useful to think in an exploratory way.

In this article you will learn more about some important techniques that will give you great benefits in exploratory testing.

Test comprehensively, yet simply

When testing exploratively, you test in more or less different ways each time. The opposite approach (running the same tests over and over again) is good for catching things that have broken (regression testing), but to find new information you need to run new tests or variations on the old ones. When you vary your tests you get deeper and better tests, with a greater chance of finding new, important information.

Keep it simple; you get far without frills. Use the principle of ALAP (As Late As Possible) and determine the details as late as possible. Start by describing what is most important in the format of one-liners or checklists, which can be reviewed just before being used.

It is better to test well enough in many ways than to test perfectly in just one or two ways.

Serendipity

Serendipity is not just the 2001 romantic comedy starring John Cusack and Kate Beckinsale; it is also looking for one thing but finding something else that is valuable. This is extremely common for testers who have their eyes wide open. Very often we are testing one thing but notice something very different, and important, somewhere along the way. Specs are a good start, but there is so much more you can do, for example the technique below.

Use RIMGEA before you write a bug report

A mnemonic that comes in handy is RIMGEA. It is especially useful when filling out bug reports.

What does it stand for?

Replicate it – Try to see if you can replicate the bug.

Isolate it – Try to limit the steps or the conditions that trigger the bug.

Maximize it – Try to do follow-up steps to see if you can trigger a worse failure.

Generalize it – Try to broaden the extent of the bug.

Externalize it – Try to consider the value and impact of the bug from other stakeholders’ perspectives.

And finally, say it clearly and dispassionately – Try to create bug reports that are as easy to understand and as neutral as possible.

The Top 5 Usability Mistakes

Product usability is the cornerstone of a successful app or website. To ensure that your software, website, app or whatever, can be enjoyed to the full by its users, I have compiled a list of the top five usability mistakes that you should be aware of before deploying your product.

In this article, I work my way from outside inwards, starting with problems that relate to the aesthetics of apps, the language they use, their functionality, engagement potential and, finally, actual user-involvement.

1. Disregarding design principles

First impressions are crucial in determining the continued engagement of users with an app or website. Common design mistakes such as uneven alignment of information, large chunks of text, awkward placement of buttons and poor management of white space can seriously interfere with usability and diminish the quality of the user experience.

It is natural for something as subjective as good design to top the list of usability gripes that developers have to face. However, while the devil is in the details, abiding by simple principles can go a long way to ensure a harmonious and logical layout to your product.

But good design is never an end in itself, and over-doing aesthetics can be just as bad as overlooking them. The purpose of an app’s great looks is simply to pull the user in by encouraging interaction. Developers who rely on generic design or neglect to add a personal feel to the layout risk boring the user or, worse, alienating them completely.


2. Using confusing or ambiguous words and labels

If good design is the skin of your product, then all the words (copy, content, call it what you like) that need to be included are its voice. Another major usability mistake is confusing or ambiguous labelling or descriptions.

Your app cannot judge how sophisticated the user is, so it must be able to communicate in as few words and as direct a manner as possible. Whilst books preach, apps reach; they reach out to the user and engage them by keeping it simple, concise and directive.

A common usability mistake involving the wording that appears in products is poorly labelled buttons. A user-centric approach will ensure that every button is clearly defined by the outcome that is produced when pressing it. Therefore, the ubiquitous ‘Contact Us’ can easily be swapped with ‘Send us an email’ if pressing it opens up the user’s email client rather than a page with contact details.

Another semantic blunder is the prolific use of abbreviations or peppering of text with jargon words that cannot be immediately understood by non-techies. The copy must always fit the intended audience, rather than the developer of the product.


3. Not giving users what they want


An app’s (or website, or any product’s) usability is intrinsically linked to the user’s ability to make it work. If it doesn’t do what it says on the tin, then that piece of software – your piece of software – is only a waste of precious storage space and memory on the user’s device.

There are two main types of mistakes that can be made when it comes to an app’s functionality. The first and most obvious one is a bug that interferes with the way the software was intended to work.

Offering users a buggy app is certainly one of the most serious errors that can be committed, and that’s where proper testing and bug reporting come to the rescue. An agile work methodology allows for an admittedly very minor degree of ‘unfinished-ness’ in one’s products, but it doesn’t mean getting a faulty piece of software into your client’s hands as quickly as possible.

The second common functionality mistake doesn’t lie in the software but in the way the user interacts with it. If your app doesn’t direct users to the outcome they desire in as few steps as possible, it risks confusing them or losing their interest in achieving the goal your app was designed to enable.



4. Creating an impersonal and unengaging product

In a marketplace where new apps are being added all the time, the way to a user’s hands is through his or her heart. A very important usability issue, and one of the commonest mistakes made by developers, is neglecting to add a personal touch to your product that connects emotionally with the user.

There is a sea of bland and impersonal apps out there, and if you want your product to rise above the crowd, you’ll need to watch out for this element of user experience. Including a friendly welcome message, congratulating the user on completing certain tasks and inviting them to try new options kick-starts the engagement process and makes the user more satisfied with the product.

Sometimes the user cannot appreciate an app’s usability until it’s explicitly shown to him or her. Failing to take a proactive approach to pulling users into the benefits your app offers hurts your chances of increasing user engagement, as well as of educating users about the possibilities your app opens up.

 
5. Not recruiting the user as your ultimate source of feedback


Finally, top usability mistake number five is leaving the user out in the cold. Whilst it’s understandable that as a developer you take great pride in your creations, it is the user who gets to unfold their potential. Usability depends both on your expertise as the technician and on the user’s experience as your final critic.

Including user feedback in the loop is an essential part of usability improvement, and a common mistake is to depend completely on your own opinions about how the software should be. Opening up a communication channel with your users and ensuring that it’s easy to find and quick to use will provide you with critical information on how to improve your offerings.

Make sure to integrate social features into your app and manage them with sufficient dedication to reap all the benefits you can from learning about real cases of user experience.

 Usability is the beginning of user satisfaction

Do any of these common mistakes sound familiar? Keeping in mind these top usability issues can help increase your users’ satisfaction with the product you offer and keep them actively engaged with your apps, whilst boosting the chances of them sharing their positive experiences and recommending your product to their friends.

Seven tips to help you advance your software testing career

If there is one thing we often get asked for it’s tips for making headway in the world of software testing. Software testing is a highly specialised field and any knowledge anyone can offer is always highly valued by people with good intentions and a hunger for learning.

These tips will help you not only to keep up, but also to advance in your software testing career.

1)     Communication in writing –

By all means use verbal communication but for important things that will need actions to be taken upon them, for example tasks or instructions, make sure you communicate in writing. Documenting these things is extremely important.

2)     One location -

Do yourself a favour: when documenting things, use one location and stick to it. It’s useless to document if some stuff goes in Excel, other stuff goes in emails, more stuff goes on post-it notes and so on.

Convince your team leader or manager to invest in a proper test management tool like ReQtest and document what needs to be documented there and nowhere else! That way you can never lose crucial instructions, requirements or other documentation again.

3)     Automate daily routine tasks –

Save your own and your team’s time and energy by automating daily routine tasks, no matter how small those tasks are. For example, if you deploy project builds daily and do this manually, write a batch script to perform the task in one click.

4)     Keep notes on everything –

Take notes of all the new things you learn on the project. Even if it’s just a small notebook, keep one per project and take notes regularly, be they simple commands to be executed for certain tasks to complete, complex testing steps or implicit requirements you heard about in a meeting. Keeping these notes will help you remember and you won’t need to ask the same things to fellow testers, developers, managers or clients over and over.

5)     Get involved –

And get involved as early as possible. Ask your lead or manager to get you as a tester involved in requirements and design discussions and meetings from the very beginning. If you’re a big team, make sure that the testing team is represented by its manager or lead anyway.

6)     Keep learning –

Never ever stop learning. Technology is always moving on and we have to keep up. It’s incredible how much software testing has changed in less than a decade, and it’s not likely to stop now.  Read and read some more; keep on reading books, white papers and case studies related to the world of software testing and quality assurance. Stay on top of the news in the software testing and QA industry. Explore new and better ways to test applications. Learn new tools and keep in mind that software testing is a hot career choice for a lot of people!

7)     Enjoy! –

Software testing is fun so stay calm, be focused, follow the processes and enjoy the testing. You already know how interesting software testing is. Let the good times roll!

Monday, 7 April 2014

Types of SDLC Model

Waterfall Model :

Let’s have a look at the Waterfall model, which is basically divided into two subtypes:

· Big Bang waterfall model.
· Phased waterfall model.

As the name suggests, a waterfall flows in only one direction, so in the waterfall model we expect every phase/stage to be frozen before the next one begins.

Big Bang waterfall model:


The Big Bang waterfall model has several stages, which are described below:
  • Requirement stage: This stage captures the basic business needs for the project from a user perspective, so it produces typical Word documents with simple bullet points, or perhaps more elaborate use case documents.
  • Design stage: The use case document / requirement document is the input for this stage. Here we decide how to design the project technically and produce a technical document containing class diagrams, pseudo code and so on.
  • Build stage: This stage takes the technical documents as input, and code is produced as its output. This is where the actual execution of the project takes place.
  • Test stage: Here testing is done on the source code produced by the build stage, and the final software is given a green flag.
  • Deliver stage: After succeeding in the test stage, the final product/project is installed at the client end for actual production. This stage is the start of the maintenance stage.

In the waterfall Big Bang model, it is assumed that all stages are frozen; in other words, it assumes a perfect world. In actual projects, such a process is impractical.

Phased Waterfall model :

In this model the project is divided into small chunks that are delivered at intervals by different teams. In short, chunks are developed in parallel by different teams and integrated into the final project. The disadvantage of this model is that improper planning may cause the project to fail during integration, and any mismatch in coordination between teams may cause a huge failure.

Iterative model :

The Iterative model was introduced because of the problems faced with the Waterfall model. Let’s have a look at the Iterative model, which also has two subtypes:

Incremental model  :

In this model, work is divided into chunks as in the phased waterfall model, but the difference is that in the Incremental model one team can work on one or many chunks, unlike in the phased waterfall model.

Spiral model:


This model uses a series of prototypes that refine our understanding of what we are actually going to deliver. Plans are changed, if required, as the prototype is refined, and with each refinement of the prototype the whole process cycle is repeated.

V-model :

This type of model was developed by testers to emphasise the importance of early testing. In this model, testers are involved from the requirement stage itself, and for every development stage there is a corresponding testing activity to ensure that the project is moving as planned.

For instance,

  • In the requirement stage we have acceptance test documents created by the testers. The acceptance test document outlines that if these tests pass, the customer will accept the software.
  • In the specification stage testers create the system test document. System testing is explained in more detail in a later section.
  • In the design stage we have integration test documents created by the testers. Integration test documents define how the components should work when integrated. For instance, you develop a Customer class and a Product class and test each of them individually, but in a practical scenario the Customer class will interact with the Product class, so you also need to test whether the Customer class interacts with the Product class properly (a minimal sketch of such a test appears after this list).
  • In the implementation stage we have unit test documents created by the programmers or testers.
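
As a minimal sketch of such an integration test: the Customer and Product classes and their methods below are invented for illustration, defined inline so the example is self-contained, and JUnit 4 is assumed.

import org.junit.Assert;
import org.junit.Test;

// Illustrative integration test: the point is that the test exercises the
// interaction between Customer and Product, not each unit in isolation.
public class CustomerProductIntegrationTest {

    static class Product {
        final String name;
        final double price;
        Product(String name, double price) { this.name = name; this.price = price; }
    }

    static class Customer {
        private double total = 0;
        void buy(Product p) { total += p.price; }   // the interaction under test
        double orderTotal() { return total; }
    }

    @Test
    public void customerTotalReflectsPurchasedProducts() {
        Customer customer = new Customer();
        customer.buy(new Product("SIM card", 10.0));
        customer.buy(new Product("Charger", 15.5));
        Assert.assertEquals(25.5, customer.orderTotal(), 0.001);
    }
}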
Let’s try to understand each of these testing phases in more detail.

Unit Testing
Starting from the bottom, the first test level is "Unit Testing". It involves checking that each feature specified in the "Component Design" has been implemented in the component. In theory an independent tester should do this, but in practice the developer usually does it, as they are the only people who understand how a component works. The problem with a component is that it performs only a small part of the functionality of a system, and it relies on co-operating with other parts of the system, which may not have been built yet. To overcome this, the developer either builds, or uses, special software to trick the component into believing it is working in a fully functional system.
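
A small sketch of that idea, using JUnit 4: the component under test is wired to a hand-written stub that stands in for a collaborator that has not been built yet. All class names here are hypothetical.

import org.junit.Assert;
import org.junit.Test;

// Unit-testing sketch: OrderValidator is the component under test, and the
// real inventory service does not exist yet, so a stub stands in for it.
public class OrderValidatorTest {

    interface InventoryService {                 // collaborator that may not be built yet
        boolean inStock(String sku);
    }

    static class OrderValidator {                // the "component" being unit tested
        private final InventoryService inventory;
        OrderValidator(InventoryService inventory) { this.inventory = inventory; }
        boolean canOrder(String sku, int quantity) {
            return quantity > 0 && inventory.inStock(sku);
        }
    }

    @Test
    public void rejectsOrderWhenItemIsOutOfStock() {
        InventoryService stub = sku -> false;    // stub pretends nothing is in stock
        OrderValidator validator = new OrderValidator(stub);
        Assert.assertFalse(validator.canOrder("ABC-1", 3));
    }
}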

Integration Testing
As the components are constructed and tested, they are then linked together to check whether they work with each other. It is a fact of life that two components that have each passed all their tests may, when connected to each other, produce a new component full of faults. These tests can be done by specialists, or by the developers.
Integration Testing is not focused on what the components are doing but on how they communicate with each other, as specified in the "System Design". The "System Design" defines the relationships between components. The tests are organized to check all the interfaces, until all the components have been built and interfaced to each other, producing the whole system.

System Testing
Once the entire system has been built, it has to be tested against the "System Specification" to check whether it delivers the features required. It is still developer focused, although specialist developers known as system testers are normally employed to do it. In essence, System Testing is not about checking the individual parts of the design, but about checking the system as a whole; in fact, it is one giant component. System testing can involve a number of specialist types of test to see if all the functional and non-functional requirements have been met. In addition to functional requirements, these may include the following types of testing for the non-functional requirements (a small performance-probe sketch follows the list):

· Performance - Are the performance criteria met?
· Volume - Can large volumes of information be handled?
· Stress - Can peak volumes of information be handled?
· Documentation - Is the documentation usable for the system?
· Robustness - Does the system remain stable under adverse circumstances?
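
As a very small illustration of the performance item above, the sketch below times a single request against a hypothetical URL and compares it with an assumed two-second criterion; real performance, volume and stress testing would use dedicated tools and realistic load.

import java.net.HttpURLConnection;
import java.net.URL;

// Naive performance probe: measures the response time of one request against
// a hypothetical endpoint and compares it with an assumed 2-second criterion.
public class ResponseTimeProbe {
    public static void main(String[] args) throws Exception {
        long start = System.currentTimeMillis();
        HttpURLConnection conn =
                (HttpURLConnection) new URL("http://m.example.com/").openConnection();
        int status = conn.getResponseCode();                 // forces the request
        long elapsed = System.currentTimeMillis() - start;
        System.out.println("HTTP " + status + " in " + elapsed + " ms");
        if (elapsed > 2000) {
            System.out.println("Performance criterion (assumed 2 s) not met");
        }
    }
}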

Wednesday, 2 April 2014

Types of Mobile Application

Mobile application 'types' is quite a confusing topic. Some people confuse it with mobile application 'categories'. Mobile application 'categories' are utility apps, entertainment apps, private apps, games and banking apps. Mobile application 'types' are a totally different topic.
Here is the description:

1. Browser Access : Applications that are accessed through the device's native browser. Ex : m.yahoo.com, www.google.com, m.redbus.in, etc.

2. Hybrid Apps - Web : We install the application on our device, and an internet connection is required for that particular application to function. Ex : social networking apps (Facebook, Twitter), instant messengers (Skype), e-commerce (Flipkart), internet speed testing (Speedtest), etc.

3. Hybrid Apps - Mixed : We install the application on our device and connect it to the internet when required. Ex : games that can be played alone but also online against other players (multiplayer), or medical apps where you keep a record of your health and later share it with your friends or doctor via the internet.

4. Native Apps : Applications that are installed on the device and do not need a network connection. Ex : reminders, some games, etc.

The types can be further understood by looking at the communication model of the apps:


Native Apps - These can be installed on the device, and the app does not need any data transfer to or from a server. These apps work on the device without a network, and the app's data is stored on the device itself. Example: gaming applications. Here the device's memory and configuration are very important, as the app is completely dependent on them.

Client-Server Apps - These can be called semi-native apps. The app is installed on the device, but without a network it cannot be launched, because it gets its data from the server; without that data the app will not proceed further. Example: commercial apps such as banking apps. Here you basically see the form UI on the device, but all the data comes from the server, so device memory matters only for installing the app, as the data comes from the server with every service call.

Mobile Web Applications - These can be called mobile browser apps, as they are not installed on the device; they are accessed using the mobile browser by hitting the URL of the web application. Here the device's memory size hardly matters, as neither the forms nor the app data are stored on the device. Everything depends on the quality of the browser: it all comes from the server and is rendered in the browser when you hit the URL.


Comparison between Native Apps, Hybrid Apps and Mobile Apps:

1. Skills/tools needed for cross-platform apps:
Native         : Objective-C, Java, C, C++, C#, VB.net
Hybrid         : HTML, CSS, Javascript, Mobile development framework (like PhoneGap)
Mobile web : HTML, CSS, Javascript

2. Distribution:
Native         : App Store/Market
Hybrid         : App Store/Market
Mobile web : Internet

3. Development Speed:
Native         : Slow
Hybrid         : Moderate
Mobile web : Fast

4. Number of applications needed to reach major smartphone platforms
Native         : 4
Hybrid         : 1
Mobile web : 1

5. Ongoing application maintenance:
Native         : Difficult
Hybrid         : Moderate
Mobile web : Low

6. Device access:
Native         : Full access(Camera, microphone, GPS, gyroscope, accelerometer, file upload, etc…)
Hybrid         : Full access(Camera, microphone, GPS, gyroscope, accelerometer, file upload, etc…)
Mobile web : Partial access(GPS, gyroscope, accelerometer, file upload)

7. Offline access:
Native         : Yes
Hybrid         : Yes
Mobile web : Yes

8. Advantages:
Native         : Lets you create apps with rich user interfaces and/or heavy graphics
Hybrid         : Combines the development speed of mobile web apps with the device access and app store distribution of native apps
Mobile web : Offers fast development, simple maintenance, and full application portability.  One mobile web app works on any platform.

9. Disadvantages:
Native         : Development Time, Development Cost, Ongoing Maintenance, No portability (apps cannot be used on other platforms)
Hybrid         : Can’t handle heavy graphics, Requires familiarity with a mobile framework
Mobile web : Can’t handle heavy graphics, Can’t access camera or microphone

10. Best used for:
Native         : Games, Consumer-focused apps that require a highly graphic interface
Hybrid         : Consumer-focused apps with a moderately graphical interface, Business-focused apps that need full device access.
Mobile web : General non-game apps, Business-focused apps

Tuesday, 1 April 2014

API and API Testing

What is API?

An API (Application Programming Interface) is a collection of software functions and procedures, called API calls, that can be executed by other software applications.

What is API Testing?
API testing is mostly used for systems that have a collection of APIs that need to be tested. The system could be system software, application software or a library. API testing differs from other testing types because a GUI is rarely involved. Even though no GUI is involved, you still need to set up the initial environment, invoke the API with the required set of parameters and then analyze the result. Setting up the initial environment becomes more complex precisely because there is no GUI; you need some other way to make sure the system is ready for testing. This setup can be divided into test environment setup and application setup. Things like configuring the database or starting the server are part of test environment setup, while creating an object before calling a non-static member of a class falls under application-specific setup. The initial conditions in API testing also include the conditions under which the API will be called: an API can be called directly, or it can be called because of some event or in response to some exception.
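
A minimal JUnit 4 sketch of that split between test environment setup and application-specific setup is shown below; TestDatabase, TestServer and AccountApi are invented stand-ins, not real classes.

import org.junit.Assert;
import org.junit.Before;
import org.junit.BeforeClass;
import org.junit.Test;

// Sketch of API test setup: environment-level setup (database, server) happens
// once, while application-specific setup (creating the object whose non-static
// members we call) happens before each test.
public class AccountApiTest {

    static class TestDatabase { static void configure() { /* e.g. load the schema */ } }
    static class TestServer   { static void start()     { /* e.g. boot the application */ } }
    static class AccountApi   {
        int createAccount(String owner) { return owner.isEmpty() ? -1 : 1; }
    }

    private AccountApi api;

    @BeforeClass
    public static void setUpEnvironment() {      // test environment setup
        TestDatabase.configure();
        TestServer.start();
    }

    @Before
    public void setUpApplication() {             // application-specific setup
        api = new AccountApi();
    }

    @Test
    public void createAccountReturnsPositiveIdForValidOwner() {
        Assert.assertTrue(api.createAccount("alice") > 0);
    }
}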

Test Cases for API Testing:
The test cases for API testing are based on the output of the API.

•Return value based on input condition:

Relatively simple to test, as the input can be defined and the results can be validated. Example: it is very easy to write test cases for an int add(int a, int b) kind of API. You can pass different combinations of int a and int b and validate them against known results (see the sketch after this list).

•Does not return anything:
The behavior of the API on the system has to be checked when there is no return value.
Example: a test case for a delete(ListElement) function will probably require validating the size of the list or the absence of the element in the list.

•Trigger some other API/event/interrupt:
 If the output of an API triggers some event or raises some interrupt, then those events and interrupt listeners should be tracked. The test suite should call the appropriate API, and assertions should be made on the interrupts and listeners.

•Update data structure:
 This category is similar to the category of APIs that do not return anything. Updating a data structure will have some effect on the system, and that effect should be validated.

•Modify certain resources:
 If an API call modifies some resources, for example updating a database, changing the registry or killing some processes, then it should be validated by accessing the respective resources.
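
As a sketch of the first two categories above (a return-value check and a check on an API that returns nothing), using JUnit 4 and invented method names:

import java.util.ArrayList;
import java.util.List;
import org.junit.Assert;
import org.junit.Test;

// Two of the output-based categories sketched with invented APIs: a return-value
// check for add(a, b), and a state check for a delete(...) call that returns nothing.
public class OutputBasedApiTests {

    static int add(int a, int b) { return a + b; }                 // "return value" category

    static void delete(List<String> list, String element) {        // "returns nothing" category
        list.remove(element);
    }

    @Test
    public void addReturnsSumForKnownInputs() {
        Assert.assertEquals(5, add(2, 3));
        Assert.assertEquals(-1, add(2, -3));
    }

    @Test
    public void deleteRemovesElementFromList() {
        List<String> list = new ArrayList<>();
        list.add("element");
        delete(list, "element");                 // no return value, so validate the list state
        Assert.assertEquals(0, list.size());
        Assert.assertFalse(list.contains("element"));
    }
}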


API Testing vs. Unit Testing: What’s the difference?

1. API testing is not unit testing. Unit testing is owned by the development team and API testing by the QE team. API testing is mostly black box testing, whereas unit testing is essentially white box testing.

2. Both API testing and unit testing target the code level, hence similar tools can be used for both activities. There are several open source tools available for API testing, including Webinject, JUnit, XMLUnit, HttpUnit, Ant, etc.

3. API testing process involves testing the methods of .NET, JAVA, J2EE APIs for any valid, invalid, and inappropriate inputs, and also testing the APIs on Application servers.

4. Unit testing activity is owned by the development team; the developers are expected to build unit tests for each of their code modules (typically classes, functions, stored procedures, or some other ‘atomic’ unit of code), and to ensure that each module passes its unit tests before the code is included in a build. API testing, on the other hand, is owned by the QE team, by staff other than the author of the code. API tests are often run after the build is ready, and it is common that the authors of the tests do not have access to the source code; they essentially create black box tests against an API rather than the traditional GUI.

5. Another key difference between API and unit testing lies in test case design. Unit tests are typically designed to verify that each unit in isolation performs as it should; the scope of unit testing often does not consider the system-level interactions of the various units. API tests, on the other hand, are designed to consider the ‘full’ functionality of the system, as it will be used by the end users. This means that API tests must be far more extensive than unit tests and take into consideration the sorts of ‘scenarios’ that the API will be used for, which typically involve interactions between several different modules within the application.

API Testing Approach:
An approach to testing a product that contains an API.

Step I:
Understand that API testing is a testing activity that requires some coding and is usually beyond the scope of what developers are expected to do. The testing team should own this activity.

Step II:
Traditional testing techniques such as equivalence classes and boundary analysis are also applicable to API testing, so even if you are not too comfortable with coding, you can still design good API tests.

Step III:
It is almost impossible to test all the scenarios in which your API could be used. Hence, focus on the most likely scenarios, and also apply techniques like Soap Opera Testing and Forced Error Testing with different data types and sizes to maximize test coverage. The main challenges of API testing can be divided into the following categories (a small sketch of the combination problem follows the list).
• Parameter Selection
• Parameter combination
• Call sequencing
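
To see why parameter combination quickly becomes a challenge, the small sketch below enumerates the cartesian product of a few parameter values; the parameter names and values are invented for the example.

// Illustration of the parameter-combination challenge: even three small
// parameter sets already produce 3 x 3 x 2 = 18 call combinations to consider.
public class ParameterCombinations {
    public static void main(String[] args) {
        String[] currencies = {"EUR", "USD", "SEK"};
        int[] amounts = {0, 1, 999999};
        boolean[] expressFlags = {true, false};

        int count = 0;
        for (String currency : currencies) {
            for (int amount : amounts) {
                for (boolean express : expressFlags) {
                    System.out.printf("transfer(%s, %d, express=%b)%n", currency, amount, express);
                    count++;
                }
            }
        }
        System.out.println(count + " combinations for just three parameters");
    }
}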

API Framework
The framework is more or less self-explanatory. The purpose of the config file is to hold all the configurable components and their values for a particular test run. Accordingly, the automated test cases should be represented in a ‘parse-able’ format in the config file, and the script should be highly ‘configurable’. In the case of API testing, it is not necessary to test every API in every test run (the number of APIs that are tested will lessen as testing progresses). Hence the config file should have sections that detail which APIs are “activated” for a particular run; based on this, the test cases should be picked up.

Since inserting the automated test case parameters into the config file can be a tedious activity, it should be designed in such a way that the test cases can be left static, with a mechanism for ‘activating’ and ‘deactivating’ them.
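
One possible way to read such a configuration is sketched below; the file name, property keys and API names are assumptions for illustration, not a prescribed layout.

import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

// Sketch of a config-driven run: a properties file lists which APIs are
// "activated" for this test run, and the harness only executes those suites.
public class ConfigDrivenRun {
    public static void main(String[] args) throws IOException {
        Properties config = new Properties();
        try (FileInputStream in = new FileInputStream("apitest.properties")) {
            config.load(in);                       // e.g. api.login=on, api.payment=off
        }
        String[] apis = {"login", "payment", "search"};   // hypothetical API names
        for (String api : apis) {
            boolean active = "on".equalsIgnoreCase(config.getProperty("api." + api, "off"));
            if (active) {
                System.out.println("Running test cases for API: " + api);
                // here the harness would pick up the parse-able test cases for this API
            } else {
                System.out.println("Skipping deactivated API: " + api);
            }
        }
    }
}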




Definitions:

Soap Opera Testing:
Soap opera tests exaggerate and complicate scenarios in the way that television soap operas exaggerate and complicate real life.

Forced Error Testing:
Forced error testing is the process of deliberately inducing errors in the application to see how it behaves. The forced-error test (FET) consists of negative test cases that are designed to force a program into error conditions. A list of all error messages that the program issues should be generated; this list is used as a baseline for developing test cases.
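
A small sketch of a negative test that forces an error condition and checks the resulting error message, using JUnit 4; the ConfigLoader class is hypothetical.

import org.junit.Assert;
import org.junit.Test;

// Negative-test sketch: deliberately drive a (hypothetical) component into an
// error condition and check that it fails with the expected error message.
public class ForcedErrorTest {

    static class ConfigLoader {
        String load(String path) {
            if (path == null || path.isEmpty()) {
                throw new IllegalArgumentException("Configuration path must not be empty");
            }
            return "config for " + path;
        }
    }

    @Test
    public void emptyPathProducesTheDocumentedErrorMessage() {
        try {
            new ConfigLoader().load("");
            Assert.fail("Expected an error for an empty path");
        } catch (IllegalArgumentException expected) {
            Assert.assertEquals("Configuration path must not be empty", expected.getMessage());
        }
    }
}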

Software Functions and Procedures:
Functions and procedures are the foundations of programming. They provide the structure to organize the program into logical units that can manage the various activities needed for a program.

Functions:
There are two basic types of functions:

Built-in: these are built into the programming environment and do things such as opening and closing files, printing, writing, and converting variables (e.g., text to numbers, singles to integers, etc.).

Application/user-specific: depending on what the program needs, you can build functions and procedures using built-in functions and procedures and variables.

Procedures:
Procedures are used to perform a series of tasks. They usually include other procedures and functions within the program. Procedures typically do not return a value; they are simply executed and return control to the calling procedure or subroutine. Procedures in Visual Basic are called "Subroutines," often "Sub" for short. In JavaScript, "Functions" are used as procedures (they simply return no or null values to whatever called them).

Windows Phone Testing Checklist

While exploring mobile applications, I got the idea of maintaining a checklist for Android- and Windows-based applications. I am maintaining the checklist here and will update it as I come across more scenarios.

  • Verify Application Tile Images :
1.View the Application list.
2.Verify that the small mobile app tile image is representative of the application.
3.From the Application list, tap and hold the small mobile app tile of your application and select 'pin to start'.
4.Verify that the large mobile tile image on the Start screen is representative of the application.
  •  Application Closure:
1.Launch your application.
2.Navigate throughout the application, and then close the application using the device's "back" button.
  • Application Responsiveness:
1.Launch your application.
2.Thoroughly test the application features and functionality.
3.Verify that the application does not become unresponsive for more than three seconds.
4.Verify that a progress indicator is displayed if the application performs an operation that causes the device to appear to be unresponsive for more than three seconds.
5.If a progress indicator is displayed, verify that the application provides the user with an option to cancel the operation being performed.
  • Application Responsiveness After Being Closed:
1.Launch your application.
2.Close the application using the Back button, or by selecting the Exit function from the application menu.
3.Launch your application again.
4.Verify that the application launches normally within 5 seconds, and is responsive within 20 seconds of launching.
  • Application Responsiveness After Being Deactivated:
1.Launch your application.
2.De-activate the app. This can be achieved by pressing the "Start" button or by launching another app. (By deactivating we are not closing the app's process but merely putting the app in the background.)
3.Verify that the application launches normally within 5 seconds, and is responsive within 20 seconds of launching.
4.If your application includes pause functionality, pause the application.
5.Launch your application again.
6.Verify that the application launches normally within 5 seconds, and is responsive within 20 seconds of launching.
  • Back Button: Previous Pages:
1.Launch your application.
2.Navigate through the application.
3.Press the Back button.
4.Verify that the application closes the screen that is in focus and returns you to a previous page within the back stack.
  • Back Button: First Screen:
1.Launch your application.
2.Press the Back button.
3.Verify that either the application closes without error, or allows the user to confirm closing the application with a menu or dialog.
  • Back Button: Context Menus and Dialog:
1.Launch your application.
2.Navigate through the application.
3.Display a context menu or dialogs.
4.Tap the Back button.
5.Verify that the context menu or dialog closes and returns you to the screen where the context menu or dialog was opened.
  • Back Button: Games:
1.Launch your application.
2.Begin playing the game.
3.Tap the Back button.
4.Verify that the game pauses.
  • Trial Applications:
1.Launch the trial version of your application.
2.Launch the full version of your application.
3.Compare the performance of the trial and full versions of your application.
4.Verify that the performance of the trial version of your application meets the performance requirements mentioned in test cases 1-9
  • Verify that Application doesn't affect Phone Calls:
1.Ensure that the phone has a valid cellular connection.
2.Launch your application. Receive an incoming phone call.
3.Verify that the quality of the phone call is not negatively impacted by sounds or vibrations in your application.
4.End the phone call.
5.Verify that the application returns to the foreground and resumes.
6.De-activate the application by tapping the Start button.
7.Verify that you can successfully place a phone call.
  • Verify that Application doesn't affect SMS and MMS Messaging:
1.Ensure that the phone has a valid cellular connection.
2.Ensure that the phone is not in Airplane mode by viewing the phone Settings page.
3.Launch your application. Deactivate the application by tapping the Start button.
4.Verify that a SMS or MMS message can be sent to another phone.
5.Verify that notifications regarding the SMS or MMS messages are displayed on the phone either from within the application, or within 5 seconds after the application is closed.
  • Verify Application Responsiveness With Incoming Phone Calls and Messages:
1.Ensure that the phone has a valid cellular connection.
2.Ensure that the phone is not in Airplane mode by viewing the phone Settings page.
3.Receive an incoming phone call, SMS message or MMS message.
4.Verify that the application does not stop responding or close unexpectedly when the notification is received.
5.After verifying the above step, tap on the message notification or receive the incoming phone call.
6.If a message was received, verify that the user can return to the application by pressing the Back button.
  • Language Validation:
1.Review the product description of the application and verify that it is localized to the target language.
2.Launch your application.
3.Verify that the UI text of the application is localized to the target language.

Installation of Android App in Emulator

Installation of any App in Emulator:

Installing an app in an emulator needs the .apk file, which can be found in the project folder.
Path of the .apk file: workspace> project> bin\

There are different ways of installing Android App (.apk file) in emulators as mentioned below:

Way 1: If we are using Eclipse version 3.7.2, then executing the Android project in Eclipse (without it throwing any errors) will automatically install the app (.apk file) in the emulator (make sure the emulator is open while executing the Eclipse Android project).

Way 2: We can also install the Android App (.apk file) through ‘adb’ (Android Debug Bridge).
adb is a versatile command line tool that lets you communicate with an emulator instance or connected Android-powered device.

Path of adb: Drive\android-sdk-windows\platform-tools

‘adb’ is used for installing any app on Android. The command shown below is used to install an app.

Installation Command is:

adb install demo.apk

[adb (command) install (command) demo.apk (path of the .apk file of the demo Android project)]



Note: The emulator should already be open when installing any app into it.