Sunday, 28 June 2015

Software Testing - Types of Testing

This section describes the different types of testing that may be used to test software during the SDLC.

Manual Testing

Manual testing means testing software manually, i.e., without using any automated tool or script. In this type, the tester takes over the role of an end-user and tests the software to identify any unexpected behavior or bugs. There are different stages of manual testing such as unit testing, integration testing, system testing, and user acceptance testing.
Testers use test plans, test cases, or test scenarios to test the software and ensure the completeness of testing. Manual testing also includes exploratory testing, as testers explore the software to identify errors in it.

Automation Testing

Automation testing, also known as Test Automation, is when the tester writes scripts and uses other software to test the product. This process involves automating a manual process. Automation testing is used to quickly and repeatedly re-run test scenarios that were previously performed manually.
Apart from regression testing, automation testing is also used to test the application from the load, performance, and stress point of view. It increases test coverage, improves accuracy, and saves time and money in comparison to manual testing.
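To illustrate, here is a minimal sketch of an automated regression suite using Python's unittest framework; the `login` function and its credentials are hypothetical stand-ins for real application code:

```python
import unittest

# Hypothetical function under test -- a stand-in for real application code.
def login(username, password):
    """Return True only for a known username/password pair."""
    valid_users = {"alice": "s3cret"}
    return valid_users.get(username) == password

class LoginRegressionTests(unittest.TestCase):
    """Scripted checks that previously required a manual tester."""

    def test_valid_credentials_are_accepted(self):
        self.assertTrue(login("alice", "s3cret"))

    def test_wrong_password_is_rejected(self):
        self.assertFalse(login("alice", "wrong"))

    def test_unknown_user_is_rejected(self):
        self.assertFalse(login("mallory", "s3cret"))

# Re-run the whole scripted suite on every build -- quickly and repeatedly.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(LoginRegressionTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())
```

Because the checks are scripted, the entire suite can be re-run on every build with no extra manual effort, which is exactly the regression-testing use case described above.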

What to Automate?

It is not possible to automate everything in a software application. Areas where a user can make transactions, such as login or registration forms, and any area where a large number of users can access the software simultaneously, should be automated.
Furthermore, all GUI items, connections with databases, field validations, etc. can be efficiently tested by automating the manual process.

When to Automate?

Test Automation should be used by considering the following aspects of a software:
  • Large and critical projects
  • Projects that require testing the same areas frequently
  • Requirements not changing frequently
  • Accessing the application for load and performance with many virtual users
  • Stable software with respect to manual testing
  • Availability of time

How to Automate?

Automation is done by using a supportive computer language like VB scripting and an automated software application. There are many tools available that can be used to write automation scripts. Before mentioning the tools, let us identify the process that can be used to automate the testing process:
  • Identifying areas within the software for automation
  • Selecting the appropriate tool for test automation
  • Writing test scripts
  • Developing test suites
  • Executing the scripts
  • Creating result reports
  • Identifying any potential bugs or performance issues
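The last two steps of the process above (executing the scripts and creating result reports) can be sketched in Python's unittest framework; `CheckoutTests` and its checks are hypothetical placeholders for real automation scripts:

```python
import unittest

# Hypothetical test scripts -- placeholders for real automation scripts.
class CheckoutTests(unittest.TestCase):
    def test_cart_total(self):
        self.assertEqual(3 * 5, 15)

    def test_empty_cart_total(self):
        self.assertEqual(sum([]), 0)

def run_suite_and_report(test_case):
    """Execute the scripts and create a simple result report."""
    suite = unittest.defaultTestLoader.loadTestsFromTestCase(test_case)
    result = unittest.TextTestRunner(verbosity=0).run(suite)
    return {
        "tests_run": result.testsRun,
        "failures": len(result.failures),
        "errors": len(result.errors),
        "passed": result.wasSuccessful(),
    }

report = run_suite_and_report(CheckoutTests)
print(report)
```

In a real project the report would typically be written to a file or a dashboard; the dictionary here simply shows the kind of summary a result report contains.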

Software Testing Tools

The following tools can be used for automation testing:
  • HP Quick Test Professional
  • Selenium
  • IBM Rational Functional Tester
  • SilkTest
  • TestComplete
  • Testing Anywhere
  • WinRunner
  • LoadRunner
  • Visual Studio Test Professional
  • WATIR

Software Testing - ISO Standards

Many organizations around the globe develop and implement different standards to improve the quality needs of their software. This chapter briefly describes some of the widely used standards related to Quality Assurance and Testing.

ISO/IEC 9126

This standard deals with the following aspects to determine the quality of a software application:
  • Quality model
  • External metrics
  • Internal metrics
  • Quality in use metrics
This standard presents some set of quality attributes for any software such as:
  • Functionality
  • Reliability
  • Usability
  • Efficiency
  • Maintainability
  • Portability
The above-mentioned quality attributes are further divided into sub-factors, which you can explore by studying the standard in detail.

ISO/IEC 9241-11

Part 11 of this standard deals with the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency, and satisfaction in a specified context of use.
This standard proposes a framework that describes the usability components and the relationships between them. In this standard, usability is considered in terms of user performance and satisfaction. According to ISO 9241-11, usability depends on the context of use, and the level of usability will change as the context changes.

ISO/IEC 25000:2005

ISO/IEC 25000:2005 is commonly known as the standard that provides the guidelines for Software Quality Requirements and Evaluation (SQuaRE). This standard helps in organizing and enhancing the process related to software quality requirements and their evaluations. In reality, ISO-25000 replaces the two old ISO standards, i.e. ISO-9126 and ISO-14598.
SQuaRE is divided into sub-parts such as:
  • ISO 2500n - Quality Management Division
  • ISO 2501n - Quality Model Division
  • ISO 2502n - Quality Measurement Division
  • ISO 2503n - Quality Requirements Division
  • ISO 2504n - Quality Evaluation Division
The main contents of SQuaRE are:
  • Terms and definitions
  • Reference Models
  • General guide
  • Individual division guides
  • Standard related to Requirement Engineering (i.e. specification, planning, measurement and evaluation process)

ISO/IEC 12119

This standard deals with software packages delivered to the client. It does not focus on or deal with the client's production process. The main contents are related to the following items:
  • Set of requirements for software packages.
  • Instructions for testing a delivered software package against the specified requirements.

Miscellaneous

Some of the other standards related to QA and Testing processes are mentioned below:
  • IEEE 829: A standard for the format of documents used in different stages of software testing.
  • IEEE 1061: A methodology for establishing quality requirements and for identifying, implementing, analyzing, and validating the process and product of software quality metrics.
  • IEEE 1059: A guide for Software Verification and Validation Plans.
  • IEEE 1008: A standard for unit testing.
  • IEEE 1012: A standard for Software Verification and Validation.
  • IEEE 1028: A standard for software inspections.
  • IEEE 1044: A standard for the classification of software anomalies.
  • IEEE 1044-1: A guide for the classification of software anomalies.
  • IEEE 830: A guide for developing system requirements specifications.
  • IEEE 730: A standard for software quality assurance plans.
  • IEEE 12207: A standard for software life cycle processes and life cycle data.
  • BS 7925-1: A vocabulary of terms used in software testing.
  • BS 7925-2: A standard for software component testing.

Software Testing - Myths

Myth 1 : Testing is Too Expensive

Reality : There is a saying: pay less for testing during software development, or pay more for maintenance and correction later. Early testing saves both time and cost in many respects; however, cutting costs by skipping testing may result in the improper design of a software application, rendering the product useless.

Myth 2 : Testing is Time-Consuming

Reality : During the SDLC phases, testing is never a time-consuming process. However, diagnosing and fixing the errors identified during proper testing is a time-consuming but productive activity.

Myth 3 : Only Fully Developed Products are Tested

Reality : No doubt, testing depends on the source code, but reviewing requirements and developing test cases is independent of the developed code. However, an iterative or incremental approach as a development life cycle model may reduce the dependency of testing on fully developed software.

Myth 4 : Complete Testing is Possible

Reality : It becomes an issue when a client or tester thinks that complete testing is possible. It is possible that all paths have been tested by the team, but complete testing is never possible. There might be scenarios that are never executed by the test team or the client during the software development life cycle, and that are executed only once the project has been deployed.

Myth 5 : A Tested Software is Bug-Free

Reality : This is a very common myth that clients, project managers, and the management team believe in. No one can claim with absolute certainty that a software application is 100% bug-free, even if a tester with superb testing skills has tested the application.

Myth 6 : Missed Defects are due to Testers

Reality : It is not a correct approach to blame testers for bugs that remain in the application even after testing has been performed. This myth relates to time, cost, and requirement-change constraints. However, the test strategy may also result in bugs being missed by the testing team.

Myth 7 : Testers are Responsible for Quality of Product

Reality : It is a very common misinterpretation that only testers or the testing team should be responsible for product quality. Testers' responsibilities include identifying bugs and reporting them to the stakeholders; it is then the stakeholders' decision whether to fix the bugs or release the software. Releasing the software at that point puts more pressure on the testers, as they will be blamed for any error.

Myth 8 : Test Automation should be used wherever possible to Reduce Time

Reality : Yes, it is true that test automation reduces testing time, but it is not possible to start test automation at just any time during software development. Test automation should be started only once the software has been manually tested and is stable to some extent. Moreover, test automation can never be used if the requirements keep changing.

Myth 9 : Anyone can Test a Software Application

Reality : People outside the IT industry think, and even believe, that anyone can test software and that testing is not a creative job. However, testers know very well that this is a myth. Thinking up alternative scenarios and trying to crash the software with the intent of exploring potential bugs is not possible for the person who developed it.

Myth 10 : A Tester's only Task is to Find Bugs

Reality : Finding bugs in a software is the task of the testers, but at the same time, they are domain experts of the particular software. Developers are only responsible for the specific component or area that is assigned to them but testers understand the overall workings of the software, what the dependencies are, and the impacts of one module on another module.

Verification & Validation

These two terms are very confusing for most people, who use them interchangeably. The following table highlights the differences between verification and validation.
  1. Verification addresses the concern "Are you building it right?"; validation addresses the concern "Are you building the right thing?"
  2. Verification ensures that the software system meets all of the specified functionality; validation ensures that the functionalities meet the intended behavior.
  3. Verification takes place first and includes checking the documentation, code, etc.; validation occurs after verification and mainly involves checking the overall product.
  4. Verification is done by developers; validation is done by testers.
  5. Verification consists of static activities, such as collecting reviews, walkthroughs, and inspections; validation consists of dynamic activities, such as executing the software against the requirements.
  6. Verification is an objective process that should require no subjective decisions; validation is a subjective process that involves subjective decisions about how well the software works.

When to Stop Testing?

It is difficult to determine when to stop testing, as testing is a never-ending process and no one can claim that a software is 100% tested. The following aspects are to be considered for stopping the testing process:
  • Testing Deadlines
  • Completion of test case execution
  • Completion of functional and code coverage to a certain point
  • Bug rate falls below a certain level and no high-priority bugs are identified
  • Management decision
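These aspects can be combined into a simple exit-criteria check. The sketch below is only illustrative; the thresholds (90% coverage, two bugs per week) are assumptions, not fixed rules:

```python
def should_stop_testing(deadline_reached, all_cases_executed, coverage_pct,
                        bug_rate_per_week, open_high_priority_bugs):
    """Apply the exit criteria listed above (thresholds are illustrative)."""
    if deadline_reached:
        # Deadlines and management decisions can override the other criteria.
        return True
    return (all_cases_executed
            and coverage_pct >= 90            # assumed coverage target
            and bug_rate_per_week <= 2        # assumed acceptable bug rate
            and open_high_priority_bugs == 0) # no high-priority bugs open

print(should_stop_testing(False, True, 95, 1, 0))  # stable build: stop
print(should_stop_testing(False, True, 95, 1, 3))  # open P1 bugs: keep testing
```

In practice the thresholds would come from the test plan's exit criteria rather than being hard-coded.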

When to Start Testing?

An early start to testing reduces the cost and time required to rework and produce error-free software delivered to the client. In the Software Development Life Cycle (SDLC), testing can be started from the Requirements Gathering phase and continued till the deployment of the software. However, it also depends on the development model being used. For example, in the Waterfall model, formal testing is conducted in the testing phase; but in the incremental model, testing is performed at the end of every increment/iteration and the whole application is tested at the end.
Testing is done in different forms at every phase of SDLC:
  • During the requirement gathering phase, the analysis and verification of requirements are also considered as testing.
  • Reviewing the design in the design phase with the intent to improve the design is also considered as testing.
  • Testing performed by a developer on completion of the code is also categorized as testing.

Who does Testing?

It depends on the process and the associated stakeholders of the project(s). In the IT industry, large companies have a team with responsibilities to evaluate the developed software in context of the given requirements. Moreover, developers also conduct testing which is called Unit Testing. In most cases, the following professionals are involved in testing a system within their respective capacities:
  • Software Tester
  • Software Developer
  • Project Lead/Manager
  • End User
Different companies have different designations for people who test the software on the basis of their experience and knowledge such as Software Tester, Software Quality Assurance Engineer, QA Analyst, etc.
Software cannot be tested at just any time during its life cycle. The next two sections state when testing should be started and when it should end during the SDLC.

What is Testing?

Testing is the process of evaluating a system or its component(s) with the intent to find whether it satisfies the specified requirements or not. In simple words, testing is executing a system in order to identify any gaps, errors, or missing requirements with respect to the actual requirements.
According to the ANSI/IEEE 1059 standard, testing can be defined as: a process of analyzing a software item to detect the differences between existing and required conditions (that is, defects/errors/bugs) and to evaluate the features of the software item.

Testing and Debugging

What are Testing and Debugging in Software Testing?

Testing : It involves identifying bugs/errors/defects in the software without correcting them. Normally, professionals with a quality assurance background are involved in identifying bugs. Testing is performed in the testing phase.
Debugging : It involves identifying, isolating, and fixing the problems/bugs. Developers who code the software conduct debugging upon encountering an error in the code. Debugging is a part of White Box Testing or Unit Testing. Debugging can be performed in the development phase while conducting unit testing, or in later phases while fixing reported bugs.
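A small sketch of the distinction: the scripted check below identifies a (deliberately planted) defect without fixing it, while debugging produces the corrected function. Both functions are hypothetical examples:

```python
# A buggy implementation: should return the average, but the denominator is wrong.
def average(numbers):
    return sum(numbers) / (len(numbers) + 1)   # bug: off-by-one denominator

# Testing: a scripted check *identifies* the defect without correcting it.
def test_average():
    return average([2, 4, 6]) == 4   # expected 4, but the buggy code returns 3

# Debugging: the developer isolates the fault and corrects the code.
def average_fixed(numbers):
    return sum(numbers) / len(numbers)

print(test_average())                  # False -> defect found by testing
print(average_fixed([2, 4, 6]) == 4)   # True  -> defect removed by debugging
```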

Audit and Inspection

What are Audit and Inspection in Software Testing?

Audit : It is a systematic process to determine how the actual testing process is conducted within an organization or a team. Generally, it is an independent examination of processes involved during the testing of a software. As per IEEE, it is a review of documented processes that organizations implement and follow. Types of audit include Legal Compliance Audit, Internal Audit, and System Audit.
Inspection : It is a formal technique that involves formal or informal technical reviews of any artifact to identify any error or gap. As per IEEE94, inspection is a formal evaluation technique in which software requirements, designs, or code are examined in detail by a person or a group other than the author to detect faults, violations of development standards, and other problems.
Formal inspection meetings may include the following processes: Planning, Overview Preparation, Inspection Meeting, Rework, and Follow-up.

Software Testing - QA, QC & Testing

Most people get confused when it comes to pinning down the differences among Quality Assurance, Quality Control, and Testing. Although they are interrelated and to some extent can be considered the same activities, there exist distinguishing points that set them apart. The following points differentiate QA, QC, and Testing.

  • Quality Assurance (QA) includes activities that ensure the implementation of processes, procedures, and standards for verifying the developed software against the intended requirements. Quality Control (QC) includes activities that ensure the verification of the developed software against documented (or, in some cases, undocumented) requirements. Testing includes activities that ensure the identification of bugs/errors/defects in the software.
  • QA focuses on processes and procedures rather than on conducting actual testing. QC focuses on actual testing, executing the software with the aim of identifying bugs/defects through the implementation of procedures and processes. Testing focuses on actual testing.
  • QA involves process-oriented activities; QC and Testing involve product-oriented activities.
  • QA is a preventive process; QC is a corrective process; Testing is a preventive process.
  • QA is a subset of the Software Test Life Cycle (STLC); QC can be considered a subset of QA; Testing is a subset of QC.

Traceability Matrix

Traceability Matrix (also known as Requirement Traceability Matrix - RTM) is a table that is used to trace the requirements during the Software Development Life Cycle. It can be used for forward tracing (i.e. from Requirements to Design or Coding) or backward (i.e. from Coding to Requirements). There are many user-defined templates for RTM.
Each requirement in the RTM document is linked with its associated test case so that testing can be done as per the mentioned requirements. Furthermore, Bug ID is also included and linked with its associated requirements and test case. The main goals for this matrix are:
  • To make sure the software is developed as per the stated requirements.
  • To help in finding the root cause of any bug.
  • To help in tracing the developed documents during different phases of the SDLC.
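A minimal sketch of an RTM with forward and backward tracing; the requirement, test case, and bug IDs below are hypothetical:

```python
# A tiny Requirement Traceability Matrix: requirement -> test cases -> bug IDs.
rtm = [
    {"req_id": "REQ-1", "description": "User can log in",
     "test_cases": ["TC-101", "TC-102"], "bugs": ["BUG-7"]},
    {"req_id": "REQ-2", "description": "User can reset password",
     "test_cases": ["TC-201"], "bugs": []},
]

def trace_forward(req_id):
    """Forward tracing: from a requirement to its associated test cases."""
    return next(r["test_cases"] for r in rtm if r["req_id"] == req_id)

def trace_backward(bug_id):
    """Backward tracing: from a bug back to its requirement (root-cause aid)."""
    return [r["req_id"] for r in rtm if bug_id in r["bugs"]]

print(trace_forward("REQ-1"))   # which test cases cover REQ-1
print(trace_backward("BUG-7"))  # which requirement BUG-7 traces back to
```

Real RTMs usually live in a spreadsheet or test-management tool, but the linkage is the same: every requirement maps to test cases, and every bug maps back to a requirement.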

Test Case

Test cases involve a set of steps, conditions, and inputs that can be used while performing testing tasks. The main intent of this activity is to determine whether the software passes or fails in terms of its functionality and other aspects. There are many types of test cases, such as functional, negative, error, logical, physical, and UI test cases.
Furthermore, test cases are written to keep track of the testing coverage of a software. Generally, there are no formal templates that can be used during test case writing. However, the following components are always available and included in every test case:
  • Test case ID
  • Product module
  • Product version
  • Revision history
  • Purpose
  • Assumptions
  • Pre-conditions
  • Steps
  • Expected outcome
  • Actual outcome
  • Post-conditions
Many test cases can be derived from a single test scenario. In addition, sometimes multiple test cases are written for a single software which are collectively known as test suites.
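As a sketch, a test case with the components listed above could be recorded as a simple structure; all field names and values here are hypothetical:

```python
# A minimal, hypothetical test-case record using the components listed above.
test_case = {
    "test_case_id": "TC-101",
    "product_module": "Login",
    "product_version": "1.4.2",
    "revision_history": ["2015-06-28: created"],
    "purpose": "Verify that a registered user can log in",
    "assumptions": ["Test user account already exists"],
    "pre_conditions": ["Application is running", "Login page is open"],
    "steps": ["Enter username", "Enter password", "Click 'Sign in'"],
    "expected_outcome": "User lands on the dashboard",
    "actual_outcome": None,        # filled in during execution
    "post_conditions": ["User session is created"],
}

def is_executed(tc):
    """A test case counts as executed once an actual outcome is recorded."""
    return tc["actual_outcome"] is not None

print(is_executed(test_case))  # no result recorded yet
```

Comparing `expected_outcome` with `actual_outcome` after execution is what determines whether the test case passed or failed.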

Test Scenario

It is a one-line statement that states which area of the application will be tested. Test scenarios are used to ensure that all process flows are tested from end to end. A particular area of an application can have as few as one test scenario or as many as a few hundred, depending on the magnitude and complexity of the application.
The terms 'test scenario' and 'test case' are used interchangeably; however, a test scenario covers several steps, whereas a test case covers a single step. Viewed from this perspective, test scenarios are test cases, but they include several test cases and the sequence in which they should be executed. Apart from this, each test is dependent on the output of the previous test.

Test Plan

A test plan outlines the strategy that will be used to test an application, the resources that will be used, the test environment in which testing will be performed, and the limitations of the testing and the schedule of testing activities. Typically the Quality Assurance Team Lead will be responsible for writing a Test Plan.
A test plan includes the following:
  • Introduction to the Test Plan document
  • Assumptions while testing the application
  • List of test cases included in testing the application
  • List of features to be tested
  • What sort of approach to use while testing the software
  • List of deliverables that need to be tested
  • The resources allocated for testing the application
  • Any risks involved during the testing process
  • A schedule of tasks and milestones to be achieved

Software Testing - Documentation

Testing documentation involves the documentation of artifacts that should be developed before or during the testing of the software.
Documentation for software testing helps in estimating the required testing effort, test coverage, requirement tracking/tracing, etc. This section describes some of the commonly used documented artifacts related to software testing, such as:
  • Test Plan
  • Test Scenario
  • Test Case
  • Traceability Matrix

Monday, 15 June 2015

What is Acceptance testing?

  • After system testing has been completed and all or most defects have been corrected, the system is delivered to the user or customer for acceptance testing.
  • Acceptance testing is basically done by the user or customer although other stakeholders may be involved as well.
  • The goal of acceptance testing is to establish confidence in the system.
  • Acceptance testing is most often focused on validation-type testing.
  • Acceptance testing may occur at more than just a single level, for example:
    • A Commercial Off-The-Shelf (COTS) software product may be acceptance tested when it is installed or integrated.
    • Acceptance testing of the usability of the component may be done during component testing.
    • Acceptance testing of a new functional enhancement may come before system testing.
  • The types of acceptance testing are:
    • The User Acceptance test: focuses mainly on functionality, thereby validating the fitness-for-use of the system by the business user. The user acceptance test is performed by the users and application managers.
    • The Operational Acceptance test: also known as the Production Acceptance test, it validates whether the system meets the requirements for operation. In most organizations, the operational acceptance test is performed by the system administrators before the system is released. It may include testing of backup/restore, disaster recovery, maintenance tasks, and periodic checks of security vulnerabilities.
    • Contract Acceptance testing: performed against the contract's acceptance criteria for producing custom-developed software. Acceptance should be formally defined when the contract is agreed.
    • Compliance Acceptance testing: also known as Regulation Acceptance testing, it is performed against the regulations that must be adhered to, such as governmental, legal, or safety regulations.
What is Security Testing in Software Testing?

  • It is a type of non-functional testing.
  • Security testing is basically a type of software testing that is done to check whether the application or the product is secure. It checks whether the application is vulnerable to attacks, and whether anyone can hack the system or log in to the application without authorization.
  • It is a process to determine that an information system protects data and maintains functionality as intended.
  • Security testing is performed to check whether there is any information leakage, and whether protections such as encryption, firewalls, and other software and hardware safeguards work as intended.
  • Software security is about making software behave correctly in the presence of a malicious attack.
  • The six basic security concepts that need to be covered by security testing are: confidentiality, integrity, authentication, availability, authorization and non-repudiation.
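As a small sketch of testing the authentication and confidentiality concepts, the following hypothetical credential check stores only salted password hashes and must reject any login without valid credentials:

```python
import hashlib
import hmac

# Hypothetical credential store: passwords are kept only as salted hashes
# (confidentiality), never in plain text.
_SALT = b"demo-salt"
_USERS = {"alice": hashlib.sha256(_SALT + b"s3cret").hexdigest()}

def authenticate(username, password):
    """Authentication check: reject unknown users and wrong passwords."""
    stored = _USERS.get(username)
    if stored is None:
        return False
    candidate = hashlib.sha256(_SALT + password.encode()).hexdigest()
    # Constant-time comparison guards against timing attacks.
    return hmac.compare_digest(stored, candidate)

# Security tests: access must NOT be granted without valid credentials.
print(authenticate("alice", "s3cret"))    # legitimate login accepted
print(authenticate("alice", "guess"))     # wrong password rejected
print(authenticate("mallory", "s3cret"))  # unknown user rejected
```

A real security test suite would go much further (injection, session handling, access control), but the pattern is the same: every test asserts that unauthorized access is refused.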
What is the V-model? Advantages, Disadvantages and When to Use It

    The V-model stands for the Verification and Validation model. Just like the waterfall model, the V-shaped life cycle is a sequential path of execution of processes. Each phase must be completed before the next phase begins. Testing of the product is planned in parallel with the corresponding phase of development. The various phases of the V-model are as follows:
    Requirements, such as the BRS and SRS, begin the life cycle model just as in the waterfall model. But in this model, before development is started, a system test plan is created. The test plan focuses on meeting the functionality specified during requirements gathering.
    The high-level design (HLD) phase focuses on system architecture and design. It provides an overview of the solution, platform, system, product, and service/process. An integration test plan is created in this phase as well, in order to test the ability of the pieces of the software system to work together.
    The low-level design (LLD) phase is where the actual software components are designed. It defines the actual logic for each and every component of the system. Class diagrams with all the methods and the relations between classes come under LLD. Component tests are created in this phase as well.
    The implementation phase is, again, where all coding takes place. Once coding is complete, the path of execution continues up the right side of the V, where the test plans developed earlier are now put to use.
    Coding: This is at the bottom of the V-shaped model. The module design is converted into code by the developers.
     

    Advantages of V-model:
    • Simple and easy to use.
    • Testing activities like planning and test design happen well before coding. This saves a lot of time, hence a higher chance of success over the waterfall model.
    • Proactive defect tracking, i.e., defects are found at an early stage.
    • Avoids the downward flow of defects.
    • Works well for small projects where requirements are easily understood.
     
    Disadvantages of V-model:
    • Very rigid and least flexible.
    • Software is developed during the implementation phase, so no early prototypes of the software are produced.
    • If any changes happen midway, then the test documents, along with the requirement documents, have to be updated.
    When to use the V-model:
    • The V-shaped model should be used for small to medium sized projects where requirements are clearly defined and fixed.
    • The V-Shaped model should be chosen when ample technical resources are available with needed technical expertise.
    • High confidence of the customer is required for choosing the V-shaped model approach, since no prototypes are produced and there is a very high risk involved in meeting customer expectations.

    Sunday, 14 June 2015

    10 Useful Tips to Dress up for an Interview for Males & Females


    You know the old saying, “You never get a second chance to make a first impression.” That’s always the case when meeting a person for the first time, but especially when it comes to a job interview.

    Hot pastel shirts, ripped jeans, an overdose of cologne, and a pair of shades might impress your crush in class. But at a job interview, they would give the impression that you are not serious about the job, and you would join the queue of rejected candidates.

    Your subject knowledge, confidence, and amiability are essential. But making a good first impression with your appearance is crucial in dictating the final outcome of the job interview. Most interviewers take just a few seconds to judge whether you are a potential candidate or not. Before you say a single word to the interviewer, you have already made an impression based on how you are dressed.


    Knowing what to wear to a job interview is an age-old enigma. There is no room for experimentation in this and there are some hard and fast rules to be followed. So here is a guide to make sure you choose a winning outfit to ace that job interview you are applying for.

    1. One notch above

    The rule of thumb is to dress one notch above what you would usually consider suitable for the job. If casuals are suitable, wear business casual to the interview. If business casual is ideal for work, go in proper formals for the job interview. Remember that you are the "expert".

    This will definitely show that you are serious and care for the job.

    2. Suit Up

    When it comes to dressing up for success, the easiest and the most effective way is to invest in a good two-piece tailored suit. You can never go wrong with a single colour suit with a light coloured shirt and a tie.

    This would be applicable for both men and women. Women can suit up in either pants or skirts depending upon their comfort and the company culture. Some other things to be noted are:

    • For girls, the length of the skirt should be long enough that you can sit down comfortably.
    • Wear a single-coloured shirt, preferably white. White gives a good impression and makes the other person trust you. You can also go for other light colours such as beige, cream, and light pink.
    • The trousers should be the right length. They should fall well and should not bunch up at the bottom.
    • The suit should fit you properly. A very loose or very tight suit makes you uncomfortable and is not pleasing to the eye.
    • The colour of the suit should be dark. Go for blacks, blues, greys, or browns.

    3. Planning to go Indian

    Women can also leave a lasting impression on the interviewer if they decide to go Indian. Wearing kurtis, suits, or sarees to a job interview is a trend that cannot be neglected. The elegance of Indian wear can get you the much-needed advantage. Wear a plain cotton saree or a salwar-kameez with a dupatta. Put on a blazer over it and you have made a great combination.

    White and pastel shades work best for interviews. Dark colours like navy blue and black also look good. Just make sure you avoid a lot of patterns and details. The simpler, the better. Also, wearing sleeveless is a strict "no".

    3. Shine those shoes

    “Shoes speak louder than words”

    One of the best pieces of advice for a person trying to impress an interviewer is to invest in a good pair of shoes. Most people don’t pay attention to their shoes, and this is where they ruin their chances of getting selected.

    Many interviewers look for well-polished, clean, decent shoes, which is why it is always advised to polish them well. Nothing defines attention to detail better than a good pair of shoes. In fact, it is often said that the first thing a person unconsciously notices about another person is the shoes.

    • Men should wear leather lace-up or slip-on dress shoes in black or brown.
    • Women should go for closed pumps. A basic pump is versatile, flattering and will stay in style forever.

    Shoes should be fairly low heeled. High heels are difficult to walk in. You don’t want to grab attention while hobbling in uncomfortable, noisy shoes.

    4. Bling is not the thing

    “Too much bling is not the thing.”

    Wearing a lot of jewellery, piercings and flashy things is only going to distract you and the interviewer. Avoid necklaces and flashy hair accessories. Stick to pieces that are not flashy, distracting or shiny. One ring per hand is best. You want the interviewer to pay attention to you and not the bling.

    5. Pass on the Perfume

    Flaunting that new Chanel or Armani perfume at the interview is probably not the best idea. Don’t drench yourself in cologne or deodorant. You never know if your interviewer is allergic, and this isn’t a good way to find out. It could even give them a headache the moment you enter. A gentle spray of a light perfume is enough to smell good for the day.

    6. Tats under Wraps

    Celebrities like Angelina Jolie or Deepika Padukone might have upped the cool factor of tattoos by flaunting them. But this does not mean it is appropriate to show them off at a job interview. It might send the wrong impression that you are not serious about the job. Dress in such a way that the tattoo stays hidden.

    7. Pay Attention to the Details

    Paying heed to the most trivial details is crucial, because these small details add up substantially to your overall look and impression. If you belong to the class of people who believe that only girls colour-coordinate their clothes with their accessories, you’ll have to think again!

    Both guys and girls need to pay heed to the colour of their shoes, socks and belt. There is no hard and fast rule about it, but preferably the colour of the socks should match the colour of your trousers or skirt, and the belt should match the shoes.

    The clothes should be perfectly ironed! A wrinkled shirt gives the impression of a lousy personality and disinterest, and you surely wouldn’t want that. So check for any dangling threads or loose/missing buttons a few days in advance to save yourself from last-minute hurry and panic.

    Nails should be perfectly trimmed and clean. Girls should avoid wearing nail polish, especially flashy colours, which can look unprofessional.

    8. Tidy up the Hair

    Your hairstyle is a very important part of your persona. A hairstyle can easily make or break your impression! Just like you dress yourself up according to the occasion, you need to dress up your hair too. Guys need to ensure that they have a neat haircut and a dapper hairstyle. Pointy spikes and very long hair will land you a job only if you are applying for a position in a rock band!

    Girls need to ensure that their hair is securely tied so that it doesn’t distract them or the interviewer mid-interview. A high bun or slick ponytail is perfect for long hair, and short-haired girls can use hairbands to keep their hair off their face.

    9. Don’t Wake up just for Makeup (For Girls)

    Minimal is the best bet when you are going for an interview. Remember, you are not interviewing at a modelling agency, and the interviewer is not going to hire you on the basis of your makeup skills. Neither are you attending a big fat Indian wedding. Keep it simple! Flashy makeup is a big no! A dash of kajal and a balm to keep chapped lips at bay is enough to preen you up before an interview.

    Remember, when you look good, you tend to feel good. A well-groomed person feels a lot more confident and self-assured than a person who is dressed sloppily. A confident person who is comfortable in their own skin tends to make the people around them comfortable too. You end up putting on an excellent presentation and leaving a lasting impression on the interviewer!

     

    Monday, 8 June 2015

    Software quality assurance (SQA)

    Software testing is a part of the software quality assurance (SQA) process. In SQA, software process specialists and auditors are concerned with the software development process rather than just the artifacts such as documentation, code and systems. They examine and change the software engineering process itself to reduce the number of faults that end up in the delivered software: the so-called "defect rate". What constitutes an "acceptable defect rate" depends on the nature of the software; a flight simulator video game would have a much higher defect tolerance than software for an actual airplane. Although there are close links with SQA, testing departments often exist independently, and some companies have no SQA function at all.

    Software testing is a task intended to detect defects in software by contrasting a computer program's expected results with its actual results for a given set of inputs. By contrast, QA (quality assurance) is the implementation of policies and procedures intended to prevent defects from occurring in the first place.

    Software verification and validation

  • Verification: Have we built the software right? (i.e., does it implement the requirements).
  • Validation: Have we built the right software? (i.e., do the deliverables satisfy the customer).
  • The terms verification and validation are commonly used interchangeably in the industry; it is also common to see these two terms incorrectly defined. According to the IEEE Standard Glossary of Software Engineering Terminology:
    Verification is the process of evaluating a system or component to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase.
    Validation is the process of evaluating a system or component during or at the end of the development process to determine whether it satisfies specified requirements.
    According to the ISO 9000 standard:
    Verification is confirmation by examination and through provision of objective evidence that specified requirements have been fulfilled.
    Validation is confirmation by examination and through provision of objective evidence that the requirements for a specific intended use or application have been fulfilled.

    Certifications

    Several certification programs exist to support the professional aspirations of software testers and quality assurance specialists. No certification now offered actually requires the applicant to show their ability to test software. No certification is based on a widely accepted body of knowledge. This has led some to declare that the testing field is not ready for certification.[50] Certification itself cannot measure an individual's productivity, skill, or practical knowledge, and cannot guarantee their competence or professionalism as a tester.

    Test Case

    A test case normally consists of a unique identifier, requirement references from a design specification, preconditions, events, a series of steps (also known as actions) to follow, input, output, expected result, and actual result. Clinically defined, a test case is an input and an expected result. This can be as pragmatic as "for condition x your derived result is y", whereas other test cases describe the input scenario and expected results in more detail. A test case can occasionally be a series of steps (though often the steps are contained in a separate test procedure that can be exercised against multiple test cases, as a matter of economy) but with one expected result or outcome.

    Optional fields include a test case ID, test step or order-of-execution number, related requirement(s), depth, test category, author, and check boxes for whether the test is automatable and has been automated. Larger test cases may also contain prerequisite states or steps, and descriptions. A test case should also contain a place for the actual result.

    These steps can be stored in a word processor document, spreadsheet, database, or other common repository. In a database system, you may also be able to see past test results, who generated the results, and what system configuration was used to generate them. These past results would usually be stored in a separate table.
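    The fields above can be sketched as a simple record type. This is a minimal illustration, not taken from any particular test-management tool; the field names are assumptions chosen to mirror the list in the text.

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    case_id: str                 # unique identifier
    requirement_ref: str         # reference back to the design specification
    preconditions: str
    steps: list                  # series of steps (actions) to follow
    input_data: str
    expected_result: str
    actual_result: str = ""      # filled in when the test is executed
    automated: bool = False

    def passed(self) -> bool:
        # "for condition x your derived result is y"
        return self.actual_result == self.expected_result

tc = TestCase(
    case_id="TC-001",
    requirement_ref="REQ-42",
    preconditions="user account exists",
    steps=["open login page", "enter credentials", "submit"],
    input_data="valid username/password",
    expected_result="dashboard shown",
)
tc.actual_result = "dashboard shown"
print(tc.passed())  # → True
```

    In a real repository each of these records would live in a spreadsheet or database row, with past `actual_result` values kept in a separate history table as the text describes.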

    Automated testing

    Many programming groups are relying more and more on automated testing, especially groups that use test-driven development. There are many frameworks to write tests in, and continuous integration software will run tests automatically every time code is checked into a version control system.

    While automation cannot reproduce everything that a human can do (and all the ways they think of doing it), it can be very useful for regression testing. However, it does require a well-developed suite of testing scripts in order to be truly useful.
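    A minimal sketch of the kind of test suite a CI server could run on every check-in, using Python's standard `unittest` framework. The function under test and its name are illustrative stand-ins for real application code.

```python
import unittest

# Function under test -- a stand-in for real application code.
def slugify(title: str) -> str:
    return "-".join(title.lower().split())

class SlugifyRegressionTests(unittest.TestCase):
    """A suite that continuous integration could run on every check-in."""

    def test_basic(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_extra_whitespace(self):
        # Guards a hypothetical previously fixed bug: repeated spaces
        # once produced empty segments like "hello--world".
        self.assertEqual(slugify("Hello   World"), "hello-world")

# Run the suite programmatically, as a CI step would:
suite = unittest.defaultTestLoader.loadTestsFromTestCase(SlugifyRegressionTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("all passed:", result.wasSuccessful())  # → all passed: True
```

    Hooked into version control, such a suite gives the automatic, repeatable regression check the paragraph describes.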

    Agile or Extreme development model

    In contrast, some emerging software disciplines, such as extreme programming and the agile software development movement, adhere to a "test-driven software development" model. In this process, unit tests are written first by the software engineers (often with pair programming in the extreme programming methodology). Of course, these tests fail initially, as they are expected to. Then, as code is written, it passes incrementally larger portions of the test suites. The test suites are continuously updated as new failure conditions and corner cases are discovered, and they are integrated with any regression tests that are developed. Unit tests are maintained along with the rest of the software source code and are generally integrated into the build process (with inherently interactive tests being relegated to a partially manual build acceptance process). The ultimate goal of this test process is to achieve continuous integration, where software updates can be published to the public frequently. [44] [45]

    This methodology increases the testing effort done by development, before reaching any formal testing team. In some other development models, most of the test execution occurs after the requirements have been defined and the coding process has been completed.
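    The test-first cycle described above can be sketched in a few lines. The example problem (FizzBuzz) is an assumption for illustration; the point is the order of the steps: the test exists, and fails, before the implementation does.

```python
# Step 1 -- the test is written first. While fizzbuzz() is unwritten,
# calling this function raises NameError: the test fails, as expected.
def test_fizzbuzz():
    assert fizzbuzz(3) == "Fizz"
    assert fizzbuzz(5) == "Buzz"
    assert fizzbuzz(15) == "FizzBuzz"
    assert fizzbuzz(7) == "7"

# Step 2 -- just enough code is written to make the test pass:
def fizzbuzz(n: int) -> str:
    out = ("Fizz" * (n % 3 == 0)) + ("Buzz" * (n % 5 == 0))
    return out or str(n)

test_fizzbuzz()  # now passes; new corner cases get added and the cycle repeats
```

    As the text notes, the suite then grows continuously: each newly discovered failure condition becomes another assertion kept alongside the source code.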

    What is Regression Testing?

    Regression testing focuses on finding defects after a major code change has occurred. Specifically, it seeks to uncover software regressions, such as degraded or lost features, including old bugs that have come back. Such regressions occur whenever software functionality that was previously working correctly stops working as intended. Typically, regressions occur as an unintended consequence of program changes, when the newly developed part of the software collides with previously existing code. Common methods of regression testing include re-running previous sets of test cases and checking whether previously fixed faults have re-emerged. The depth of testing depends on the phase in the release process and the risk of the added features: tests can be complete, for changes added late in the release or deemed risky, or very shallow, consisting of positive tests on each feature, if the changes are early in the release or deemed low risk. Regression testing is typically the largest test effort in commercial software development,[37] due to the need to check numerous details of prior software features; even new software can be developed using some old test cases to exercise parts of the new design and ensure prior functionality is still supported.
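    "Re-running previous sets of test cases" can be illustrated with a tiny replay loop. The function and its recorded results are hypothetical; the idea is that (input, expected) pairs captured from a known-good release are checked against the current code.

```python
# Function under test -- a stand-in for real application code.
def apply_discount(price: float, percent: float) -> float:
    return round(price * (1 - percent / 100), 2)

# (input, expected) pairs recorded from an earlier, known-good release:
regression_cases = [
    ((100.0, 10.0), 90.0),
    ((59.99, 0.0), 59.99),
    ((20.0, 100.0), 0.0),   # guards a previously fixed fault -- keep checking it
]

# Replay every saved case against the current version of the code:
failures = [(args, want, apply_discount(*args))
            for args, want in regression_cases
            if apply_discount(*args) != want]
print("regressions:", failures)  # → regressions: []
```

    A non-empty `failures` list would mean previously working functionality has stopped working as intended, i.e. a regression.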

    Sunday, 31 May 2015

    What is Black-box Testing?


    Black-box testing




    [Figure: Black box diagram]

    Black-box testing treats the software as a "black box", examining functionality without any knowledge of internal implementation. The testers are only aware of what the software is supposed to do, not how it does it.[23] Black-box testing methods include: equivalence partitioning, boundary value analysis, all-pairs testing, state transition tables, decision table testing, fuzz testing, model-based testing, use case testing, exploratory testing and specification-based testing.
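    Two of the methods listed, equivalence partitioning and boundary value analysis, can be shown with a small sketch. The eligibility rule here is an assumed specification ("accept ages 18 to 65 inclusive"); the tests are derived purely from that specification, with no knowledge of the implementation.

```python
# Implementation (hidden from the black-box tester; shown only so the
# example runs):
def is_eligible(age: int) -> bool:
    return 18 <= age <= 65

# One representative value per equivalence class, plus the values at and
# just beyond each boundary:
cases = {
    17: False,  # just below the lower boundary
    18: True,   # lower boundary
    40: True,   # middle of the valid partition
    65: True,   # upper boundary
    66: False,  # just above the upper boundary
}
for age, expected in cases.items():
    assert is_eligible(age) == expected
print("all boundary cases pass")
```

    Five test cases cover the whole input space as seen from the specification, which is exactly the economy that equivalence partitioning aims for.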

    Specification-based testing aims to test the functionality of software according to the applicable requirements. This level of testing usually requires thorough test cases to be provided to the tester, who then can simply verify that for a given input, the output value (or behavior) either "is" or "is not" the same as the expected value specified in the test case. Test cases are built around specifications and requirements, i.e., what the application is supposed to do. It uses external descriptions of the software, including specifications, requirements, and designs, to derive test cases. These tests can be functional or non-functional, though they are usually functional.

    Specification-based testing may be necessary to assure correct functionality, but it is insufficient to guard against complex or high-risk situations.

    One advantage of the black-box technique is that no programming knowledge is required. Whatever biases the programmers may have had, the tester likely has a different set and may emphasize different areas of functionality. On the other hand, black-box testing has been said to be "like a walk in a dark labyrinth without a flashlight." Because testers do not examine the source code, there are situations where a tester writes many test cases to check something that could have been tested with only one test case, or leaves some parts of the program untested.

    This method of test can be applied to all levels of software testing: unit, integration, system and acceptance. It typically comprises most if not all testing at higher levels, but can also dominate unit testing.

    What is White-Box Testing?


    White-Box testing

    Main article: White-box testing
    White-box testing (also known as clear box testing, glass box testing, transparent box testing and structural testing) tests internal structures or workings of a program, as opposed to the functionality exposed to the end-user. In white-box testing an internal perspective of the system, as well as programming skills, are used to design test cases. The tester chooses inputs to exercise paths through the code and determine the appropriate outputs. This is analogous to testing nodes in a circuit, e.g. in-circuit testing (ICT).
    While white-box testing can be applied at the unit, integration and system levels of the software testing process, it is usually done at the unit level. It can test paths within a unit, paths between units during integration, and between subsystems during a system–level test. Though this method of test design can uncover many errors or problems, it might not detect unimplemented parts of the specification or missing requirements.
    Techniques used in white-box testing include:
    • API testing (application programming interface) – testing of the application using public and private APIs
    • Code coverage – creating tests to satisfy some criteria of code coverage (e.g., the test designer can create tests to cause all statements in the program to be executed at least once)
    • Fault injection methods – intentionally introducing faults to gauge the efficacy of testing strategies
    • Mutation testing methods
    • Static testing methods
    Code coverage tools can evaluate the completeness of a test suite that was created with any method, including black-box testing. This allows the software team to examine parts of a system that are rarely tested and ensures that the most important function points have been tested.[22] Code coverage as a software metric can be reported as a percentage for:
    • Function coverage, which reports on functions executed
    • Statement coverage, which reports on the number of lines executed to complete the test
    • Decision coverage, which reports on whether both the True and the False branch of a given test has been executed
    100% decision coverage ensures that both branches of every decision (in terms of control flow) are executed at least once; 100% statement coverage ensures that every statement is executed at least once. Either is helpful in ensuring correct functionality, but not sufficient, since the same code may process different inputs correctly or incorrectly.
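    The coverage levels above can be made concrete with a two-branch function; the function itself is an illustrative assumption.

```python
# A function with a single decision:
def classify(temp_c: float) -> str:
    if temp_c >= 100:          # decision under test
        return "boiling"
    return "not boiling"

assert classify(120) == "boiling"       # exercises the True branch
assert classify(20) == "not boiling"    # exercises the False branch
# These two tests already give 100% statement and decision coverage --
# yet they are not sufficient: the behaviour exactly at the boundary
# is only checked if we add a further test.
assert classify(100) == "boiling"
print("decision coverage achieved")
```

    This is the point of the closing caveat: full coverage says every path ran, not that every input along those paths is handled correctly.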

    Tuesday, 26 May 2015

    What is Integration Testing?

    Integration testing

     Integration testing is any type of software testing that seeks to verify the interfaces between components against a software design. Software components may be integrated in an iterative way or all together ("big bang"). Normally the former is considered a better practice since it allows interface issues to be located more quickly and fixed.

    Integration testing works to expose defects in the interfaces and interaction between integrated components (modules). Progressively larger groups of tested software components corresponding to elements of the architectural design are integrated and tested until the software works as a system.
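    A minimal sketch of an integration test: two already unit-tested components are wired together and the test exercises the interface between them, not either component in isolation. Both class names and behaviours are illustrative assumptions.

```python
class UserStore:
    """Component A: stores user records."""
    def __init__(self):
        self._users = {}
    def save(self, name: str, email: str):
        self._users[name] = email
    def get(self, name: str):
        return self._users.get(name)

class Mailer:
    """Component B: depends on UserStore through its interface."""
    def __init__(self, store: UserStore):
        self.store = store
        self.sent = []
    def notify(self, name: str, message: str) -> bool:
        email = self.store.get(name)   # the interface under test
        if email is None:
            return False
        self.sent.append((email, message))
        return True

# Integrate the two real components (no mocks) and exercise the seam:
store = UserStore()
mailer = Mailer(store)
store.save("ada", "ada@example.com")
assert mailer.notify("ada", "hello") is True
assert mailer.notify("ghost", "hello") is False
print(mailer.sent)  # → [('ada@example.com', 'hello')]
```

    Integrating iteratively, as the text recommends, means adding one component at a time to such a test so an interface defect points directly at the newest seam.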

    What is Unit Testing?


    Unit testing

    Unit testing, also known as component testing, refers to tests that verify the functionality of a specific section of code, usually at the function level. In an object-oriented environment, this is usually at the class level, and the minimal unit tests include the constructors and destructors.

    These types of tests are usually written by developers as they work on code (white-box style), to ensure that the specific function is working as expected. One function might have multiple tests, to catch corner cases or other branches in the code. Unit testing alone cannot verify the functionality of a piece of software, but rather is used to ensure that the building blocks of the software work independently from each other.
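    "One function, multiple tests to catch corner cases or other branches" can be sketched as follows; the function and its corner cases are illustrative assumptions written in the white-box style the text describes.

```python
# Unit under test: a small parsing helper with two defensive branches.
def parse_int(text: str, default: int = 0) -> int:
    text = text.strip()
    if not text:
        return default          # corner case: empty or whitespace-only input
    try:
        return int(text)
    except ValueError:
        return default          # corner case: non-numeric input

# Multiple tests for the single function, one per branch/corner case:
assert parse_int("42") == 42                   # happy path
assert parse_int("  -7 ") == -7                # surrounding whitespace
assert parse_int("") == 0                      # empty-string branch
assert parse_int("abc", default=-1) == -1      # error branch
print("unit tests pass")
```

    As the paragraph says, these tests confirm the building block works on its own; they say nothing about how `parse_int` behaves once integrated with the rest of the software.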

    Unit testing is a software development process that involves the synchronized application of a broad spectrum of defect prevention and detection strategies in order to reduce software development risks, time, and costs. It is performed by the software developer or engineer during the construction phase of the software development lifecycle. Rather than replacing traditional QA practices, it augments them. Unit testing aims to eliminate construction errors before code is promoted to QA; this strategy is intended to increase the quality of the resulting software as well as the efficiency of the overall development and QA process.

    Depending on the organization's expectations for software development, unit testing might include static code analysis, data flow analysis, metrics analysis, peer code reviews, code coverage analysis and other software verification practices.

    Monday, 25 May 2015

    Why, How and When to Automate Software Testing?





    As a Software Test Engineer, you have probably come across the question "Why, how and when should we automate software testing?" Some testers think that automated testing has no advantage over manual testing, and we also hear from some readers about their use of automation to test applications.
    Having listened to these different opinions, I thought that instead of answering questions individually, it would be better to have a logical discussion here. Automation testing is a huge topic, so I encourage our smart readers from different areas to contribute their experience with automation testing in the comments below.
    Manual testing is preferable over automation testing in the following cases:
    • If the UI of the system under test changes frequently: every time the UI changes, the automated scripts need to be updated accordingly.
    • If you have tight release deadlines and not enough time to automate the system, go with manual testing instead of automation testing.
    • Identify the test cases that are going to be executed only once, and do not automate such test cases.
    • Automation testing requires skilled resources with sufficient programming knowledge. If you do not have skilled resources to automate the application under test, or if you are not ready to invest the time and money to train people and build a good automation team, don’t go for automation testing.

    Testing Tips

    Is it possible to test the application manually whenever it gets updated? Yes, it is possible, but it will take longer, or sometimes it is simply not feasible, and it won’t be effective in terms of company cost, resources, time, etc.
    The best approach is to automate the tests when the application version changes often and you have a lot of regression work.
    So, to keep the application bug-free, the tester needs to test it frequently. How long the test automation process takes depends entirely on the application: whether it is small or big, and how many bugs are introduced into it.

    4) Increase Test Coverage:

    Automated software testing focuses more on the depth and scope of tests, which increases the quality of the software. An automated testing process can work through thousands of different complex test cases, which is not possible with manual testing. If the software is huge and complex, manual testers may be reluctant to test it, whereas testers who use automation can work on it easily; automation also allows testers to exercise the software on multiple computers with different configurations. An automated process is capable of checking the application’s internal database, data tables, memory, and files to determine whether the application is performing as expected.

    5) Increases Speed, Efficiency, Quality and Decreases the Cost:

    When we start developing software, our main goal is to release it on time. An automation testing process can reuse the same module in different test scenarios and runs fast. Automated regression tests provide continuous confidence in system stability and functionality after changes to the software are completed, leading to shorter development cycles combined with better-quality software; the benefits of automated testing thus quickly outweigh the initial costs.

    6) Testers get Motivated which increases the efficiency:

    With manual testing, testers do not get to use new techniques and tools; they apply manual tricks to test the software, so they can lose motivation, which affects their efficiency. With automation testing, testers are always working with different tools and testing software, which helps them work fast with increasing efficiency.

    7) Helpful in testing complex web application:

    An automated testing process is helpful for web applications where millions of users interact simultaneously. With a manual testing process, creating that many users manually and simultaneously is difficult or impossible.
    So, to test such web applications, go for automated load testing and create virtual users to check the load capacity of the web application.
    An automated testing process can also be used on software where the GUI stays the same but the functionality keeps changing due to source code changes.
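    The "virtual users" idea can be sketched with threads. This is a toy illustration only: real load tools simulate far larger user counts against real endpoints, whereas here the endpoint is a stand-in function and the latency is simulated with a sleep.

```python
import threading
import time

results = []
lock = threading.Lock()

def fake_request(user_id: int):
    """Stand-in for one virtual user hitting the application."""
    start = time.perf_counter()
    time.sleep(0.01)                 # simulated network/server latency
    elapsed = time.perf_counter() - start
    with lock:                       # results list is shared across threads
        results.append((user_id, elapsed))

# Launch 50 concurrent virtual users:
threads = [threading.Thread(target=fake_request, args=(i,)) for i in range(50)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(f"{len(results)} virtual users completed")  # → 50 virtual users completed
```

    A real load test would additionally record response times per user and ramp the user count up until the application's capacity limit is found.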

    Conclusion:

    In today's fast-moving world, automation testing plays a vital role in achieving most testing goals with effective use of resources and time. However, before you start automating test tasks, you should be careful about choosing the automation tool.
    Make sure that you have skilled people before deciding when to automate software testing. If you don't, you will not get a return on what you invested in an expensive automation tool, which leads to frustration. You should list all your requirements before choosing an automation tool. No single tool can support every requirement, so to overcome the limitations of an automation tool you need to fall back on manual testing techniques. If you don't have the budget for a paid version, start with an open source tool; open source tools are also a good option for getting started with automation.
    Many of our readers work in manual and/or automation testing. If you would like to share your experience, feel free to express your views in the comments section below.

    Software Testing Tools List



    Nowadays, we can find lots of software testing tools in the market. Selection of tools is based entirely on the project requirements and on whether you are interested in commercial (proprietary) tools or free (open source) tools. Of course, free testing tools may have some limitations in their feature list, so it comes down to what you are looking for: whether a free version fulfils your requirements or you should go for a paid software testing tool.
    The tools are divided into different categories as follows:

    • Test Management tools
    • Functional Testing Tools
    • Load Testing Tools

    Here you can find the most popular free and paid testing tools used in the actual testing of software applications.



    1) Open Source Tools


    a) Test Management tools

    • TET (Test Environment Toolkit)
      • The goal behind creating the Test Environment Toolkit (TET) was to produce a test driver that accommodated the then-current and anticipated future testing needs of the test development community. To achieve this goal, input from a wide sample of the community was used for the specification and development of TET’s functionality and interfaces.
    • TETware
      • TETware is a test execution management system which allows you to do test administration, test sequencing, and reporting of test results in a standard format (IEEE Std 1003.3-1991). It supports both UNIX and 32-bit Microsoft Windows operating systems, so the test cases you develop are portable. TETware allows testers to work on a single, standard test harness, which helps you deliver software projects on time. It is readily available via FTP download.
    • Test Manager
      • Test Manager is an automated software testing tool used in day-to-day testing activities. It is developed in the Java programming language. Such test management tools are used to facilitate regular software development activities and to automate and manage testing activities. Currently Test Manager 2.1.0 is available for download. If you want to learn more about Test Manager, click here to get the latest copy for free.
    • RTH
      • RTH stands for “Requirements and Testing Hub”. It is an open source test management tool that can be used as a requirements management tool, and it also provides bug tracking facilities. From here you can download the latest version of RTH.

    b) Functional Testing Tools


    c) Load Testing Tools


    2) Proprietary/Commercial tools


    a) Test Management tools


    b) Functional Testing Tools


    c) Load Testing Tools