1. What are some reasons not to automate?
a. No design document
b. No formal test plan
c. Testing budget is minimal
d. All of the above

2. What are some problems with automation tools?
a. Tools always have bugs of their own.
b. Some applications do not run well with automation tools.
c. Bleeding-edge applications tend to cost more to automate.
d. All of the above

3. What do you understand by the term quality?
a. Bug free
b. Reasonably bug free
c. Enough budget
d. None of the above

4. What is the major focus of white box testing?
a. Internal structure
b. Logic paths
c. Control flow
d. All of the above

5. Which reviews are required in order to ensure proper tracking of software between phases of a project?
1. Product feasibility
2. Software requirements
3. Software design
4. Acceptance test
a. 1 and 2 only
b. 2 and 3 only
c. 1, 2 and 3 only
d. 2, 3 and 4 only

6. A software firm has just signed a contract to deliver an inventory tracking / online transaction system for use by 500 entry clerks. The client has demanded a schedule of rigorous checkpoints, but the requirements for the project are poorly defined. Which of the following would be most suitable as a development model?
a. Spiral
b. Top down
c. Rapid prototype
d. Waterfall

7. What happens to the relative cost of fixing software errors from the requirements phase through the test phase?
a. It decreases linearly
b. It remains fairly constant
c. It increases linearly
d. It increases exponentially

8. Which of the following is not an expected code inspection technique?
a. Domain analysis
b. Item-by-item paraphrasing
c. Mental code execution
d. Consistency analysis

9. The defect density of a computer program is best defined as the:
a. Ratio of failure reports received per unit of time
b. Ratio of discovered errors per size of code
c. Number of modifications made per size of code
d. Number of failures reported against the code

10. A module includes a control flow loop that can be executed 0 or more times. The test which is most likely to reveal loop initialization defects executes the loop body:
a. 0 times
b. 1 time
c. 3 times
d. 4 times

11. Based on the table below, which of the following represents the total number of defect escapes from the coding phase?

Phase        | Defects introduced | Defects found and removed
Requirements | 12                 | 9
Design       | 25                 | 16
Code         | 47                 | 42

a. 5
b. 9
c. 12
d. 17

12. Software reliability is normally defined in terms of:
a. The probability of failure-free operation
b. The defect density of the software product
c. The operational profile of the system
d. The mean time to repair a defect

13. Defect containment is defined as:
a. Defects should be found in the same stage
b. Defects should move to the next stage
c. Defects found either in the same stage or the next stage
d. None of the above

14. What is the cyclomatic complexity of the pseudocode below?

do while records remain
    read record;
    if record field #1 = 0
        then process record; store in buffer; increment counter;
        else store in file; reset counter;
    end if
end do

a. 3
b. 4
c. 6
d. 1

15. Use cases and noun lists are primarily associated with which of the following requirements analysis methodologies?
a. Information engineering
b. Object-oriented analysis
c. Structured analysis
d. Functional analysis

16. According to a good testing strategy, it is less costly to:
a. Let the customer find the defects
b. Detect defects rather than prevent them
c. Prevent defects rather than detect them
d. Ignore minor defects

17. In which cases would it be better not to choose automation?
a. When the budget is not adequate
b. When there are only one-time activities
c. When the project is very small
d. All of the above

18. A review falls under what category of quality cost?
a. Preventive
b. Appraisal
c. Failure
d. None of the above

19. The purpose of software testing is to:
a. Demonstrate that the application works properly
b. Detect the existence of defects
c. Validate the logical design
d. None of the above

20. Defects are least costly to correct at what stage of the development cycle?
a. Requirements
b. Analysis & design
c. Construction
d. Implementation

21. Which of the following tools is used for scalability testing?
a. LoadRunner
b. TestDirector
c. WinRunner
d. QuickTest Pro

22. At what stage of the life cycle does testing begin?
a. Requirements & analysis
b. Planning
c. Design
d. Coding

23. What is the correct sequence in a bug tracking system?
a. Assigning, logging, fixing, retesting, closure
b. Logging, assigning, fixing, retesting, closure
c. Assigning, logging, retesting, fixing, closure
d. Logging, assigning, retesting, fixing

24. Which of the following does not come under technical testing?
a. Volume and performance testing
b. Operational testing
c. Deployment testing
d. Integration testing
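For question 14, the answer can be checked with McCabe's standard cyclomatic complexity formula. The pseudocode contains two decision points (the do-while condition and the if condition), so:

```latex
V(G) = P + 1 = 2 + 1 = 3
```

where P is the number of predicate (decision) nodes; equivalently, V(G) = E - N + 2 over the edges and nodes of the control-flow graph gives the same result.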
Posted by Prasanna Babu.V at 10:10 PM
Tuesday, September 25, 2007
AUTOMATION
QUICK TEST PROFESSIONAL
There are 2 ways of testing:
1. Manual testing
2. Automation testing

1. Manual testing
It is a process in which all the phases of the software testing life cycle (test planning, test development, test execution, result analysis, bug tracking and reporting) are accomplished manually with human effort.

Drawbacks of manual testing:
1. More people are required
2. Time consuming
3. Human errors
4. Tiredness
5. Repeating the same steps is tedious
6. Simultaneous actions are not possible

2. Automation testing
It is a process in which the drawbacks of manual testing are addressed properly, adding speed and accuracy to the existing testing process.
Note: Automation testing is not a replacement for manual testing.
"An automation tool is an assistant of a test engineer."

General framework for learning any automated tool:
A test engineer should know the following things to work with any automated tool:
1. How to give the instructions
2. How to give the information
3. How to use its recording facility
4. How to make the tool repeat the actions
5. How to analyze the results

Types of automated tools: there are 3 types of automated tools:
1. Functional tools (QTP, WinRunner)
2. Management tools (Quality Center or TestDirector)
3. Performance tools (LoadRunner)

Quick Test Professional (QTP)
History of QTP
Type of tool: functional tool
Company: Mercury Interactive Incorporation
Initial versions: 5.5, 7.6, 8.0, 8.2, 9.0 and 9.1
Operating systems for QTP: Windows 2000 Server, Advanced Server and 2000 Professional
Ex: 1
VbWindow("Emp").VbEdit("Emp name").Set "prasanna"
VbWindow("Emp").VbEdit("Emp age").Set "20"
VbWindow("Emp").VbEdit("Emp sal").Set "25000"
VbWindow("Emp").VbEdit("Emp desg").Set "Tester"
VbWindow("Emp").VbButton("submit").Click

Ex: 2
VbWindow("form1").VbEdit("val1").Set "10"
VbWindow("form1").VbEdit("val2").Set "20"
VbWindow("form1").VbButton("sub").Click
VbWindow("form1").VbButton("mul").Click
VbWindow("form1").VbButton("div").Click
VbWindow("form1").VbButton("clear").Click

Anatomy of QTP:
Add-in Manager: it is a feature provided by QTP used for making QTP compatible with a specified environment. By default QTP provides 3 add-ins:
1. Visual Basic
2. ActiveX
3. Web
Apart from these 3 add-ins, QTP is always compatible with the standard Windows environment.

The QTP screen is divided into 5 parts:
1. Test pane
2. Active screen
3. Data table
4. Debug viewer pane
5. Tool options

1. Test pane: it is an area provided by QTP used for viewing the test script. It also allows the user to make any kind of modifications to the test script. The test pane represents the script in 2 ways:
- Expert view
- Keyword view
a. Expert view: represents the script in VBScript format.
b. Keyword view: represents the script using a graphical user interface which is divided into 4 parts:
- Item
- Operation
- Value
- Documentation

2. Active screen: it is a feature provided by QTP which holds a snapshot of the application state for each and every script statement.
Features:
- It is used for understanding the script easily.
- It is used for enhancing the script easily.

3. Data table: it is also called a Formula One sheet, developed by a third party and integrated with QTP.
Features:
- It is used for holding the test data.
- It provides a facility to import test data from different data sources like a database, an Excel sheet or a flat file.
- It allows the user to enter and modify data directly in it.
- It isolates the test script from the data source.
QTP maintains 2 copies of the data table:
1. Design-time data table
2. Run-time data table

4. Debug viewer pane: it is used for viewing, modifying or setting the current values of variables, and provides command tabs.

5. Tool options: the options that are available in the menu bars and toolbars are known as tool options.

Recording and running

Record and run settings: it is a feature provided by QTP used for making QTP aware of which applications a test engineer is about to perform record and run operations on. A test engineer has to compulsorily use this option for every new test. For doing the same, this feature provides 2 options:
- Record and run on any open Windows application
- Record and run only on these applications
Navigation:
- Activate the menu item Test
- Select the option Record and Run Settings
- Select one of the following options:
  - Record and run on any open Windows application
  - Record and run only on these applications
- If the second option is selected, click on the Add button
- Browse the desired file to be added
- Click on OK
- Click on Apply and OK

Operational overview of recording:
During recording, QTP does the following:
- Every user action is converted into a script statement.
- The corresponding object information is stored in the object repository.

Operational overview of running:
- QTP tries to understand the script statement.
- After understanding what action is to be performed and on which object, it realizes that it needs to identify that object.
- In order to identify the object, it needs the corresponding information.
- For that it goes to the object repository and searches for the information.
- If the information is found, it uses that information to try to identify the object.
- If the object is identified, it performs the action.

Types of recording modes:
1. Context-sensitive recording mode: it is used for recording the operations performed on standard GUI objects.
2. Analog recording: it is a special recording mode provided by QTP to record continuous actions performed on the application.
Navigation:
- Keep the tool under normal recording mode
- Activate the menu item Test
- Select the option Analog Recording
- Select one of the following options:
  - Record relative to the screen
  - Record relative to the following window
- If the second option is selected, specify the window title using the hand icon
- Click on the Start Analog Record button
3. Low-level recording: it is used for recording at least the minimum operations on applications which are developed with a non-supported environment.

OBJECT REPOSITORY
It is a storage place where one can store object information; it also acts as an interface between the test script and the AUT in order to identify objects during execution.

Types of object repository:
There are 2 types of object repository.
1. Per-action object repository: for each and every action in a test, QTP creates and manages an individual repository.
Disadvantage: difficult to maintain.
2. Shared repository: it is a common place where one can store object information, and it can be associated with multiple tests.
Advantage: easy to maintain, and it can be used by multiple test cases.
Note: a shared repository has to be created manually and associated manually with the test.

Object identification:
It is a concept based on four types of properties and ordinal identifiers.
Types of properties:
1. Mandatory properties
2. Assistive properties
3. Base filter properties
4. Optional filter properties
A test engineer can specify the list of mandatory, assistive, base filter and optional filter properties.

QTP learns the properties in the following sequence:
1. First of all, QTP learns the complete list of mandatory properties and then checks whether these properties are sufficient for identifying the object uniquely.
2. If they are sufficient, it stops learning; otherwise it learns the first assistive property and once again checks whether these properties are sufficient for identifying the object uniquely.
3. If they are not sufficient, it learns the second assistive property and checks again. This process continues until QTP is satisfied or it reaches the end of the assistive properties list.
4. If the properties are still not sufficient, it learns the ordinal identifier.
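The learning sequence above can be sketched as pseudologic. This is only an illustration of the rule; QTP performs it internally, and `UniquelyIdentifies`, `mandatoryProps`, `assistiveProps` and `ordinalId` are hypothetical names, not real QTP functions:

```vbscript
' Illustrative sketch of QTP's property-learning rule (not a real API).
learned = mandatoryProps                           ' step 1: always learn the mandatory properties
i = 0
Do While Not UniquelyIdentifies(learned) And i <= UBound(assistiveProps)
    learned = learned & "," & assistiveProps(i)    ' steps 2-3: add assistive properties one by one
    i = i + 1
Loop
If Not UniquelyIdentifies(learned) Then
    learned = learned & "," & ordinalId            ' step 4: fall back to the ordinal identifier
End If
```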
Note 1: If the smart identification mechanism is enabled, QTP learns the information as above, but along with the mandatory properties it also learns the base filter properties and optional filter properties. Even though it learns the BFP and OFP, it does not use them during normal identification; it just learns them and stores them quietly in the object repository.
Note 2: Other than the BFP and OFP, all the remaining properties and the ordinal identifier are stored in the object repository.

QTP uses this information to identify the object during execution in the following way:
First of all, QTP considers all the properties present in the object repository (i.e., the mandatory and assistive properties) and tries to identify the object. If that fails, it uses the smart identification mechanism in the following way.
It considers the complete list of base filter properties and prepares a list of objects which match all of these properties. If the list contains only one object, that is the object. Otherwise it takes the help of the first optional filter property and prepares a new list of objects which match this OFP (optional filter property). If the list still contains more than one object, it considers the second optional filter property and tries to match it against all the objects present in the new list; the objects that do not match are filtered out and a new list of objects is formed. If the new list contains more than one object, QTP proceeds with the above procedure until the list contains one object or it reaches the end of the OFP list. If QTP is still unable to identify the object, and the ordinal identifier was learned, it uses the ordinal identifier to identify the object.

Ordinal identifiers
There are 3 types of ordinal identifiers:
1. Location
2. Index
3. Creation time
1. Location: QTP generates a sequence of numbers 1, 2, 3, 4, ... based on the position of the objects as located in the AUT.
2. Index: QTP generates a sequence of numbers 0, 1, 2, 3, ... based on the order in which the program was written for those objects.
3. Creation time: QTP generates a sequence of numbers 0, 1, 2, 3, ... based on the loading time of the web pages.
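As a hedged illustration, an ordinal identifier can also be used directly in a script through QTP's descriptive programming syntax (a feature not covered above). Here `index:=1` picks the second of two otherwise identical buttons; the window and button names are hypothetical:

```vbscript
' Two "OK" buttons with identical properties: distinguish them by the index ordinal identifier.
VbWindow("form1").VbButton("text:=OK", "index:=0").Click   ' first OK button in program order
VbWindow("form1").VbButton("text:=OK", "index:=1").Click   ' second OK button
```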
Object Spy
It is a feature provided by QTP which shows the complete object information (the list of properties, the list of methods, the syntax of the methods and descriptions of the methods) for both test objects as well as run-time objects, on the spot.

QTP LIFE CYCLE
1. Test planning
2. Generating the basic test
3. Enhancing the test
4. Debugging the test
5. Executing the test
6. Analyzing the results

1. Test planning
The automation test lead does the following in this phase:
- Understands the requirements.
- Identifies the areas to be automated.
- Analyzes both the positive and negative flows of the application.
- Based on this analysis, prepares the test plan.
- Gets the tool ready with all pre-configuration settings for further operations.

2. Generating the basic test
A test engineer generates the basic test for both the positive and negative flows of the application.

3. Enhancing the test
One can enhance the test in the following ways:
- Inserting checkpoints
- Synchronizing the test
- Parameterizing the test (data-driven testing)
- Inserting output values
- Measuring transactions
- Enhancing the test with programmatic statements
- Adding comments
- Inserting script statements manually

Inserting checkpoints:
A "checkpoint" is a validation point or test point which checks the object state, bitmap state or data state during the execution phase at any point of time.

Operational overview of a checkpoint:
A checkpoint works in 2 phases:
1. Pre-execution phase: it captures the expected value while the basic test is generated.
2. Execution phase: it captures the actual value, compares it with the expected value and displays the result.

Types of checkpoints:
- Standard checkpoint
- Bitmap checkpoint
- Text checkpoint
- Text area checkpoint
- Database checkpoint
- XML checkpoint
- Page checkpoint
- Table checkpoint
- Image checkpoint
- Accessibility checkpoint

Standard checkpoint: used for checking the property values of standard GUI objects.
Navigation through the application:
- Keep the tool under recording mode
- Activate the menu item Insert
- Go to Checkpoint
- Select the option Standard Checkpoint
- Click on the desired object
- Ensure that the corresponding object is selected in the object hierarchy and click on OK
- Select the desired properties to be checked (e.g. width and height)
- Stop recording
Navigation through the active screen:
- Keep the cursor on the desired statement so that the corresponding snapshot is available in the active screen
- Go to the active screen
- Right-click on the desired object and select the option Insert Standard Checkpoint
- Ensure that the corresponding object is selected in the object hierarchy and click on OK
- Select the desired properties to be checked (e.g. x, y, width and height)
- Select one of the following options:
  - Before current step
  - After current step
- Click on OK

Bitmap checkpoint: used for checking complete bitmaps or parts of bitmaps.
Text checkpoint: used for checking the text present on a specified object.
Text area checkpoint: used for checking the text present in a specified area.
Note: a text area checkpoint cannot be inserted through the active screen; it can be inserted only through the application.
Database checkpoint: used for checking the contents of a database.
XML checkpoint: XML is a universally understandable language used for data transfer. An XML checkpoint is used for checking the contents of an XML file.
Page checkpoint: used for checking the properties of a page, like load time, number of images and number of links.
Table checkpoint: used for checking the contents of a web table.
Image checkpoint: used for checking the properties of an image.
Accessibility checkpoint: used for checking against the W3C (World Wide Web Consortium) standards.

Synchronizing the test:
It is the process of matching the speeds of the tool and the application in order to keep them in sync with each other and obtain proper results. The main concept here is making the tool wait until the application finishes its task. This can be done in 3 ways:
1. Inserting a synchronization point
2. Inserting a wait statement
3. Increasing the default timeout

Navigation for inserting a synchronization point:
- Keep the cursor at the desired location
- Keep the tool under recording mode
- Activate the menu item Insert
- Go to Step and select the option Synchronization Point
- Click on the desired object, ensure that the corresponding object is selected in the object hierarchy and click on OK
- Specify the desired property name and value (true), and specify the extra time in milliseconds
- Click on OK
- Stop recording

In order to avoid the above navigation, one can directly insert the following statement in the script at the desired location.
Syntax: object hierarchy.WaitProperty "property name", property value, extra time in milliseconds
Ex: Window("Flight Reservation").WinButton("Delete Order").WaitProperty "enabled", True, 10000

Wait statement:
It is used for making the tool wait until the specified time has elapsed.
Syntax: Wait(20) 'time in seconds

Navigation for increasing the default timeout:
- Activate the menu item Test
- Select the option Settings and go to the Run tab
- Specify the desired time in milliseconds in the Object Synchronization Timeout field
- Click on Apply and OK

Parameterizing the test, or data-driven testing:
Data-driven testing is a concept provided in QTP in order to implement retesting.
Navigation for data-driven testing:
- Collect the required data into the data table
- Generate the basic test
- Parameterize the test
- Analyze the results

Parameterization:
It is the process of replacing constant values with parameters or variables in order to increase the scope of the test. Parameterization can be done in 3 ways:
1. Data Driver wizard
2. Keyword view
3. Manually, in the expert view

Navigation through the Data Driver wizard:
- Activate the menu item Tools
- Select the option Data Driver
- Select the desired constant value to be parameterized
- Click on the Parameterize button
- Click on Next
- Click on the Parameter option button
- Specify the desired column name
- Click on OK and Finish

Navigation through the keyword view:
- Go to the keyword view
- Select the desired constant value
- Click on the configure value button
- Select the option Parameter
- Specify the desired column name
- Click on OK

In order to avoid the above navigation, one can directly write the script in the following way:
VbWindow("form1").VbEdit("val1").Set DataTable("v1", 1)
VbWindow("form1").VbEdit("val2").Set DataTable("v2", 1)
VbWindow("form1").VbButton("add").Click
VbWindow("form1").VbEdit("res").Check CheckPoint("res")

Navigation for parameterizing a checkpoint:
- Right-click on the checkpoint statement
- Select the option Checkpoint Properties
- Select the desired property whose value is to be parameterized
- Select the option Parameter
- Click on the Parameter button
- Specify the desired column name
- Click on OK
Ex: DataTable("ordno", 1) supplies the expected value for the checkpoint.

Output value:
It is a feature provided by QTP used for capturing a value from the application, from a database or from an XML file during execution and storing it under a specified column in the run-time data table.

Operational overview of output values:
It is divided into 2 phases:
1. Pre-execution phase: capture the field name whose value is to be captured.
2. Execution phase: capture the actual value from the field and store the captured value under a specified column in the run-time data table.

Types of output values:
1. Standard output value
2. Text output value
3. Text area output value
4. Database output value
5. XML output value

Navigation for output values:
- Keep the tool under recording mode
- Activate the menu item Insert, go to Output Value and select Standard Output Value
- Click on the specified object or field
- Click on OK
- Select the desired property whose value is to be captured
- Click on the Modify button and specify the desired column name
- Click on OK, and click on OK again

Measuring transactions:
It is a concept provided by QTP in order to calculate the execution time taken by a block of statements, or the time taken by an application to accomplish a task. To do the same, QTP provides 2 options:
1. Start transaction
2. End transaction

Navigation for inserting a start transaction:
- Keep the cursor at the desired location
- Activate the menu item Insert
- Select the option Start Transaction
- Specify the desired name (anything)
- Select one of the following options:
  - Before current step
  - After current step
- Click on OK

In order to avoid the above navigation, one can insert the following statements directly in the script:
Ex: Services.StartTransaction "trans"
    Services.EndTransaction "trans"

Inserting programmatic statements:
Programmatic statements are of the following types:
1. Normal (object) statements
2. Conditional statements
3. Comments
4. Utility statements

Navigation:
- Go to the keyword view
- Activate the menu item Insert and go to Step
- Select the desired option

Reporter utility object:
It is used for reporting a user-defined message to the result window.
Syntax: Reporter.ReportEvent status, "report name", "message"
Ex: 'This program is a demo of looping and conditions
For i = 1 To 10
    If i = 10 Then
        MsgBox "hai"
        Reporter.ReportEvent micPass, "myrep", "condition is satisfied"
    Else
        MsgBox "bye"
        Reporter.ReportEvent micFail, "myrep", "condition is not satisfied"
    End If
Next
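Putting several of the enhancement techniques above together, a minimal sketch of an enhanced test might look like the following. This must run inside QTP with the application open; the window, control and data-table column names ("Flight", "Agent", "agent" and so on) are hypothetical placeholders, not part of any real application:

```vbscript
Services.StartTransaction "booking"                                       ' measure the whole flow
VbWindow("Flight").VbEdit("Agent").Set DataTable("agent", dtGlobalSheet)  ' parameterized input
VbWindow("Flight").VbButton("OK").Click
' Synchronization: wait up to 10 seconds for the button to become enabled
VbWindow("Flight").VbButton("Delete Order").WaitProperty "enabled", True, 10000
VbWindow("Flight").VbEdit("Name").Check CheckPoint("Name")                ' standard checkpoint
Services.EndTransaction "booking"
If VbWindow("Flight").Exist(5) Then
    Reporter.ReportEvent micPass, "booking", "window is still displayed"
Else
    Reporter.ReportEvent micFail, "booking", "window disappeared"
End If
```

Each statement here corresponds to one of the enhancements described above (transaction, parameterization, synchronization point, checkpoint and reporter message).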
Posted by Prasanna Babu.V at 11:07 PM
Monday, September 24, 2007
i-Info
What Is a Bug?

You've just read examples of what happens when software fails. It can be inconvenient, as when a computer game doesn't work properly, or it can be catastrophic, resulting in the loss of life. It can cost only pennies to fix but millions of dollars to distribute a solution. In the examples above, it was obvious that the software didn't operate as intended. As a software tester you'll discover that most failures are hardly ever this obvious. Most are simple, subtle failures, with many being so small that it's not always clear which ones are true failures, and which ones aren't.

Why Do Bugs Occur?

Now that you know what bugs are, you might be wondering why they occur. What you'll be surprised to find out is that most of them aren't caused by programming errors. Numerous studies have been performed on very small to extremely large projects and the results are always the same. The number one cause of software bugs is the specification (see Figure 1.1).

The Cost of Bugs

As you will learn in Chapter 2, software doesn't just magically appear; there's usually a planned, methodical development process used to create it. From its inception, through the planning, programming, and testing, to its use by the public, there's the potential for bugs to be found. Figure 1.2 shows an example of how the cost of fixing these bugs can grow over time.

This final definition is very important. Commit it to memory and refer back to it as you learn the testing techniques discussed throughout the rest of this book.

NOTE
It's important to note that "fixing" a bug does not necessarily imply correcting the software. It could mean adding a comment in the user manual or providing special training to the customers. It could require changing the statistics that the marketing group advertises or even postponing the release of the buggy feature.
You'll learn throughout this book that although you're seeking perfection and making sure that the bugs get fixed, there are practical realities to software testing. Don't get caught in the dangerous spiral of unattainable perfection.

What Makes a Good Software Tester?

In the movie Star Trek II: The Wrath of Khan, Spock says, "As a matter of cosmic history, it has always been easier to destroy than to create." At first glance, it may appear that a software tester's job would be easier than a programmer's. Breaking code and finding bugs must surely be easier than writing the code in the first place. Surprisingly, it's not. The methodical and disciplined approach to software testing that you'll learn in this book requires the same hard work and dedication that programming does. It involves very similar skills, and although a software tester doesn't necessarily need to be a full-fledged programmer, having that knowledge is a great benefit.

Today, most mature companies treat software testing as a technical engineering profession. They recognize that having trained software testers on their project teams and allowing them to apply their trade early in the development process allows them to build better quality software. Unfortunately, there are still a few companies that don't appreciate the challenge of software testing and the value of a well-done testing effort. In a free market society, these companies usually aren't around for long because the customers speak with their wallets and choose not to buy their buggy products. A good test organization (or the lack of one) can make or break a company.

The goal of this chapter isn't to teach you everything about the software development process; that would take an entire book! The goal is to give you an overview of all the pieces that go into a software product and a look at a few of the common approaches in use today.
With this knowledge you'll have a better understanding of how best to apply the software testing skills you learn in the later chapters of this book.

The highlights of this chapter include:

Product Components

What exactly is a software product? Many of us think of it as simply a program that we download from the Internet or install from a DVD that runs on our computer. That's a pretty good description, but in reality, many hidden pieces go into making that software. There are also many pieces that "come in the box" that are often taken for granted or might even be ignored. Although it may be easy to forget about all those parts, as a software tester, you need to be aware of them, because they're all testable pieces and can all have bugs.

What Effort Goes Into a Software Product?

First, look at what effort goes into a software product. Figure 2.1 identifies a few of the abstract pieces that you may not have considered.

Figure 2.1. A lot of hidden effort goes into a software product.

- Project managers, program managers, or producers drive the project from beginning to end. They're usually responsible for writing the product spec, managing the schedule, and making the critical decisions and trade-offs.
- Architects or system engineers are the technical experts on the product team. They're usually very experienced and therefore are qualified to design the overall systems architecture or design for the software. They work very closely with the programmers.
- Programmers, developers, or coders design and write software and fix the bugs that are found. They work closely with the architects and project managers to create the software. Then, they work closely with the project managers and testers to get the bugs fixed.
- Testers or QA (Quality Assurance) staff are responsible for finding and reporting problems in the software product. They work very closely with all members of the team as they develop and run their tests, and report the problems they find. Chapter 21, "Software Quality Assurance," thoroughly covers the differences between software testing and software quality assurance tasks.
- Technical writers, user assistance, user education, manual writers, or illustrators create the paper and online documentation that comes with a software product.
- Configuration management or builders handle the process of pulling together all the software written by the programmers and all the documentation created by the writers and putting it together into a single package.

As you can see, several groups of people contribute to a software product. On large teams there may be dozens or hundreds working together. To successfully communicate and organize their approach, they need a plan, a method for getting from point A to point B. That's what the next section is about.

From a testing perspective, the waterfall model offers one huge advantage over the other models presented so far. Everything is carefully and thoroughly specified. By the time the software is delivered to the test group, every detail has been decided on, written down, and turned into software. From that, the test group can create an accurate plan and schedule. They know exactly what they're testing, and there's no question about whether something is a feature or a bug.

But, with this advantage, comes a large disadvantage. Because testing occurs only at the end, a fundamental problem could creep in early on and not be detected until days before the scheduled product release. Remember from Chapter 1, "Software Testing Background," how the cost of bugs increases over time? What's needed is a model that folds the testing tasks in earlier to find problems before they become too costly.

Chapter 3. The Realities of Software Testing

IN THIS CHAPTER
- Testing Axioms
- Software Testing Terms and Definitions

Testing Axioms

This first section of this chapter is a list of axioms, or truisms.
Think of them as the "rules of the road" or the "facts of life" for software testing and software development. Each of them is a little tidbit of knowledge that helps put some aspect of the overall process into perspective.

It's Impossible to Test a Program Completely

As a new tester, you might believe that you can approach a piece of software, fully test it, find all the bugs, and assure that the software is perfect. Unfortunately, this isn't possible, even with the simplest programs, due to four key reasons:

- The number of possible inputs is very large.
- The number of possible outputs is very large.
- The number of paths through the software is very large.
- The software specification is subjective. You might say that a bug is in the eye of the beholder.

Software Testing Terms and Definitions

This chapter wraps up the first section of this book with a list of software testing terms and their definitions. These terms describe fundamental concepts regarding the software development process and software testing. Because they're often confused or used inappropriately, they're defined here as pairs to help you understand their true meanings and the differences between them. Be aware that there is little agreement in the software industry over the definition of many, seemingly common, terms. As a tester, you should frequently clarify the meaning of the terms your team is using. It's often best to agree to a definition rather than fight for a "correct" one.

Many software testers have come into a project not knowing what was happening around them, how decisions were being made, or what procedure they should be following. It's impossible to be effective that way. With the information you've learned so far about software testing and the software development process, you'll have a head start when you begin testing for the first time.
You'll know what your role should be, or at least know what questions to ask to find your place in the big picture. For now, all the process stuff is out of the way, and the next chapter of this book begins a new section that will introduce you to the basic techniques of software testing.

When Testing Should Start

Testing early in the life cycle reduces the errors. Test deliverables are associated with every phase of development. The goal of the software tester is to find bugs, find them as early as possible, and make sure they are fixed.

The number one cause of software bugs is the specification. There are several reasons specifications are the largest bug producer. In many instances a spec simply isn't written. Other reasons may be that the spec isn't thorough enough, it's constantly changing, or it's not communicated well to the entire team. Planning software is vitally important; if it's not done correctly, bugs will be created.

The next largest source of bugs is the design. That's where the programmers lay the plan for their software. Compare it to an architect creating the blueprint for a building. Bugs occur here for the same reasons they occur in the specification: it's rushed, changed, or not well communicated.

Coding errors may be more familiar to you if you are a programmer. Typically these can be traced to software complexity, poor documentation, schedule pressure, or just plain dumb mistakes. It's important to note that many bugs that appear on the surface to be programming errors can really be traced to the specification. It's quite common to hear a programmer say, "Oh, so that's what it's supposed to do. If someone had told me that, I wouldn't have written the code that way."

The other category is the catch-all for what is left. Some bugs can be blamed on false positives, conditions that were thought to be bugs but really weren't. There may be duplicate bugs, multiple ones that resulted from the same root cause.
Some bugs can be traced to testing errors.

When to Stop Testing

This can be difficult to determine. Many modern software applications are so complex, and run in such an interdependent environment, that complete testing can never be done. "When to stop testing" is one of the most difficult questions for a test engineer. Common factors in deciding when to stop are:

· Deadlines (release deadlines, testing deadlines)
· Test cases completed with a certain percentage passed
· Test budget depleted
· Coverage of code/functionality/requirements reaches a specified point
· The rate at which bugs can be found is too small
· Beta or alpha testing period ends
· The risk in the project is under an acceptable limit

Practically, we feel that the decision to stop testing is based on the level of risk acceptable to the management. As testing is a never-ending process, we can never assume that 100% testing has been done; we can only minimize the risk of shipping the product to the client with X testing done. The risk can be measured by risk analysis, but for a small-duration / low-budget / low-resources project, risk can be deduced simply by:

· Measuring test coverage
· Number of test cycles
· Number of high-priority bugs

Test Strategy

How we plan to cover the product so as to develop an adequate assessment of quality. A good test strategy is:

· Specific
· Practical
· Justified

The purpose of a test strategy is to clarify the major tasks and challenges of the test project. Test Approach and Test Architecture are other terms commonly used to describe what I'm calling test strategy.

Example of a poorly stated (and probably poorly conceived) test strategy: "We will use black box testing, cause-effect graphing, boundary testing, and white box testing to test this product against its specification."

A test strategy covers: type of project, type of software, when testing will occur, critical success factors, and tradeoffs.

Test Plan - Why

· Identify risks and assumptions up front to reduce surprises later.
· Communicate objectives to all team
members.
· Foundation for Test Spec, Test Cases, and ultimately the bugs we find.
· Failing to plan = planning to fail.

Test Plan - What

· Derived from Test Approach, Requirements, Project Plan, Functional Spec., and Design Spec.
· Details out the project-specific Test Approach.
· Lists general (high-level) Test Case areas.
· Includes a testing Risk Assessment.
· Includes a preliminary Test Schedule.
· Lists resource requirements.

Test Plan

The test strategy identifies the multiple test levels that are going to be performed for the project. Activities at each level must be planned well in advance and formally documented. Based on the individual plans only, the individual test levels are carried out.

Entry means the entry point to that phase. For example, for unit testing, the coding must be complete; only then can one start unit testing. Task is the activity that is performed. Validation is the way in which the progress, correctness, and compliance are verified for that phase. Exit gives the completion criteria of that phase, after the validation is done. For example, the exit criterion for unit testing is that all unit test cases must pass.

Unit Test Plan {UTP}

The unit test plan is the overall plan to carry out the unit test activities. The lead tester prepares it and distributes it to the individual testers. It contains the following sections.

What is to be tested?

The unit test plan must clearly specify the scope of unit testing. Normally the basic input/output of the units, along with their basic functionality, will be tested. In this case mostly the input units will be tested for format, alignment, accuracy and totals. The UTP will clearly give the rules for what data types are present in the system, their format and their boundary conditions. This list may not be exhaustive, but it is better to have a complete list of these details.

Sequence of Testing

The sequence of test activities that are to be carried out in this phase is to be listed in this section.
This includes whether to execute positive test cases first or negative test cases first, whether to execute test cases based on priority, whether to execute test cases based on test groups, etc. Positive test cases prove that the system performs what it is supposed to do; negative test cases prove that the system does not perform what it is not supposed to do. Testing of the screens, files, database, etc. is to be given in proper sequence.

Basic Functionality of Units

How the independent functionalities of the units are tested, excluding any communication between the unit and other units. The interface part is out of scope of this test level. Apart from the above sections, the following sections are addressed, very specific to unit testing:

· Unit testing tools
· Priority of program units
· Naming convention for test cases
· Status reporting mechanism
· Regression test approach
· ETVX criteria

Integration Test Plan

The integration test plan is the overall plan for carrying out the activities in the integration test level. It contains the following sections.

What is to be tested?

This section clearly specifies the kinds of interfaces that fall under the scope of testing: internal and external interfaces, with request and response, are to be explained. This need not go deep into technical details, but the general approach of how the interfaces are triggered is explained.

Sequence of Integration

When there are multiple modules present in an application, the sequence in which they are to be integrated will be specified in this section. In this, the dependencies between the modules play a vital role. If a unit B has to be executed, it may need the data that is fed by unit A and unit X. In this case, the units A and X have to be integrated first, and then, using that data, the unit B has to be tested. This has to be stated for the whole set of units in the program.
Given this correctly, the testing activities will lead to the product, slowly building the product unit by unit and then integrating them.

System Test Plan {STP}

The system test plan is the overall plan for carrying out the system test level activities. In the system test, apart from testing the functional aspects of the system, some special testing activities are carried out, such as stress testing. The following sections are normally present in a system test plan.

What is to be tested?

This section defines the scope of system testing, very specific to the project. Normally, system testing is based on the requirements. All requirements are to be verified in the scope of system testing. This covers the functionality of the product. Apart from this, whatever special testing is performed is also stated here.

Functional Groups and the Sequence

The requirements can be grouped in terms of functionality. Based on this, there may also be priorities among the functional groups. For example, in a banking application, anything related to customer accounts can be grouped into one area and anything related to inter-branch transactions into another. In the same way, for the product being tested, these areas are to be mentioned here, and the suggested sequence of testing of these areas, based on the priorities, is to be described.

Acceptance Test Plan {ATP}

The client performs the acceptance testing at their place. It will be very similar to the system test performed by the software development unit. Since the client is the one who decides the format and testing methods as part of acceptance testing, there is no specific clue as to the way they will carry out the testing. But it will not differ much from the system testing.
Assume that all the rules which are applicable to the system test can be applied to acceptance testing also. Since this is just one level of testing done by the client for the overall product, it may include test cases covering the unit and integration test level details.

A sample test plan outline, along with descriptions, is shown below.

Test Plan Outline

1. BACKGROUND - This item summarizes the functions of the application system and the tests to be performed.
2. INTRODUCTION
3. ASSUMPTIONS - Indicates any anticipated assumptions which will be made while testing the application.
4. TEST ITEMS - List each of the items (programs) to be tested.
5. FEATURES TO BE TESTED - List each of the features (functions or requirements) which will be tested or demonstrated by the test.
6. FEATURES NOT TO BE TESTED - Explicitly lists each feature, function, or requirement which won't be tested and why not.
7. APPROACH - Describes the data flows and test philosophy: simulation or live execution, etc. This section also mentions all the approaches which will be followed at the various stages of the test execution.
8. ITEM PASS/FAIL CRITERIA - Blanket statement; itemized list of expected output and tolerances.
9. SUSPENSION/RESUMPTION CRITERIA - Must the test run from start to completion? Under what circumstances may it be resumed in the middle? Establish checkpoints in long tests.
10. TEST DELIVERABLES - What, besides software, will be delivered? Test report; test software.
11. TESTING TASKS - Functional tasks (e.g., equipment setup); administrative tasks.
12. ENVIRONMENTAL NEEDS - Security clearance; office space and equipment; hardware/software requirements.
13. RESPONSIBILITIES - Who does the tasks in Section 11? What does the user do?
14. STAFFING & TRAINING
15. SCHEDULE
16. RESOURCES
17. RISKS & CONTINGENCIES
18.
APPROVALS

The schedule details of the various test passes, such as unit tests, integration tests and system tests, should be clearly mentioned along with the estimated efforts.

Risk Analysis

A risk is a potential for loss or damage to an organization from materialized threats. Risk analysis attempts to identify all the risks and then quantify the severity of the risks. A threat, as we have seen, is a possible damaging event. If it occurs, it exploits a vulnerability in the security of a computer-based system.

Risk Identification:

1. Software Risks: Knowledge of the most common risks associated with software development, and the platform you are working on.
2. Business Risks: The most common risks associated with the business using the software.
3. Testing Risks: Knowledge of the most common risks associated with software testing for the platform you are working on, the tools being used, and the test methods being applied.
4. Premature Release Risk: The ability to determine the risk associated with releasing unsatisfactory or untested software products.
5. Risk Methods: Strategies and approaches for identifying risks or problems associated with implementing and operating information technology, products and processes; assessing their likelihood; and initiating strategies to test those risks.

Traceability means that you would like to be able to trace back and forth how and where any work product fulfills the directions of the preceding (source) product. The matrix deals with the where, while the how you have to do yourself, once you know the where.

Take, for example, the requirement of user-friendliness (UF). Since UF is a complex concept, it is not solved by just one design solution and it is not solved by one line of code. Many partial design solutions may contribute to this requirement, and many groups of lines of code may contribute to it.

A Requirements-Design Traceability Matrix puts on one side (e.g.
left) the sub-requirements that together are supposed to solve the UF requirement, along with other (sub-)requirements. On the other side (e.g. top) you specify all design solutions. Now you can mark on the crosspoints of the matrix which design solutions solve (more, or less) which requirement. If a design solution does not solve any requirement, it should be deleted, as it is of no value.

Having this matrix, you can check whether every requirement has at least one design solution, and by checking the solution(s) you may see whether the requirement is sufficiently solved by this (or the set of) connected design(s). If you have to change any requirement, you can see which designs are affected. And if you change any design, you can check which requirements may be affected and see what the impact is.

In a Design-Code Traceability Matrix you can do the same to keep track of how and which code solves a particular design, and of how changes in design or code affect each other.

A traceability matrix:
· Demonstrates that the implemented system meets the user requirements.
· Serves as a single source for tracking purposes.
· Identifies gaps in the design and testing.
· Prevents delays in the project timeline which can be brought about by having to backtrack to fill the gaps.
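The two checks described above — every requirement connected to at least one design solution, and no design solution left without a requirement — can be sketched as a small script. The requirement and design names below are invented for illustration:

```python
# Hypothetical requirements-design traceability matrix:
# rows are sub-requirements, columns are design solutions, and a True
# mark on a crosspoint means "this design contributes to this requirement".
matrix = {
    "UF-1 consistent menus": {"D1 menu component": True,  "D2 help pane": False},
    "UF-2 online help":      {"D1 menu component": False, "D2 help pane": True},
    "UF-3 undo everywhere":  {"D1 menu component": False, "D2 help pane": False},
}

# Collect every design solution that appears in the matrix.
designs = {d for row in matrix.values() for d in row}

# A requirement with no connected design solution is not yet solved.
unsolved = [req for req, row in matrix.items() if not any(row.values())]

# A design solution that solves no requirement is of no value and
# should be deleted, as the text above notes.
orphans = [d for d in designs
           if not any(row.get(d, False) for row in matrix.values())]

print("Unsolved requirements:", unsolved)
print("Orphan designs:", orphans)
```

Running the sketch flags "UF-3 undo everywhere" as unsolved; the same matrix, read the other way, would show which designs are affected when a requirement changes.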
Posted by Prasanna Babu.V at 4:24 AM
Definitions
Acceptance Testing
Testing the system with the intent of confirming readiness of the product and customer acceptance. Acceptance testing, which is black box testing, will give the client the opportunity to verify the system functionality and usability prior to the system being moved to production. The acceptance test will be the responsibility of the client; however, it will be conducted with full support from the project team. The test team will work with the client to develop the acceptance criteria.

Ad Hoc Testing
Testing without a formal test plan or outside of a test plan. With some projects this type of testing is carried out as an adjunct to formal testing. If carried out by a skilled tester, it can often find problems that are not caught in regular testing. Sometimes, if testing occurs very late in the development cycle, this will be the only kind of testing that can be performed. Sometimes ad hoc testing is referred to as exploratory testing.

Alpha Testing
Testing after code is mostly complete or contains most of the functionality, and prior to users being involved. Sometimes a select group of users is involved. More often this testing will be performed in-house or by an outside testing firm in close cooperation with the software engineering department.

Automated Testing
Software testing that utilizes a variety of tools to automate the testing process, where the importance of having a person manually testing is diminished. Automated testing still requires a skilled quality assurance professional with knowledge of the automation tool and the software being tested to set up the tests.

Beta Testing
Testing after the product is code complete. Betas are often widely distributed, or even distributed to the public at large, in hopes that they will buy the final product when it is released.

Black Box Testing
Testing software without any knowledge of the inner workings, structure or language of the module being tested.
Black box tests, like most other kinds of tests, must be written from a definitive source document, such as a specification or requirements document.

Compatibility Testing
Testing used to determine whether other system software components such as browsers, utilities, and competing software will conflict with the software being tested.

Configuration Testing
Testing to determine how well the product works with a broad range of hardware/peripheral equipment configurations as well as on different operating systems and software.

End-to-End Testing
Similar to system testing, the 'macro' end of the test scale: testing of a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems if appropriate.

Functional Testing
Testing two or more modules together with the intent of finding defects, demonstrating that defects are not present, verifying that the module performs its intended functions as stated in the specification, and establishing confidence that a program does what it is supposed to do.

Independent Verification and Validation (IV&V)
The process of exercising software with the intent of ensuring that the software system meets its requirements and user expectations and doesn't fail in an unacceptable manner. The individual or group doing this work is not part of the group or organization that developed the software. A term often applied to government work, or where the government regulates the products, as in medical devices.

Installation Testing
Testing with the intent of determining if the product will install on a variety of platforms and how easily it installs; testing full, partial, or upgrade install/uninstall processes. The installation test for a release will be conducted with the objective of demonstrating production readiness. This test is conducted after the application has been migrated to the client's site.
It will encompass the inventory of configuration items (performed by the application's system administrator) and evaluation of data readiness, as well as dynamic tests focused on basic system functionality. When necessary, a sanity test will be performed following the installation testing.

Integration Testing
Testing two or more modules or functions together with the intent of finding interface defects between the modules or functions. Testing is sometimes completed as a part of unit or functional testing, and sometimes becomes its own standalone test phase. On a larger level, integration testing can involve putting together groups of modules and functions with the goal of completing and verifying that the system meets the system requirements. (See system testing.)

Load Testing
Testing with the intent of determining how well the product handles competition for system resources. The competition may come in the form of network traffic, CPU utilization or memory allocation.

Parallel/Audit Testing
Testing where the user reconciles the output of the new system to the output of the current system to verify that the new system performs the operations correctly.

Performance Testing
Testing with the intent of determining how quickly a product handles a variety of events. Automated test tools geared specifically to test and fine-tune performance are used most often for this type of testing.

Pilot Testing
Testing that involves the users just before actual release to ensure that users become familiar with the release contents and ultimately accept it. Often considered a move-to-production activity for ERP releases, or a beta test for commercial products. Typically involves many users, is conducted over a short period of time, and is tightly controlled. (See beta testing.)

Recovery/Error Testing
Testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.
Regression Testing
Testing with the intent of determining if bug fixes have been successful and have not created any new problems. This type of testing is also done to ensure that no degradation of baseline functionality has occurred.

Sanity Testing
Sanity testing will be performed whenever cursory testing is sufficient to prove the application is functioning according to specifications. This level of testing is a subset of regression testing. It will normally include a set of core tests of basic GUI functionality to demonstrate connectivity to the database, application servers, printers, etc.

Security Testing
Testing of database and network software in order to keep company data and resources secure from mistaken/accidental users, hackers, and other malevolent attackers.

Software Testing
The process of exercising software with the intent of ensuring that the software system meets its requirements and user expectations and doesn't fail in an unacceptable manner. The organization and management of the individuals or groups doing this work is not relevant. This term is often applied to commercial products such as internet applications. (Contrast with independent verification and validation.)

Stress Testing
Testing with the intent of determining how well a product performs when a load is placed on the system resources that nears and then exceeds capacity.

System Integration Testing
Testing a specific hardware/software installation. This is typically performed on a COTS (commercial off-the-shelf) system or any other system comprised of disparate parts, where custom configurations and/or unique installations are the norm.

Unit Testing
Unit testing is the first level of dynamic testing and is first the responsibility of the developers and then of the testers. Unit testing is performed after the expected test results are met or differences are explainable/acceptable.

Usability Testing
Testing for 'user-friendliness'.
Clearly this is subjective and will depend on the targeted end user or customer. User interviews, surveys, video recording of user sessions, and other techniques can be used. Programmers and testers are usually not appropriate as usability testers.

White Box Testing
Testing in which the software tester has knowledge of the inner workings, structure and language of the software, or at least its purpose.

FAQ

Q1: What is verification?
Verification ensures the product is designed to deliver all functionality to the customer. It typically involves reviews and meetings to evaluate documents, plans, code, requirements and specifications; this can be done with checklists, issues lists, walkthroughs and inspection meetings. You can learn to do verification with little or no outside help.

Q2: What is validation?
Validation ensures that functionality, as defined in requirements, is the intended behavior of the product. Validation typically involves actual testing and takes place after verifications are completed.

Q3: What is a walkthrough?
A walkthrough is an informal meeting for evaluation or informational purposes. A walkthrough is also a process at an abstract level: the process of inspecting software code by following paths through the code (as determined by input conditions and choices made along the way). The purpose of code walkthroughs is to ensure the code fits its purpose. Walkthroughs also offer opportunities to assess an individual's or team's competency.

Q4: What is an inspection?
An inspection is a formal meeting, more formalized than a walkthrough, and typically consists of 3-10 people including a moderator, a reader (the author of whatever is being reviewed) and a recorder (to make notes in the document). The subject of the inspection is typically a document, such as a requirements document or a test plan. The purpose of an inspection is to find problems and see what is missing, not to fix anything. The result of the meeting should be documented in a written report.
Attendees should prepare for this type of meeting by reading through the document before the meeting starts; most problems are found during this preparation. Preparation for inspections is difficult, but it is one of the most cost-effective methods of ensuring quality, since bug prevention is more cost-effective than bug detection.

Q5: What is quality?
Quality software is software that is reasonably bug-free, delivered on time and within budget, meets requirements and expectations, and is maintainable. However, quality is a subjective term. Quality depends on who the customer is and their overall influence in the scheme of things. Customers of a software development project include end users, customer acceptance test engineers, testers, customer contract officers, customer management, the development organization's management, test engineers, salespeople, software engineers, stockholders and accountants. Each type of customer will have his or her own slant on quality. The accounting department might define quality in terms of profits, while an end user might define quality as user-friendly and bug-free.

Q6: What is a test case?
A test case is usually a single step, and its expected result, along with various additional pieces of information. It can occasionally be a series of steps, but with one expected result or expected outcome. The optional fields are a test case ID, test step or order-of-execution number, related requirement(s), depth, test category, author, and check boxes for whether the test is automatable and has been automated. Larger test cases may also contain prerequisite states or steps, and descriptions. A test case should also contain a place for the actual result. These steps can be stored in a word processor document, spreadsheet, database or other common repository. In a database system, you may also be able to see past test results, who generated the results, and the system configuration used to generate those results.
These past results would usually be stored in a separate table.

Q7: What is a test suite and a test script?
The most common term for a collection of test cases is a test suite. The test suite often also contains more detailed instructions or goals for each collection of test cases, and a section where the tester identifies the system configuration used during testing. A group of test cases may also contain prerequisite states or steps, and descriptions of the following tests. Collections of test cases are sometimes incorrectly termed a test plan. They may also be called a test script, or even a test scenario. A test plan is the approach that will be used to test the system, not the individual tests. Most companies that use automated testing call the code that is used their test scripts.

Q8: What is a test scenario?
A scenario test is a test based on a hypothetical story used to help a person think through a complex problem or system. It can be as simple as a diagram for a testing environment, or it can be a description written in prose. The ideal scenario test has five key characteristics: it is (a) a story that is (b) motivating, (c) credible, (d) complex, and (e) easy to evaluate. Scenarios usually differ from test cases in that test cases are single steps, while scenarios cover a number of steps. Test suites and scenarios can be used in concert for complete system tests.

What Is a Bug?
You've just read examples of what happens when software fails. It can be inconvenient, as when a computer game doesn't work properly, or it can be catastrophic, resulting in the loss of life. It can cost only pennies to fix but millions of dollars to distribute a solution. In the examples above, it was obvious that the software didn't operate as intended. As a software tester you'll discover that most failures are rarely this obvious.
Most are simple, subtle failures, with many being so small that it's not always clear which ones are true failures and which ones aren't.

Why Do Bugs Occur?
Now that you know what bugs are, you might be wondering why they occur. You'll be surprised to find out that most of them aren't caused by programming errors. Numerous studies have been performed on projects from very small to extremely large, and the results are always the same: the number one cause of software bugs is the specification (see Figure 1.1).

The Cost of Bugs
As you will learn in Chapter 2, software doesn't just magically appear—there's usually a planned, methodical development process used to create it. From its inception, through the planning, programming, and testing, to its use by the public, there's the potential for bugs to be found. Figure 1.2 shows an example of how the cost of fixing these bugs can grow over time.
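A tiny sketch makes that growth concrete. The tenfold multiplier per phase is a commonly quoted rule of thumb, assumed here purely for illustration; it is not a figure taken from this text:

```python
# Assumed rule of thumb: the relative cost of fixing a bug grows
# roughly tenfold with each development phase it survives.
# The numbers are illustrative only, not measured data.
phases = ["specification", "design", "coding", "testing", "release"]
cost = {phase: 10 ** i for i, phase in enumerate(phases)}

for phase, c in cost.items():
    print(f"bug fixed during {phase:>13}: relative cost {c}x")
```

Under this assumption, a bug that slips from the specification all the way to release costs four orders of magnitude more to fix, which is why the text argues for folding testing in earlier.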
Posted by Prasanna Babu.V at 3:49 AM
MANUAL TESTING
'Testing' is a process in which defects are identified, isolated, and subjected to rectification, and which ensures that the product is defect-free in order to produce a quality product in the end, and hence customer satisfaction.

BIDDING THE PROJECT
'Bidding the Project' is defined as the request for proposal, estimation, and sign-off.

KICKOFF MEETING

The 'Kick-Off Meeting' is the initial meeting held in the software company soon after the project is signed off, in order to discuss the overall view of the project and to select a Project Manager.

NOTE: Project Managers (PM), Team Managers (TM), Software Quality Managers, Test Leads and high-level management will be involved in this meeting.

SDLC (SOFTWARE DEVELOPMENT LIFE CYCLE)

There are 6 phases in the development cycle.
1. Initial Phase / Requirement Phase
2. Analysis Phase
3. Design Phase
4. Coding Phase
5. Testing Phase
6. Delivery & Maintenance Phase
1. INITIAL PHASE:

TASK: Interacting with the customer and gathering the requirements.
ROLES: Business Analyst (BA), Engagement Manager (EM).
PROCESS: First of all, the Business Analyst will take an appointment from the customer, collect the template from the company, meet the customer on the appointed date, and gather the requirements with the help of the 'Template'.

If any additional requirements are given by the customer, then the Engagement Manager is responsible for the excess cost of the project. He is also responsible for the prototype demonstration.

A 'Template' is a predefined format which is used to prepare any document.
A 'Prototype' is a roughly and rapidly developed model which is used for gathering clear requirements and to win the confidence of a customer.

PROOF: The proof document of the initial phase is the FRS (Functional Requirement Specification). This document can also be called:
· CRS (Customer Requirement Specification)
· BRS (Business Requirement Specification)
· BDD (Business Development Document)
· BD (Business Document)

NOTE: Some companies put the overall business flow of the application in the BRS and the detailed requirement information in the FRS.

2.
ANALYSIS PHASE:

TASK: Feasibility study, tentative planning, technology selection, requirement analysis.
ROLES: SA (System Analyst), PM (Project Manager), TM (Team Manager).

A 'Feasibility Study' is a detailed study of the requirements in order to check whether the requirements are possible or not.
'Tentative Planning': resource planning and time planning are temporarily done in this session.
'Technology Selection': all the technologies that are required to accomplish the project successfully will be selected and listed out in this session.
'Requirement Analysis': all the requirements that are required to accomplish the project successfully will be analyzed and listed out in this session.

SRS (System Requirement Specification): the proof document of the analysis phase is the SRS; requirements may be hardware requirements or software requirements.

3. DESIGN PHASE:

TASK: High Level Designing (HLD), Low Level Designing (LLD).
ROLES: HLD is done by the CA (Chief Architect); LLD is done by the TL (Technical Lead).
PROCESS:
'High Level Designing' is a process of dividing the whole project into modules with the help of some diagrams.
'Low Level Designing' is a process of dividing a module into sub-modules with the help of some diagrams.

NOTE: These diagrams are designed using a language called the Unified Modeling Language (UML).

The proof document of this phase is the 'Technical Diagram Document' (TDD). The TDD contains some diagrams and 'Pseudo Code'.

PSEUDO CODE:
'Pseudo Code' is not real code, but a set of English statements which are very much used by the developers to develop the actual code.

4. CODING PHASE:

TASK: Developing.
ROLES: Developers.
PROCESS: The developers will develop the actual code using the pseudo code and following the coding standards, with proper indentation, color coding, proper commenting, etc.

5. TESTING PHASE:

TASK: Testing.
ROLES: Test Engineers.
PROCESS:
1. FRS review.
2.
2. While reviewing, if the test engineer gets any doubts, he lists all the doubts in a review report.
3. He sends the review report to the author of the document for clarification.
4. After understanding the requirements very clearly, he takes the test case template and writes the 'Test Cases'.
5. Once the build is released, he executes the test cases.
6. If any defects are identified, he isolates them. Once the defect profile is ready, he sends it to the development department.
7. Once the next build is released, he ensures that the product is defect-free by re-executing the test cases. This process continues till the product is defect-free.
THE PROOF OF THE TESTING PHASE IS A QUALITY PRODUCT.
6. DELIVERY & MAINTENANCE PHASE:
DELIVERY:
TASK: Deployment (installation).
ROLES: Deployment Engineer or Senior Test Engineer.
PROCESS: A Deployment Engineer deploys the application into the client's environment by following the guidelines given by the development department in the deployment document.
MAINTENANCE:
Whenever a problem arises, that problem becomes a task. Depending upon the problem, the corresponding role is appointed, the problem is defined and the problem is solved.
Q) Where exactly does testing come into practice? What sorts of testing are there?
A: There are 2 sorts of testing: 1. Unconventional Testing. 2. Conventional Testing.
'Unconventional Testing' is a process of testing conducted on each and every outcome document, right from the initial phase, by the Quality Assurance people.
'Conventional Testing' is a process of testing the application in the testing phase by the test engineers.
A 'Test Case' is a checklist of all the different presumptions of a test engineer to test a specific feature or functionality.
TESTING METHODOLOGY
TESTING METHODS OR TESTING TECHNIQUES: There are 3 methods of testing.
1. BLACK BOX TESTING: If one performs testing only on the functional part of an application, without any structural knowledge, then that method of testing is known as 'Black Box Testing'. It is usually done by test engineers.
2. WHITE BOX TESTING: If one performs testing on the structural part of an application, then that method of testing is known as 'White Box Testing'. It is usually done by developers or test engineers.
3. GRAY BOX TESTING: If one performs testing on both the functional and the structural parts of an application, it is known as 'Gray Box Testing'.
NOTE: Gray box testing is done by test engineers with structural knowledge.
LEVELS OF TESTING
There are 5 levels of testing:
1. Unit Level Testing.
2. Module Level Testing.
3. Integration Testing.
4. System Level Testing.
5. User Acceptance Testing (UAT).
1. Unit Level Testing: A 'unit' is defined as the smallest part of an application. If one performs testing on a unit, then that level is known as 'Unit Level Testing'. It is white box testing and is usually done by white box testers or developers.
2. Module Level Testing: If one performs testing on a module, then that level of testing is known as 'Module Level Testing'. It is black box testing and is usually done by test engineers.
3. Integration Testing: Once the modules are ready, they are integrated with the help of interfaces (linking programs) by the developers, and those interfaces are tested by the developers in order to check whether the modules are integrated properly or not. It is white box testing and is usually done by the developers. The developers integrate the modules using the following approaches: Top-Down Approach (TDA); Bottom-Up Approach (BUA); Hybrid Approach;
Big Bang Approach.
• In the TDA, the parent modules are linked with the sub-modules.
• In the BUA, the sub-modules are linked with the parent modules.
• The Hybrid Approach is a mixture of both the TDA and the BUA.
• Big Bang Approach: integrating all the modules at once, when all of them are ready, is known as the 'Big Bang Approach'.
STUB: While integrating the modules in the TDA, if any mandatory module is missing, then that module is replaced with a temporary program known as a 'STUB'.
DRIVER: While integrating the modules in the BUA, if any mandatory module is missing, then that module is replaced with a temporary program known as a 'DRIVER'.
4. System Level Testing: If one performs testing on the complete application after deploying it into the environment, then it is known as 'System Level Testing'.
5. User Acceptance Testing: If one performs the same system testing in the presence of the user, then it is known as 'User Acceptance Testing' (UAT). This is black box testing and is done by test engineers.
Verification: Verification is the process of checking whether the product is being developed in the right manner or not.
Validation: Validation is the process of checking whether the developed product is right or not.
ENVIRONMENT
An environment is a combination of 3 layers:
A. Presentation Layer (PL)
B. Business Layer (BL)
C. Database Layer (DBL)
TYPES OF ENVIRONMENT
There are 4 types of environment:
1. Stand-alone Environment [or] One-tier Architecture.
2. Client-Server Environment [or] Two-tier Architecture.
3. Web Environment [or] Three-tier Architecture.
4. Distributed Environment [or] N-tier Architecture.
1. Stand-alone Environment [or] One-tier Architecture: If all three layers (PL, BL, DBL) are present in a single system, then it is known as a 'Stand-alone Environment'.
2. Client-Server Environment: In this environment the clients reside in one tier and the database server resides in another tier. The client contains the presentation layer as well as the business layer (PL + BL), so the corresponding logic is installed there.
The database server contains the database layer, so the corresponding logic is installed there.
3. Web Environment: This environment contains 3 tiers. The client resides in the first tier, the application server resides in the middle tier and the database server resides in the other tier. The client contains the presentation layer, the application server contains the business layer and the database server contains the database layer, so the corresponding logics are installed on each tier.
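The stub and driver ideas from the integration-testing section above can be sketched in a few lines of Python. This is only an illustrative sketch; the module and function names (`checkout`, `payment_stub`, `tax`, `tax_driver`) are hypothetical, invented purely for this example:

```python
# Top-down integration: the parent module "checkout" is ready, but its
# mandatory payment sub-module is not, so a STUB stands in for it.
def payment_stub(amount):
    """Temporary program replacing the missing sub-module: returns a
    canned success response so the parent module can still be tested."""
    return {"status": "success", "amount": amount}

def checkout(amount, pay=payment_stub):
    """Parent module under test; 'pay' is the integration point."""
    result = pay(amount)
    return "order placed" if result["status"] == "success" else "order failed"

# Bottom-up integration: the sub-module "tax" is ready, but its parent
# is not, so a DRIVER calls it the way the missing parent eventually would.
def tax(amount):
    """Sub-module under test."""
    return round(amount * 0.10, 2)

def tax_driver():
    """Temporary program replacing the missing parent module."""
    return [tax(a) for a in (100, 250)]

print(checkout(250))   # exercises the parent module through the stub
print(tax_driver())    # exercises the sub-module through the driver
```

In both cases the temporary program is thrown away once the real module is ready, exactly as described above.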
4. Distributed Environment: The 'Distributed Environment' is similar to the web environment, but the number of application servers is increased in order to distribute the business logic, so the number of layers is distributed as well.
NOTE: Each and every application server should be represented as one tier. N = number of application servers + 2.
TYPES OF TESTING
1. Build Verification Testing (BVT) [or] Sanity Testing: BVT is a type of testing in which a test engineer performs an overall check on the released build in order to verify that everything is available and proper for further detailed testing.
2. Regression Testing: 'Regression Testing' is a type of testing in which one tests already-tested functionality once again. It is usually done in 2 scenarios:
a. Whenever the test engineer raises a defect, the developer rectifies it and the next build is released to the testing department, the test engineer tests the rectified functionality as well as the related functionality once again, in order to ensure that the related functionality was not affected while rectifying the defect.
b. Whenever new changes proposed by the customer are incorporated by the developers and the build is released to the testing department, the test engineer tests the already-tested related functionality in order to ensure that the old functionality remains the same despite the new change.
3. Re-Testing: It is a type of testing in which one tests already-tested functionality again and again with multiple sets of data, in order to ensure that the functionality works fine, or that the defect is reproduced, with multiple sets of data.
4. Alpha (α) Testing: It is a type of User Acceptance Testing done in the company by our own test engineers.
Advantage: If any defects are identified, there is a chance of rectifying them immediately.
5. Beta (β) Testing: It is also a type of User Acceptance Testing, done at the client's place either by third-party test engineers or by the end users.
Disadvantage: If any defects are identified, there is no chance of rectifying them immediately.
6. Static Testing: It is a type of testing in which one tests the application, or its related factors, while it is not being executed. Ex: document testing, code analysis, GUI testing.
7. Dynamic Testing: It is a type of testing in which one tests the application while it is being executed. Ex: functionality testing.
8. Installation Testing: It is a type of testing in which a test engineer tries to install the application into the environment by following the guidelines given in the deployment document by the developers. If the installation is successful, he concludes that the guidelines are correct; otherwise he concludes that there are some problems in the guidelines.
9. Compatibility Testing: It is a type of testing usually done for products, in which a test engineer may have to deploy the application into environments prepared with multiple combinations of environmental components, in order to check whether it is compatible with those environments or not.
10. Monkey Testing [or] Gorilla Testing: It is a type of testing in which one performs abnormal actions intentionally on the application in order to check its stability.
11. End-to-End Testing: It is a type of testing in which one tests a complete transaction or an end-to-end scenario. Ex: Login → Balance Enquiry → Withdraw → Balance Enquiry → Logout.
12. Usability Testing: It is a type of testing in which one concentrates on the user-friendliness of the application.
13. Exploratory Testing: It is a type of testing in which one tests the application without any requirement document support, by exploring the functionality. It is usually done by domain experts.
14. Port Testing: It is a type of compatibility testing done at the client's place after deploying the application, in order to check whether it is compatible with that environment or not.
15. Security Testing: It is a type of testing in which one concentrates on the following areas: authentication, direct URL testing, firewall testing.
16. Reliability Testing (Soak Testing): It is a type of testing in which one performs testing for a longer period of time in order to check stability.
17. Mutation Testing: It is white box testing done by the developers, where they make some changes to the program and check its behavior. Since it is associated with multiple mutations, it is known as 'Mutation Testing'.
18. Ad Hoc Testing: It is a type of testing in which one tests the application in his own style after understanding the requirements very clearly.
19. Functional Testing: Testing the developed application against the business requirements. Functional testing is done using the functional specifications provided by the client, or using the design specifications, like use cases, provided by the design team.
STLC [Software Testing Life Cycle]
The STLC contains 6 phases:
(1) TEST PLANNING
(2) TEST DEVELOPMENT
(3) TEST EXECUTION
(4) RESULT ANALYSIS
(5) BUG TRACKING
(6) REPORTING
TEST PLANNING
Plan: a strategic document which describes how to perform a task in an effective, efficient and optimized way.
Test Plan: a strategic document which describes how to perform testing on an application in an effective and optimized way.
Optimization: a process of reducing the inputs while gathering the same output, or even more output.
NOTE: The test plan is prepared by the Test Lead.
TEST PLAN INDEX [or] CONTENTS
1.0 INTRODUCTION
1.1 Objective.
1.2 Reference Documents.
2.0 COVERAGE OF TESTING
2.1 Features to be tested.
2.2 Features not to be tested.
3.0 TEST STRATEGY
3.1 Levels of Testing.
3.2 Types of Testing.
3.3 Test Design Techniques.
3.4 Configuration Management.
3.5 Test Metrics.
3.6 Terminology.
3.7 Automation Plan.
3.8 List of Automated Tools.
4.0 BASE CRITERIA
4.1 Acceptance Criteria.
4.2 Suspension Criteria.
5.0 TEST DELIVERABLES
6.0 TEST ENVIRONMENT
7.0 RESOURCE PLANNING
8.0 SCHEDULING
9.0 STAFFING & TRAINING
10.0 RISKS & CONTINGENCIES
11.0 ASSUMPTIONS
12.0 APPROVAL INFORMATION
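For teams that track the plan programmatically, the index above maps naturally onto a small data structure. A minimal sketch: the section names come straight from the index, while the dictionary layout and the `render_outline` helper are assumptions made for illustration only:

```python
# The test plan index represented as an ordered outline (section -> subsections).
TEST_PLAN_INDEX = {
    "1.0 Introduction": ["1.1 Objective", "1.2 Reference Documents"],
    "2.0 Coverage of Testing": ["2.1 Features to be tested",
                                "2.2 Features not to be tested"],
    "3.0 Test Strategy": ["3.1 Levels of Testing", "3.2 Types of Testing",
                          "3.3 Test Design Techniques", "3.4 Configuration Management",
                          "3.5 Test Metrics", "3.6 Terminology",
                          "3.7 Automation Plan", "3.8 List of Automated Tools"],
    "4.0 Base Criteria": ["4.1 Acceptance Criteria", "4.2 Suspension Criteria"],
    "5.0 Test Deliverables": [],
    "6.0 Test Environment": [],
    "7.0 Resource Planning": [],
    "8.0 Scheduling": [],
    "9.0 Staffing & Training": [],
    "10.0 Risks & Contingencies": [],
    "11.0 Assumptions": [],
    "12.0 Approval Information": [],
}

def render_outline(index):
    """Flatten the outline into printable lines, indenting subsections."""
    lines = []
    for section, subsections in index.items():
        lines.append(section)
        lines.extend("    " + sub for sub in subsections)
    return lines

print("\n".join(render_outline(TEST_PLAN_INDEX)))
```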
1.0 INTRODUCTION
Objective: the purpose of the test plan document is clearly described in this section.
Reference Documents: the list of all the documents referred to while preparing the test plan is given in this section.
2.0 COVERAGE OF TESTING
Features to be Tested: the list of all the features that are within scope.
Features not to be Tested: the list of all the features that are not planned for testing, based on the following criteria:
o Out-of-scope features.
o Low-risk features.
o Features to be skipped based on time constraints.
o Future functionality.
3.0 TEST STRATEGY
Test Strategy is an organization-level term, used for testing all the projects in the organization.
NOTE: Usually the test strategy is common for all the projects, but upon customer request there may be slight changes in it.
Test Plan: a project-level term, used for testing a specific project.
Levels of Testing: the list of all the levels of testing followed by the company.
Types of Testing: the list of all the types of testing followed by the company.
Test Design Techniques: a technique is something used to accomplish a complex task in an easy manner. The list of all the techniques followed by the company is given here, e.g.:
Boundary Value Analysis (BVA)
Equivalence Class Partitioning (ECP)
Configuration Management: –
Test Metrics: the list of all the metrics maintained in the organization.
Terminology: the list of all the terms followed in the company, along with their meanings.
Automation Plan: the list of all the areas planned for automation.
List of Automated Tools: the list of all the automated tools used by the company.
4.0 BASE CRITERIA
4.1 Acceptance Criteria: when to stop testing in a full-fledged manner is clearly described here.
4.2 Suspension Criteria: when to reject or suspend testing is clearly described here.
5.0 TEST DELIVERABLES: the list of all the documents that are to be delivered. EX: test case document, review report, defect profile document, etc.
6.0 TEST ENVIRONMENT: the client-specified environment is clearly described here.
7.0 RESOURCE PLANNING: 'who has to do what' is clearly described here.
8.0 SCHEDULING: the starting and ending dates of each and every task are clearly described here.
9.0 STAFFING & TRAINING: how much staff is to be recruited, and what kind of training should be provided for the newly recruited staff and for the existing employees to accomplish the project successfully.
10.0 RISKS & CONTINGENCIES: the list of all the potential risks and the corresponding solutions.
RISKS:
– Unable to deliver the project within the deadline.
– Customer-imposed deadlines.
– Employees leaving the company in the middle of the project.
– Unable to test all the features within the time, due to lack of expertise.
CONTINGENCIES (SOLUTIONS):
– Ensure a proper plan.
– 'Features not to be tested' will be increased in case of customer-imposed deadlines.
– People should be maintained on the bench.
– Severity- and priority-based execution.
– Training should be provided.
11.0 ASSUMPTIONS: all the things that a test engineer should assume are mentioned here.
12.0 APPROVAL INFORMATION: who has to approve what is clearly described here.
TEST DEVELOPMENT PHASE
A 'Use Case' is a description of the functionality of a certain feature of an application in terms of actors, actions and responses.
INPUT INFORMATION REQUIRED FOR PREPARING THE USE CASES
APPLICATION
Functional Requirements:
1. The 'LOGIN' screen should contain Username, Password and Connect To fields, and Login, Clear and Cancel buttons.
2. 'Connect To' is not a mandatory field, but it should allow the user to select a database object.
3. Upon entering a valid username and password and clicking the 'Login' button, the corresponding page must be displayed.
4. Upon entering some information into any of the fields and clicking the 'Clear' button, all the fields must be cleared and the cursor should be placed in the Username field.
5. Upon clicking the 'Cancel' button, the login screen should be closed.
Special Requirements [or] Validations [or] Business Rules:
1. Initially, whenever the login screen is opened, the 'Login' and 'Clear' buttons must be disabled.
2. The 'Cancel' button must always be enabled.
3. Upon entering the username and password, the 'Login' button must be enabled.
4. Upon entering information into any of the fields, the 'Clear' button must be enabled.
5. The tabbing order must be Username, Password, Connect To, Login, Clear, Cancel.
TEMPLATE OF THE 'USE CASE'
1. Name of the use case.
2. Brief description of the use case.
3. Actors involved.
4. Special requirements.
5. Preconditions.
6. Postconditions.
7. Flow of events.
USE CASE DOCUMENT
1. Name of the Use Case: the 'Login' use case.
2. Brief Description of the Use Case: this use case describes the functionality of all the features in the login screen.
3. Actors Involved: Admin, Normal User.
4. Special Requirements:
a) Explicit requirements: copied from the requirements given by the client.
b) Implicit requirements: requirements analyzed by the Business Analyst in order to add value to the application. Ex: once the login screen is invoked, the cursor should be placed in the Username field.
5. Preconditions: the login screen must be available.
6. Postconditions: either the Home page or the Admin page for a valid user, and an error message for an invalid user.
7. Flow of Events
MAIN FLOW
Action → Response
− Actor invokes the application. → The login screen is displayed with the fields Username, Password and Connect To.
− Actor enters a valid username and password and clicks the Login button. → The actor is authenticated; either the Home page or the Admin page is displayed, depending on the actor.
− Actor enters a valid username and password, selects a database option and clicks the Login button. → The actor is authenticated; either the Home page or the Admin page is displayed with the mentioned database connection, depending on the actor.
− Actor enters an invalid username and a valid password and clicks the Login button. → Go to Alternative Flow Table 1.
− Actor enters a valid username and an invalid password and clicks the Login button. → Go to Alternative Flow Table 2.
− Actor enters an invalid username and an invalid password and clicks the Login button. → Go to Alternative Flow Table 3.
− Actor enters some information into any of the fields and clicks the Clear button. → Go to Alternative Flow Table 4.
− Actor clicks the Cancel button. → Go to Alternative Flow Table 5.
Alternative Table 1 [Invalid Username]:
Action: Actor enters an invalid username and a valid password and clicks the Login button.
Response: Authentication fails; the error message "Invalid Username, Please Try Again" is displayed.
Alternative Table 2 [Invalid Password]:
Action: Actor enters a valid username and an invalid password and clicks the Login button.
Response: Authentication fails; the error message "Invalid Password, Please Try Again" is displayed.
Alternative Table 3 [Invalid Username & Password]:
Action: Actor enters an invalid username and an invalid password and clicks the Login button.
Response: Authentication fails; the error message "Invalid Username & Password, Please Try Again" is displayed.
Alternative Table 4:
Action: Actor enters some information into any of the fields and clicks the Clear button.
Response: All the fields are cleared and the cursor is placed in the Username field.
Alternative Table 5:
Action: Actor clicks the Cancel button.
Response: The login screen is closed.
Guidelines to be followed by a test engineer to develop test cases from a given use case:
1. Identify the module to which the use case belongs. – Security module.
2. Identify the functionality of the use case with respect to the total functionality of the application. – Authentication.
3. Identify the functional points and prepare the functional point document.
4. Identify the actors involved. – Admin, Normal User.
5. Identify the inputs required to perform the use case. – Valid and invalid inputs.
6. Identify whether the use case is linked with any other use case. – Home page, Admin page.
7. Identify the precondition. – The login screen must be available.
8. Identify the postcondition. – Home page or Admin page for valid users, and an error message for invalid users.
9. Understand the main flow of the use case.
10. Understand the alternative flows of the use case.
11. Understand the special requirements or business rules.
12. Document the test cases for the main flow.
13. Document the test cases for the alternative flows.
14. Document the test cases for the special requirements.
15. Prepare the cross-reference matrix (traceability matrix).
[Figure: document flow from the FRS – Use Case Document (UCD) → Functional Point Document (FPD) → Master Test Case Document (MTCD) → Detailed Test Case Document (DTCD) → Defect Profile Document (DPD); functional points include username entry, password entry, DB entry, and validations for Login, Connect To, Cancel and Clear.]
Functional Point: a point where the user can perform an action can be considered a functional point.
Traceability Matrix / Cross-Reference: a table which contains information used for tracing back, for reference, by linking the corresponding documents in any kind of obvious situation.
Types of Test Cases: test cases are broadly classified into 2 types:
(i) User interface test cases. (ii) Functional test cases.
The functional test cases are further classified into two types: (i) +ve Test Cases.
(ii) −ve Test Cases.
Guidelines for developing user interface test cases:
1. Check for the availability of all the objects.
2. Check the alignment of the objects.
3. Check the consistency of the objects (size, color, font, type).
4. Check the spelling and grammar.
Guidelines for developing +ve test cases:
1. The test engineer should have a positive perception.
2. He should consider the positive flow of the application.
3. He should always use only valid inputs.
Guidelines for developing −ve test cases:
1. The test engineer should have a negative perception.
2. He should consider the negative flow of the application.
3. He should use invalid inputs.
TEST CASE TEMPLATE
1. Test Objective: the main purpose of the document is described in this section.
2. Test Scenario: the situations that are to be tested are described in this section.
3. Test Procedure: a functional-level term which describes how to perform testing on a particular functionality.
4. Test Data: the data required for testing is described in this section.
5. Test Cases.
TEST CASES (columns: TC No. | Type | Description | Expected Value | Actual Value | Result | Severity | Priority | Reference)
1. [UI] Check for the availability of all the objects. Expected: all the objects must be available as per the object table. Actual: all the objects are available as per the object table. Result: Pass.
2. [UI] Check the consistency of all the objects. Expected: all the objects must be consistent. Actual: all the objects are consistent. Result: Pass.
3. [UI] Check the spelling of all the objects. Expected: all the objects must be spelled properly as per the object table. Actual: all the objects are spelled properly as per the object table. Result: Pass.
4. [UI] Check the enabled property of the Login, Clear and Cancel buttons. Expected: the Login and Clear buttons must be disabled and the Cancel button must be enabled. Actual: the Login, Clear and Cancel buttons are enabled. Result: Fail.
5. [UI] Check the cursor placement in the application. Expected: the cursor must be placed in the Username field. Actual: the cursor is placed in the Username field. Result: Pass.
6. [+ve] Enter the username and password as per the VIT and click the Login button. Expected: the corresponding page must be displayed as per the VIT. Actual: the corresponding page is displayed as per the VIT. Result: Pass.
7. [+ve] Enter the username and password as per the VIT, select a database option and click the Login button. Expected: the corresponding page must be displayed as per the VIT with the mentioned database connection. Actual: the corresponding page is displayed with the mentioned database connection. Result: Pass.
8. [+ve] Enter the username and password and check the enabled property of the Login button. Expected: the Login button must be enabled. Actual: the Login button is enabled. Result: Pass.
9. [+ve] Enter some information into any of the fields and check the enabled property of the Clear button. Expected: the Clear button must be enabled. Actual: the Clear button is enabled. Result: Pass.
10. [+ve] Enter some information into any of the fields and click the Clear button. Expected: all the fields must be cleared and the cursor should be placed in the Username field. Actual: all the fields are cleared but the cursor is not placed in the Username field. Result: Fail.
11. [+ve] Click the Cancel button. Expected: the login screen must be closed. Actual: the login screen is closed. Result: Pass.
12. [+ve] Check the tabbing order. Expected: the tabbing order must be Username, Password, Connect To, Login, Clear, Cancel. Actual: the tabbing order works properly. Result: Pass.
13. [−ve] Enter the username and password as per the IVIT and click the Login button. Expected: the corresponding error message must be displayed as per the IVIT. Actual: the corresponding error messages are displayed as per the IVIT. Result: Pass.
14. [−ve] Enter either the username or the password, or select a database option, and check the enabled property of the Login button. Expected: the Login button must be disabled. Actual: the Login button is enabled. Result: Fail.
OBJECT TABLE
1. Username – Text box
2. Password – Text box
3. Connect To – Combo box
4. Login – Button
5. Clear – Button
6. Cancel – Button
VALID INPUTS TABLE (VIT) (S.No | Username | Password | Expected Page | Actual Page)
1 | Suresh | QTP | Admin | Admin
2 | Santos | Bunny | Home page | Home page
3 | Admin | Admin | Admin | Admin
4 | Madhav | Mmd | Home page | Home page
INVALID INPUTS TABLE (IVIT) (S.No | Username | Password | Expected Message | Actual Result)
1 | Sures | QTP | Invalid Username, Please Try Again | Error message displayed.
2 | Santos | Bun | Invalid Password, Please Try Again |
3 | Test | Test | Invalid Username & Password, Please Try Again |
TEST EXECUTION PHASE
In this phase the test engineer performs the following actions:
1. He performs the action described in the description column.
2. He observes the actual behavior of the application.
3. He documents the observed value in the actual value column of the test case document.
RESULT ANALYSIS PHASE
In this phase the test engineer compares the actual value with the expected value; if both match he marks the result as Pass, otherwise as Fail.
BUG TRACKING
'Bug tracking' is a process in which defects are identified, isolated and maintained. A defect report contains the following fields:
(1) Defect ID (2) Defect Description (3) Steps for Reproducibility (4) Submitter (5) Date of Submission (6) Build Number (7) Version Number (8) Assigned To (9) Severity (10) Priority (11) Status.
(1) Defect ID: the sequence number of the defect is given in this field.
(2) Defect Description: what exactly the defect is, clearly described.
(3) Steps for Reproducibility: the list of all the steps followed by the test engineer to identify the defect is given here,
so that the developer can follow the same steps in order to reproduce the defect.
(4) Submitter: the name of the test engineer who submitted the defect.
(5) Date of Submission: the date on which the defect was submitted.
(6) Build Number: the corresponding build number.
(7) Version Number: the corresponding version number.
(8) Assigned To: this field is not filled by the test engineer; it is filled by the Project Manager or Project Lead with the name of the developer to whom the defect is assigned.
(9) Severity: severity describes how serious the defect is. It is classified into 4 types: a) Fatal b) Major c) Minor d) Suggestion.
a) Fatal: if the problem is related to a navigational block or the unavailability of functionality, then such a problem is treated as a 'Fatal' defect.
b) Major: if major functionality is not working fine, then such a problem is treated as a 'Major' defect.
c) Minor: if the problem is related to the look and feel of the application, then such a defect is treated as a 'Minor' defect.
(10) Priority: the priority defines the sequence in which the defects have to be rectified. Priority is classified into 4 types: a) Critical b) High c) Medium d) Low.
Usually 'Fatal' defects are given 'Critical' priority, 'Major' defects 'High' priority, 'Minor' defects 'Medium' priority and 'Suggestion' defects 'Low' priority. But there are some situations in which the priority changes.
Case 1: Low severity, high priority. In case of a client visit, all the look-and-feel defects are given the highest priority.
Case 2: High severity, low priority. Whenever functionality is unavailable, the test engineer raises it as a 'Fatal' defect.
But if that functionality is still under development and will take some more time, then in such a situation the Project Manager or Project Lead gives it 'Low' priority.
BUG LIFE CYCLE
New/Open: whenever the test engineer identifies a defect for the first time, he sets the status as New/Open. Some companies call it 'New', and once the developer accepts it as a defect he sets it to 'Open'.
Fixed for Verification: whenever the test engineer raises a defect and the developer rectifies it, the developer sets the status of the defect as 'Fixed for Verification' before releasing the next build.
Reopen and Closed: whenever the defects are rectified and the next build is released to the testing department, the test engineer checks whether the defects were rectified properly or not. If he feels a defect was not rectified properly, he sets the status as 'Reopen'; if he feels it was rectified properly, he sets the status as 'Closed'.
Hold: whenever the developer is confused about whether to accept or reject the defect, he sets the status as 'Hold'.
As per Design: whenever new requirements are given by the customer and the developers incorporate those new changes and release the build to the testing department, the test engineers, not being aware of those new changes, raise them as defects; the developers then set the status as 'As per Design'.
Tester's Error: if the developer feels it is not a defect at all, he sets the status as 'Tester's Error'.
REPORTING PHASE
(1) Classical Bug Reporting Process. Drawbacks: time-consuming, redundancy, insecurity.
(2) Common Repository-Oriented Bug Reporting Process. Drawbacks: time-consuming, redundancy.
(3) Bug Tracking Tool-Oriented Bug Reporting Process: a bug tracking tool is software which can be accessed by authorized persons only, and is used for the complete bug-tracking process.
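The status transitions described in the bug life cycle above form a small state machine. A minimal sketch in Python: the state names come from the text, while the exact transition table is an assumed simplification for illustration:

```python
# Allowed defect-status transitions, following the bug life cycle above:
# the tester opens a defect, the developer fixes/holds/rejects it, and the
# tester then closes or reopens it against the next build.
TRANSITIONS = {
    "new/open": {"fixed for verification", "hold", "as per design", "tester's error"},
    "fixed for verification": {"closed", "reopen"},
    "reopen": {"fixed for verification"},
    "hold": {"fixed for verification", "tester's error"},
}

def move(status, new_status):
    """Return the new status if the transition is legal, else raise."""
    if new_status not in TRANSITIONS.get(status, set()):
        raise ValueError(f"illegal transition: {status} -> {new_status}")
    return new_status

# A typical defect's journey through the life cycle:
s = "new/open"
s = move(s, "fixed for verification")
s = move(s, "reopen")                   # not rectified properly
s = move(s, "fixed for verification")
s = move(s, "closed")                   # rectified properly this time
print(s)  # closed
```

A bug tracking tool enforces exactly this kind of table, so that, for example, a closed defect cannot silently jump back to 'Fixed for Verification'.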
TEST DESIGN TECHNIQUES
While developing the test cases, the test engineer may face some difficulties. In order to overcome those difficulties and complete the task easily, there are some techniques known as 'Test Design Techniques'.
(1) Boundary Value Analysis (BVA): whenever the test engineer needs to develop test cases for a range kind of input, he uses a technique called 'Boundary Value Analysis'. Using this technique, one tests the values at and around the boundaries of the range (for example min−1, min, min+1, max−1, max and max+1) instead of every value in the range.
(2) Equivalence Class Partitioning (ECP): a technique used by the test engineer in order to develop the +ve and −ve test cases easily for a functionality which has a large number of validations. Using this technique, one divides the inputs into valid classes and invalid classes.
Case Study: develop the test cases to test a text box which has the following validations:
(a) It should accept a minimum of 4 characters and a maximum of 20 characters.
(b) It should accept lowercase a–z only.
(c) It should accept the special symbols '@' and '_' only.
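One way to work the case study above in code: a small validator (the regex implementation is an assumption; only the three rules come from the text), with ECP supplying one representative input per class and BVA supplying the length boundaries:

```python
import re

# Hypothetical implementation of the text box's validation rules:
# 4-20 characters, lowercase a-z plus the special symbols '@' and '_' only.
def is_valid(text):
    return re.fullmatch(r"[a-z@_]{4,20}", text) is not None

# Equivalence Class Partitioning: one representative per class.
valid_class   = ["abcd", "ab@_z"]   # allowed characters, allowed length
invalid_class = ["ABCD",            # uppercase letters not allowed
                 "ab1d",            # digits not allowed
                 "ab#d"]            # other special symbols not allowed

# Boundary Value Analysis on the length range 4..20.
boundaries = {3: False, 4: True, 5: True, 19: True, 20: True, 21: False}

for text in valid_class:
    assert is_valid(text)
for text in invalid_class:
    assert not is_valid(text)
for length, expected in boundaries.items():
    assert is_valid("a" * length) == expected
print("all ECP and BVA cases behave as expected")
```

Note how the two techniques keep the test set small: three invalid classes and six boundary lengths cover the rules without enumerating every possible input.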
Tuesday, October 2, 2007