#23 User Acceptance Test (UAT)

An introduction to User Acceptance Testing: what it is, why it is done, and how.

User Acceptance Test
- Is often the final step before rolling out the application
- Is performed by the customer
- The acceptance test (AT) environment should, in most respects, represent the production environment
- AT concentrates on validation-type testing to determine whether the system is fit for use
- AT may occur at several levels, depending on the scope of the project (after unit, integration and system testing)
- The time scale of AT may overlap with system testing

UAT - Objective
 Validate system set-up for transactions and user access
 Confirm use of system in performing business processes
 Verify performance on business critical functions
 Confirm integrity of converted and additional data, for example values that appear in a look-up table
 Assess and sign off go-live readiness

UAT – Why, When
- Why:
  - To accept the product based on the acceptance criteria
  - To ensure the functions needed and expected by customers are present in the software product
- When:
  - After the software product has been released (delivered for acceptance) and system tested

UAT – How To Test
 User Acceptance Test (UAT) Planning
 Designing UA Test Cases
 Selecting a Team that would execute the (UAT) Test Cases
 Executing Test Cases
 Documenting the Defects found during UAT
 Resolving the issues/Bug Fixing
 Sign Off.

UAT – Planning
 Outlines the User Acceptance Testing Strategy
 Identify the key focus areas
 Identify entry and exit criteria

UAT Plan Sample

UAT – Designing UA Test Cases
- Based on use cases
- Create test cases for real-world flows
- Each test case describes, in simple language, the precise steps to be taken to test something

- Test cases are reviewed by Business Analysts

UAT – Select a Team…
- Selecting the team is an important step
- The UAT team is generally a good representation of the real-world end users
- The team thus comprises the actual end users who will be using the application

UAT – Executing Test Cases
- The Testing Team executes the Test Cases and may additionally perform random tests relevant to them
- The team logs its comments and any defects or issues found during testing
- The issues/defects found during testing are discussed with the Project Team, Subject Matter Experts and Business Analysts. The issues are resolved by mutual consensus and to the satisfaction of the end users

UAT – Sign Off
- This step is important in commercial software sales
- It indicates that the customer finds the delivered product satisfactory

Acceptance Test Types
 User Acceptance Testing
 Operational Acceptance Testing
 Contract and Regulation Acceptance Testing
 Alpha and Beta Testing

Operational Acceptance Testing
 The acceptance of the system by system administrators, including:
 Backup/ restore
 Disaster recovery
 Maintenance tasks
 Security weakness

Contract and Regulation AT
 Contract acceptance testing is performed against a contract’s acceptance criteria for producing
custom-developed software. Acceptance criteria should be defined when the contract is agreed
 Regulation acceptance testing is performed against any regulations that must be adhered to,
such as governmental, legal or safety regulations

Alpha and Beta (or Field) Testing
- Alpha testing: performed by users at the developer's site, in the development environment
- Beta testing: performed by users in their own, real-world environment

AT and Configuration Management
- The focus of configuration management (CM) during AT is similar to that during system testing
- The major difference is that the system is released to end users, so problem reports and change requests (CRs) are generated by end users
- A help desk application can be a useful tool for AT




#22 Automation Testing Tools

What is Test Automation
 Test automation is the use of software to control the execution of tests, the comparison of actual outcomes to predicted outcomes, the setting up of test preconditions, and other test control and test reporting functions. Commonly, test automation involves automating a manual process already in place that uses a formalized testing process.

Why do we need test automation
- Run existing tests on a new version of a program
  - Many programs are modified repeatedly, and the same tests must be re-run in the same environment each time
- Run more tests more often
  - Automation makes it possible to run more tests, and to run them more often
- Perform tests that would be difficult or impossible to do manually
- Better use of resources
  - Saves time
  - Testers can do more valuable work, and machines can run tests around the clock

Benefits of automated testing
a. Manual testing
- Time-consuming
- Tedious
- Depends on the tester's emotions => serious defects may go undetected

b. Automated Testing
- Fast
- Reliable
- Repeatable
- Programmable
- Comprehensive
- Reusable

Test Tools
 Application test tools
 Source test tools (depends on program language)
 Functional test tools
 Performance test tools
 Embedded test tools
 Web test tools
 Functional test tools
 Performance test tools
 Test management tools
 Bug tracking tools

Test Tools Comparison
 Major features in Automated test tools
 Record & play back
 Web Testing
 Data function
 Image testing
 Extensible language
 Environment support
 Tools in comparison
 Rational Robot
 Ruby
 OpenSTA
 Quick Test Pro
 …

Record & Playback
- How easy is it to record and play back a test?
- Does the tool support low-level recording (mouse drags, exact screen locations)?
- Is there object recognition during recording and playback, or does it appear to record correctly but then fail on playback (even without environment changes, unique IDs, etc.)?
- How easy is it to read the recorded script?

Web Testing
- Are there functions to tell me when the page has finished loading?
- Can I tell the test tool to wait until an image appears?
- Can I test whether links are valid or not?
- Can I test the state of web-based objects, e.g. whether an object is enabled or contains data?
- Are there facilities that will allow me to programmatically look for objects of a certain type on a web page or locate a specific object?
- Can I extract data from the web page itself? E.g. the title? A hidden form element?

Data Function
 Does the tool allow you to specify the type of data you want?
 Can you automatically generate data?
 Can you interface with files, spreadsheets, etc to create, extract data?
 Can you randomise the access to that data?
 Is the data access truly random?

Image Testing
- Does the tool provide OCR (optical character recognition)?
- Can it compare one image against another?
- How long does the comparison take?
- If the comparison fails, how long does that take?
- Does the tool allow you to mask certain areas of the screen when comparing?

Type of Automation Test
 Functional Testing
 Performance Testing

Performance Testing Features (1)
- Performance Test Recording
- Flexible User Scenarios: to emulate the load of real user activities, easy, complete and customizable building of user scenarios is provided.
- Real-world Performance Testing: real-world web load testing allows you to dynamically vary the number of virtual users hitting your web applications/web sites to study load-testing/stress-testing conditions under varying load.
- Dynamic Data Replacement: modifying the performance test scripts to reflect dynamic data requires no programming skills.

Performance Testing Features (2)
- Server Performance Monitors: load test your web sites/web applications with integrated monitoring of the server machine's system resources (CPU % and memory usage).
- Configurable Playback Options: a wide range of playback options that affect the playback of load test scripts.
- Proxy Support: performance testing can be done through proxy servers, supporting authentication and proxy exclusions for local networks.
- Distributed Load Testing: option to simulate thousands of simultaneous users working from a single machine or distributed across multiple machines, with centralized coordination and reporting of distributed load testing results.

Performance Testing Features (3)
- Datapools for Parameterization: to use unique data for each user in load testing/stress testing.
- Configurable Think Time: configurable think times to perform real-world performance testing, where each user spends some thinking time (wait time) before performing the next action on the web page.
- Browser Simulation: support for simulating exact browser behavior (MSIE, Firefox and Mozilla simulation).
- Random Delay: random delays for virtual user start-up, to simulate users visiting the web site/web application at irregular intervals.
- Bandwidth Simulation: option to emulate different network speeds during playback.

Functional Testing Features (1)
- Playback Options
  - Automatic/customizable error recovery for unattended testing, to handle unexpected windows such as assert boxes, pop-ups, etc.
  - Playback synchronization to handle the variation in the time the application takes to load a new page.
  - Option to chain scripts for controlling the order of script execution.
  - Multiple playback options that enable testers to debug test errors while creating and maintaining test scripts.
  - Allows execution of individual and/or groups of tests from one or many workstations.

Functional Testing Features (2)
 Scripting Capabilities
 Simplified Script Creation with recording provision.
 Records parent/child frames, PopUp's, modal and modeless dialogs.
 Learn mode to learn objects into the object repository.
 Keyword-driven testing allows automation experts to have full access to the underlying object repository to author and debug scripts using pre-defined keywords.

Functional Testing Features (3)
 Scripting Capabilities …
 Standard Scripting Language allows you to easily create/update scripts without much programming knowledge.
 Object Repository eliminates the need to modify all the scripts each time the application is modified.
 Data-driven Testing enables you to perform functional testing in different scenarios by just changing the test data in an external data source.
 Unicode Support allows you to test multi-language deployments of your applications..


Functional Testing Features (4)
- Portability
  - Allows you to record scripts in Windows and replay them in Linux without re-creating the scripts.
  - Browser Abstraction Layer allows scripts recorded in one browser to be replayed in all other supported browsers.
  - A runtime locale option lets you simultaneously test all language versions of your application with a single script.

Functional Testing Features (5)
 Reporting Capabilities
 Clear and Powerful Reports are provided to indicate the status of the test execution.
 Hyperlinks allow easy navigation through the report. This helps you to quickly identify application failures and clearly assess application quality.
 Supported Environments
 Internet Explorer 6, Firefox 1.5.x and Mozilla 1.7.x, Windows XP, 2000 and Linux …



#21 Practical Guide to Software Unit Testing

1 Unit Test - What?
2 Unit Test - Why?
3 Unit Test - When?
4 Unit Test - How? (method, technique)

Unit Test – What
 Unit Testing Actions:
 Validate that individual units of software program are working properly
 A unit is the smallest testable part of an application (In procedural programming a unit may be an individual program, function, procedure, etc., while in object-oriented programming, the smallest unit is always a method)
 Units are distinguished from modules in that modules are typically made up of units
 Unit Testing Deliverables:
 Tested software units
 Related documents (Unit Test case, Unit Test Report)

Unit Test – Why ?
 Ensure quality of software unit
 Detect defects and issues early
 Reduce the Quality Effort & Correction Cost


Unit Test – When ?
 After Coding
 Before Integration Test

Unit testing
 Modules can be tested statically or dynamically
 A basis path set will execute every independent path through a module
 Modules can be instrumented to gather statistics on their execution
 Modules can be unit tested top-down, bottom-up or in isolation
 Test drivers and stubs are required to unit test modules
=> The motivation is to ensure that value is not added to defective products


Source code reviews (static testing)
 Everyone’s code is reviewed
 Do not review code to measure individual performance
 Drawing a flow graph and calculating complexity could be used as part of the inspection
 Portability and maintainability can be given special attention during reviews

Unit testing (Dynamic testing)
- Functional test cases (black-box test cases)
- Structural test cases (white-box test cases)
- Steps to develop a basis set for a module:
  - Create annotated pseudo code
  - Draw the control flow graph
  - Select a baseline path through the program (it should be the most commonly followed path)
  - A second path is added by varying the outcome of the first decision along the baseline path while keeping all of the other decision outcomes the same as in the baseline path
  - This process is repeated until all of the decisions along the baseline path have had their outcomes altered
  - The number of paths in a basis set is always equal to the cyclomatic complexity of the module

Unit testing strategy
- Start with simple tests that get the module running;
- Run functional tests designed to validate that the module does what it is supposed to do (positive tests);
- Run negative tests that are designed to make the module fail;
- Run tests associated with specific ISO 9126 quality attributes (performance, security, etc.) if required; and finally
- Add additional structural tests to complete a basis set that achieves coverage of all independent paths.

Unit testing strategy (cont.)
- Both functional and structural test techniques identify positive test cases
- Negative tests are designed to demonstrate that a module does not work as expected. Unfortunately, there are no simple techniques for identifying negative test cases. Error guessing, based on experience, remains the best approach.

- Some common types of error include:
 Array sizes;
 Counter maximums;
 Physical boundaries;
 Variable field sizes;
 Arithmetic operation;
 Switch variable values;
 Pointer variables; and
 Null values.

Order of Testing modules
 Top-Down Unit Testing: stubs are required
 Bottom-up Unit Testing: drivers are required
 Isolation Unit Testing: drivers and stubs are required for each module

Advantages and Disadvantages


Unit Test – How? Techniques
Black box test (Functional)
• Specification derived tests
• Equivalence partitioning
• Boundary value analysis
White box (Structural)
• Statement coverage
• Decision (branch) coverage
• Path coverage

Unit Test – Black box technique

- Black-box testing
  - Functional testing: ensures each unit behaves correctly according to its design
  - Business testing: ensures the software program behaves correctly according to the user requirements

Unit Test – White box technique


- White-box testing
  - Check the syntax of the code with the compiler to avoid syntax errors
  - Run the code in debug mode, line by line, through all independent paths of the program, to ensure that every statement has been executed at least once
  - Examine local data structures to ensure that data stored temporarily maintains its integrity during all steps of code execution
  - Check boundary conditions to ensure that the code runs properly at the boundaries established in the requirements
  - Review all error handling paths

BBT – Specification derived tests
- Choose all or some statements in the software specification
- Create test cases for each chosen statement of the specification
- Execute the test cases and check that the results match the specification

Example Specification
 Input - real number
 Output - real number
 When given an input of 0 or greater, the positive square root of the input shall be returned.
 When given an input of less than 0, the error message "Square root error - illegal negative input" shall be displayed and a value of 0 returned.

Example Test Cases
- Test Case 1: Input 4, Return 2
  - Uses the first statement in the specification
  - ("When given an input of 0 or greater, the positive square root of the input shall be returned.")
- Test Case 2: Input -10, Return 0, Output "Square root error - illegal negative input"
  - Uses the second and third statements in the specification
  - ("When given an input of less than 0, the error message 'Square root error - illegal negative input' shall be displayed and a value of 0 returned.")

BBT: Equivalence partitioning
- Divide the inputs of a program into classes of data from which test cases can be derived. This can help you reduce the number of test cases that must be developed.
- The behavior of the software is equivalent for any value within a particular partition
- A limited number of representative test cases should be chosen from each partition



Example Test Cases


White box test : Node



WBT: Statement Coverage
WBT: Decision/Branch Coverage


WBT: Path Coverage


Example: White box test case



White box test: Comparison


Unit Testing Tools



#20 Software Quality & Risk

Risk Overview:

- Risk is the possibility of a negative or undesirable outcome
- It is a possibility, not a certainty
- The level of risk is associated with the likelihood of the event and its possible negative consequences
- Risk is classified into two types: product risk and project risk

Where to look for risks?
 Dependencies: HR, tool, equipment, etc.
 Assumptions: may not actually be true.
 Project characteristics: objectives, requirement, design, implementation, testability, etc.
 Activities on the critical path
 Team spirit and attitude
 Outside project: organization, policies, rules, standards, etc.
 ….

Product Risk
- Product risks/quality risks: the possibility that the system or software might fail to satisfy some reasonable customer, user, or stakeholder expectation
- Unsatisfactory software might:
  - Omit some key functions that the customers specified
  - Be unreliable and frequently fail to behave normally
  - Fail in ways that cause financial or other damage to a user or the company that user works for
  - Have problems related to a particular quality characteristic, which might not be functionality, but rather security, reliability, usability, maintainability or performance
- Project risks: the same concepts we apply to identifying, prioritizing and managing product risks also apply to project risks, in particular those that affect testing.
- What project risks affect testing?
  - Direct risks:
    - Late delivery of the test items to the test team
    - Availability issues with the test environment
  - Indirect risks:
    - Excessive delays in repairing defects found in testing
    - Problems with getting professional system administration support for the test environment
- For any risk, product or project, you have four typical options:
  - Mitigate: take steps in advance to reduce the likelihood (and possibly the impact) of the risk.
  - Contingency: have a plan in place to reduce the impact should the risk become an outcome.
  - Transfer: convince some other member of the team or project stakeholder to reduce the likelihood or accept the impact of the risk.
  - Ignore: do nothing about the risk, which is usually a smart option only when there is little that can be done or when the likelihood and impact are low.

Software Quality and Risk
- Contrary to popular belief, testing cannot demonstrate that software works
- Software testing must be viewed as a risk mitigation activity, designed to reduce the risk of defects in software
- Standard lists of risk factors are useful for identifying potential risks
- Risk analysis prioritizes risks based on the likelihood that they will occur and their potential impact

Software testing
To prove that the software works correctly, we would need to consider:
- Executing all paths => only possible for the simplest of software
- Every combination of inputs & outputs => only feasible if executing the tests could be performed automatically
Testing and Risk:
- There will always be a real possibility that software will contain defects, no matter how well it is tested
- The goal of software testing is to minimize the risk of defects
Risk-based Testing
- Uses risk to prioritize and emphasize the appropriate tests during test execution
- Starts early in the project, identifying risks to system quality and using that knowledge of risk to guide test planning, specification, preparation and execution
- Involves both mitigation and contingency
  - Mitigation - testing to provide opportunities to reduce the likelihood of defects, especially high-impact defects
  - Contingency - testing that supports having a plan in place to reduce the impact should defects become outcomes (for example, by identifying work-arounds)

Minimizing Risks
- Risk assessment:
  - Identify what potential risks exist
  - Determine the likelihood of a risk occurring and the impact if it occurs
- Risk control: identify and perform activities to
  - Minimize the likelihood of a risk occurring
  - Minimize the impact if the risk occurs

Risk Statement template
Given the <condition>, there is a possibility that <consequence> will occur
- Condition: describes the situation that gives rise to the risk
- Consequence: describes a potential undesirable outcome related to the situation
For example (an illustrative statement built from the project risks listed earlier): given that the test items may be delivered late to the test team, there is a possibility that test execution will be compressed and the planned coverage will not be achieved.

Analyzing Risks


Quality Risk Dimensions


Prioritizing Risks
- Compare risks with the software quality characteristics described in ISO 9126 and estimate the potential impact that each risk could have on each characteristic
- Example: (open the Excel file for reference)

Risk Factor Influence on Software Quality Characteristics


Example
Risk Control
- Risk can be controlled by planning, specifying and executing activities designed to:
  - Minimize the likelihood of a risk occurring
  - Minimize the impact of the risk if it does occur
- The results of executing risk control activities are recorded for three reasons:
  - The record provides auditable evidence that the risk control activities were performed
  - The data can be used to measure the efficiency of the risk control activities
  - The data can be used to decide if an acceptable level of risk has been achieved



#19 Introduction to the CAR Process, DPC and DP at Project Level


Purpose:
1 Understand CAR and CAR process
2 Responsibilities of DPC, DP Teams
3 How to conduct Causal Analysis meeting and identify preventive actions
4 How to create/update DP log
5 Group assignment: Causal analysis & resolution meeting

Misconceptions on CAR
- "CAR happens during project execution" -> it actually starts from project planning
- "CAR is for fixing the problem" -> it is for minimizing the cause (prevention)
- "CAR applies only to defects" -> it applies to issues/incidents/defects/NCs/customer complaints
NC = Non-Conformity

CAR process & How?

 Plan for CAR periodically at organization and project level (DP Plan)
 Follow goal and objective of organization/project
 Determine resource
 Plan for budget & effort …
 + How?
Problems can be:
- Defects -> common defects/common causes
  - Use a Pareto chart (80/20 rule) to select some defect types to analyze further
  - Should consider:
    - Impact of defects
    - Frequency of occurrence
    - Similarity between defects
    - Cost of analysis
- NCs/issues/customer complaints
  - State the problem and log it in the tracking system
How to determine the problem?
- Pareto: to determine the priorities of the problems to be analyzed. It helps to identify the "vital few" pictorially
  - Common issues, defects / common causes
  - 80/20 rule: which 20% of sources are causing 80% of the problems?
  - Where should we focus our efforts to achieve the greatest improvement?
=> The Pareto analysis answers these questions
- Analyse the causes
  - Hold a CAR meeting
  - Brainstorm to identify causes
  - Use cause-and-effect (fishbone/Ishikawa) techniques
  - Use the 5-Whys rule
  - Group causes into categories
  - Log common defects to the common defect list
- Define action proposals
  - Log DP actions to the DP logs
  - Log the analysis and actions to the issue list for problems
How to find the root cause?

- Fishbone (see picture)
  - Causes can be branched into categories
  - Ask "why" and add bones to the fish until the root cause is found
- 5 Whys
  - Ask "why" five times until you see the root cause: Why A? Answer: B. Why B? Answer: C. Why C? ...
  - The 5 Whys can be used separately or together with the fishbone
- Guidance for the 5 Whys
  - Ask "why" five times for each of the categories below:
    - Investigate why the problem happened (specific root cause)
    - Investigate why the problem was not detected
    - Investigate why the system let the problem happen (systemic root cause): e.g. the current process, procedures, etc.



Example – 5 Whys
Your manager makes you responsible for finding a new office, because for months the staff of the purchasing department have been complaining about a lack of workspace. Before proposing the project, you decide to study the problem, so you talk to the staff of the department.
1. Why do you want a new office?
The answer comes immediately: not enough space - there are three people in every office.
2. But why are there three people in every office?
Answer: because we also have to store, right here, the computers that need to be checked before they are sent to the other departments. A whole floor is full of computers.
3. Why do you have to check the computers here?
Answer: previously the supplier checked the quality themselves; now we have to do it.
4. Why doesn't the supplier check the quality any more?
Answer: go and ask the purchasing manager. When you ask him the same question, he answers that the supplier raised the fee for this checking by 70%.
5. You call the supplier to hear their point of view: "Why did you raise the fee for this checking?"
Answer: because your company's quality manager has completely unrealistic requirements. The checking he demands takes 4 hours per computer.
=> After the fifth "why" you have finally found the real root of the problem: the unrealistic requirements of the quality management department. So you ask for these requirements to be changed, the supplier resumes the checking, and since then the purchasing department has had its workspace back.

How?
- Implement actions
  - Assign a person in charge
  - Implement and track to closure
  - Log the actual actions in the DP log or the issue list
- Evaluate the effect of the change
  - Evaluate the benefits of the actions
  - Log the benefits of each action in the DP log
- Records:
  - All records are kept for later process improvement at organization level
- Sharing:
  - Common defects of the most-used programming languages in the organization
  - Technical knowledge, to share experience, practices and lessons
  - DP Norms
  - DP Database

DPC
- Board of Management (BOM): responsible for problem analysis and resolution at organization level. The BOM decides whether a group/project team should conduct a causal analysis meeting for issues/problems
- Defect Prevention Council (DPC): responsible for defect prevention actions at organization level
- Responsibilities:
  - Prepare the DP plan for the organization
  - Review and approve the DP plans of groups
  - Produce a defect report monthly
  - Identify and provide the necessary resources for defect and problem prevention
  - Evaluate the performance of the DPC biannually
  - Control the implementation of the groups' DP activities through reviews and internal audits

DPC members
- DPC head: the QA branch is responsible for the DPC at organization level
- DPC leads of OGs: QA leaders take this role
- DP Specialists: each OG has a plan and a list of DP specialists
- QAs

CAR Meeting
 When: Quarterly/BOM decides or critical problem occurs
 Purpose: To decide preventive actions and implement at organization level
 Responsibility: DPC Head/DPC Lead/Group Leaders/QA Leaders
 Content:
 Identify problems and major defects
 Analyze the root causes of problems/defects
 identify most expensive, most frequent defects or problems
 Give preventive activities for problems/defects
 Assign member and give deadline for preventive action

DPC at Group level
- DP Plan
  - Set DP targets for the group based on the special requirements of the group/customers
  - Members of the DPC at group level: Coordinator
  - DP Specialists
  - DP actions of the group (to achieve the targets)
  - In the initiation stage of the project, the Project Manager establishes the DP team for the project
  - The DP team sets targets and plans DP activities
  - The DP plan of a project includes 3 items:
    - DP targets: the targets should follow the goals of DP at group level
    - DP activities: actions to avoid defects and meet the DP targets
    - DP Team
- Review DP Plan
  - The DP plan is reviewed by the PM and project QA, and approved by the DP leader
  - The DP plan is approved in the kick-off meeting
- Causal analysis meeting
  - When:
    - As scheduled in the DP plan
    - When defect density exceeds the norm
    - When a problem that affects quality or productivity occurs
    - A causal analysis meeting can be part of a project meeting
  - Purpose:
    - Discuss and evaluate the effectiveness of the defect/problem preventive actions defined in the previous meeting
    - Identify the most common defects/problems of the project and the root causes of those defects/problems
    - A root cause is a source of a defect such that if it is removed, the defect is decreased or removed
    - Find actions to prevent similar errors/problems
  - Preparation:
    - DP team/PM/PTLs review defect/problem data and list some common defects/problems
    - Update the results of actions in the DP log or issue list
    - Invite all members and a DPC representative to the meeting

- In the meeting:
  - Report the status of planned DP actions and their actual results, or report the problems and their impacts/effectiveness
  - Discuss common defects/problems from the last phase
  - Analyze the root causes of the defects/problems
  - Prioritize defect/problem types
  - Identify possible actions to prevent the occurrence of similar defects or problems
  - Assign a member and give a deadline to implement each preventive action
- Records
  - Meeting minutes
  - Preventive actions are logged in the DP log
  - Common defects: record all common defects of the project with their root causes
  - Issues: record all issues, causes, solutions and actions to remove the causes


#18 Exercises to create test reports

Exercise 1
- Create a test report for each function, including the following items:
1. Number of test cases
2. Number of passed test cases
3. Number of failed test cases
4. Number of untested test cases
5. Number of N/A test cases
6. Percentage of test coverage
7. Percentage of successful test coverage

Answer 1:


=> Conclusion:
The function 'Manage student mark' has many failed test cases.
The system cannot calculate student marks.



Exercise 2
- Create a general defect report:
  - Open defects: based on severity (Fatal, Serious, Medium, Cosmetic) and defect status (Error, Assigned, Fixing, Corrected, Confirming)
  - Fixed defects: based on severity (Fatal, Serious, Medium, Cosmetic) and defect status (Delivered, Validated, Approved, Accepted, Canceled, Closed)
  - Total weighted defects:
    - 1 cosmetic defect = 1 w.def
    - 1 medium defect = 3 w.def
    - 1 serious defect = 5 w.def
    - 1 fatal defect = 10 w.def
    (For example, with illustrative numbers: 2 fatal + 1 serious + 3 medium + 4 cosmetic defects = 2×10 + 1×5 + 3×3 + 4×1 = 38 w.def.)

Answer 2:



Exercise 3:
- Create a defect distribution test report (including the following information):
  - Number of defects for each process (Requirement, Design, Coding, Test, Other)
  - Number of defects for each QC activity (Unit test, Integration test, System test, Acceptance test, Code review, Document review, Final inspection, Baseline audit)
  - Total weighted defects:
    - 1 cosmetic defect = 1 w.def
    - 1 medium defect = 3 w.def
    - 1 serious defect = 5 w.def
    - 1 fatal defect = 10 w.def

Answer 3:




Exercise 4
- Create a defect trend report
- Draw a graph showing the trend of found defects and fixed defects from date ….. to date

Answer 4:





#17 Test Report / Guide to creating test reports and defect reports

Overview
1 Test report process
2 When a test report is created
3 Categories of test reports (examples)
4 Practices for creating a good test report

A test report shows the status of testing:
- What tests were already performed
- The status of these tests
- What is remaining
- Overall progress
The test report should be documented and visible to the project team

Test Reporting Process - "When should a Test Report be created?"
 Pre-defined checkpoints (milestones)
 The end of testing stage
 Whenever significant problems are detected

Collect Test Status Data
Categories of data:
 Test results data
 Test case results and test verification results
 Defects

Test Result Data
- Result pictures: pass/fail result pictures (screenshots)
- Software components and applications under test
- Platform – the hardware and software environment in which the software system will operate

Test case & test verification results
- Results describe any variance between the expected and actual outputs
- They are the results of the test techniques below, used by the test team to perform testing:
  - Test cases – the types of tests that will be conducted during test execution, based on the software requirements
  - Inspections – verification of process deliverables against deliverable specifications
  - Review checklists – verification that the process deliverables/phases are meeting the user's true needs

Defects
=> The description of a defect includes:
- Name of the defect
- Status of the defect
- Severity of the defect
- Type of defect
- The module/code that contains the defect
- How the defect was discovered
- Date the defect was uncovered
- Test case related to the defect

Defect Report
- Defect Status Report: see the current status of all defects in a period of time
- Defect Distribution: see how defects are distributed across QC activities
- Defect Re-Open: see the number of defects that are re-opened
- Defect Summary: see the number of defects by product type
- Defect Trend: see the trend of defects over time

Defect Report - Sample
General Status Report
 Defect Distribution
 Defect Re-Open
 Defect Summary
 Defect Trend

Analyze the data
Testers should answer these questions
What information does the project/customer need?
Which metrics are used in Reporting?

Reporting
Testers should answer these questions
How can testers present that information in an easy-to-understand format?
How can I present the information so that it is believable?
What can I tell the project management/customer that would help in determining what action to take?

Categories of the Test Report (examples)

=> Function test report


=> Test case report:


=> Defect distribution report


=> Defect status report





#16 Test Report Overview

Overview
 Test report shows the status of testing
 What tests were already performed
 The status of these tests
 What is remaining
 Overall progress
 Test report should be documented and visible for project team

When Test Report should be created?
 Pre-defined checkpoints (milestones)
 The end of testing stage
 Whenever significant problems are detected

Test Result Data
- Result pictures: pass/fail result pictures (screenshots)
- Software components and applications under test
- Platform – the hardware and software environment in which the software system will operate.

Test case & Test verification results
- Results describe any variance between the expected and actual outputs
- They are the results of the test techniques below, used by the test team to perform testing:
  - Test cases – the types of tests that will be conducted during test execution, based on the software requirements
  - Inspections – verification of process deliverables against deliverable specifications
  - Review checklists – verification that the process deliverables/phases are meeting the user's true needs

Defects
The description of a defect includes:
- Name of the defect
- Status of the defect
- Severity of the defect
- Type of defect
- The module/code that contains the defect
- How the defect was discovered
- Date the defect was uncovered
- Test case related to the defect

Defect Report
- Defect Status Report: see the current status of all defects in a period of time
- Defect Distribution: see how defects are distributed across QC activities
- Defect Re-Open: see the number of defects that are re-opened
- Defect Summary: see the number of defects by product type
- Defect Trend: see the trend of defects over time

Defect Report - Sample
General Status Report
 Defect Distribution
 Defect Re-Open
 Defect Summary
 Defect Trend

Analyze the data
Testers should answer these questions
What information does the project/customer need?
Which metrics are used in Reporting?

Reporting
Testers should answer these questions
How can testers present that information in an easy-to-understand format?
How can I present the information so that it is believable?
What can I tell the project management/customer that would help in determining what action to take?

Categories of the Test Report (examples)
 Function test report




#15 Bug Management


What is a defect?

- A defect is any error found by testing and reviewing activities (all errors found by internal reviewers, external reviewers and the customer).
- Any recommendation to fix something should also be logged as a defect in the defined tool (an Excel file or a defect management tool).

What should be done after a defect is found?
 The defect needs to be logged and assigned to developers that can fix it
 After the problem is resolved, fixes should be re-tested, and determinations made regarding requirements for regression testing to check that fixes didn’t create problems elsewhere
 Defect management should encapsulate these processes.

Concepts
 Defect logging: the initial reporting and recording of a discovered defect.
 Defect tracking: monitors and records what happened to each defect after its initial discovery, up until its final resolution. Tracking via defect status
 Defect Status: describe current status of logged defect


Defect Lifecycle


Defect Status:

- OPENED
The defect is not fixed, or it is fixed but not satisfactorily as required
- ASSIGNED
The defect has been reviewed and assigned to someone to fix
- FIXED
The defect is already fixed and is waiting for retest
- CLOSED
The defect is fixed satisfactorily as required
- ACCEPTED
The defect has not been fixed satisfactorily as required, but it is accepted by concession of an authority or the customer
- CANCELLED
It is not a defect, or the defect was removed by actions other than bug fixing




>>>> DEFECT MANAGEMENT

- Use a template to log and track defects
- Log defects effectively

Defect Log Template
- Defect ID: an index for the defect, for easy management
- Module: the defect location (module name, function name)
- Description: describe the defect found, in as much detail as possible
- Type: select the defect type
- Severity: select the defect severity
- Priority: select the defect priority
- Status: select the defect status
- Created Date: the date the defect was found
- Assigned To: the name of the person assigned to fix the defect
- Corrective Action: the actions taken to fix the assigned defect



#14 Basic SQL Statement

Basic SQL statements:
- CREATE TABLE table_name (column_def_list)
- DROP TABLE table_name
- SELECT column_list FROM table_list {WHERE where_clause}
- INSERT INTO target_table (column_list) VALUES (value_list)
- UPDATE target SET col_val_pair_list {WHERE where_clause}
- DELETE FROM table {WHERE where_clause}

Create table:
 The CREATE TABLE statement is used to create a table in a database.
 SQL CREATE TABLE Syntax
CREATE TABLE table_name
(
column_name1 data_type,
column_name2 data_type,
column_name3 data_type,
....
)
- Explanation: creates a table named 'table_name' with the columns column_name1, column_name2, ...

 Example:
CREATE TABLE Customer (First_Name char(50), Last_Name char(50), Address char(50), City char(50), Country char(25), DOB date);
- Note:
To specify a default value, add "Default [value]" after the data type declaration. In the above example, if we want to default the column "Address" to 'Unknown' and "City" to 'Hanoi', we would type in:
CREATE TABLE Customer (First_Name char(50), Last_Name char(50),
Address char(50) default 'Unknown', City char(50) default 'Hanoi', Country char(25), DOB date);

SELECT statement
 The SELECT statement is used to select data from a database.
 The result is stored in a result table, called the result-set.
 SQL SELECT Syntax
SELECT column_name(s) FROM table_name
and
SELECT * FROM table_name
 Note: SQL is not case sensitive. SELECT is the same as select.

Comparison Operators
Comparison operators in conditions include:
>   where Mark1 > 9;
<   where Mark1 < 5;
>=  where Mark1 >= 9;
<=  where Mark1 <= 5;
=   where StudentName = 'Minh';
!=  where StudentName != 'Hoa';
<>  where StudentName <> 'Tu';

Logic Operators:
Logic operators in conditions include:
AND
  where Mark1 > 9 and StudentName = 'Hao';
OR
  where Mark1 > 9 or Mark2 > 9;
NOT (negation)
  where Mark2 is not null;
BETWEEN
  where Mark1 between 5 and 10;
LIKE
  where StudentName like '%Hao';
IN
  where Mark1 in (8, 9, 10);

AND Operator:
- The AND operator displays a record if both the first condition and the second condition are true
- Example:
SELECT * FROM Marks
WHERE Mark1 > 9 AND StudentName = 'Hao';

OR Operator:
 The OR operator displays a record if either the first condition or the second condition is true.
 Example:
SELECT * FROM Marks
Where Mark1 > 9 OR Mark2 > 9

Between Statement
 The BETWEEN operator selects a range of data between two values. The values can be numbers, text, or dates.
 SQL BETWEEN Syntax
SELECT column_name(s)
FROM table_name
WHERE column_name
BETWEEN value1 AND value2
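For illustration, a small query using the Customer table created earlier in this post (note that the date literal format can vary between databases):
Example:
SELECT * FROM Customer
WHERE DOB BETWEEN '1990-01-01' AND '1999-12-31';  -- customers born in the 1990s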

LIKE Statement:
- The LIKE operator is used to search for a specified pattern in a column.
- SQL LIKE Syntax
SELECT column_name(s)
FROM table_name
WHERE column_name LIKE pattern
- LIKE 'pattern-to-match', where the pattern can include special wildcard characters:
  - % matches 0 or more arbitrary characters
  - _ matches any one character
- Example:
SELECT * FROM Students
WHERE StudentName LIKE '%hoa%';

IN Operator:
 The IN operator allows you to specify multiple values in a WHERE clause.
 SQL IN Syntax
SELECT column_name(s)
FROM table_name
WHERE column_name IN (value1,value2,...)
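For illustration, using the Students table referenced in the other examples of this post (the names are made up):
Example:
SELECT * FROM Students
WHERE StudentName IN ('Hoa', 'Minh', 'Tu');  -- same as StudentName = 'Hoa' OR StudentName = 'Minh' OR StudentName = 'Tu'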

AS Clause:
 You can give a table or a column another name by using an alias. This can be a good thing to do if you have very long or complex table names or column names.
 An alias name could be anything, but usually it is short.
 SQL Alias Syntax for Tables
SELECT column_name(s)
FROM table_name
AS alias_name
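For illustration, an alias for the Students table used elsewhere in this post (column names taken from the earlier examples):
Example:
SELECT s.StudentName, s.DateofBirth
FROM Students AS s;  -- 's' is a short alias for Students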

TOP Clause:
 The TOP clause is used to specify the number of records to return.
 The TOP clause can be very useful on large tables with thousands of records. Returning a large number of records can impact on performance.
 Note: Not all database systems support the TOP clause.
 SQL Server Syntax
SELECT TOP number|percent column_name(s)
FROM table_name
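For illustration (SQL Server syntax, using the Students table from the other examples):
Example:
SELECT TOP 3 * FROM Students;           -- returns only the first 3 rows
SELECT TOP 50 PERCENT * FROM Students;  -- returns the first half of the rows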

ORDER BY Statement:
- The ORDER BY keyword is used to sort the result set by a specified column.
- The ORDER BY keyword sorts the records in ascending order by default.
- If you want to sort the records in descending order, you can use the DESC keyword.
 SQL ORDER BY Syntax
SELECT column_name(s)
FROM table_name
ORDER BY column_name(s) ASC|DESC
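For illustration, using the Marks table from the Logic Operators examples above:
Example:
SELECT StudentName, Mark1 FROM Marks
ORDER BY Mark1 DESC;  -- highest Mark1 first; omit DESC (or use ASC) for ascending order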

DISTINCT
 In a table, some of the columns may contain duplicate values. This is not a problem, however, sometimes you will want to list only the different (distinct) values in a table.
 The DISTINCT keyword can be used to return only distinct (different) values.
 SQL SELECT DISTINCT Syntax
SELECT DISTINCT column_name(s)
FROM table_name
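For illustration, using the Customer table created earlier in this post:
Example:
SELECT DISTINCT City FROM Customer;  -- each city appears only once in the result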

GROUP BY
 The GROUP BY statement is used in conjunction with the aggregate functions to group the result-set by one or more columns.
 SQL GROUP BY Syntax
SELECT column_name, aggregate_function(column_name)
FROM table_name
WHERE column_name operator value
GROUP BY column_name
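For illustration, using the Customer table created earlier (COUNT(*) is the aggregate function here; the WHERE clause is optional):
Example:
SELECT City, COUNT(*)
FROM Customer
GROUP BY City;  -- number of customers in each city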

SUBQUERY
- A SELECT query can be used within another SELECT condition; it is then known as a sub-query
- A sub-query can return only one attribute, having zero or more values
- The use of a view may provide a simpler query format than using techniques such as self-joins
- Operations with sub-queries: >, <, >=, <=, ALL, ANY, IN, NOT IN, EXISTS, UNION, MINUS, INTERSECT, ...

- Example: list all students who are younger than 'Hoa'
SELECT StudentName FROM Students
WHERE DateofBirth > ( SELECT DateofBirth FROM Students
WHERE StudentName = 'Hoa')

INSERT Statement:
 The INSERT INTO statement is used to insert a new row in a table.
 SQL INSERT INTO Syntax
INSERT INTO table_name
VALUES (value1, value2, value3,...)

- Note: There is another way to insert data:
INSERT INTO table_name (column_name1, column_name2,…)
VALUES (value1, value2, value3,...)
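For illustration, using the Customer table created earlier (the values are made up):
Example:
INSERT INTO Customer (First_Name, Last_Name, City)
VALUES ('Hoa', 'Nguyen', 'Hanoi');  -- columns not listed keep their default value (or NULL)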

DELETE Statement:
 The DELETE statement is used to delete rows in a table.
 SQL DELETE Syntax
DELETE FROM table_name
WHERE some_column=some_value
 Note:
Notice the WHERE clause in the DELETE syntax. The WHERE clause specifies which record or records should be deleted. If you omit the WHERE clause, all records will be deleted!
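For illustration, using the Customer table created earlier (the value is made up):
Example:
DELETE FROM Customer
WHERE Last_Name = 'Nguyen';  -- deletes only the matching rows; without WHERE, every row would be deleted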

UPDATE statement
 The UPDATE statement is used to update existing records in a table.
 SQL UPDATE Syntax
UPDATE table_name
SET column1=value, column2=value2,...
WHERE some_column=some_value
 Note:
Notice the WHERE clause in the UPDATE syntax. The WHERE clause specifies which record or records should be updated. If you omit the WHERE clause, all records will be updated!
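For illustration, using the Customer table created earlier (the values are made up):
Example:
UPDATE Customer
SET City = 'Hanoi'
WHERE First_Name = 'Hoa';  -- only the matching rows are changed; without WHERE, every row would be updated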


DROP statement
- The DROP TABLE statement is used to delete a table (and all of its data) from the database.
- DROP TABLE Syntax:
DROP TABLE table_name
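For illustration, using the Customer table created earlier:
Example:
DROP TABLE Customer;  -- removes the Customer table and all of its data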

Exercise:

1. List all students whose name is 'Hoa'
2. List the student(s) with the highest mark in maths
3. Add one or more students to the Students table
4. Update student marks: if a mark is less than 5, update it to 3
5. List the students whose name has the prefix 'Ng'









#13 How To Create A Test Case

Test Case Process
- Test Case: a set of test inputs, execution conditions, and expected results developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement.
- Test Script: a script used to run tests or check the output, or for automation testing
- Test Data: the data used for testing

Why Test Cases?
- They detail the test design and describe how to carry it out
- They predict the expected results
- They help new testers become familiar with the existing application/system without reading the requirements

Good Test Case
 Has a high probability of finding errors
 Clear of purpose
 Well organized
 Reviewable
 Maintainable
 Useful to other testers

Test Case Structure
- General information: cover sheet
  - Project information
  - Record of changes
- GUI: list all screen information so it can be checked easily
- [Module Name] test cases
- Pictures: list all screen designs that should be tested. These screens must be approved by the customer
- Test Report (refer to the Test Case template)

GUI – Graphic User Interface
 Screen Name
 Field Name
 Expected Output:
 Type
 Mandatory
 Editable
 Default value
 Max Length
 Range Value
 Test Status
 Test Date
 Notes.

GUI -  example

[Module] Test Case
 ID
 Test case description:
 List cases/scenario that will be tested
 List cases, designed in test design document for the specific function/module
 Test case procedures:
 Test actions taken by the actor when executing a testcase
 Test input: the actual values/data input by the actor at each step/action
 Expected output: the expected response from the application for a given step/action
 Inter-test case dependence: List all test cases that must be done before performing this case
 Actual output: the actual response from the application for a given step/action
 Result: Pass; Fail; Untested; N/A
 Note

Test Script
 Check test output
 Recorded automatically
 Coded manually: using test tools or standard programming languages like VB, C/C++, Java or SQL
 Test stub: temporary implementation of part of a program for unit test purposes
 Test driver: program which sets up an environment to call a module (or function) for testing

Test Script [Example]
 Check test output
 Recorded automatically: using RUBY test tool


#12 How to create a decision table and example

Decision Tables definition
 Concept: test the rules that govern handling of transactional situations
 Model: table (or Boolean graph) connecting conditions with actions
 Test derivation: fulfill conditions, check actions
 Coverage criteria: at least one test per combination of conditions (DT column)
 Bug hypothesis: improper action or missing action

Example: Deriving Tests
 In the example just shown, each column of the table is a testcase
 We will create the conditions (which are the test’s inputs)
 We will verify the actions (which are the test’s expected results)
 In some cases, we might generate more than one test case per column (more later)
 In this case, some of the test cases don’t make much sense; e.g.:
 Account not real but account active?
 Account not real but account within limit?
 Maybe we don’t need all the columns in our decision table?

Collapsing a Decision Table
- If the value of one or more particular conditions can't affect the actions for two or more combinations of conditions, we can collapse the decision table
- This involves combining two or more columns
- Combinable columns are often, but not always, next to each other
- Look for two or more columns that result in the same combination of actions (for all the actions in the table)
- Replace the conditions that are different in those columns with "-" (for don't care/doesn't matter/can't happen)
- Repeat this process until no further columns share the same combination of actions, or until a collapse would erase an important distinction
- Be careful with tables that have non-exclusive rules




#11 Decision table testing

How to design test cases from given software models using the following test design techniques.

Part 1: Decision table testing
 A Typical Structure of a Decision Table



Why use decision tables
- Equivalence partitioning and boundary value analysis tend to be more focused on the user interface.
- The other two specification-based techniques, decision tables and state transition testing, are more focused on business logic or business rules.

Definition




Steps to Create a decision table
1. List All Stub Conditions
2. Calculate the Number of Possible Combinations (Rules)
3. Place all of the Combinations into the Table
4. Reduce test combinations
5. Check covered combinations
6. Fill the Table with the Actions

Step 1: List All Stub Conditions (Causes)

Hints:
 Write down the values the cause/condition can assume
 Cluster related causes
 Put the most dominating cause first
 Put multi valued causes last


Step 2: Calculate combinations


Step 3: Place all of the Combinations into the Table


Step 4: Reduce combinations


Step 5: Check covered combinations


Step 6: Fill the Table with the Effects (Actions)


Sample: Specification Create a decision table

 Step One – List All Stub Conditions
 The condition stubs for the table would be:
• a, b, c form a triangle?
• a = b?
• a = c?
• b = c?
 Step Two – Calculate the Number of Possible
Combinations (Rules)
• Number of Rules = 2^(Number of Condition Stubs)
• So therefore, Number of Rules = 2^4 = 16

 Step Three&Four – Place all of the Combinations into the Table and reduce the combinations


 Step Five – Check Covered Combinations
• This step is a precautionary step to check for errors and redundant and inconsistent rules.
 Step Six – Fill the Table with the Actions


 Decision Table for the Triangle Problem









#10 Test Design / How to design test cases

- Requirement: suppose we are testing a program that generates the report cards for a group of 10000 students
- Problem: executing 1000 test cases for 10000 students is impossible
- Solution: we can divide the test data according to the grades:
  A grade: 80-100
  B grade: 60-80
  C grade: 40-60 ........
  Then test the application by picking the scores of 2-3 students belonging to each grade class. In this way we can test the application for the entire set of test data.
(Diagram: valid and invalid inputs are supplied to the system, which produces the outputs)

Part 1: Equivalence Partitioning


Equivalence Partitioning
- Equivalence partitioning is a test case design strategy in black-box testing
- This method is typically used to reduce the total number of test cases to a finite set of testable test cases, while still covering the maximum of the requirements
- Inputs to the software or system are divided into groups that are expected to exhibit similar behavior, so they are likely to be processed in the same way
- Equivalence partitions (or classes) can be found for both valid data and invalid data, i.e. values that should be rejected
- Tests can be designed to cover partitions. Equivalence partitioning is applicable at all levels of testing
- Equivalence partitioning as a technique can be used to achieve input and output coverage. It can be applied to human input, input via interfaces to a system, or interface parameters in integration testing.

How to divide equivalence class
 Input data should be divided into equivalence classes based on consideration of
 Valid vs. invalid input values
 Valid vs. invalid output values
 Similarity and difference of input values
 Similarity and difference of output values
 Similarity and difference of expected processing

Designing Test Cases Using EP
To use equivalence partitioning, you
will need to perform two steps:
1. Identify the equivalence classes
2. Design test cases
STEP 1: IDENTIFY EQUIVALENCE CLASSES
Take each input condition described in the specification and derive at least two equivalence classes for it.
One class represents the set of cases which satisfy the condition (the valid class) and one represents cases which do not (the invalid class )
STEP 2: DESIGN TEST CASES
List the selected cases in the test design document

Example
 A savings account in a bank earns a different rate of interest depending on the balance in the account
If a balance in the range $0 up to $100 has a 3% interest rate, a balance over $100 and up to $1000 has a 5% interest rate, and balances of $1000 and over have a 7% interest rate, we would initially identify three valid equivalence partitions and one invalid partition

Example: Define Test Cases
- Case 1: Valid partition (3% interest) - input one or more values from 0.00 to 100.00 (e.g. 55)
- Case 2: Valid partition (5% interest) - input one or more values from 100.01 to 999.99 (e.g. 345.67)
- Case 3: Valid partition (7% interest) - input one or more values of 1000.00 and above (e.g. 2345)
- Case 4: Invalid partition - input one or more values below 0.00 (e.g. -0.01)

Part 2: Boundary Value Analysis
Boundary Value Analysis
- Boundary value analysis (BVA) is based on testing at the boundaries between partitions
- Behavior at the edge of each equivalence partition is more likely to be incorrect, so boundaries are an area where testing is likely to yield defects
- The maximum and minimum values of a partition are its boundary values
- A boundary value for a valid partition is a valid boundary value; the boundary of an invalid partition is an invalid boundary value
- Tests can be designed to cover both valid and invalid boundary values. When designing test cases, a test for each boundary value is chosen
- Boundary value analysis can be applied at all test levels. It is relatively easy to apply and its defect-finding capability is high; detailed specifications are helpful
- This technique is often considered an extension of equivalence partitioning. It can be used on equivalence classes for user input on screen as well as, for example, on time ranges (e.g. time-outs, transactional speed requirements) or table ranges (e.g. a table size of 256*256).
- Boundary values may also be used for test data selection.

Example
- Sample: consider a printer that has an input option for the number of copies to be made, from 1 to 99.
- To apply boundary value analysis, we take the minimum and maximum (boundary) values from the valid partition (1 and 99 in this case).
- We also take the first or last value, respectively, in each of the invalid partitions adjacent to the valid partition (0 and 100 in this case). In this example we would have three equivalence partitioning tests (one from each of the three partitions) and four boundary value tests.

Why Both EP and BVA?
=> Because:
- Every boundary is in some partition; if you did only boundary value analysis you would also have tested every equivalence partition.
- If we only tested boundaries we would probably not give the users much confidence, as we would be using extreme values rather than normal values.
Extending EP & BVA
- Sample: if you are booking a flight, you have a choice of Economy/Coach, Premium Economy, Business or First Class tickets. Each of these is an equivalence partition in its own right and should be tested, but it doesn't make sense to talk about boundaries for this type of partition, which is a collection of valid things
=> Only equivalence partitioning is used for this sample

Designing Test Cases
Once you have identified the conditions you wish to test using equivalence partitioning and boundary value analysis, you design the test cases.
The more test conditions that can be covered in a single test case, the fewer test cases will be needed to cover all the conditions. This is usually the best approach to take for positive tests and for tests that you are reasonably confident will pass. However, if a test fails, we need to find out why it failed - which test condition was handled incorrectly? We need to strike a good balance between covering too many and too few test conditions in our tests.

Part 3: Case study 1
 If an internal telephone system for a company with 200 telephones has 3-digit extension numbers from 100 to 699, we can identify the following partitions and boundaries
 Time: 5’

Case study 2
 Calculate Payment Application
 Time: 15’
