Chapter 4 Q&A: Software Testing Techniques
Choose the answer that best suits each of the sentences given:

1. Principles of software testing should include
   a. All tests should be traceable to customer requirements.
   b. Tests should be planned long before testing begins.
   c. Testing should begin "in the small" and progress toward testing "in the large."
   d. All of the above
   e. None of the above

2. Principles of software testing should include
   a. A technical assessment of a work product created during the software engineering process
   b. Testing should begin in the small and progress toward testing in the large.
   c. A software quality assurance mechanism
   d. All of the above
   e. None of the above

3. In software testing, the Pareto Principle means
   a. 20% of all errors uncovered during testing will likely be traceable to 80% of all program components
   b. 40% of all errors uncovered during testing will likely be traceable to 60% of all program components
   c. 60% of all errors uncovered during testing will likely be traceable to 40% of all program components
   d. 80% of all errors uncovered during testing will likely be traceable to 20% of all program components

4. In software testing, exhaustive testing is possible
   a. True
   b. False
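Question 4 turns on a simple counting argument: even a function taking just two 32-bit integers has 2^64 input combinations. A quick sketch of the arithmetic (the throughput figure is a hypothetical, optimistic assumption):

```python
# Why exhaustive testing is infeasible: count the input space of a
# function taking two 32-bit integer arguments.
combinations = 2 ** 32 * 2 ** 32          # every pair of 32-bit values
tests_per_second = 10 ** 9                # assumed: one billion tests/sec
seconds_per_year = 60 * 60 * 24 * 365

years = combinations / (tests_per_second * seconds_per_year)
print(f"{combinations} combinations ≈ {years:.0f} years of testing")
```

Even at a billion tests per second, running every combination would take centuries, which is why testing selects representative cases instead.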
5. In testable software characteristics, operability means
   a. The degree to which testing can be automated and optimized
   b. It operates cleanly
   c. Few changes are requested during testing
   d. All of the above
   e. None of the above
6. In testable software characteristics, observability means
   a. Testing can be targeted
   b. Incorrect output is easily identified
   c. The software system is built from independent modules that can be tested independently
   d. Reduce complex architecture and logic to simplify tests
7. In testable software characteristics, controllability means
   a. Incorrect output is easily identified
   b. The better we can control the software, the more testing can be automated and optimized
   c. Testing can be targeted
   d. The degree to which testing can be automated and optimized
8. In testable software characteristics, decomposability means
   a. The more information about the design we have, the smarter we will test
   b. Reduce complex architecture and logic to simplify tests
   c. The software system is built from independent modules that can be tested independently
   d. Incorrect output is easily identified

9. In testable software characteristics, simplicity means
   a. Reduce complex architecture and logic to simplify tests
   b. Few changes are requested during testing
   c. The more information about the design we have, the smarter we will test
   d. The results of each test case are readily observed

10. In testable software characteristics, understandability means
   a. It operates cleanly
   b. Incorrect output is easily identified
   c. Reduce complex architecture and logic to simplify tests
   d. The more information about the design we have, the smarter we will test
11. In testable software characteristics, stability means
   a. The degree to which testing can be automated and optimized
   b. Few changes are requested during testing
   c. Reduce complex architecture and logic to simplify tests
   d. Testing can be targeted
12. A good test should
   a. have a high probability of finding an error
   b. not be redundant
   c. be "best of breed"
   d. All of the above
   e. None of the above
13. A good test should
   a. be too simple
   b. be too complex
   c. be neither too simple nor too complex
   d. be either too simple or too complex
14. White-Box testing derives test cases that
   a. Guarantee that all independent paths within a module have been exercised at least once.
   b. Exercise all logical decisions on their true and false sides.
   c. Execute all loops at their boundaries and within their operational bounds.
   d. All of the above
   e. None of the above

15. White-Box testing derives test cases that
   a. Inspect a fraction aᵢ of each software work product i, and record the number of faults fᵢ found within aᵢ.
   b. Develop a gross estimate of the number of faults within work product i by multiplying fᵢ by 1/aᵢ.
   c. Sort the work products in descending order according to the gross estimate of the number of faults in each.
   d. All of the above
   e. None of the above

16. Black-Box testing finds errors in categories such as
   a. Logical decisions on their true and false sides
   b. Loops at their boundaries and within their operational bounds
   c. Interface errors
   d. All of the above
   e. None of the above
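The sampling procedure described in question 15 (inspect a fraction aᵢ of each work product, count the faults fᵢ found, scale by 1/aᵢ, then sort descending) can be sketched directly. The product names and numbers below are made up for illustration:

```python
# Gross fault estimation by sampling: inspect a fraction of each work
# product, then extrapolate the fault count to the whole product.
products = [
    # (name, fraction inspected a_i, faults found in that fraction f_i)
    ("parser", 0.25, 3),
    ("scheduler", 0.10, 1),
    ("ui", 0.50, 4),
]

estimates = [(name, f / a) for name, a, f in products]  # f_i * 1/a_i
estimates.sort(key=lambda e: e[1], reverse=True)        # worst first

for name, est in estimates:
    print(f"{name}: ~{est:.0f} faults estimated")
```

Sorting the estimates points review effort at the work products likely to contain the most faults, in the spirit of the Pareto Principle from question 3.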
17. Black-Box testing finds errors in categories such as
   a. Logical decisions on their true and false sides
   b. Incorrect or missing functions
   c. All independent paths within a module have been exercised at least once
   d. All of the above
   e. None of the above

18. Black-Box testing finds errors in categories such as
   a. Exercise internal data structures to ensure their validity
   b. Exercise all logical decisions on their true and false sides
   c. Errors in data structures or external database access
   d. All of the above
   e. None of the above
19. Black-Box testing finds errors in categories such as
   a. Initialization and termination errors
   b. Behavior or performance errors
   c. Incorrect or missing functions
   d. All of the above
   e. None of the above
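Black-box tests like those in questions 16-19 treat the code as opaque and check only inputs against the specified outputs. A minimal sketch, using a hypothetical `classify_triangle` function as the unit under test:

```python
# Black-box view: we know only the spec of classify_triangle, not its
# internals. Each assertion targets a spec category, so a missing or
# incorrect function shows up as a failed check.
def classify_triangle(a, b, c):
    """Return 'invalid', 'equilateral', 'isosceles', or 'scalene'."""
    if a <= 0 or b <= 0 or c <= 0 or a + b <= c or a + c <= b or b + c <= a:
        return "invalid"
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

# Spec-driven checks, one per input category:
assert classify_triangle(3, 3, 3) == "equilateral"
assert classify_triangle(3, 3, 4) == "isosceles"
assert classify_triangle(3, 4, 5) == "scalene"
assert classify_triangle(1, 2, 3) == "invalid"   # degenerate triangle
assert classify_triangle(0, 1, 1) == "invalid"   # non-positive side
```

Note that none of the checks depend on how the function decides; the same tests would pass against any correct implementation of the spec.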
20. In unit testing, some considerations should be taken, such as
   a. The module interface is tested to ensure information flows properly.
   b. Local data structures are examined to ensure that local data maintains its integrity.
   c. All independent paths are exercised to ensure all statements are executed at least once.
   d. All of the above
   e. None of the above

21. In unit testing, some considerations should be taken, such as
   a. Modules operate properly at boundaries established to limit or restrict processing.
   b. Error-handling paths are tested.
   c. Local data structures are examined to ensure that local data maintains its integrity.
   d. All of the above
   e. None of the above

22. In unit testing, some considerations should be taken, such as
   a. Use common sense and organizational sensitivity when interpreting data.
   b. Provide regular feedback to the individuals and teams who collect measures.
   c. Work with practitioners and teams to set clear goals and the metrics that will be used to achieve them.
   d. All of the above
   e. None of the above
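The unit-testing considerations in questions 20-21 can be applied to a single hypothetical module function: boundary behaviour and the error-handling path each get their own check.

```python
# Unit testing in miniature: exercise the boundaries that limit
# processing, and the error-handling path just beyond them.
def buffer_slice(data, n):
    """Return the first n items; n must be within 0..len(data)."""
    if not 0 <= n <= len(data):
        raise ValueError("n out of range")   # error-handling path
    return data[:n]

items = [1, 2, 3]
assert buffer_slice(items, 0) == []          # lower boundary
assert buffer_slice(items, 3) == [1, 2, 3]   # upper boundary
try:
    buffer_slice(items, 4)                   # just past the boundary
except ValueError:
    pass                                     # error path exercised
else:
    raise AssertionError("error path not exercised")
```

Testing at and just beyond the boundaries catches the off-by-one mistakes that mid-range inputs never reveal.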
23. Test cases should uncover errors such as
   a. Comparison of different data types.
   b. Incorrect logical operators or precedence.
   c. Expectation of equality when precision errors make equality unlikely.
   d. All of the above
   e. None of the above

24. Test cases should uncover errors such as
   a. Incorrect comparison of variables.
   b. Improper or nonexistent loop termination.
   c. Improperly modified loop variables.
   d. All of the above
   e. None of the above
25. Some integration testing strategy options could be
   a. The quality-related approach
   b. The big bang approach
   c. The productive approach
   d. All of the above
   e. None of the above

26. Some integration testing strategy options could be
   a. The defect removal strategy
   b. An incremental construction strategy
   c. The analyzing-risk strategy
   d. All of the above
   e. None of the above
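An incremental construction strategy (question 26b) integrates one module at a time, substituting stubs for neighbours that are not ready yet. A minimal sketch, with all names hypothetical:

```python
# Incremental integration: test the report module before the real
# database module exists, by passing in a stub with the same interface.
def database_stub(query):
    """Stand-in for the unfinished database module."""
    return [("alice", 3), ("bob", 1)]      # canned, predictable rows

def build_report(fetch):
    """Report module under test; `fetch` is the database interface."""
    rows = fetch("SELECT name, errors FROM runs")
    return {name: count for name, count in rows}

report = build_report(database_stub)
assert report == {"alice": 3, "bob": 1}
```

When the real database module arrives, it replaces the stub and the same test runs against the growing, partially integrated system; a big bang approach would instead wire everything together at once and debug the result.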
27. System Testing is
   a. Estimation of resources, cost, and schedule for a software engineering effort
   b. A series of tests whose primary purpose is to fully exercise the computer-based system
   c. The stability of product requirements and the environment that supports the software engineering effort
   d. All of the above
   e. None of the above
28. In system testing, recovery testing is defined as
   a. Verifying that the protection mechanisms built into a system will protect it from improper penetration.
   b. Forcing the software to fail and verifying that recovery is properly performed.
   c. Executing the system in a manner that demands resources in abnormal quantity, frequency, or volume.
   d. Testing the run-time performance of software within the context of an integrated system.

29. In system testing, security testing is defined as
   a. Testing the run-time performance of software within the context of an integrated system.
   b. Executing the system in a manner that demands resources in abnormal quantity, frequency, or volume.
   c. Forcing the software to fail and verifying that recovery is properly performed.
   d. Verifying that the protection mechanisms built into a system will protect it from improper penetration.

30. In system testing, performance testing is defined as
   a. Testing the run-time performance of software within the context of an integrated system.
   b. Executing the system in a manner that demands resources in abnormal quantity, frequency, or volume.
   c. Forcing the software to fail and verifying that recovery is properly performed.
   d. Verifying that the protection mechanisms built into a system will protect it from improper penetration.
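Performance testing (question 30) checks run-time behaviour against a budget. A minimal sketch with a toy operation and an assumed, deliberately generous time budget; real performance tests run against the integrated system:

```python
import time

# Performance testing in miniature: time an operation and assert it
# stays within a (hypothetical) budget.
def operation():
    return sum(range(10_000))

start = time.perf_counter()
result = operation()
elapsed = time.perf_counter() - start

assert result == 49995000   # correctness still checked alongside timing
assert elapsed < 0.5        # generous budget, so the check is not flaky
```

The budget should come from the system's stated performance requirements; a tight, arbitrary threshold produces flaky tests rather than useful signal.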