1000 Interview questions part 1



Test Case Design – Interview Questions & Answers (1–50)

1. What is a test case?
A test case is a set of actions executed to verify a particular feature or functionality of your application.

2. What are the components of a test case?
Test case ID, Description, Preconditions, Steps, Test Data, Expected Result, Actual Result, Status, Comments.
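The components above can be sketched as a simple data structure. This is a minimal illustration, not a standard format; the field names and the example values are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    # One field per component listed above.
    case_id: str
    description: str
    preconditions: list = field(default_factory=list)
    steps: list = field(default_factory=list)
    test_data: dict = field(default_factory=dict)
    expected_result: str = ""
    actual_result: str = ""   # filled in after execution
    status: str = "Not Run"
    comments: str = ""

tc = TestCase(
    case_id="TC_001",
    description="Login with valid credentials",
    steps=["Enter username/password", "Click login"],
    expected_result="Redirect to dashboard",
)
print(tc.status)  # a freshly written test case starts as "Not Run"
```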

3. What is test case design?
It's the process of creating a set of inputs, execution conditions, and expected results to verify if the system meets requirements.

4. Why is test case design important?
It ensures effective testing coverage, reduces testing time, and helps find more defects.

5. Name some common test case design techniques.
Equivalence Partitioning, Boundary Value Analysis, Decision Table Testing, State Transition Testing, Error Guessing, Use Case Testing.

6. What is Equivalence Partitioning?
A technique that divides input data into valid and invalid partitions to reduce the number of test cases.

7. Give an example of Equivalence Partitioning.
For an input age field (18–60), valid: 25; invalid: 17, 61.
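As a sketch, the age example above gives three partitions: invalid-low (< 18), valid (18–60), and invalid-high (> 60). Under EP, one representative value per partition is enough (the `is_valid_age` helper here is hypothetical):

```python
def is_valid_age(age: int) -> bool:
    # Hypothetical validator for an age field accepting 18-60.
    return 18 <= age <= 60

# One representative value per equivalence partition:
assert is_valid_age(25)        # valid partition (18-60)
assert not is_valid_age(17)    # invalid-low partition (< 18)
assert not is_valid_age(61)    # invalid-high partition (> 60)
```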

8. What is Boundary Value Analysis (BVA)?
Testing values at the boundaries of input domains (e.g., 17, 18, 60, 61).
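Continuing the age example, BVA picks the values at and immediately outside each boundary (again using a hypothetical validator):

```python
def is_valid_age(age: int) -> bool:
    # Hypothetical validator for an age field accepting 18-60.
    return 18 <= age <= 60

# BVA: test at each boundary and one step outside it.
boundary_cases = {17: False, 18: True, 60: True, 61: False}
for age, expected in boundary_cases.items():
    assert is_valid_age(age) == expected
```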

9. What's the main difference between EP and BVA?
EP tests representative values from partitions; BVA focuses on edge values.

10. What is Decision Table Testing?
A technique that deals with combinations of inputs and their corresponding outputs.

11. When is Decision Table Testing used?
When system behavior depends on different input combinations or business rules.
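A decision table enumerates every combination of conditions and the expected action for each. A minimal sketch for a hypothetical login rule (the rule and outcomes are assumptions for illustration):

```python
def login_outcome(valid_user: bool, valid_pass: bool) -> str:
    # Hypothetical business rule: both inputs must be valid to log in.
    return "dashboard" if valid_user and valid_pass else "error"

# Decision table: one row per input combination.
# (valid_user, valid_pass) -> expected outcome
decision_table = [
    (True,  True,  "dashboard"),
    (True,  False, "error"),
    (False, True,  "error"),
    (False, False, "error"),
]
for valid_user, valid_pass, expected in decision_table:
    assert login_outcome(valid_user, valid_pass) == expected
```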

12. What is State Transition Testing?
A method used to test system behavior for various input states and transitions.

13. What’s an example of State Transition Testing?
An ATM's states: Card Inserted → PIN Entered → Transaction. Testing covers both valid transitions and invalid ones (e.g., entering a PIN before a card is inserted).
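The ATM example can be sketched as a transition table. The state and event names here (including an assumed starting state, `"Idle"`) are hypothetical:

```python
# Allowed transitions: (current state, event) -> next state.
TRANSITIONS = {
    ("Idle", "insert_card"): "Card Inserted",
    ("Card Inserted", "enter_pin"): "PIN Entered",
    ("PIN Entered", "start_transaction"): "Transaction",
}

def next_state(state: str, event: str) -> str:
    key = (state, event)
    if key not in TRANSITIONS:
        raise ValueError(f"invalid transition: {event} from {state}")
    return TRANSITIONS[key]

# Positive test: follow a valid path through the machine.
s = next_state("Idle", "insert_card")
s = next_state(s, "enter_pin")
assert next_state(s, "start_transaction") == "Transaction"

# Negative test: entering a PIN with no card must be rejected.
try:
    next_state("Idle", "enter_pin")
    assert False, "should have raised"
except ValueError:
    pass
```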

14. What is Error Guessing?
A technique based on intuition, experience, and past defects to guess likely error-prone areas.

15. What are Use Case Tests?
They verify that a user can perform a specific task or goal from start to end.

16. What is a positive test case?
A test to check if the system works as expected with valid inputs.

17. What is a negative test case?
A test to verify the system's behavior with invalid inputs.

18. What is test data?
Input data used during test case execution.

19. What is a test suite?
A collection of test cases grouped for execution.

20. What is test coverage?
A metric that shows how much of the application is tested.

21. How do you measure test coverage?
By mapping test cases to requirements or code coverage tools.

22. What is a test scenario?
A high-level test idea or functionality to be tested.

23. How is a test scenario different from a test case?
Scenarios are broad; test cases are detailed and specific.

24. What is test case prioritization?
Arranging test cases in order of execution based on risk, frequency, or impact.

25. What are high-priority test cases?
Those that cover core functionality or high-risk areas.

26. What are low-priority test cases?
Those that test less-used features or cosmetic elements.

27. What is a traceability matrix?
A document that maps test cases to requirements to ensure coverage.
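In its simplest form, a traceability matrix is a mapping from requirements to the test cases that cover them, which also makes coverage gaps easy to find. The requirement and test-case IDs below are hypothetical:

```python
# Minimal traceability matrix: requirement -> covering test cases.
traceability = {
    "REQ-001": ["TC_001", "TC_002"],
    "REQ-002": ["TC_003"],
    "REQ-003": [],  # gap: no test case covers this requirement yet
}

uncovered = [req for req, cases in traceability.items() if not cases]
print("Uncovered requirements:", uncovered)
```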

28. What is exploratory testing?
Simultaneous learning, test design, and execution without pre-defined test cases.

29. What is the advantage of writing test cases before testing?
It improves clarity, structure, and test coverage.

30. What tools are used for writing test cases?
Excel, TestRail, Zephyr, TestLink, Xray, qTest.

31. Can automated scripts replace manual test cases?
No; they serve different purposes. Manual testing suits exploratory work, while automation is best for repetitive regression checks.

32. How do you ensure your test cases are effective?
By reviewing, updating, aligning with requirements, and ensuring traceability.

33. What is data-driven testing?
Creating test cases where input data is separated from the test logic.

34. What is parameterized testing?
A technique where the same test logic runs with multiple sets of input values.
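Both ideas can be sketched with the standard library's `unittest`: the input data lives in a separate table, and `subTest` runs the same logic once per parameter set, reporting each independently. The age rule reused here is hypothetical:

```python
import unittest

class AgeValidationTest(unittest.TestCase):
    # Data-driven: input data is kept apart from the test logic.
    CASES = [(17, False), (18, True), (60, True), (61, False)]

    def test_age_boundaries(self):
        for age, expected in self.CASES:
            # Parameterized: each (age, expected) pair is its own sub-test.
            with self.subTest(age=age):
                self.assertEqual(18 <= age <= 60, expected)

# Run programmatically so the result is visible without a test runner.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(AgeValidationTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print("passed:", result.wasSuccessful())
```

Frameworks such as pytest offer the same idea via `@pytest.mark.parametrize`.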

35. What are acceptance criteria?
Conditions that a product must meet to be accepted by users or stakeholders.

36. What is smoke testing?
Initial testing to ensure basic functionality works before detailed testing.

37. What is sanity testing?
A quick test to check if a specific bug is fixed or a small feature works.

38. What is regression testing?
Testing existing functionality to ensure it works after changes or bug fixes.

39. How do you handle test cases for frequently changing requirements?
Use modular, reusable test cases and keep them updated through reviews.

40. What is peer review of test cases?
A process where other testers or QA leads review test cases for quality.

41. What is a reusable test case?
A test that can be applied across modules with minor changes.

42. Should test cases include expected results?
Yes. Without an expected result there is nothing to compare the actual outcome against, so pass/fail cannot be judged objectively.

43. How do you test UI elements in test cases?
Include checks for layout, alignment, responsiveness, and element visibility.

44. What is test case versioning?
Tracking changes to test cases over time.

45. How do you avoid redundant test cases?
By reviewing the test suite and using traceability to check overlaps.

46. Can you give an example of a test case format?
Yes:

  • ID: TC_001

  • Title: Login with valid credentials

  • Steps: Enter username/password, click login

  • Expected: Redirect to dashboard

  • Actual: (to be filled post-execution)

47. What is test case maintenance?
Updating test cases when requirements change.

48. What are flaky test cases?
Tests that fail intermittently without changes in the code.
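A common cause is hidden nondeterminism (timing, ordering, randomness) that the test does not control. A minimal sketch of a flaky check and a deterministic rewrite, using Python's `random` module for illustration:

```python
import random

def flaky_check() -> bool:
    # Flaky: depends on an uncontrolled random source,
    # so it fails intermittently with no code change.
    return random.random() > 0.1

def stable_check(rng: random.Random) -> bool:
    # Stable: the source of nondeterminism is injected,
    # so the test can pin it with a fixed seed.
    return rng.random() > 0.1

# Same seed -> same outcome every run.
assert stable_check(random.Random(42)) == stable_check(random.Random(42))
```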

49. How do you write test cases for non-functional requirements?
Focus on performance, usability, reliability, etc., with measurable criteria.

50. What is the role of test cases in Agile?
They are lean, often scenario-based, and evolve during sprints. 
