
Manual Testing Interview Q&A (Part 2 of 1000 Interview Questions): Severity, Priority, Bug Reports, Test Cases, Test Suites, Test Reports


I. Severity and Priority (20 Q&A)

1. What is the difference between severity and priority?

  • Severity refers to the impact of a defect on the functionality.

  • Priority refers to the urgency of fixing the defect.

2. Who decides severity and who decides priority?

  • Severity is usually decided by the QA/tester.

  • Priority is usually set by the product manager or project manager.

3. Can a high severity bug have low priority? Give an example.

  • Yes. Example: A crash in a rarely used admin report that isn't needed for the upcoming release.

4. Can a low severity bug have high priority? Give an example.

  • Yes. Example: A typo in the company's name on the homepage.

5. What are the different levels of severity?

  • Critical, Major, Moderate, Minor, Trivial

6. What are the different levels of priority?

  • High, Medium, Low

7. How do you handle a critical severity bug reported at the last minute?

  • Notify the team immediately, assess risk, involve the release manager, and determine if the release should be halted.

8. How do you prioritize bugs in a release cycle?

  • By evaluating business impact, customer need, frequency of occurrence, and severity.
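
One lightweight way to make that evaluation concrete is a weighted scoring model. The sketch below is purely illustrative; the factor names, weights, and the `Bug` structure are assumptions, not an industry standard:

```python
from dataclasses import dataclass

@dataclass
class Bug:
    title: str
    business_impact: int  # 1 (low) .. 5 (high)
    customer_need: int    # 1 .. 5
    frequency: int        # 1 (rare) .. 5 (affects every user)
    severity: int         # 1 (trivial) .. 5 (critical)

def priority_score(bug: Bug) -> float:
    # Weights are illustrative; teams tune these during triage.
    return (0.35 * bug.business_impact +
            0.25 * bug.customer_need +
            0.20 * bug.frequency +
            0.20 * bug.severity)

bugs = [Bug("Checkout crash", 5, 5, 4, 5), Bug("Footer typo", 2, 1, 5, 1)]
for bug in sorted(bugs, key=priority_score, reverse=True):
    print(f"{priority_score(bug):.2f}  {bug.title}")
```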

9. What is the impact of a high severity, high priority bug?

  • Immediate attention required; the bug is critical and affects major functionality.

10. How do developers and testers communicate about severity vs priority?

  • Through defect triage meetings, comments in bug tracking tools, and collaborative discussion.

11. What tools help in managing severity and priority?

  • JIRA, Bugzilla, TestRail, Quality Center (HP ALM), Mantis

12. How do you determine severity if the bug affects only one module?

  • Assess how important that module is and how critical the bug’s impact is within that scope.

13. Can severity change over time? Explain with a case.

  • Yes. Severity can be raised when new information reveals a bigger functional impact. Example: a UI misalignment logged as low severity is later found to hide the submit button on small screens, blocking a core workflow.

14. Who is responsible if a high severity bug is missed?

  • The QA team is typically accountable, but a root cause analysis helps identify exactly where the defect slipped through.

15. How do stakeholders influence bug priority?

  • They define business needs and user expectations, directly influencing bug prioritization.

16. What’s an example of a cosmetic issue with high priority?

  • Wrong logo on a product launch landing page during a marketing campaign.

17. How does customer impact affect severity or priority?

  • Direct customer impact can increase both severity (if it's critical) and priority (due to urgency).

18. How do you justify a priority change to a product manager?

  • Use customer complaints, analytics, and potential business impact to explain the reasoning.

19. Can automated tests determine severity or priority?

  • No. They can detect failures, but assessment requires human analysis.

20. How do you track bugs based on severity over time?

  • Use filters, charts, and dashboards in bug tracking tools to monitor severity trends.
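
Beyond built-in dashboards, the same trend can be computed from a tracker export. A minimal sketch, assuming a CSV export named `bugs_export.csv` with `created` (ISO dates) and `severity` columns; the file and column names are hypothetical and vary by tool:

```python
import csv
from collections import Counter

# Count bugs per (month, severity) from a hypothetical tracker export.
trend = Counter()
with open("bugs_export.csv", newline="") as f:
    for row in csv.DictReader(f):
        month = row["created"][:7]  # "2024-03" from "2024-03-15"
        trend[(month, row["severity"])] += 1

for (month, severity), count in sorted(trend.items()):
    print(month, severity, count)
```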


II. Bug Reports (20 Q&A)

21. What is a bug report?

  • A document that describes a defect found during testing.

22. What are the main elements of a bug report?

  • Title, Description, Steps to Reproduce, Expected Result, Actual Result, Severity, Priority, Environment, Attachments, Status.

23. Why is a good bug report important?

  • It helps developers understand and fix the issue efficiently.

24. What tools are used to file bug reports?

  • JIRA, Bugzilla, Mantis, Redmine, GitHub Issues

25. What makes a bug report effective?

  • Clarity, completeness, and reproducibility.

26. What is the difference between open and closed status?

  • Open means the bug is reported and pending action; Closed means it is resolved and verified.

27. What is reproducibility in a bug report?

  • Ability to consistently reproduce the defect using the steps provided.

28. How do you handle a non-reproducible bug?

  • Gather more logs, check environment/configuration, and involve the reporter.

29. What is the impact of unclear steps in a bug report?

  • Developers may be unable to reproduce the issue, leading to back-and-forth clarification and delayed fixes.

30. Who reviews bug reports?

  • Developers, QA leads, and sometimes product managers.

31. What happens if a bug is marked as "Won’t Fix"?

  • It won’t be fixed due to business or technical constraints.

32. What’s a duplicate bug?

  • A bug that reports an issue already logged earlier; it is usually closed with a reference to the original report.

33. What is regression testing in relation to bugs?

  • Testing the areas around a fixed bug to ensure the fix hasn't broken existing functionality; verifying the fix itself is called retesting (confirmation testing).

34. Can users report bugs? How?

  • Yes, through support tickets or in-app feedback systems.

35. What is a blocker bug?

  • A bug that stops further testing or development.

36. What is the life cycle of a bug?

  • New → Assigned → Open → Fixed → Retest → Verified → Closed (or Reopened)
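
This flow can be expressed as a small state machine, which is also how many trackers enforce workflow rules. A sketch of the transitions listed above (real tool workflows vary):

```python
# Allowed transitions for the life cycle in the answer above.
TRANSITIONS = {
    "New": {"Assigned"},
    "Assigned": {"Open"},
    "Open": {"Fixed"},
    "Fixed": {"Retest"},
    "Retest": {"Verified", "Reopened"},
    "Reopened": {"Assigned"},
    "Verified": {"Closed"},
    "Closed": set(),
}

def move(status: str, new_status: str) -> str:
    if new_status not in TRANSITIONS[status]:
        raise ValueError(f"Cannot move from {status} to {new_status}")
    return new_status

status = "New"
for step in ("Assigned", "Open", "Fixed", "Retest", "Verified", "Closed"):
    status = move(status, step)
print(status)  # Closed
```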

37. How do you ensure consistency in bug reporting?

  • Use templates and maintain standards.

38. What is bug triage?

  • The process of reviewing and prioritizing reported bugs.

39. What is severity in a bug report?

  • The impact level of the defect.

40. What is priority in a bug report?

  • The urgency to resolve the defect.


III. Test Cases (20 Q&A)

41. What is a test case?

  • A document that outlines inputs, execution conditions, and expected results.

42. What is the structure of a test case?

  • Test Case ID, Title, Description, Preconditions, Steps, Expected Result, Actual Result, Status.

43. What makes a good test case?

  • Clear, concise, repeatable, and traceable to requirements.

44. How do you write test cases from requirements?

  • Analyze requirements, identify scenarios, and derive steps to verify them.

45. What is positive and negative testing?

  • Positive tests check valid inputs; negative tests check invalid inputs.
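
As an example, here are both kinds of tests for a hypothetical `validate_age` function (the function and its 18-60 rule are assumptions for illustration), written with pytest:

```python
import pytest

def validate_age(age):
    """Hypothetical rule: accept integer ages 18 through 60 inclusive."""
    if not isinstance(age, int) or isinstance(age, bool):
        raise TypeError("age must be an integer")
    return 18 <= age <= 60

def test_valid_age():        # positive test: a valid input is accepted
    assert validate_age(30) is True

def test_underage():         # negative test: an invalid value is rejected
    assert validate_age(17) is False

def test_wrong_type():       # negative test: a bad type raises an error
    with pytest.raises(TypeError):
        validate_age("thirty")
```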

46. What is boundary value analysis in test cases?

  • Testing edge values of input ranges.
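
Continuing the hypothetical 18-60 age rule from above: boundary value analysis tests the values at and immediately around each edge of the range. A parametrized pytest sketch:

```python
import pytest

def validate_age(age):  # same hypothetical 18-60 rule as above
    return 18 <= age <= 60

@pytest.mark.parametrize("age, expected", [
    (17, False),  # just below the lower boundary
    (18, True),   # lower boundary
    (19, True),   # just above the lower boundary
    (59, True),   # just below the upper boundary
    (60, True),   # upper boundary
    (61, False),  # just above the upper boundary
])
def test_age_boundaries(age, expected):
    assert validate_age(age) is expected
```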

47. What is equivalence partitioning?

  • Grouping inputs with similar behavior and testing one from each group.
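
For the same hypothetical 18-60 rule, equivalence partitioning gives three partitions (below range, in range, above range), and one representative value from each is considered enough:

```python
import pytest

def validate_age(age):  # same hypothetical 18-60 rule as above
    return 18 <= age <= 60

# One representative per partition: <18 invalid, 18-60 valid, >60 invalid.
@pytest.mark.parametrize("age, expected", [
    (10, False),  # invalid partition: below the range
    (35, True),   # valid partition: inside the range
    (75, False),  # invalid partition: above the range
])
def test_age_partitions(age, expected):
    assert validate_age(age) is expected
```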

48. How many test cases should be written for a feature?

  • As many as needed to cover all scenarios, edge cases, and risks.

49. What is a test case review?

  • Reviewing test cases with peers or leads to ensure coverage and accuracy.

50. How do you prioritize test cases?

  • Based on risk, criticality, frequency of use, and feature importance.

51. What is a reusable test case?

  • A test case that can be used across different modules or releases.

52. How do you link test cases to requirements?

  • Using traceability matrices or linking in test management tools.
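
At its simplest, a traceability matrix is a requirement-to-test-case mapping. A minimal sketch with hypothetical IDs, including a check for uncovered requirements:

```python
# Hypothetical requirement IDs mapped to the test cases that cover them.
traceability = {
    "REQ-001": ["TC-101", "TC-102"],
    "REQ-002": ["TC-103"],
    "REQ-003": [],  # not yet covered
}

uncovered = [req for req, cases in traceability.items() if not cases]
print("Requirements without coverage:", uncovered)  # ['REQ-003']
```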

53. What is a test case repository?

  • A centralized location for storing test cases.

54. Can a test case fail even if the application works?

  • Yes, due to incorrect expectations or test data.

55. What is exploratory testing in relation to test cases?

  • Testing without predefined cases, in which the tester simultaneously learns the application, designs tests, and executes them based on experience.

56. How often should test cases be updated?

  • Whenever there are changes in requirements or functionality.

57. What is automation in test cases?

  • Writing scripts to execute test cases automatically.
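
For instance, a manual login test case can be scripted with Selenium WebDriver. A hedged sketch, assuming a page at `https://example.com/login` with `username`, `password`, and `login` element IDs (the URL, IDs, and expected heading are all hypothetical):

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    # Steps mirror the manual test case: open the page, enter credentials, submit.
    driver.get("https://example.com/login")
    driver.find_element(By.ID, "username").send_keys("testuser")
    driver.find_element(By.ID, "password").send_keys("secret")
    driver.find_element(By.ID, "login").click()

    # Expected result: the dashboard heading appears after login.
    heading = driver.find_element(By.TAG_NAME, "h1").text
    assert "Dashboard" in heading
finally:
    driver.quit()
```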

58. What is a sanity test case?

  • A quick, narrow test run after a minor change or bug fix to verify that the affected functionality still works as intended.

59. What is a smoke test case?

  • Shallow, broad tests of the major functionalities to verify a build is stable enough for further testing.

60. How do you handle failed test cases?

  • Log defects, analyze the root cause, and retest after fixes.
