What are test cases and how to write them?

A software test case is a set of planned and executed actions to assess the proper functioning of an application, system, or website in accordance with specified requirements. To create effective test cases, it is crucial to cover all potential usage scenarios and follow a methodical and meticulous approach.

Throughout the software development process, the Quality Assurance (QA) team plays a key role in ensuring that digital products are launched to the market with the expected functionalities, aiming for user satisfaction.

A key responsibility in this area is the careful creation of test cases, which serve as a guide to evaluate software, identifying problems and errors, while enabling a documented, clear, and efficient tracking process.

This article covers different aspects of test cases: how they relate to user stories, tips on writing them, test case examples, and tools you can use. We also give advice on designing test cases and explain why it is important to keep them up to date.

In this article:

  • What is a test case?
  • From the user story to the test case
  • How to organize information to create test cases?
  • How to write test cases
  • What does executing a test case mean?
  • 6 tips for designing test cases
  • How to do test cases maintenance?
  • What tools can be used to create test cases?

What is a test case?

A test case is a set of actions executed to determine whether each software functionality aligns with the goals of its creators and meets the expectations of end-users.

Each test case focuses on a specific action, such as “performing a search”, and provides explicit instructions on how to carry out the action. These instructions include steps to follow, like entering a search term, pressing the enter key, or clicking the magnifying glass; and the expected result, which involves displaying the items that match the search term.
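
To make this concrete, here is a minimal sketch of how the "performing a search" case above could be captured as a structured record. The field names are illustrative assumptions, not a prescribed format:

```python
# Illustrative only: one way to capture the "performing a search" test case
# as a structured record. Field names are assumptions, not a fixed standard.
search_test_case = {
    "id": "TC-001",
    "action": "Perform a search",
    "steps": [
        "Enter a search term in the search box",
        "Press the enter key or click the magnifying glass",
    ],
    "expected_result": "The items matching the search term are displayed",
}
```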

The primary objective is to identify errors, defects, or failures in the software and, when they appear, to promptly report them to the development team for resolution. It is imperative that test cases cover all potential actions and check various aspects of the software, such as performance, compatibility, and security.

From the user story to the test case

During the initial phase of a project, the Project Management (PM) team engages in dialogue with the people responsible for the development of the digital product. They conduct a comprehensive assessment and break down the project into epics.

An epic comprises a complete software flow (login, profile management, search, payment, etc.), and from each epic, different user stories emerge. User stories detail the actions that users can perform in that particular flow or section (such as logging in, choosing the language of the text, searching for products in a specific category, selecting a payment method, etc). A user story represents a general explanation of a software function from the user’s perspective. It is articulated in a single sentence following the format: "As [profile], I want [software goal] to achieve [result]." For example: "As a diner, I want to see restaurants on the map so I can choose the closest one".

Although they are outlined at the beginning, these descriptions are collaboratively refined throughout the entire development process. The client contributes ideas, and then the team suggests software improvements, leading to a continuous evolution of user stories.

Once the product has progressed, it's time to test its functionality to ensure a flawless release to the market, free from errors, and with an intuitive interface. User stories represent the smallest units of work in this process, and each one requires at least one test case so that all possible usage scenarios are covered.

For example, some test cases for the "login" module might include: "use the code received by email to create a password", "copy and paste the received code", "try to reuse the same code after 24 hours".
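
As an illustration, the relationship between an epic, its user stories, and the derived test cases could be sketched like this. The structure and the wording of the user story are hypothetical, purely to show the breakdown:

```python
# Hypothetical breakdown of an epic into user stories and test cases,
# following the "login" example above. Purely illustrative.
login_epic = {
    "epic": "Login",
    "user_stories": [
        {
            "story": "As a user, I want to receive a code by email to create my password",
            "test_cases": [
                "Use the code received by email to create a password",
                "Copy and paste the received code",
                "Try to reuse the same code after 24 hours",
            ],
        },
    ],
}
```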

How to organize information to create test cases?

Before starting to draft test cases, it's crucial to understand the product requirements. This implies arranging and analyzing project documentation, specifications, and all information provided by the people involved.

The PM team gathers the documentation, organizes it, and assigns tasks to development, design, and QA teams to carry out the software development process. These tasks are documented on tickets or cards, on virtual work boards like Jira or Trello, providing as much detail as possible for completing the job. This includes everything from the color range for the main screen, the buttons and sections, to the type of login (email, social media, etc), or the expected response from dropdown menus, search boxes, or any other interactive element.

The QA team uses this documentation as a starting point and a guide for creating test cases. Simultaneously, they visualize the interface design in Figma (or another collaborative tool) to go screen by screen, identifying buttons, images, text; specifying actions like clicks or double-clicks, etc. The objective is to consider every possible interaction with the software, covering all functionalities and scenarios.

The process moves from the most general to the most specific, resulting in many tickets that are smaller than those initially created by the PM team. These smaller tickets detail, step by step and in a clear and precise way, the actions to be performed during testing.
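
A rough sketch of that general-to-specific breakdown: for every screen in the design and every interactive element identified on it, a small, step-level ticket is produced. The screen and element names here are made up for illustration:

```python
# Hypothetical screens and interactive elements collected during a design review.
screens = {
    "Login": ["email field", "password field", "submit button"],
    "Search": ["search box", "magnifying glass icon", "results list"],
}

# Turn the general PM-level ticket into many smaller, step-level QA tickets.
qa_tickets = [
    f"Verify behaviour of the {element} on the {screen} screen"
    for screen, elements in screens.items()
    for element in elements
]

for ticket in qa_tickets:
    print(ticket)
```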

How to write test cases

Once the information is organized and we know what needs to be tested, it's time to write. There is no single way to do it; it's about finding the methodology that is most practical for the team's needs and the project's specifics.

At XOOR, the QA team uses tables that include the project name, the date, the person creating the test cases, and various columns with the necessary information (a short code sketch of one such row follows the list):

  1. Module to be tested: Specifies the part of the software under evaluation, for example, the shopping cart.

  2. Test scenario description (or submodule): Each test case focuses on a specific "test scenario", such as adding items to the cart.

  3. Test case description or action to execute: Defines the specific software feature or functionality being tested and what needs verification (e.g., clicking the "add to cart" button to check whether the item is added correctly, or viewing the cart to verify that the price updates properly). Since test cases refer to actions, they must start with an infinitive verb (e.g., enter, do, click, try), and it's crucial to specify both valid and invalid cases for each requirement.

Valid cases describe an action expected to work correctly, like "clicking the add item button after scrolling to the bottom of the page".

Invalid cases start with the verb "try" and are used to verify proper error handling. For example, to check that a form doesn't allow message submission with incomplete fields, the test case would state "try to submit a message with one of the fields empty".

  4. Preconditions: Specifies the prerequisites that must be met before running the test case, such as the person logging in under the superadmin role.

  5. Steps to follow: One by one, the actions to be performed to test the functionality, such as selecting a product and clicking "add to cart".

  6. Expected results: Refers to the results intended to be obtained when executing the test case (e.g., the total price in the cart should reflect the correct sum of prices for all added items).

  7. Actual results: These are the actual outcomes obtained when executing the test case. Following the previous example, if the prices were added correctly, it would be noted as "As expected". However, if not, the result would be described in detail: "the product was not added to the cart, and an error message appeared saying 'an issue occurred, please try again'". It's essential to maintain a detailed record of results for the various browser and device combinations required by the project.
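
The columns above map naturally onto a structured record. Below is a minimal sketch of one row of such a table as a Python dataclass; the field names mirror the columns, but the exact format is up to each team and is not a prescribed XOOR template:

```python
from dataclasses import dataclass, field

# Minimal sketch of one row of the test case table described above.
# Field names mirror the columns; the exact format is up to each team.
@dataclass
class TestCase:
    module: str            # e.g., shopping cart
    scenario: str          # e.g., adding items to the cart
    description: str       # action to execute, starting with a verb
    preconditions: str
    steps: list[str] = field(default_factory=list)
    expected_result: str = ""
    actual_result: str = ""

add_to_cart = TestCase(
    module="Shopping cart",
    scenario="Adding items to the cart",
    description="Click the 'add to cart' button and check that the item is added",
    preconditions="User is logged in",
    steps=["Select a product", "Click 'add to cart'"],
    expected_result="The total price reflects the sum of all added items",
)
```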

What does executing a test case mean?

When executing a test case, we are comparing the expected results with those actually obtained. Any difference between the two is treated as a "bug" (software error).

For instance, when performing an action like logging in with a username and password or clicking on a product image, a specific result is generated. This result could be successfully accessing the website, adding an item to the shopping cart, receiving an error message, being redirected to another site, etc.

Each obtained result corresponds to a specific status: passed, failed, incomplete, or blocked; there could be others depending on the project or the team's working method. These statuses are clearly indicated in another column, according to a predefined criterion, such as the following (a short sketch after the list shows one way to record them):

  • If the obtained result matches the expected one, the test case is considered "passed" and is marked with a green color.

  • If they don't match, it implies a problem or error in the software that needs correction. It is labeled as "failed", shown in red, and documented in the corresponding ticket on the project board so that the development teams can address the issue.

  • If the software's behavior is not entirely wrong but also not exactly as expected, the test case is often labeled as "incomplete" and highlighted in another color, like yellow. For instance, if an error message is expected when leaving a field empty, but a popup appears without text, the test case would be categorized as incomplete.

  • In situations where a failed test case blocks the functionalities of subsequent test cases, it is typically labeled as "blocked", represented in gray, and a link to the issue causing the blockage is provided in another column.
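
As a rough illustration, the statuses above could be recorded with a small helper like the one below. Deciding between "incomplete" and "blocked" normally takes human judgment, so this naive comparison only distinguishes passed from failed; it is a sketch, not a real triage rule:

```python
from enum import Enum

class Status(Enum):
    PASSED = "passed"          # green: obtained result matches the expected one
    FAILED = "failed"          # red: mismatch, documented as a bug on the board
    INCOMPLETE = "incomplete"  # yellow: behaviour not wrong, but not as expected
    BLOCKED = "blocked"        # gray: a previous failure blocks this case

def record_status(expected: str, actual: str) -> Status:
    """Naive comparison: anything other than an exact match counts as failed.
    In practice, 'incomplete' and 'blocked' are assigned by the tester."""
    return Status.PASSED if actual == expected else Status.FAILED

print(record_status("Item added to cart", "Item added to cart"))  # Status.PASSED
```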

6 tips for designing test cases

To make the testing process smooth and effective, you should pay attention to how you write your test cases. Here are some helpful tips:

  1. Understand product requirements clearly

Before you start, make sure you understand the software's needs. For instance, when testing the search function on an e-commerce website, think of the characteristics that will be the most relevant to use as filters, according to the type of products for sale (if they are washing machines, it will be useful to search by brand, load type, spin speed, etc).

  2. Set clear and simple objectives for each test case

Know exactly what you want to check in each case. For example, ensure that searching by price shows products within the expected range.

  3. Write clearly and simply

Remember you're part of a team. Use clear language to minimize confusion. Instead of a vague description like "test adding an item to the cart", say "click on add to cart and check that the quantity updates to 1".

  4. Include negative cases

In addition to testing the scenarios in which the software is expected to work correctly (the "happy path"), it is essential to consider situations where things might go wrong. For example, for the payment processing functionality, a negative test case could be designed to verify what happens when we try to pay with an expired credit card (see the sketch after these tips).

  5. Prioritize

Not all tests are equally important. Sort them based on what's crucial. For example, in an application for buying goods or services, payment processes may be of high priority, user service rating functionality may be of medium priority, and modifying the appearance or colors of the interface may be of low priority.

  6. Perform updates and maintenance

It is essential to keep test cases up-to-date as software development advances through bug fixes, improvements, or the introduction of new features. This practice guarantees an accurate assessment of the application's status. For instance, if a new feature like social media registration is added to the system, it's important to update the corresponding test cases to reflect this change.
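
Tying tips 4 and 5 together, a negative test case can be written in the same record style as before, with the action phrased with "try" and a priority attached. The field names, the case ID, and the wording of the error handling are illustrative assumptions:

```python
# Illustrative negative test case for the payment flow (tips 4 and 5).
# The ID, priority label, and expected message are assumptions for the example.
expired_card_case = {
    "id": "TC-PAY-007",
    "priority": "high",  # payment flows are usually high priority
    "action": "Try to pay with an expired credit card",
    "steps": [
        "Add any product to the cart",
        "Go to checkout and enter the details of an expired card",
        "Confirm the payment",
    ],
    "expected_result": "The payment is rejected and a clear error message is shown",
}
```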

How to do test cases maintenance?

Test case maintenance is a crucial aspect of QA work. It involves regularly reviewing and updating existing tests to ensure their effectiveness as the software evolves and the code undergoes modifications.

To keep test cases up-to-date, we conduct different types of tests:

  • Integration Testing: It focuses on examining how two or more modules or components of the software interact. It is essential to run these tests when changes are made to those components or new functionalities are added.

  • Regression Testing: It involves rerunning previously executed test cases across all flows of an app or feature. This is done to verify that new updates or modifications haven't introduced errors in areas that previously worked well.

  • Smoke Testing: It refers to a quick review of the software through exploratory and random actions to detect any unexpected or evident failures.

This way, test case maintenance ensures that the software continues to function correctly even after incorporating new features. This not only increases the reliability and quality of the product but also enhances user satisfaction by preventing unexpected issues.
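
One common way to make this maintenance manageable is to tag each case with the kind of run it belongs to, so the regression or smoke subset can be pulled out quickly. A minimal sketch, reusing the record style from earlier; the tags and cases are assumptions, not a standard:

```python
# Minimal sketch: tagging cases so regression or smoke subsets are easy to pull out.
test_cases = [
    {"action": "Log in with valid credentials", "tags": {"smoke", "regression"}},
    {"action": "Add an item to the cart and check the total", "tags": {"regression"}},
    {"action": "Checkout with a saved card and a discount code", "tags": {"integration"}},
]

smoke_suite = [case for case in test_cases if "smoke" in case["tags"]]
print([case["action"] for case in smoke_suite])
```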

What tools can be used to create test cases?

Several tools are useful for creating and managing test cases. At XOOR, we use Jira, a project management software developed by Atlassian. Other alternatives include TestRail, Zephyr, PractiTest, Trello, Asana, and many more, each with its own features and advantages.

The choice of the most suitable tool depends on the specific needs of each project and team, as well as the available financial resources.

Jira offers a wide range of features that simplify the creation, organization and tracking of test cases. It allows us to generate tasks and assign them to different professionals, set priorities, and monitor progress. Jira's integration with other development and testing tools enables closer collaboration between these areas, streamlining project advancement.

The most interesting aspect of Jira is the ability to generate "tickets." A ticket is a unit of work representing a specific task that requires attention. Each one includes detailed information about the task, such as its title, description, priority, current status, assignment to a team member, due date, and other relevant data.

The QA team also uses collaborative work tools like Google Drive and professional messaging platforms such as Slack to stay in communication with different teams and stakeholders. This facilitates fluid information exchange and more comprehensive feedback.

If you want to learn more about process optimization or quality assurance in software development, we invite you to read "What is QA testing and why is it so important" and watch the workshop "Manual testing: basics, tools, and methodologies (QA/QC)" on YouTube. You can also follow us on X, Instagram, and LinkedIn to stay updated on technology trends and news.