5x Scrum Team Velocity: How Test-Case Reviews Boosted Efficiency



Introduction

Software reviews can be a game-changer in reducing developer bias. As software engineers, whether creating architectural diagrams, coding units, or test cases, we all have biases. Effective reviews require a fresh perspective, often provided by someone other than the developer. Given that test cases are designed to exercise code, the code reviewer is well-suited to review test cases as well, bringing a white-box tester's insight.

In this article, I'll share a recent experience where a newly formed Scrum team underwent significant growth and learning through its successes and setbacks. Test-case reviews played a twofold role. They brought technical expertise to both developers (gaining insight into testing) and testers (acquiring programming skills). Moreover, they served as a catalyst for strengthening bonds between team members, transforming the Scrum team into a cohesive unit.

A Path Towards Improvement

In our Scrum team of five developers and a tester, we recently initiated test-case reviews. Initially, the tester was solely responsible for test-case design, development, and execution. Unit testing began later, when the team's efficiency and effectiveness became a concern. A pressing question emerged: Who should test what, when, and how, to maximize our team's velocity and minimize software bugs?

The Newly Formed Team

When the Scrum team was formed, everyone was eager to produce working code and help the team become one of the company's best. The goal was straightforward: Produce functional code as quickly as possible. If working code wasn't delivered promptly, there was a fear that the Scrum team would disband and members would join other teams.

The Bottleneck of Testing Only by the QA Engineer

As all developers began coding, the majority of testing responsibilities fell on the tester. Without hesitation, the tester gathered all necessary data during Scrum ceremonies. All information required for designing, developing, prioritizing, and executing test cases was readily available. When a user story was developed and ready for QA testing, its status was set to QA. By then, the tester had already designed and developed the test cases and was ready to execute them.


Following the successful QA testing of a user story, its status was upgraded to code merging and eventually to done. This approach proved effective for the first two sprints, each lasting two weeks. By the third sprint, however, a significant obstacle emerged: the number of user stories in QA status had skyrocketed, and the team's velocity hinged on how quickly the tester could test. Since Scrum velocity is directly tied to the number of user stories released per sprint, it became clear that QA testing was holding back releases.

Overcoming the Testing Bottleneck through Strategic Resource Allocation

In an effort to mitigate this issue, additional testers were temporarily reassigned from other Scrum teams to assist with user-story testing. While this increased the team's velocity, it introduced complexity in synchronizing efforts between teams. When a tester was reassigned to support another team, it had a detrimental impact on their original team's velocity. Although we managed to test more user stories over several sprints, we inadvertently created a significant problem for other teams. The gain of a single Scrum team came at the expense of the entire company, prompting us to discontinue tester sharing.

A Pivotal Realization

It soon became apparent that the path forward lay in collaborative testing. The tester trained the team on smoke testing user stories at the UI level. Discussions ensued about risk-based software testing [1-2], testing for rapid feedback [1-2], prioritizing critical test cases, and iteratively testing the remainder. For user stories lacking a UI component, unit testing [3-5] and API-level testing were introduced [1-2]. As unit testing evolved and each developer performed a minimum set of smoke tests whenever applicable, our bottleneck was alleviated, although not entirely eliminated. Our velocity improved, but there remained considerable room for growth.
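The risk-based prioritization discussed above can be sketched in a few lines. This is a minimal illustration, not the team's actual tooling: the test-case names and the 1-5 likelihood/impact scores are hypothetical, and the risk = likelihood × impact heuristic is one common way to decide which cases give the fastest meaningful feedback.

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    name: str
    likelihood: int  # how likely is this area to break? (1-5)
    impact: int      # how costly would a failure be? (1-5)

    @property
    def risk(self) -> int:
        # Common heuristic: risk score = likelihood x impact
        return self.likelihood * self.impact

def prioritize(cases: list[TestCase]) -> list[TestCase]:
    # Execute the riskiest cases first; defer the rest to later iterations.
    return sorted(cases, key=lambda c: c.risk, reverse=True)

# Hypothetical suite: payment flow is both fragile and business-critical.
suite = [
    TestCase("export_report", likelihood=2, impact=3),
    TestCase("checkout_payment", likelihood=4, impact=5),
    TestCase("profile_avatar", likelihood=3, impact=1),
]

for case in prioritize(suite):
    print(case.name, case.risk)
```

Running this prints `checkout_payment` first (risk 20), so even a sprint with no time for full regression still covers the critical path.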

Reverting to Inefficient Practices

The team's mindset was that all members should contribute to testing to enhance velocity. Whole-team testing was viewed as a temporary solution to overcome our challenges, rather than a development best practice that should form the foundation for the team's growth and improvement. When the velocity issue began to improve, the team reverted to their old habits. Testing was once again primarily the responsibility of the tester. In sprints with fewer story points planned, the team either took on more user stories or performed bug fixing. The bottleneck of QA-based testing began to resurface.

Transforming Testing: From Bottleneck to Development Excellence

A profound shift in mindset occurred when the team recognized testing as an indispensable best practice, rather than a temporary solution. Following comprehensive training, education, and retrospective meetings, team members assumed ownership of testing responsibilities in the long run. It was acknowledged that testing was an integral aspect of every individual's role, albeit with varying levels of involvement and expertise. The goal was to tailor testing activities to maximize individual productivity and team performance.

Unit testing evolved from a desirable activity to a mandatory one, guided by principles outlined in [3-5]. It became the norm to write unit tests to detect bugs early and enhance overall software quality. Factors such as the code's internal quality [3] became a regular topic of discussion. Code reviews also became a standard practice, leading to one of the primary drivers of our team's growth: test-case reviews. These reviews could be conducted between developers or between a tester and a developer.
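To make the "mandatory unit testing" practice concrete, here is a minimal sketch in the spirit of [3-5], using Python's standard `unittest` module. The `apply_discount` function and its test cases are hypothetical examples, not code from the team's product.

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by percent; reject out-of-range inputs early."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_zero_discount_returns_price(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_rejected(self):
        # Catching bad input at the unit level is far cheaper than
        # discovering it during QA testing of a user story.
        with self.assertRaises(ValueError):
            apply_discount(100.0, 120)

# Run with:  python -m unittest <module_name>
```

Tests like these are exactly the artifacts that test-case reviews examine: a reviewer can question the boundary values chosen, the missing cases, and whether the error handling matches the story's acceptance criteria.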

Technical Breakthroughs

Initially, test-case reviews were introduced as a training exercise where the tester guided developers on best testing practices. Key questions addressed included: What are the different approaches to creating test cases based on testing objectives? How should the level of detail in a test case, the number of test steps, and the scope of testing be determined?

Developers began sharing their code with the tester, explaining the underlying principles and testing strategies. They also demonstrated how user interface interactions could be translated into code-level interactions. As the tester was responsible for reviewing test cases, she required a high-level understanding of the code, which developers provided by training her on the programming language fundamentals.
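The "UI interaction translated into a code-level interaction" idea can be illustrated with a small sketch. `AuthService` and its in-memory user store are hypothetical stand-ins for whatever service layer the real UI calls; the point is that the tester can exercise the same logic the Login button triggers, without driving the UI.

```python
class AuthService:
    """Hypothetical service layer behind a login screen."""

    def __init__(self, users: dict[str, str]):
        self._users = users  # username -> password

    def login(self, username: str, password: str) -> bool:
        # The UI's Login button handler ultimately calls this method.
        return self._users.get(username) == password

# UI-level test step: "enter alice/s3cret, click Login, expect success".
# Code-level equivalent: call the service the button handler calls.
service = AuthService({"alice": "s3cret"})
assert service.login("alice", "s3cret") is True
assert service.login("alice", "wrong") is False
```

Reviewing a test written this way requires only a high-level reading of the code, which is the programming-fundamentals training the developers gave the tester.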

Developers refined their testing skills, adopting best practices for efficient and effective testing. The tester, in turn, gained a deeper understanding of coding basics, making her an ideal candidate for a software development role in testing.

Once developers were confident in writing and executing their test cases, they took over test-case reviews, which were incorporated into code reviews. Each user story included reviewed test cases, and the team eventually reached a point where test cases at any level (unit, API, UI) were reviewed by anyone in the team.

Fostering Unity and Collaboration

The introduction of test-case reviews had a profound impact on the team's interpersonal dynamics, prompting a constructive reevaluation of the traditional roles of software testing and development. This led to a reassessment of the boundaries between these two disciplines, sparking questions about where software testing begins and ends, and where software development starts and concludes. Is it always beneficial to maintain a clear distinction between the two? The test-case review process ignited thought-provoking discussions and debates, cultivating a deeper understanding of the commonalities and differences in perspectives among team members.

Conclusion

What began as an initiative to address the team's velocity bottleneck evolved into a best practice in software development. This transformation led to a collective growth and learning experience for the entire team, as individual members developed and improved through shared goals, challenges, and achievements. The team's mentality and bonding grew stronger, driven by a shared sense of purpose and motivation to learn from one another.

Test-case reviews played a vital role in this transformation, facilitating constructive interactions and knowledge sharing between developers and testers. By leveraging the test-case artifact and the need for collaborative testing, test-case reviews became the catalyst for improved team bonding and performance.

References

  1. Agile Testing: A Practical Guide for Testers and Agile Teams, Lisa Crispin and Janet Gregory, 2008
  2. More Agile Testing: Learning Journeys for the Whole Team, Janet Gregory and Lisa Crispin, 2014
  3. Test-Driven Development: By Example, Kent Beck, 2002
  4. The Art of Unit Testing, 2nd Edition, Roy Osherove, 2013
  5. Effective Unit Testing, Lasse Koskela, 2013