October 4, 2021
Involving users and customers in product development is a central pillar of the agile movement. Two of the four agile values are "individuals and interactions over processes and tools" and "customer collaboration over contract negotiation". In Scrum, one of the most popular agile frameworks for product development, the Sprint Review is the only "official" event where developers and users/customers interact and collaborate. This article provides an overview of the purpose of the Sprint Review in Scrum and describes a practical example of designing a modified Sprint Review in Large-Scale Scrum (LeSS) with multiple teams working remotely.
At the heart of Scrum is empirical process control, which distinguishes it from other agile frameworks. Instead of following a detailed, predefined process, Scrum uses short cycles to create small, shippable product slices; after each cycle, it allows teams to inspect both the product and the process and adapt them as necessary. The Sprint Review enables the continuous inspection and adaptation of the product, while the Sprint Retrospective does the same for the process.
In the Sprint Review, the users/customers and other stakeholders inspect the Product Increment together with the Product Owner and Team(s), and the Product Backlog is adapted as necessary. This is done by letting everyone explore new items hands-on and discuss what is going on in the market and with users. It is also the best moment to create or reaffirm the participants' shared understanding of whether the Product Backlog still reflects the needs of the users/customers and the market, and to change the Product Backlog in order to maximize the value delivered. Transparency, next to inspection and adaptation, is key for empirical process control; therefore, calling the event "Sprint Demo" and focusing on a one-sided presentation without hands-on use of the product misses the point of the Sprint Review in Scrum or LeSS.
The 2017 Scrum Guide is quite explicit about the contents of the Sprint Review and summarizes its outcome as follows (quote):
The result of the Sprint Review is a revised Product Backlog that defines the probable Product Backlog items for the next Sprint. The Product Backlog may also be adjusted overall to meet new opportunities.
The 2020 version of the Scrum Guide is much shorter on the Sprint Review, as in general, it is shorter and less prescriptive. The new Scrum Guide says:
During the event, the Scrum Team and stakeholders review what was accomplished in the Sprint and what has changed in their environment. Based on this information, attendees collaborate on what to do next. The Product Backlog may also be adjusted to meet new opportunities. The Sprint Review is a working session and the Scrum Team should avoid limiting it to a presentation.
LeSS (like Scrum) is a “barely sufficient” framework for high-impact agility. It addresses the question of how to apply Scrum with many teams working together on one product. For more on the LeSS framework please visit https://less.works.
In LeSS there is one rule regarding the Sprint Review: there is a single Sprint Review for the whole product, common to all teams.
In addition, there are two guides and a couple of experiments related to the Sprint Review:
This guide suggests keeping the cycles for creating new product slices as short as possible (and reasonable), sharing as much information about the customer market as possible, and discussing together what to do next, in order to maximize learning.
This guide suggests performing the first part of Sprint Review like a science fair: A large room with multiple areas, each staffed by team representatives, where the items developed are explored and discussed together with users, teams, etc. Beware: The Review Bazaar is not the full Sprint Review but only the first part! The second part of the Sprint Review consists of the critical discussions needed to reflect and decide on what to do next.
The guide also offers clear tips on how to organize the bazaar as a series of steps. After the bazaar, the most important steps of the overall meeting follow: the discussions needed to reflect on the feedback and decide what to do next.
The second LeSS book provides some experiments related to the Sprint Review, like using video sessions, including diverge-converge cycles in large video meetings, or experimenting with multisite Scrum meeting formats and technologies. In this article I focus on a detailed example of the experiment "Remote Sprint Review" and also combine many ideas from other experiments.
A LeSS rule related to the LeSS structure states that “Each team is (1) self-managing, (2) cross-functional, (3) co-located, and (4) long-lived”.
In current times, almost no team can work co-located, as every member is forced to work (most of the time) from home. Many teams are therefore experimenting with different ways of running LeSS events remotely. Here is an approach to a remote Review Bazaar that we used at one of my clients, Yello Strom AG. In the following description, I will share the steps that we used, and afterward what I believe can still be improved.
I would like to start with an important remark regarding the organizational setup: we did not use LeSS at this client, as the goal of the area I worked with was somewhat different from the system-optimising goal of LeSS. Nevertheless, our Sprint Review with multiple teams was very similar to how I would run a Sprint Review in LeSS.
To be very specific about the way we performed the Sprint Review, I will give a brief introduction to the organizational setup. It differs from LeSS, and be aware that I would not recommend it if you are working with multiple teams on one product. At the end of the article, I will describe how I would perform the Sprint Review differently in LeSS.
The client, Yello Strom AG, is a German electricity and gas provider, a subsidiary of EnBW. The area that I worked with at Yello Strom AG was responsible for innovation and discovering new business models. It consisted of 4 teams, each focusing on a different product.
There was one team that was creating a revolutionary app that allows people to understand the difference between driving an internal combustion engine car and an electric car. This app was designed to be able to recommend an electric car that fits your personal needs, taking into consideration how often you are driving each week, what kind of trips you are regularly doing, if you have a charging possibility at home, and many other things.
Another team was working on a user-generated platform that should allow people to share personal reviews of electronic products that they are using. A third team was setting up a new electricity pricing model for people with electric cars. And the fourth team was writing and creating content for different (media) channels, like blogs, YouTube videos, Instagram, or Twitter posts.
Each team had a Product Owner, and the department had a so-called Business Owner. Each team had its own Product Backlog; some ran 2-week Sprints, others 1-week Design Sprints, but every 2 weeks we performed this shared Sprint Review.
The tool we used as a group to communicate was Microsoft Teams, which unfortunately did not support breakout rooms at that time, so we had to make some workarounds in this regard.
As most interaction between the participants should happen visibly, we were using Miro as an “endless” whiteboard. Here is where the agenda of the event was shared, where teams were preparing screenshots of the product parts they wanted to discuss in order to allow the participants to add comments and feedback, and also where other participant interaction happened.
One of the most essential parts of a remote Sprint Review is the preparation. Team members should always prepare the Sprint Review well in order to get the most out of this inspect-and-adapt event. But, as we will see, for a remote event it is not only the product (increment) that needs to be prepared, but also effective ways for participants to share feedback. Our teams had to plan on average about an hour each Sprint to prepare the Miro board for the Sprint Review to make this meeting effective. In addition, the PO usually prepared relevant information to share with the teams and the other participants.
Here is what the overall board looked like for one of our Sprint Reviews:
You will see parts of the board in more detail below.
The next screenshot shows a typical Sprint Review agenda.
We had a time limit of no more than 2 hours, which worked for the most part, although sometimes I wish we had had a little more time.
Our agenda consisted of 7 blocks: a short welcome and check-in; 10 minutes for company and department updates; a short status update on the state of the teams; the teams' pitches; and then, as the biggest block, the Review Bazaar. This was followed by the collection and summary of the feedback, the discussion of next steps, and finally a short feedback round on the Sprint Review itself.
We usually started the event by briefly walking through the steps of the Sprint Review and introducing any changes to the process that had been decided in the previous event or the Sprint Retrospective. We found that participants attending for the first time benefited from an individual introduction to the process beforehand, which allowed us to keep the first item on the agenda short. This is also a good time for a quick check-in, with participants typing a word into the chat box, posting a Giphy, or perhaps adding a picture to an area of the whiteboard, for example.
This was followed by a few words from the Business Owner, who could announce important company and department updates or introduce new hires.
The third item on our agenda was a short status overview reflecting where the teams were on their respective journeys. As mentioned earlier, we had 4 teams working on 4 different products and this was the opportunity for each team to verbally and visually share what was happening: updates on usage figures, market updates, impediments and the road ahead. Mostly this was done by the team POs and took no longer than 5 minutes per team.
As you can see in the screenshot below, we created a table on the board for this with 4 columns (Obstacles & Help Requests, Successes, Thanks, Next Steps).
After that (or sometimes during the updates) each team gave a short introduction to the product increments that had been prepared to be explored in the Review Bazaar.
Each team that had something new to share prepared one or more sections on the board with instructions on where to find the product increments and how to review them. Sometimes the instructions were supplemented with key questions for feedback. Most of the time the teams added screenshots of the new or updated parts of the product (website or app). This made it very easy for participants to link the feedback to a specific area of the product. For general remarks and feedback each team also created a section with 4 quadrants (What worked? / What made me confused? / What didn’t work? / Any new ideas?).
After about 45 minutes, the most exciting part of the review began. Participants started to move around the Miro board and explored the offerings. As mentioned earlier, the offerings were very diverse and the review was designed so that participants could choose which parts were most interesting to them and where they could contribute the most by rating what they saw and giving feedback.
The Content Team usually shared articles or YouTube videos that had been created and published in the Sprint. Next to the link to the actual article, the team added a screenshot and a feedback quadrant (see screenshot below).
In other cases, teams added a link to a new functionality on a web page or a whole new landing page. This could either be a link to the live system or to another (staging) environment, and general instructions for access were added.
As mentioned earlier, each team usually prepared a section with screenshots of the new functionality or web page, and participants added post-it notes with comments right next to the relevant sections. In addition, each board had a feedback quadrant for general feedback.
In some cases, it was a new functionality in a mobile app that was available for review. Here, teams usually prepared a more detailed description of how to install the app from a test server, sometimes by posting a link to a screenshot with step-by-step installation instructions. There was also a link to an MS Teams channel or to a team member in Slack who could support participants who had problems installing the app. Again, the team provided relevant screenshots from the app on a frame in Miro so that participants could attach post-its directly to the areas they had feedback on. There was also a feedback quadrant for more general feedback.
In other cases, there were just mockups, wireframes, scribbles or even data sheets that the team had been working on in the previous Sprint. Also here they gave clear instructions on what type of feedback they needed.
In some cases, teams even prepared customer interviews for the Sprint Review. They asked participants to sign up for a timeboxed slot, and a team member skilled at conducting customer interviews had one or more participants try out the new functionality and answer questions.
The next step was to review and discuss the participant feedback. To do this, each team sat down with their product owner in their MS Teams channel and reviewed the post-its and discussed the written feedback as well as what they had observed during the interviews. This usually resulted in new PBIs being created and discussed in the next Product Backlog Refinement, as well as PBIs being updated or reordered in the Product Backlog.
The last step of our Sprint Review was a short time slot to retrospect the Sprint Review. The participants had the opportunity to leave feedback and ideas on how to improve the Sprint Review. In order to enable this, we used another area of the board, and the familiar four quadrants.
As mentioned already, this is not a typical LeSS Sprint Review, though the flow used might be very similar to one. I will start by describing the differences from LeSS.
Large-Scale Scrum (LeSS) offers, through its rules and principles, an organisational design that is optimised for adaptability and customer value at minimum cost. In LeSS, there are no "Team Product Owners" and no "Team Product Backlogs". To avoid local optimisation through efficiency gains at team level, there is one common Product Backlog, which ensures that optimisation takes place at the system level. Usually this makes most sense when multiple teams are working on one product, creating a clear need for all teams to work together on the area of the product that is most critical for the best possible customer impact. And even when multiple teams are working on multiple products in parallel, LeSS prefers a single Product Backlog (maintained by a single Product Owner) and broader product definitions, because they keep the optimisation at the system level and preserve the organisation's adaptability.
So, as long as there is a shared (overall product) vision that encompasses multiple products, the same customers or the same markets, and the same domain knowledge is relevant, there is a strong case for extending the product definition to multiple smaller products.
In our case, the department was exploring multiple innovative products with multiple teams, so working with a single Product Backlog would have made a lot of sense, especially because it would have made the exploration and delivery of the products extremely fast and adaptable. It would have allowed the organisation to increase investment in the more successful products on the fly, rather than working with a defined group of people for a defined period of time on multiple products, only to discard some products at a certain point and retrain team members to develop the more successful one.
In the concrete example, the products were very different in terms of the technology used and most of the team members were contractors, so working with several product backlogs and several product owners (i.e. without LeSS) also worked well. The interaction between the teams was very good, the efficiency at team level was high and the most promising product could be identified.
In a real LeSS setup, you might want to run a few things differently. After check-in and before teams pitch their sessions, you could have just one block where the product owner communicates relevant changes in usage numbers, specific customer feedback and updates from the market.
You could include an area on the board where important numbers and other success stories (e.g. the go-live of a functionality) are shared, but I would remove both the "impediments" and the "next steps", or at least not discuss them at the beginning of the Sprint Review. The best time to discuss impediments would be during an Overall Retrospective (with all teams), and I prefer to discuss next steps at the end of the Sprint Review, incorporating all the feedback from it, not at the beginning.
After the Remote Review Bazaar, the teams sort, group, and summarise the input they received. The Product Owner familiarises themselves with this summary by moving back and forth between the areas on the Miro board or between the teams' breakout rooms. Then all teams discuss with the Product Owner which feedback should be incorporated in the upcoming Sprints and how this can be done. Often this leads to the creation of new items to be defined in the next Backlog Refinement, to changes in the priorities of existing items, or even to the removal of items. This step in the Sprint Review is very important because it sets the direction for the development, so it is advisable to plan enough time for it in the Sprint Review agenda.
In my experience, the Retrospective of the Sprint Review can be shortened by only providing a section for feedback on the board, or even omitting this section completely, and discussing the feedback in an overall Retrospective if needed.
Here you can see what an agenda might look like:
I hope this article gives you an idea of how to facilitate a Remote Sprint Review with multiple teams. I'd love to hear your feedback and/or questions.
Robert Briese regularly takes participants of his pCLP (Provisional Certified LeSS Practitioner) courses to such a Sprint Review, so if you are interested in experiencing this event live you can join one of his courses or feel free to write him.
Robert Briese is a coach, consultant and trainer for agile and lean product development as well as the founder and managing director of Lean Sherpas GmbH. As one of only 22 certified Large-Scale Scrum (LeSS) trainers worldwide, he works with individuals, teams and organizations on the introduction of practices for agile and lean developments as well as the improvement of organizational agility through cultural change.
Robert Briese has worked with (real) startups (Penta), corporate start-ups (Ringier, Yello) and also large organizations (SAP, BMW, adidas) to create an organizational design and introduce practices that facilitate faster customer feedback and learning and enable greater adaptability to change. He is a frequent speaker at conferences and regularly gives training in Large-Scale Scrum (LeSS).