From the Archives: “Stories from the Front Lines of Product Operations”

Tech Fleet
18 min read · Aug 9, 2022


https://techfleet.org

This is an article Tech Fleet founder Morgan Denner wrote in February 2020, 4 months before starting Tech Fleet Community DAO. This story and the things it describes are the inspiration for starting Tech Fleet Community DAO and making it what it is today. Enjoy for historical and educational purposes.
Original post (in three parts)
https://www.linkedin.com/pulse/stories-from-front-lines-product-operations-learning-1-denner-cspo/

Originally posted on LinkedIn by Morgan Denner on February 3rd, 2020

A product team’s journey into their users’ world

In 2019 our team needed an enterprise-ready app that helped our users grow their businesses. We were tasked with re-creating the legacy application that had been in use for the past several years, and we were on a mission. While we couldn’t start development until mid-May, in January the team embarked on a journey of knowledge sharing that we all grew from. We experimented with a process based on the Google Design Sprint that I call “continuous learning sprints.”

What we learned over six sprints with other product teams would change the way we work for the better. It helped us prepare a powerful and delightful application for our users, and it helped us form a solid strategy for our vision.

I’d like to share that journey as inspiration for teams who want to learn and form strategy before they make decisions. I present this as a template for working with teams to develop knowledge, vision, and strategy. Continuous learning sprints can help teams form a solid foundation for building better things for the world.

Top four thoughts and takeaways for teams (see the rest at the end)

Here are the lessons I learned as a Sprint Master, a Sprint Doer, and a Product Owner during this time:

  1. Using continuous learning sprints to learn about users helps your product and everything around your product. The ROI from knowledge gained alone is huge for the backlog owner.
  2. All teams have problems they need to solve. You don’t need to be building a new app, or even software at all, to include continuous learning sprints in your regular work process; you can start with an existing product. Continuous learning sprints aren’t limited to prototype tests of a user interface. They can work for any team if you define “question” as any question in the world. You could define “users” as people who have a stake in the question, like other coworkers, teams, or leadership. This is a way to quickly learn, solve problems, manage work, radiate knowledge across teams, and apply the knowledge to your backlog decisions. Got a problem? Explore it in five days with a continuous learning sprint.
  3. Participating in this process helped create a mindset of curiosity and empathy for user behavior on the teams we worked with. Instead of running with assumptions in development, they started saying “Wait, what you said is an assumption. Let’s test it first.” We empowered them to become better builders of software. We’re better product teams today because of it.
  4. Bring snacks.

This is our story

A long time ago in a war room (not too) far, far away…

The General was upset. It was cold and we had problems. They were the same problems all product teams in the world face at every point, but especially at the beginning of an app’s life: how do we know we’re on the right track? How do we provide value to our users? We had nothing but ideas to work with. We wanted to get rid of the big hairy assumptions we had about different subject areas of our product. And we needed to do it quickly. We decided to start in the months before we were able to begin development.

We already had a foundation of user-centered design on the team. There were four alumni of my graduate program, including myself, ready to participate. Our team had a vision, a list of Minimum Marketable Product (MMP) requirements with Minimum Viable Product (MVP) milestones, and assumptions about features we’d offer in the MMP. We needed a way to time-box learning effort, fail fast, and adjust fast.

At the time, I’d used Scrum and Kanban to build features. I’d never run a sprint for anything except development, let alone Design Research. Note the capital “D” in “Design” here: in this case it’s not visual design, but problem-solving. The very essence of product life.

I came armed with 50 IDEO UX method flashcards and a body of academic knowledge and experience. I had read books like Validating Product Ideas Through Lean User Research by Tomer Sharon and Lean UX by Jeff Gothelf. The team came armed with knowledge, curiosity, and willingness. We were ready to try something new.

Then I read Sprint by Jake Knapp. It told the story of employees from Google Ventures who started a process to answer any question in 5 days: the Google Design Sprint. I became inspired. We had tons of unanswered questions; the list was growing every day. The similarities between Sprint, Lean UX, and Scrum seemed too strong to ignore. I realized they were team-specific applications of Scrum for Design Research.

They kept it simple: each day completes one step. First you write down the questions you want answered and all your assumptions. Your team breaks them down and comes up with a plan to test your assumptions. You make an “artifact” and go out into the wild to talk to enough people to get a result. Then on the last day you demo what you learned, hold a retro, and form plans for the next sprint. By the end you have direction either way.

Typically, you learn whether something meets people’s needs only after they start using it. Only after it deploys to production can you get a real sense of it in the wild. I saw this process as an opportunity to shortcut that knowledge. This process gives users a chance to use something in the wild without a single line of code. It could inform both what we were building and what was coming up, well before it happened.

Courtesy of https://www.gv.com/sprint/

Google Design Sprint roles

Google Design Sprints and Scrum have similar roles.

A Google Design Sprint has a team of no more than seven:

  1. The Sprint Master facilitates and guides the team on user research best practices.
  2. The Decider makes the decisions and tough calls in-sprint.
  3. Customer Experts, Business Experts, Design Experts, and Technical Experts give their advice and considerations. You don’t need them in all sprints. And you may only need one of them. Choose wisely.
  4. The Doers create the tests, perform the research, and present the demo.
  5. Planning and demo sessions can also include outside teams or leaders who have a stake in the question you’re answering.

Google’s Design Sprint process

Courtesy of https://www.gv.com/sprint/

Lean UX process proposed by Gothelf

Courtesy of https://uxplanet.org/lean-ux-how-to-get-started-bb3771697e2

Our process of continuous learning sprints

The premise is simple: everything you think you know is an assumption to test.

Google’s, Gothelf’s, and our process are all based on the Scientific Method. Pose a question, form a hypothesis, build an experiment, test the hypothesis, present results, analyze conclusions, discuss, repeat. They all get results in 5 days flat (if you count only in-sprint work). We simply put Scrum ceremonies around Google’s process and added snacks.

We applied the Google Design Sprint roles to the context of our team. Our sprint roles, including Doers, were performed by members of the product teams who wanted to participate, not UX practitioners. Those with prior UX experience pitched in right out of the gate, as soon as sprint 1. I served as Sprint Master until the rest of the team learned to fly on their own.

Planning (pre-sprint, 30-minute meeting): What questions has the team chosen to answer? Why have we chosen them? What are the future implications for our product and other products? Who’s committing to each role? It’s important to include the Decider here. They need to bless the research and weigh in on considerations.

Discovery (one-hour meeting): The sprint team creates problem statements and a list of assumptions for each question. Then they start forming ideas on how they can test the assumptions. It’s a good time to discuss who you’re going to test: novice or experienced users, those with particular job roles, etc.

Whiteboard (one-hour meeting): The Doers, Experts, Sprint Master, and Decider dig into the problems in scope for the sprint, discuss ideas, and form a solid plan for test creation. They all agree on who’s going to create what, and how.

Create (however long it takes): The Doers create the “artifacts” for the test. These could be anything: experiment designs, test instructions for participants, post-test surveys, prototypes, interview questions, task steps, snacks, etc.

Test (however long it takes, over a day or two): The Doers go “into the wild” and run the test. We performed pilot tests with coworkers to get feedback on adjustments before going out. Note: “test” here is deliberately broad. It could mean a prototype design you’re testing. It could mean talking to users. It could mean being a fly on the wall, watching what users say and do in the wild. We picked an appropriate test based on our collective experience as UX researchers. The Sprint Master helped everyone choose the right test for the problem.

Demo (one-hour meeting): The Doers present the facts, the whole facts, and nothing but the facts. We reviewed the questions, the assumptions we had, and the results. We didn’t draw conclusions about the results until the Retro (we had a one-week break between sprints, and this helped).

Retro (one-hour meeting): Analyze the results. Do we have enough results to answer the original question we asked? Sometimes the answer was “no”, and that’s OK because it informs what you do in future sprints. Were our assumptions right or wrong? Was there bias in the results at all? What do the results mean for our product? What decisions do we make related to the questions we answered? What changes do we need to make in the backlog and roadmap? What future research topics should we add to our list? What’s left to research in this topic that we uncovered in the past sprint? What went well? What could we have done better?

Disclaimer: Pay attention to the questions we answered in these sprints. They may be the same questions you as a product team need to answer yourself at some point. You too can use the methods we used (and many more) to answer them.

Sprint 1

You’re probably wondering, “Why is ‘bring snacks’ in the top four?” Well…we didn’t bring snacks to sprint 1’s tests, and it’s something we learned. So be forewarned: listen to your own and your users’ stomachs when you’re testing in continuous learning sprints. Everyone loves to be rewarded. Everyone loves snacks.

Before the sprint began, we ran sprint planning to discuss the goals and roles. There was enough work for two task forces of sprint Doers to run two experiments. No estimation until sprint 2. I created a short document that would get our team to start living the process, and we set our sprint schedule for every two weeks.

Our list of product questions was our backlog. Architectural questions aside, we picked the most important product questions at the time.

Sprint 1 questions:

  1. Do we need to support mobile device and tablet usage? If we do, how should we prioritize what we build for a responsive app?
  2. Is the current navigation of the legacy app confusing? Do we need to change the information architecture of the new app?

One task force decided to run two tests. First, they performed guerrilla UX research through interviews to collect device and browsing habits. Then they compared the results to Hotjar analytics data from the legacy app. The other task force created an online card sort using Optimal Workshop to see how users would categorize menu items.

The salesman in me wanted to use this as an opportunity (without introducing bias) to pitch the new app a little while we were out in the wild. But I didn’t even have to. We were openly looking for people to talk to. Once other people heard we were talking about new apps, they wanted to be part of the conversation. It helped us recruit. We saw this over and over again each time we went out with a new sprint.

The results were interesting. The first task force’s research revealed new browsers we should support. It also validated the need for responsive design in our new app. This helped our QA team know what to prioritize for browser and device testing.

The card sort revealed a clear pattern of organizing things based on job role. People categorized things in ways we would never have thought of ourselves.

On Friday we invited outside stakeholders to present what we had learned. We talked about the team’s experience and what we would change in sprint 2. We added new unknowns to the laundry list and planned the next topic.

Out of these sprints, our team went on to perform four more card sorts post-sprint. Each one built on the results of the last. We homed in on the assumptions from the first sprint’s results. To this day, our menu navigation takes into account the lessons we learned in those card sorts.

Some insights from sprint 1’s research and retro:

  1. A Google Design Sprint is a full-time job for a week. That’s great if you’re a UX practitioner, but our sprint participants had other full-time jobs (running the products!). We needed to be sensitive to the sprint participants’ time outside their teams. We included this in our first sprint retro.
  2. Our users don’t think the way that we, the product team, think. This became clear from the card sort and even in the interviews. It grounded our team in the old saying “You are not the user”.
  3. We formed new strategies for the app after this sprint. There were opportunities to reduce confusion in our app’s menu. We had no idea about some of the devices and browsers that specific job roles relied on. We were able to account for both of these things from the initial deployment in May because of this sprint’s research. This proved the theory that you can go straight from an idea to making decisions in a week. Just like Sprint said.

I was convinced. It was on to the next one.


Sprint 2


Sprint 2 questions:

  1. What problems do people experience when creating/editing/copying things in the current app?
  2. What are the different ways that our users perform a specific business accounting workflow? When do they do it? What problems do they have with it today? How could we improve it?

This time we came prepared: we had snacks.

Two task forces explored two problems in-sprint. Task force 2 tackled the business accounting question. We worked with two other teams to solve a problem at the business that reached far beyond our app but started with it. We focused on getting an objective picture of how people did the things they did, and we documented it in-sprint. Post-sprint we went back to the drawing board to form a way forward with the other teams.

Not only did the people the second task force talked to appreciate the snacks (Who wouldn’t? Cupcakes? Beer? Count me in.), they appreciated us coming to them to ask about this topic. In the end they learned things about the workflow too. A confusing topic got a little less confusing for everyone involved.

The first task force’s research helped us build a better experience of creating/editing/copying right out of the gate. The second task force uncovered the problem’s messy state more than it provided a solution. We had to do some more detective work post-sprint to get to the point where we decided what to do, but we got there.

When we designed the new workflow from the second question, we went back in sprint 6 to the same people we had talked to (snacks in hand) and tested a paper prototype with them. That gave us confidence that we were on the right track. A lot of that work happened outside the sprint; sprint 2’s results merely armed us with knowledge. I’m glad we did this because it turned into (what we hope are) intuitive features for the new app today.

During sprint 2 we started estimating the level of effort for questions. We hit some snags too. We didn’t have time to demo and retro on the same day because the Doers needed more time. We ran into some last-minute cancellations from our research participants. We decided to try the demo on Friday and do a retro post-sprint.

Mission accomplished. We adjusted our process and trudged ahead.

Some insights from sprint 2’s research and retro:

  1. This process forces a team to answer a question quickly; it doesn’t define what a “question” is. While we answered a future product question in sprint 1, sprint 2 focused on finding a solution to a complex business problem. Before it gave us a solution, it gave us a candid picture of user behavior and the business problems that went beyond users. We saw the current state of their world first-hand. It gave us direction. It gave us empathy. Ever since then, the team has used the “observe current state in the wild” method (referred to as “ethnography” in research circles) as a first step to understanding a problem.
  2. When other teams participated they heard the same things we heard. They made their own conclusions about their products after hearing our results. The information we learned radiated to other teams and they were all the wiser for it.

We decided to integrate other teams from sprint 2 into sprint 3’s goals. We helped another team test their new application, which shared UI components and features with ours. It helped both of our teams make future decisions and align on a vision and strategy.

We had two continuous learning sprints under our belt. It was off to the races.

Courtesy of https://gearpatrol.com/2012/08/31/saratoga/

Adamant Learner takes the lead, followed by Raging Snacks…

Sprint 3


Sprint 3 questions:

  1. How intuitive is the experience of the other team’s app for technical users who are seeing it for the first time?

No snacks this time; gift cards instead. This was a large in-sprint effort. We needed to make sure the app was user-friendly because it was so different from the legacy applications.

The application had enough features to fill several sprints of heavy testing. We only had two days to test. We decided to watch two new users put together marketing content with no instruction, noting where they got stuck. We noted what seemed self-evident to them and the tools and tricks they used to code web pages. Observers in the sessions jotted down quick notes and, at the end, asked participants about what they had observed.

We had two three-hour sessions, each with one developer. The users had a blast doing it and were glad they got a sneak preview of a new app.

We learned so much. It shortcut many product decisions both teams had to make for our applications. It also created lots of new feature ideas that we have in our new apps today. And this was two months before our team would be able to begin development. We had room to adjust and align strategies with the other team. Our laundry list of product questions grew from sprint 3’s results too.

Some insights from sprint 3’s research and retro:

  1. We found our stride as a sprint team in sprint 3. The team finished all the work they committed to with less strain. Two days of whiteboarding and two days of testing helped the team a lot. We decided to give ourselves the flexibility to test on any day of the sprint if the opportunity arose. We reflected this change starting in sprint 4.
  2. Spending six hours watching new users perform tasks on their own informed the entire year of our roadmap. I still get flashbacks of those moments in backlog refinement. They help ground me in the decisions we make for future development. I’m not saying Product Owners need to run tests, but the knowledge of what the Doers learn in this type of research is priceless and the information can easily be radiated to the PO.

After sprint 3, we reached out to another team to see if they wanted to perform a continuous learning sprint with our team. Our apps were going to work together in the future. We decided to get ahead of the unknown unknowns early. We did so in sprint 4.

Sprint 4


Sprint 4 questions:

  1. How do people use mailing systems in their marketing workflow today? What are their workflows around it?
  2. What problems do people have with current mailing platforms that we can learn from?
  3. What important workflows and features do we need to be able to support in the future for our two teams?
  4. What is the best way to structure user accounts, roles, and permissions in a new mailing platform based on the businesses we serve?

During sprint 4 we asked participants to sketch a diagram of their workflow from their perspective. Then we interviewed each person to talk about the diagram and ask questions.

We came out of sprints 2, 3, and 4 having radiated knowledge about users to other teams. Our team became subject matter experts on user workflows in which we had a high stake but about which we had little knowledge at the time. We were better armed for what was to come later in 2019 and even into 2020.

Later in the year, one of the Doers from sprint 4 became inspired by this method. He took the lead on his own Design Research effort for a project he was working on for our team. He used the “sketch your interpretation” method to learn how ideas turn into workable marketing content. He provided us with a first-iteration blueprint of how all users and products interact with each other. It helped us form the bedrock of our product vision for the new app and prioritize the problems we focus on solving as a product offering.

Sprints 5 and 6

Other team members started Sprint Mastering in sprint 5. They felt confident running sprints after Doing in them.

These sprints continued our learning in the same ways. We dug into more nitty-gritty topics that we were on the hook to deliver while everyone ate…you guessed it…snacks.

Questions:

  1. How do people currently perform a particular workflow and how should we build it in the new app?
  2. How should we adjust the prototype we created from sprint 2’s results?
  3. Do users really need “real-time” statistics data in our app? What do they define as “real-time”? When do they need “real-time” stats and when do they not need it?
  4. How far back do users look at stats in the current app, and how long should we keep stat data in the new app?

By the end of Sprint 6 our team had evolved. Doers grew their knowledge of users and honed research skills they could (and did) apply in the future.

The journey continues

We started development with an informed backlog of ready work and the team went into “Get ‘er Done” mode.

Today our app and our collective arsenal of user knowledge continue to build. To this day, our teams’ continuous learning progresses in the same spirit, but in different forms. Our application is nearing the Minimum Marketable Product. The team continues applying what they learned in these sprints. Their research skills are stronger than ever because of it.

Additional takeaways and thoughts for teams (read the top four above)

  1. The process teaches the Doers of the sprint how to apply research and experimental Design best practices. They can take that back to their teams and apply it to their products. My coworkers are better contributors because of the work they started in these sprints and the lessons they continue to apply today.
  2. Sprint Master is a tough role in this process. The Sprint Master coaches the sprint team in their work. It takes experience to create unbiased experiments and to lead teams in the right direction. This is especially true if this is the Doers’ first continuous learning sprint.
  3. The Sprint Master has to reduce the scope of the questions being tackled, especially if the team realizes they’ve committed to too much during the sprint. Keep this in mind; it will allow your team to adjust goals with agility and get a workable result without completely failing in-sprint.
  4. This process can work for teams who aren’t dedicated full time to the sprint work, but be sensitive to people’s time and workload. You may need to change the process around like we did to make it work for the Doers.
  5. If you have someone who can act as Sprint Master, give it a try. Experiment with these sprints with an open mind and flexibility. Bring your assumptions to the table. You’d be surprised by what you think you know, and what you actually have no idea about, once you start answering questions in these sprints. You’ll come to rely on the insight you get from them. If you don’t have someone who can act as Sprint Master, the books and resources mentioned above can help get your team there.
  6. Boy, are we glad we started bringing snacks.

Our continuous learning journey will never stop; it will only grow as our product and users do. As we come to the horizon of a new application era, I can already see the fruits of our labor. I can’t wait to see how the team grows from here.


Tech Fleet

A place where UX'ers, product managers, and developers earn their wings in Tech through community education and team experience. https://techfleet.org