Feature Factories are dangerous things. You might even work in one and not realize it yet.
Here’s a test: if your team built something 6 months ago and you’ve no idea if anyone is using it, or if anyone even finds it valuable, you may well work in a Feature Factory.
If a new hotel opens and nobody ever pays to stay there, that would be a terrible outcome with a very negative business impact for the hotel chain. Yet plenty of software teams spend months building features in products that don’t change their users’ behavior (or worse, never get used).
Engineers working in Feature Factories are generally not engaged with their customers, are not well motivated, have lower morale, and quit at high rates. The endless spoon-feeding of stories to them by a Product Owner leads to a feeling of being on a conveyor belt of boredom and sameness, and their product and business suffer as an inevitable result.
Outcome Focus over Feature Factories
The most successful product companies do not work as Feature Factories. Instead, they focus on outcomes. An outcome-focused team can always tell you which of their past shipped features were widely adopted, what their users thought of them, and what experiments they are running to guide their future work.
The term “outcome” is somewhat abstract, but Josh Seiden provides a powerful definition in his excellent book, “Outcomes over Output”:
An outcome is a change in user behavior that drives business results.
The implications of this simple definition are profound:
- The user’s behavior should change after you ship the feature
- Measuring the user’s behavioral changes is a must
- The change in user behavior must cause a positive business impact (for example, increasing revenue)
Becoming an Outcome-Focused Team
For many teams, the pressure to remain a Feature Factory comes from organizational structures that reinforce that model. Large companies build systems of incentives for sales reps to sell future capabilities, and new customers come on board with the promise of the team delivering those features on time. Experimentation appears wasteful and risky, user-focused learning diminishes, and the engineers get measured on predictability of hitting ship dates, regardless of the actual user value of what they ship.
It’s not a simple task to switch from Feature Factory mode to being outcome-focused, but small steps can be taken that make a big difference. One of those high-leverage changes is to stop using the word “can” in your backlog’s user stories.
The Problem With “Can”
It’s common to see user stories and requirements like this in JIRA projects and issue trackers everywhere:
- Trial users can upload a new file on the custom logo settings page
- Admin users can view all the active accounts
- Users can log out from the home page
As satisfying as these might look to developers (who can quickly grok what they need to build by reading them), these kinds of definitions push a team deeper into Feature Factory mode. Why?
Just because a user can do something doesn’t mean they actually will.
Stories titled with “can” move to Done the moment the feature ships; the team celebrates another output and moves on to the next item on the conveyor belt.
Will any users actually upload a new logo? Will an admin user view any active accounts? Will any users log out from the home page? The team isn’t planning to measure these behaviors, so…maybe? Who knows?
Drop “Can” to Focus on Outcomes
Let’s now remove the word “can” from the first story example and see what effect it has:
- Trial users upload a new file on the custom logo settings page
Now, this story can’t be completed solely by shipping new features. Users actually have to upload new files in order for the team to claim victory!
Knowing if users actually take that action requires visibility into their behavior and knowledge of what they’re doing with the product. There are plenty of tools that can inject this kind of measurement into web applications with minimal effort, so a team using this approach would quickly need to install such a system. They could also schedule conversations with some of their users to discuss the changes with them.
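As a sketch of what that instrumentation might look like, here is a minimal, vendor-neutral event logger in Python. The `track` function, the event name, and the payload fields are all hypothetical; a real team would send these events to their analytics tool of choice rather than a local file:

```python
import json
import time

def track(event_name, user_id, properties=None):
    """Record a product analytics event (hypothetical minimal logger).
    In production this would be an HTTP call to an analytics backend;
    here we just append JSON lines to a local log file."""
    event = {
        "event": event_name,
        "user_id": user_id,
        "timestamp": time.time(),
        "properties": properties or {},
    }
    with open("events.log", "a") as f:
        f.write(json.dumps(event) + "\n")

# Fire the event where the behavior actually happens -- in the upload
# handler itself, not merely when the settings page renders.
track("logo_uploaded", user_id="trial-user-42",
      properties={"plan": "trial", "file_type": "png"})
```

The key design point is that the event marks the user’s *action* (an upload happened), not the feature’s existence, so the story’s “users upload a new file” claim becomes directly checkable.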
When the measurement starts, it might yield surprising results! For example, if very few trial users upload a new logo file, that’s a learning in itself. It may mean that the hypothesis was wrong, that the design has issues, or that a technical problem is breaking the upload. Figuring that out is crucial to realizing the value of the feature that shipped.
As the team gets more sophisticated with their validations, they’ll tie the features they ship to the impact they have on their business. In this example, they might use feature flags or A/B testing to determine if the offer of custom logos to trial users helps convert those users into paid customers. Importantly, they’ll also have precisely stated goals for those conversions.
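To make that concrete, here is a rough sketch of how a team might check whether such an experiment moved trial-to-paid conversion, using a standard two-proportion z-test. The counts are made-up illustration data, not results from any real experiment:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is variant B's conversion rate
    significantly different from variant A's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical experiment: trial users without the custom-logo feature
# (variant A) vs. with it (variant B), counting paid conversions.
z = two_proportion_z(conv_a=40, n_a=1000, conv_b=62, n_b=1000)

# |z| > 1.96 corresponds to p < 0.05 on a two-sided test.
significant = abs(z) > 1.96
```

Pre-stating the conversion goal (e.g. “lift trial conversion by at least one percentage point”) before looking at the data is what keeps this an experiment rather than a post-hoc justification.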
Common Challenges When Moving to Outcomes
A barrier that teams encounter when orienting around outcomes is that outcomes always lag the output.
In other words, shipping the feature is only step 1, and that’s where the learning and measurement must begin. But many teams spend years orienting around shipping alone, and rarely review the success of what they have already shipped. Feedback loops are easy to ignore, but crucial for success. A simple way to close the loop is to set up a regular session for reviewing the adoption of previously shipped features.
It’s also common for business leaders to gravitate toward things that appear to have high certainty (like future ship dates and multi-year roadmaps) and to be less enthusiastic about experimentation, hypothesis-setting, and risk. This can make winning senior management over to outcomes a difficult task.
A team that orients heavily around outcomes can also over-rotate toward metrics and data, at the expense of conversations with users. Usually this happens when the product has scaled to thousands or millions of users, and the team feels their measurement tools give them a sufficient “pulse” on their users. It’s wise to retain the immense benefit of talking directly with your users; this will provide far more nuance than any automated measurement tool can offer.
The simple step of adding a past-focused “outcome review” to any team’s calendar will bring significant benefits. For example, the team could meet every month and ask this simple question: “For each of the features we shipped last quarter, how well were they adopted, and what did we learn?”
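One sketch of the kind of summary that could feed such a review, assuming the team keeps an event log of feature usage (the feature names and data below are hypothetical):

```python
# Hypothetical event log: (feature, user_id) pairs recorded since the
# features below shipped last quarter.
events = [
    ("logo_upload", "u1"), ("logo_upload", "u2"), ("logo_upload", "u1"),
    ("account_list", "u3"),
]
shipped = ["logo_upload", "account_list", "home_logout"]

# Count distinct adopters per shipped feature. A zero here is exactly
# the kind of finding an outcome review exists to surface.
adopters = {f: len({u for feat, u in events if feat == f}) for f in shipped}

for feature, count in adopters.items():
    print(f"{feature}: {count} adopter(s)")
```

Even a crude tally like this turns the review question from “did we ship it?” into “did anyone use it?”, which is the shift the article is arguing for.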
Focusing on outcomes over merely shipping features is one of the most valuable changes a software team can make, but removing the word “can” from stories isn’t enough on its own. The team must embrace the natural implications of that simple change, and recognize the importance of measurement, experimentation, and conversation with their users.
Further Reading
- “Outcomes over Output,” by Josh Seiden
- “Empowered,” by Marty Cagan
- “Outcomes over Output,” by John Cutler (conference video, 2019)
Cover photo by the author, via Unsplash.