I’ve recently come across the Operator’s Manual, a post by Ian McAllister. It’s a compilation of tips for communicating data, dates and deliverables. Basically, how to live up to “be bright, be brief, be gone”. 

Inspired by that post, here are some tips I frequently share when it comes to OKRs. My standard line is “OKRs are deceptively simple, but tricky to get right,” or, as I recently heard from Isaac, “a minute to learn, a lifetime to master.” What follows is an overview of some of the tricky aspects. None of them are rocket science, but each represents a hard-earned lesson:

Introducing OKRs

Articulate the “why” for OKRs: The original sin is starting the conversation with “I think we should do OKRs” without specifying what you want to improve with them. OKRs can help with many things: increasing transparency, aligning teams on clear goals and targets, surfacing disconnects between teams early, bringing strategy closer to execution, improving focus, … Pick one or two challenges upfront and be clear on how OKRs should help. Unless you create clarity about what you are optimizing for, OKRs are easily dismissed as another management fad.

OKRs are not an extra thing: OKRs are set up to fail when they are handled separately from the rest of the product management process. Instead, they should be tightly integrated into how teams work and manage their products. Product decisions and feature prioritization should be grounded in their alignment with the Objectives and their ability to move the Key Results.

Keep it simple: It can be tempting to design OKRs for an entire organization, across multiple levels, all at once. That becomes overwhelming quickly. It helps to pick one leader and design a single set of 1-2 Objectives with 2-4 Key Results each for their area of responsibility. Think of it as the cheat sheet they would use in 1:1s with their boss. Don’t overthink it, and resist the temptation to pack too much detail into too many Objectives and Key Results. Start small and take it from there.
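For illustration, a hypothetical cheat sheet for a single leader (say, a head of product) might look like this, with all wording and numbers made up:

  • Objective: Make onboarding so smooth that new customers reach value in their first session.
  • KR1: Increase activation rate (sign-up to first completed project) from 30% to 45%.
  • KR2: Reduce median time-to-first-value from 20 minutes to 8 minutes.
  • KR3: Improve week-1 retention from 40% to 50%.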

OKRs are not individual performance management: OKRs can be an input for an individual performance review conversation. But they should be one of many inputs. As somebody once told me: “I’m not going to force myself to promote jerks just because they deliver on their OKRs.”

Selecting Key Results

Outcomes over outputs: When defining Key Results, avoid output Key Results if at all possible. Outputs are basically tasks that need to get done (launch feature X, reach milestone Y, revamp the marketing website, …). They are tactics to get to an outcome, but they are not the outcome itself. Go the extra mile and identify the benefit you want to achieve with a specific feature or activity (an increase in user engagement, more leads, better customer conversion, …) and use that as your Key Result. Techniques like the five whys help. That discussion can become quite esoteric (“Why do we exist? What is our purpose?”), but that is only temporary and typically an indication that the team is on the right track to better and more refined OKRs. Rick wrote a good post about this: Binary: Good for code, bad for OKRs.

Fewer but better OKRs: Every single time I’ve introduced OKRs, we ended up with more than the recommended 3-4 Key Results. I’ve come to accept this as a necessary rite of passage, because after a few OKR review meetings the team realizes that the conversation always centers on the 2-3 Key Results that really matter. Once you come to that realization, go ahead and deprecate the other Key Results. They are typically the ones that show up as green/on track while nobody feels good about the overall progress.

Be an OKR pragmatist, not an OKR dogmatist: There are lots of different flavors and nuances in OKR implementations. John Doerr had “output” Key Results in his examples, and even within the almighty Google not everything is as clear-cut as it looks from the outside. Try to understand the underlying concepts behind OKRs, but also feel free to break the rules when they hurt you. Just as database normalization comes with clear drawbacks the further you push it, there is value in not being too dogmatic about OKRs. Know the rules well, so you can break them effectively. (Hat tip to Isaac for the punchy headline.)

Don’t start with an OKR tool: When introducing OKRs, the first step is to understand the methodology and create a rhythm around setting, reviewing and scoring OKRs. You can do that with a shared document to set, track and review OKRs in a small team or at one specific level of the org hierarchy. A tool will distract at the beginning and can easily derail the effort. Once you roll OKRs out across multiple teams and organizational layers, a tool can help manage the complexity. But start small and simple, build the muscle, and then move to a tool. I wrote more on that here.

Reviewing progress

Commentary over numbers: Knowing where a metric stands is important, but understanding what drives that metric and what the implications are is far more important. I recommend addressing three aspects in the commentary (a made-up example follows the list):

  • Data (what): What does the data say? Is the metric going up or down, and by how much?
  • Insight (so what): What does it mean? Is this good or bad? What are the causes for this behavior, what can we learn from it, have we identified new risks to our plan, …
  • Action (now what): What should we do about it? Do we need to adjust our plans, …
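As an illustration, commentary for a hypothetical Key Result (“Increase trial-to-paid conversion from 10% to 15%”; all numbers made up) might read:

  • Data: Conversion is at 11.5%, up half a point since the last review, which is behind the pace we need.
  • Insight: The gain comes almost entirely from the new pricing page; conversion from the mobile sign-up flow is flat, so the remaining gap looks like a mobile problem.
  • Action: Shift next sprint’s experiments to the mobile checkout and revisit the confidence level at the next review.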

Once you get to a spot where the commentary is consistently good in OKR reviews, OKRs become this amazing learning tool that accelerates how quickly a team understands its customers and its product.

Embrace the red: Encourage Key Result owners to give a fair and balanced assessment of the status of their KR. Being “red” is not necessarily bad – pretending not to be is. Once a team “embraces the red” OKRs shift from a performance management tool to a learning device that facilitates highly impactful conversations about what teams learn as they build, launch and operate products. I wrote more on that here: Embrace the Red.

No watermelons: Avoid the temptation to declare a Key Result as “on track” when it is actually “at risk”, i.e. the watermelon (“outside green, but when you look inside it’s red”). If it’s at risk, declare it early, which brings us to …

No surprises: Put everything on the table and create maximum transparency when reporting your Key Results. I’ve seen too many rationales that assume a hockey-stick growth curve, only to be surprised in the last OKR review of the quarter or semester. Hope is a bad strategy. One way to address this is to track a confidence level for each Key Result as part of the review conversation: each Key Result starts at 50% by default, i.e. a 50-50 chance of hitting the goal. As you progress through the quarter you learn more, and the confidence moves up or down, which allows a team to communicate its outlook even if a Key Result is backloaded. It also creates an embedded feedback loop for when confidence levels swing hard in the last weeks of a quarter: what could we have anticipated earlier, or which risks could we have mitigated better? Christina Wodtke has more on that in her book Radical Focus.
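To make that concrete, here is a made-up confidence trail for a single, hypothetical Key Result over a quarter:

  • KR: Grow weekly active teams from 200 to 300.
  • Week 2: 50% (default starting point, no real signal yet)
  • Week 5: 60% (early adoption of the new invite flow looks ahead of plan)
  • Week 9: 35% (a dependency slipped; the biggest driver now lands in the final two weeks)
  • Week 12: the review conversation centers on what we could have anticipated back at week 5, not on assigning blame.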

Final Words

Every OKR implementation is different, and the above are just a few of the lessons I’ve learned over the years. I’m sure I’ll be able to add more over time. By the way, most people know Measure What Matters by John Doerr, which is a great intro to OKRs. Once you understand those basics, I highly recommend The OKRs Field Book by Ben Lamorte; it has lots of hands-on advice on how to get started and which pitfalls to avoid.

Big “Thank you” to Isaac for his generous contributions to this post. He’s also been the one who got me back on the OKR bandwagon.
