Something happened in one of my workshops earlier this month that I haven’t stopped thinking about since.

A participant (excitedly) told me he’d uploaded my training materials to Claude,

… then uploaded his notes from the Get Started with No-BS OKRs workshop that he’d completed (which you can join yourself, with our Essentials bundle) and asked Claude to build a skill based on the practices.

He shared what the tool produced, and it was… actually pretty good.

That same week, I was working with a large group of localizers, and their draft OKR quality was just off the charts compared to the average. I said so a few times; they loved the “gold stars,” and people kept telling me the Get Started workshop was so good it made it easy.

The next day, when I tucked in for 1:1s with people, the truth finally came out.

Nearly everyone I spoke with said — sheepishly — that they’d taken their notes from the Get Started workshop and dropped them into [insert GPT of choice here], and out popped No-BS OKR drafts.

I can’t believe I didn’t put that together myself.

I’m also kicking myself a bit for not making my own trained agents available to clients, since I’d much rather they be using my actually-trained agents than asking ChatGPT or Professor Google about OKRs. (We’re changing that now — our Done-With-You Consulting clients are going to have optional access to my own trained agents, on vetted, isolated platform implementations — since it’s become clear that if that option isn’t provided, folks are going to turn to whatever model they have — and in some settings, that means their personal accounts, which isn’t good for anybody.)

Over the last few weeks, something I’ve been hearing from other OKR practitioners for several months now has really sunk in for me:

GPTs can now write OKRs.

And they can write textbook OKRs — OKRs that align with best practices.

So you’d think — as an author and expert with a huge investment and deep specialty in OKRs — that would be scary.

But it really isn’t.

Since the very beginning, I’ve done everything I can to make creating OKRs as simple as possible. Because the mechanics of creating OKRs — they’re really, truly, very simple.

What’s hard is what surrounds the mechanics of creating OKRs: articulating the vision of change that informs meaningful OKRs, and then what happens after you’ve written them.

OKR writing was never the hard part.

The part that leads to slow adoption — the part that makes DIY implementation so challenging and full of pitfalls, the part that separates OKRs-in-a-document from OKRs-that-change-how-people-work — is what surrounds the OKR writing.

The questions that determine whether your OKRs actually function are behavioral and organizational. They require judgment about your specific situation: your team’s capacity, your org’s maturity with this kind of thinking, its skill at handling conflict, the dynamics between people and functions, what you’ve already tried, and the baggage (or excitement) that people bring into the process. They can’t be answered by pattern-matching on a corpus of OKR literature, however high-quality that corpus is. I’ve read every OKR book on the market — and none of them can answer these questions for your specific organization.

Here’s what I mean. I’m going to share some real questions from my sessions with workshop participants this month — anonymized, but not sanitized. They’re perfect examples of the human insight and intelligence that OKR implementation requires, way beyond OKR writing.


"What about payroll? We have to do it. Does it just get a key result?"+

This came from a finance leader whose team does accounting, analytics, and facilities. Important work, but not the kind of work that most OKR templates have in mind.

Most OKR teaching tells you that your OKRs should reflect your most important things. I’ll push back on that. Making payroll on time is as important to the organization as any stretch goal — but it doesn’t need a key result, because completion is success. Getting payroll out on time is a mandatory. It belongs in plans and milestones, not in the OKR zone.

Operational and support teams do create key results when something needs to get better. Improving payroll accuracy — if there’s a quality or compliance problem — is a key result. Decreasing the number of approval steps required to process an invoice is a key result. It’s not quantifying activity; it’s measuring friction removed and flow increased. The question isn’t “is this important?” It’s “is completion success, or do we need to achieve some kind of improvement?”

An AI-generated OKR draft for a finance team will probably give you something that sounds outcome-oriented. What it can’t do is sit in the room with your finance leader and work out which of their responsibilities genuinely need a key result — and which ones just need a solid plan.

"I want to reduce our monitoring errors from 75 million to 8 million a month. No one cares about that but us. It doesn't ladder to a company objective. Can I still do it?"+

This came from an engineering leader. Their team had a real problem: monitoring noise so high it was affecting their ability to do their jobs. Critical to their function. Invisible to everyone above them.

Absolutely. That’s exactly why we localize.

The whole purpose of OKR localization — translating company-level priorities into team-level ones — is that not everything your team needs to improve will show up in the company key results. Even when something doesn’t ladder to a specific company KR, it almost always aligns to operational excellence. (In this case, the connection to culture and operations was clear.) The point of localization is that your team builds clarity about what you need to decide, achieve, and change — to do your jobs well and to do what the company is asking of you.

This is also one of the reasons I don’t require objectives below the company level. A theme is enough. What matters is that your team has articulated what needs to improve and can write a key result around it.

An AI tool can write company-level OKRs reasonably well. It has no idea that your engineering team is drowning in monitoring noise — or that clearing that noise is the precondition for everything else you’re trying to accomplish.

"I have an important outcome I won't know the result on until December 31st. What do I do in the meantime?"+

This is the watermelon problem — a well-known phenomenon in OKR practice: status reports that are green, green, green all year, until the actual outcome turns out to be red. It happens because we set year-end key results and then rely on subjective judgment about whether we’re on track. Human optimism is powerful. So is cognitive bias. We think we’re fine until we’re not, and by then it’s too late to course-correct.

For outcomes that are genuinely important and genuinely lagging — things you won’t have data on until the end of the year or quarter — the question is whether there’s anything you can observe before the outcome that would give you signal.

One participant had exactly this challenge: an important outcome she wouldn’t have real data on until year-end. After some thinking, she identified a behavior that tends to happen before the outcome she cares about — something she could start tracking now, with a simple checkbox in her existing CRM. The leading indicator she came up with is experimental; she doesn’t know yet whether it will prove meaningful. But the point is to start iterating on it, quarter over quarter, until you develop something that gives you real signal.

Leading indicators come from observation, not from dashboards. They’re built by noticing what tends to happen before the outcomes you care about — and that kind of noticing requires someone paying close attention to the specific dynamics of your organization. It also, frankly, takes more than one OKR cycle. The organizations I work with that have the best leading indicators built them over time, through iteration and refinement.

One thing that helps: think about this as a pipeline problem, not a metrics problem. We usually talk about pipelines in the context of sales — but everything has a pipeline. Every important outcome has stages that tend to happen before it, signals that show up before the actual result does. The work of finding leading indicators is figuring out what those stages are for your outcome, in your context. It takes more than one cycle to build something reliable, but once you have it, you have something that can give you real signal for years.
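
A hypothetical illustration, to make the pipeline framing concrete: if the outcome you won’t see until year-end is renewal revenue, the stages that tend to precede it might include sustained product usage, completed health-check conversations, and renewal discussions opened early. Each is observable quarters before the revenue lands, and each is a candidate leading indicator to test.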

"I track this outcome daily. Do I still need a leading indicator?"+

This question came from a participant whose team tracks NPS every day. At any moment, he can log in and know whether he’s on or off track. His question: do I still need to identify leading indicator key results when my data frequency is daily?

“That depends,” I replied. “How much do you trust your daily NPS data? Do you trust that it gives you good signal?”

At that, there was a pause.

If you have daily data on a metric you genuinely believe correlates with the outcome you care about, you’re in an enviable position — most people don’t get that. You don’t need additional leading indicators. You already have signal. What you need is to deliver the work that improves that performance metric: do the specific activities that move it, knowing you’ll be able to see whether they’re working.

The caveat matters. If there’s real uncertainty about whether your metric is telling you what you need to know — if you’re not fully convinced it correlates with the outcome you actually care about — then daily data doesn’t solve your problem. You still might need leading indicators, not because you lack frequency, but because you lack confidence in the signal. In that case, the work is either to develop a more trustworthy version of the metric, or to identify other levers that supplement it and experiment with an index measure (a composite of several related signals that together give you a more trustworthy read on progress).
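
To make the index idea concrete (a hypothetical illustration, not a prescription): you might combine daily NPS with first-contact resolution rate and repeat-purchase rate, normalize each to a 0–100 scale, and weight them by how much you trust each signal. If NPS gets 50%, resolution 30%, and repeat purchase 20%, then scores of 80, 70, and 60 roll up to 0.5 × 80 + 0.3 × 70 + 0.2 × 60 = 73. No single noisy metric can swing the read on its own, and you can tune the weights each quarter as you learn which signals actually lead the outcome you care about.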

The flip side: if the activities you’re planning need to yield their own impact in the form of measurable progress or outcomes, that’s when initiative key results are called for. But you don’t create more key results just because a template or best practice seems to have space for them.

These are the kinds of questions that — at least for now — remain in the domain of human intelligence: they require someone to actually ask you the questions, and to push on whether your instrumented data is giving you the signal you need.


What AI speeds up isn’t just the writing.

Something unexpected is happening. When AI speeds up the drafting, it speeds up more than just the drafting.

In a first cycle, one of the biggest shifts most people make is learning the difference between activity and results: how to write truly measurable key results, and how to discern when something just needs to get done versus when it needs a progress or success measure. This is where I’m seeing AI tools genuinely accelerate the work. Where early models couldn’t, the agents I’m training now let clients verbally process their thinking about their business — what they dream of being different — while the agent listens and turns that language into potential draft key results that follow my No-BS key result formula:

[Increase / decrease / improve] [metric or observable phenomenon] by [change] from [start value] to [finish value]
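
Plugged in, a draft might read (borrowing the monitoring example from earlier): “Decrease monitoring errors by 67 million per month, from 75 million to 8 million.”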

So the person can stay focused on the change they dream of creating, and the agent can do what it’s good at: process that language into measurable draft key results. Agents are great at formulas, and they’re good at listening when they’re well trained. I’ve put a lot of effort into making OKR creation simple and well-scaffolded — and people still struggle with the cognitive shift from activity to outcome thinking. Trained agents don’t. They don’t have the cognitive bias toward activity that humans do. It’s been genuinely surprising to watch.

By the second cycle, most organizations are dealing with something harder, and they usually learn two things. First, they ran into surprises on outcome measures at the end of the term, which showed they didn’t have leading indicators everywhere they needed them. Second, they would benefit from more cross-functional OKRs — which means first getting people out of their functional silos and aligned on a shared vision of change. Teams always come back to me at the end of Q1 and say: why didn’t you tell us that? Why didn’t we start with cross-functional localized OKRs? And my answer is always the same: if I tell you it’s important, that’s different from you learning it’s important. Also: it’s hard enough to learn OKRs (and especially key results) thinking functionally — adding cross-functionality adds a layer of alignment that most teams struggle to execute even when they know they need it, much less when an outside consultant is the one telling them to.

None of that is a writing problem. And none of it gets easier just because the OKR drafts come faster. But those learnings are happening faster: as drafting speeds up, I’m seeing teams get into cross-functional OKR creation right after they lock their first-cycle OKRs, instead of waiting until the end of the first implementation quarter — moving the needle on cross-functional alignment significantly earlier.

The organizations I work with don’t stick around just because I help them write OKRs. (I mean — some do.) Most come back because I help them work through the judgment calls, the cultural dynamics, the places where the framework meets the messiness of actual organizational life. AI is genuinely useful for accelerating certain parts of OKR creation. But the human behavior part is still the hard part — and for now, that’s still the domain of human intelligence.


If you’re navigating the hard part

If any of this sounds familiar — if you’ve got OKRs now and you’re thinking, “Now what?” — that’s what I do best, and what I help clients with every day.

I’m opening up a limited number of Q&A calls specifically for folks who find me through this blog post. Bring your real “beyond writing OKRs” questions. This isn’t a sales call — it’s a brief working session to get your specific questions answered.

Book your Q&A call →