Sunday, August 17, 2025

"How to Be a Good Intelligence Analyst"

From the Statecraft substack, August 7:

“The first to get thrown under the bus is the intelligence community” 

Today we're joined by Dr. Rob Johnston. He's an anthropologist, an intelligence community veteran, and author of the cult classic Analytic Culture in the US Intelligence Community, a book so influential that it's required reading at DARPA. But first and foremost, Johnston is an ethnographer. His focus in that book is on how analysts actually produce intelligence analysis.

Johnston answers a lot of questions I've had for a while about intelligence:

  • Why do we seem to get big predictions wrong so consistently?

  • Why can't the CIA find analysts who speak the language of the country they're analyzing?

  • Why do we prioritize expensive satellites over human intelligence?

We also discuss a meta-question I often return to on Statecraft: Is being good at this stuff an art or a science? By “this stuff,” in this case I’m referring to intelligence analysis, but I think that the question generalizes across policymaking: would more formalizing and systematizing make our spies, diplomats, and EPA bureaucrats better? Or would it lead to more bureaucracy, more paper, and worse outcomes? How do you build processes in the government that actually make you better at your job?

Thanks to Harry Fletcher-Wood for his judicious edits.

I want to start with a very simple question: What's wrong with American intelligence analysis today?

That's an interesting question. I'm not sure that there's as much wrong as there once was, and I think what is wrong might be wrong in new and interesting ways.

The biggest problem is communicating with policymakers. Policymakers have remarkably short attention spans. They're conditioned by a couple of things that the intelligence community can't control. They want to know, "Is X, Y, or Z going to blow up or not?" That's fine. However, if you say, "Yes, probably in 10 years," there's nothing a US policymaker can do about it. "Oh God, 10 years from now, I can't think about 10 years from now. I've got to worry about my next election."

If you say, "Oh, by the way, you've got 24 hours," they think, "Oh God, I can't do anything about it. It's too late. I don't have a lever to pull to effect change in 24 hours."

So there's always this timing-teaming problem. If I give a policymaker 2–3 weeks, that's optimal space for the policymaker. But a lot of the consumers of intelligence aren't savvy enough consumers to know that they should ask for that: "In the next three weeks, lay out the three different trends that might occur in Country X, and tell me what the signposts are for each of those so that I can make some adjustments based on ground truth. So, if we see X occur, it indicates that there's a greater probability that Y will occur versus A or B."

The questions from policymakers range from "Whither China?" which is so broad as to be almost meaningless, all the way down to "Here's this weapon platform. Do we know if this weapon platform is at this location?" The "Whither China?" questions are always driven by poor tasking.

That communication between the consumer and the producer needs a lot of focus and a lot of work. In my experience, it's always an intelligence failure and a policy success — it's never a policy failure. The first to get thrown under the bus is the intelligence community.

Could you define tasking for me? As I understand it, I'm the consumer of some product from the intelligence community, and I “task” you with providing it, right?

Generally speaking, yeah. Our consumers are policymakers, either civilian or military, and they're the ones doing the tasking.

And “teaming” is how people get assigned to tasks?

There are a couple of different ways it works. I can't speak to the current administration, but in the past, there's been the "President's Intelligence Priorities." Every president has a list: "These are the 10 things I really care about." The community puts together the National Intelligence Priorities Framework, which says, "Okay, community, in all of the world of threat and risk, what do we really care about?" They lay that out. There are some hard targets: Russia, China, North Korea, Iran, the usual suspects. "All right, let's merge those two lists, and then we will resource collection and analysis based on them." It's a fairly rational, albeit slow, process to arrive at some agreed-upon destination over the next year, two years, four years, whatever it happens to be.

So the product of that workflow is a set of decisions, “We're going to staff this question with this many people, and this question with fewer people.”

That's exactly right. And that leaves certain things at risk. A good example of that is the Arab Spring. If you think about a protester self-immolating in Tunisia, the number of analysts really focused on Tunisia at that moment in time was minimal.

Give me a ballpark estimate. How many analysts would've been thinking about Tunisia week to week?

Honestly, within the community, maybe a dozen. At my old shop, the CIA, it was half of one FTE [Full Time Equivalent] for a period of time. Tunisia's not high on the list of US concerns. The issue is that it was a trigger for a greater event, the Arab Spring. When that happens, it's affectionately referred to as "Cleanup on aisle eight": there's some crisis, and we have to surge a bunch of people to that crisis for some period of time. Organizations try to plan so that they can staff around crises, knowing that we're not going to catch everything because we don't have the resources or personnel or programs.

We don't have global coverage per se. HUMINT [human intelligence: collecting information through individuals on the ground] is slow, meticulous, and specific. If you're going to dedicate human resources to something, it's usually a big something. Case officers go out and find some spies to help us with collection on Country X. It’s a very long and methodical process. We may not have resources in Tunisia at any given time because it just isn't high on our list. And then we get a Black Swan event out of nowhere. The immolation triggers a whole bunch of protests, and then [Hosni] Mubarak [the president of Egypt] falls.

The administration says, "You didn't tell us Mubarak was going to fall." The response is, "We've been telling you for 10 years that Mubarak is going to fall. The Economist has been telling you for 10 years. Everybody on Earth knows that Mubarak can't stay in power based on his power structure. But we can't predict what day it's going to happen.” We would love to. But, realistically, we all recognize that Mubarak is very weak, is hanging on by a thread, so at the right tipping point, he's going to go. But the notion that somehow the community missed it is fictitious. That's generally a policy utterance, you know? "Oh, the community missed it." Well, not really.

You mentioned that this problem — that until you have a cleanup on aisle eight, you're not monitoring important things — is a feature of the way these priorities get put together. Are there other blind spots or weaknesses as a result of how the intelligence community sets priorities?

There are other weak spots. I think the biggest misconception about the community and the CIA in particular is that it's a big organization. It really isn't. When you think about overstuffed bureaucracies with layers and layers, you're describing other organizations, not the CIA. It is a very small outfit relative to everybody else in the community.

Put some numbers on that?

I can't actually, that one's classified....

....MUCH MORE 

 Our last visit to Statecraft was July 21's Luttwak: "How to Stage a Coup...".