With the multimodal and extended-reasoning capabilities of the latest AI models, imagine (to borrow Dario’s phrase) an entire country's worth of NSA analysts in a data center. Call this “automated strategic intelligence.”
Consider the scale at which AI can synthesize and analyze data: all the social media posts, maps data, and networks of people.
And with such a paradigm, I argue that the math-econ-modeling-good-at-software-engineering folks should strongly consider pursuing this field.
Why should you do this?
The work can make the world better in concrete ways. With enough public feeds—or the right institutional datasets—you can answer questions just at the edge of technical feasibility:
Is China pursuing intelligence recursion, that is, a feedback loop in which AI systems are used to build better AI systems and the gains compound? If yes, that helps determine whether a red line has been crossed and becomes the key input to any MAIM-style deterrence proposal.
What is the scale and routing of gray-market H100 GPU flows into China?
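To make the flavor of that second question concrete, here is a minimal sketch (in Python, assuming the `anthropic` SDK and an API key) of one tiny piece of such a pipeline: an LLM triaging scraped resale listings for likely H100 offers. The listings, field names, prompt, and model name are all invented for illustration; a real estimate would need actual customs or marketplace data and far more careful validation.

```python
# Hypothetical sketch: flag resale listings that plausibly involve
# export-controlled H100s and tally rough unit counts. The sample data,
# fields, and prompt are illustrative only. Requires the `anthropic`
# package and an ANTHROPIC_API_KEY in the environment.
import json
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Stand-in for data you would actually scrape or license
# (customs records, reseller forums, marketplace listings).
listings = [
    {"source": "reseller-forum", "text": "8x HGX H100 80GB SXM5, Shenzhen pickup, ships this week"},
    {"source": "marketplace", "text": "Used RTX 4090 gaming rig, local pickup only"},
]

def classify(listing: dict) -> dict:
    """Ask the model whether a listing plausibly involves export-controlled GPUs."""
    prompt = (
        "You are helping estimate gray-market flows of export-controlled GPUs.\n"
        "Given this listing, return only a JSON object with keys: "
        "is_h100 (bool), estimated_units (int), confidence (0-1).\n\n"
        f"Listing: {json.dumps(listing)}"
    )
    response = client.messages.create(
        model="claude-3-5-sonnet-20241022",  # substitute whatever current model you use
        max_tokens=200,
        messages=[{"role": "user", "content": prompt}],
    )
    # Assumes the model returns bare JSON; a real pipeline would use
    # structured-output handling and retries here.
    return json.loads(response.content[0].text)

if __name__ == "__main__":
    flagged = [c for c in map(classify, listings) if c.get("is_h100")]
    total = sum(c.get("estimated_units", 0) for c in flagged)
    print(f"Flagged {len(flagged)} listings, ~{total} units total")
```

The interesting engineering lives around that call: collection, deduplication, entity resolution against shipping manifests, and calibrating the model's error rates. The classification step itself is the easy part, which is exactly why the field is newly tractable.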
You can create value without committing to either speeding up a lab you favor or slowing down all labs; the work is orthogonal to that axis.
Many default entrants to the strategic intelligence space optimize for national capability (say, what is the insider risk posed by this twenty-year-old?) rather than for safety questions, which leaves important public-interest questions underexplored.
You build career capital and option value that can carry over to frontier developers, policy roles, or a future national program if one emerges.
The field is a priori neglected because roles are hard to advertise and often require security clearances, which narrows eligibility (it does help if you are a US citizen).
It is tractable now because modern models and tooling newly enable it; measurable outputs reduce credentialism and make room for junior contributors with the right skills.
It is exceptionally fun if you like building systems that answer hard, real-world questions.
You can be paid pretty well! I hear the defense and intelligence contractors are doing okay these days.
Where should you work on automated strategic intelligence specifically? That depends on your view of AI risk.
Is making AI systems do what humans want them to do easy or hard? If you’re in camp one—believing that companies like Anthropic and OpenAI have this mostly figured out—then working at their natsec and defense partnerships makes sense. You’d be using their powerful models for intelligence work while trusting their safety measures.
If you’re in camp two—worried that we don’t yet know how to control increasingly powerful AI systems—work somewhere that builds intelligence capabilities without directly advancing frontier AI itself: defense-focused startups, open-source intelligence (OSINT) outfits, or the NSA. You still do the national security work, but without accelerating what you see as potentially dangerous AI development.
Why shouldn't you do this?
There's not much structure or public information about these career paths, particularly in the startup, OSINT, and government worlds; it's easier to get a sense of what you're signing up for from lab employees who've worked on these problems. The bureaucracy, particularly in government, can be crushing for people used to moving fast (DOGE, again?). You likely won't be working with folks who are culturally similar to you (though I hear the Bay Areans are increasingly migrating to DC).
I don't personally know anyone who has done this and would be willing to speak about it, which makes career advice scarce. Intelligence-related fields carry slightly higher personal risk—you become a more interesting target for adversary nations.
And if I'm wrong and this is already a large field that's entirely classified, I wouldn't know by design. We could already be late to a party we can't see.
If you are thinking about this, I would love to chat! @jasonhausenloy.48 on Signal. I helped with a small paper introducing the concept, if you’re interested.