The $1 Trillion Signal That Was Sitting on a Job Board
Frontier AI capabilities don't debut at keynotes. They're assembled in training data pipelines - and the hiring patterns are visible months in advance.
Longform writing on security, AI, and the systems that protect everything else.
Every system in history was designed to last. AI agent architecture is the first that should be designed to disappear. The best code you write today is code that becomes unnecessary tomorrow.
The industry uses 'context' as a synonym for 'more data.' It's not. Here's a first-principles framework for what context actually means - from events to chains to storylines - and why most SOCs are stuck at Layer 1.
The greatest vulnerability in the modern SOC is not a lack of data - it is a lack of memory. How tribal knowledge, context lakes, and a new role called the Context Analyst change everything.
We treat the SOC as a defensive funnel, but functionally it's a bottleneck. When attack volume goes exponential, a fixed-capacity model forces you to ignore the vast majority of signals to preserve the team's sanity.
Maliciousness isn't an inherent property of an event - it's a property of its relationship to future context. Every dismissed alert is a liability on your balance sheet.
Every time you tune a detection rule to silence a noisy alert, you're hard-coding a blind spot. We're trading false positives for false negatives.
AI's expanding context windows sound like a defensive breakthrough. In reality, they're structurally easier for attackers to exploit - attackers save the whole board state while defenders rebuild from fragments every move.
Security is theoretically simple. But SecOps in practice is a war against entropy - where the real task isn't correlation, it's intent recognition.
Teams build elaborate state machines to compensate for model limitations. The result benchmarks well - and doesn't think. Your architecture is a commitment, not a snapshot.
We spent two years perfecting our RAG pipeline. Then our AI started reading markdown files from a filesystem instead of querying our vector store. This isn't a bug - it's the future.
AI SOCs inherit and amplify human bias - favoring detections that reduce workload over ones that maximize threat coverage. Without a sparring partner, even AI-generated rules drift conservative.
You can't explain intelligence by inserting another intelligent agent. If GPT-5's router needs sophisticated judgment to route intelligence, who's routing the router?
Howard Marks argues you can't quantify risk even after the fact. We've built an entire vulnerability management industry around measuring the unmeasurable.
Traditional user-first thinking breaks down when technology constraints shift weekly. Start with rigorous backend validation, then design the minimal human interface around what actually works.
False Positive Rates were never about detection accuracy - they were always about human capacity. AI triage changes the equation entirely.
Sutton's Bitter Lesson says intelligence emerges from scalable learning, not encoded knowledge. Does that mean we'll outgrow MITRE ATT&CK?
AI agentic architectures are a modern manifestation of the oldest problem-solving paradigm in CS - divide and conquer, but with reasoning, adaptation, and evolution.
Every Gen AI product ultimately excels at one core function: context management. The more precisely you infer and enrich the prompt, the better the application performs.
SecOps reasoning is graph-based but our data arrives as time-series logs. This impedance mismatch - like ORMs bridging objects and tables - is the core data engineering problem in security.
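To make that impedance mismatch concrete, here is a minimal illustrative sketch (the log fields and hostnames are hypothetical, not from any specific SIEM schema): the same security data stored as time-series rows, then reshaped into the entity graph analysts actually reason over.

```python
# The same security data, viewed two ways.
# Field names (ts, src, dst, action) and hostnames are hypothetical.

# How the SIEM stores it: an append-only time-series of log rows.
logs = [
    {"ts": 1, "src": "laptop-7", "dst": "dc-01",    "action": "auth"},
    {"ts": 2, "src": "dc-01",    "dst": "files-02", "action": "mount"},
    {"ts": 3, "src": "laptop-7", "dst": "files-02", "action": "read"},
]

# How analysts reason about it: a graph of entities and relationships.
graph = {}
for row in logs:
    # Each log row becomes a directed edge between two entity nodes.
    graph.setdefault(row["src"], []).append((row["action"], row["dst"]))

# Graph questions ("what can laptop-7 reach within two hops?") are awkward
# to express against the row store, but natural against the graph.
reachable = {dst for _, dst in graph["laptop-7"]}
for node in list(reachable):
    reachable |= {dst for _, dst in graph.get(node, [])}

print(sorted(reachable))
```

The translation loop in the middle is the "ORM layer" the teaser alludes to: every graph query pays the cost of rebuilding relationships from flat rows.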