ai · security-operations · detection-engineering · philosophy

The Bitter Lesson in Security Operations

The tension between encoded knowledge frameworks and scalable AI learning, with tradeoffs in explainability and shared language. The real question isn’t whether to keep MITRE ATT&CK - it’s what we lose if AI outgrows it.

In 2019, Richard Sutton wrote The Bitter Lesson: the most powerful AI systems have come from general methods that scale with computation - learning from raw data - not from encoding human knowledge into them.

This raises a provocative question for Security Operations: Do we still need discretized frameworks like MITRE ATT&CK?

ATT&CK has been transformational for standardization, training, and threat intelligence sharing. But it was designed for human reasoning - categorical, sparse, optimized for communication between analysts.

As AI systems become more sophisticated, I’m wondering: What happens when machines can learn directly from raw logs, alerts, and threat data without needing our taxonomies? When patterns emerge from the data itself rather than being imposed through frameworks?
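The contrast can be sketched in miniature. The example below is illustrative only - the event shape, the T1059-style rule, and the `RarityScorer` are assumptions made up for this post, not any vendor's API or MITRE's own tooling. The point it shows: a hand-written, taxonomy-labeled rule only catches what its author anticipated, while a model fit on raw logs surfaces whatever is statistically unusual, labeled or not.

```python
from collections import Counter

# --- Encoded-knowledge approach: a human-written rule, labeled with a taxonomy.
def attack_rule_t1059(event: dict) -> bool:
    """Flag command-line interpreter use (mapped by an analyst to ATT&CK T1059)."""
    return event.get("process", "").endswith(("powershell.exe", "cmd.exe"))

# --- Scalable-learning approach: no categories, just rarity learned from data.
class RarityScorer:
    """Scores events by how infrequent their process name was in training logs."""
    def __init__(self):
        self.counts = Counter()
        self.total = 0

    def fit(self, events):
        for e in events:
            self.counts[e.get("process", "")] += 1
            self.total += 1

    def score(self, event) -> float:
        # Higher score = rarer = more anomalous (Laplace-smoothed frequency).
        freq = (self.counts[event.get("process", "")] + 1) / (self.total + 1)
        return 1.0 - freq

# Toy training baseline of benign raw events.
baseline = [{"process": "chrome.exe"}] * 98 + [{"process": "svchost.exe"}] * 2
scorer = RarityScorer()
scorer.fit(baseline)

novel = {"process": "regsvr32.exe"}   # never seen in training, not in the rule
common = {"process": "chrome.exe"}

# The categorical rule misses the novel binary; the learned scorer ranks it highest.
assert not attack_rule_t1059(novel)
assert scorer.score(novel) > scorer.score(common)
```

Neither side wins outright: the rule's output is instantly explainable ("this fired because of T1059"), while the scorer's output is just a number that an analyst still has to translate into shared language - which is exactly the tradeoff at issue.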

The bitter lesson suggests intelligence emerges best when unconstrained by human categories. But in security operations, those constraints serve purposes beyond just detection.

The tension is real: we want AI systems that can discover what we haven’t thought of, while maintaining the shared language and explainable logic that security operations depend on.