In 2023 and 2024, I worked on a project that I’m genuinely proud of. Northwave was looking for a way to assess the quality of their detection rules, and I was tasked with figuring out how to do that. It was one of those “build it from scratch, no map provided” challenges — tough, rewarding, and surprisingly fun.
What started as a research question — Which properties of a detection rule can serve as meaningful indicators of quality? — ended up evolving into an advanced, multi-layered scoring system. The result? A sleek, interactive ranking of all use cases, complete with detailed scores.
While I can’t share any visuals or technical specifics (thanks to a signed NDA), I can give you a sneak peek into what the project was all about.
In a Security Operations Center (SOC), detection rules — often called use cases — are what help spot cyber threats in real-time. They look for specific patterns in activity that might signal something’s not right.
Imagine them as a mix between smart sensors and tripwires. They’re constantly scanning for signs of suspicious behavior — like a login from an unusual location, someone downloading way more data than usual, or a rare system command being executed.
When something matches a rule, the system throws up a flag:
“Hey, this might be bad — take a look!”

Without these rules, SOC analysts would be drowning in noise. With them, they can zero in on real threats — fast.
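To make the "smart sensor" idea concrete, here's a purely illustrative toy rule — not Northwave's implementation, and the field names, user profile, and threshold are all invented for this sketch. It flags exactly the kinds of signals mentioned above: a login from an unusual location, or a download far larger than normal.

```python
from dataclasses import dataclass

# Toy event record; field names are illustrative, not from any real SIEM.
@dataclass
class LoginEvent:
    user: str
    country: str
    mb_downloaded: float

# Hypothetical "usual" profile and threshold, hard-coded for the example.
USUAL_COUNTRY = {"alice": "NL"}
DOWNLOAD_THRESHOLD_MB = 500.0

def unusual_login_rule(event: LoginEvent) -> bool:
    """Flag logins from an unexpected country or with unusually large downloads."""
    wrong_country = USUAL_COUNTRY.get(event.user) not in (None, event.country)
    big_download = event.mb_downloaded > DOWNLOAD_THRESHOLD_MB
    return wrong_country or big_download

alerts = [e for e in (
    LoginEvent("alice", "NL", 10.0),    # normal: no alert
    LoginEvent("alice", "BR", 12.0),    # unusual location: alert
    LoginEvent("alice", "NL", 2048.0),  # unusual volume: alert
) if unusual_login_rule(e)]
print(len(alerts))  # 2
```

A real use case would of course match structured log data in a SIEM query language rather than Python, but the shape is the same: a condition over event fields, and a flag when it matches.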
But here’s the kicker:
👉 Not all detection rules are equally helpful.
Some are sharp and reliable. Others are outdated, too sensitive (lots of false alarms), or not sensitive enough (missing actual threats).
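The two failure modes above map neatly onto classic detection metrics. As a hedged sketch (the numbers are made up, and the actual indicators used in the project are under NDA), precision captures "too sensitive" and recall captures "not sensitive enough":

```python
def precision(true_pos: int, false_pos: int) -> float:
    """Share of alerts that were real threats; low precision = noisy rule."""
    return true_pos / (true_pos + false_pos)

def recall(true_pos: int, false_neg: int) -> float:
    """Share of real threats that triggered an alert; low recall = blind spots."""
    return true_pos / (true_pos + false_neg)

# Hypothetical alert outcomes for two toy rules:
noisy_rule = (precision(5, 95), recall(5, 1))   # fires constantly: (0.05, ~0.83)
quiet_rule = (precision(4, 1), recall(4, 16))   # rarely fires:     (0.8, 0.2)
```

The noisy rule buries analysts in false alarms; the quiet one misses most real incidents. Either way, the rule needs work — which is exactly what a quality assessment should surface.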
That’s why constantly improving these rules is key — and why Northwave is so committed to doing just that.
To stay ahead of potential threats, Northwave asked me to figure out how they could measure the quality of their use cases — and spot the ones that need the most improvement.
The project focused on two big questions: which properties of a detection rule can serve as meaningful indicators of quality, and how those indicators can be combined into a score that highlights the rules most in need of improvement.
After months of research, brainstorming, testing, and refining, we built a system that does exactly that.
It’s not just a list — it’s a priority map that shows where to improve, where to maintain, and where to celebrate success.
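In the abstract, a priority map like this can be built by combining per-rule indicators into one weighted score and sorting. The sketch below is purely illustrative — the indicator names, weights, and rules are invented, since the real scoring model is covered by the NDA:

```python
# Assumed indicator names and weights, normalized to the 0..1 range.
INDICATOR_WEIGHTS = {"precision": 0.4, "coverage": 0.3, "freshness": 0.3}

def quality_score(indicators: dict[str, float]) -> float:
    """Weighted average of 0..1 indicator values for one detection rule."""
    return sum(INDICATOR_WEIGHTS[k] * indicators[k] for k in INDICATOR_WEIGHTS)

# Two hypothetical use cases with made-up indicator values.
rules = {
    "Unusual login location":    {"precision": 0.9, "coverage": 0.8, "freshness": 0.7},
    "Legacy malware hash match": {"precision": 0.6, "coverage": 0.3, "freshness": 0.1},
}

# Lowest score first: these are the rules that need attention most urgently.
priority = sorted(rules, key=lambda name: quality_score(rules[name]))
print(priority[0])  # Legacy malware hash match
```

A single weighted average is the simplest possible aggregation; a multi-layered system like the one described above would score groups of related properties separately before combining them, but the ranking idea is the same.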
What started as an exploratory project quickly evolved into a working Proof of Concept (PoC) — one that’s already proving useful. The positive reactions I got during internal presentations confirmed that we were onto something valuable.
Because of the success of the PoC, a follow-up project has already kicked off to take this to the next level:
📘 Code is being cleaned up and fully documented
👥 More team members will be able to contribute and expand on the system
🖥️ A front-end interface is coming — turning raw scores into sleek, visual dashboards
Imagine being able to open a dashboard and immediately see which use cases are performing well, which are generating noise, and which need attention first.
That’s the future — and it’s already in motion.