I had a conversation last month with a Tier-1 analyst who’d been in the job for about 18 months. Smart kid, good instincts. He asked me, half-joking, whether he should start learning to code because “the AI is going to take my chair.”
I told him his chair was fine. But it’s going to be a different chair.
Here’s what I mean. The job he was hired for, the one in the job description, looked something like this: monitor SIEM dashboard, triage incoming alerts, enrich IOCs, check if the alert is a known false positive, escalate or close. Repeat 300 to 400 times per shift. Eight hours, five days, until you burn out or get promoted to Tier 2, whichever comes first.
That job is going away. Not in some abstract “future of work” sense. It’s happening now. Gartner projects that AI will automate more than half of Tier-1 SOC tasks by 2028. One large MSSP reported going from 144,000 monthly alerts to 200 that needed human attention after deploying an AI triage platform. That’s a 99.8% reduction in the queue.
So what does the person sitting in that chair actually do all day?
They review AI verdicts instead of raw alerts
The shift isn’t from “analyst” to “unemployed.” It’s from “processing raw alerts” to “reviewing resolved cases.”
In the old model, the analyst gets an alert that says “suspicious login from unusual location.” They open five tabs: SIEM for the log, EDR for the endpoint, Active Directory for the user, VirusTotal for the IP, and Jira for any previous tickets. Twenty minutes later, they have a verdict: false positive, the user is on a business trip. Close the ticket. Move on to the next one.
In the new model, the AI does all of that in seconds and presents the analyst with a pre-assembled investigation. The user’s travel history was checked. The IP was enriched. The endpoint is clean. Historical pattern: this user triggers this alert type every time they travel. AI verdict: false positive, 94% confidence. The reasoning is laid out.
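To make that concrete, here’s roughly what a pre-assembled case might look like as data. This is a sketch in Python, and every field name is mine, not any vendor’s schema:

```python
# Hypothetical shape of an AI-resolved case as presented for human review.
# Field names are illustrative; every platform has its own schema.
case = {
    "alert": "Suspicious login from unusual location",
    "user": "jsmith",
    "verdict": "false_positive",
    "confidence": 0.94,
    "evidence": [
        {"source": "SIEM", "finding": "login from Lisbon, 02:14 UTC"},
        {"source": "travel", "finding": "approved trip to Lisbon this week"},
        {"source": "EDR", "finding": "endpoint clean, no new processes"},
        {"source": "threat_intel", "finding": "IP is a hotel ISP, no reputation hits"},
    ],
    "historical_pattern": "user triggers this alert type on every trip",
    "recommended_action": "close",
}
```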
The analyst’s job is to look at that and decide: do I agree? Does the reasoning hold up? Is there anything the AI missed?
That sounds simpler. It’s actually harder. Because now the analyst needs to understand how the AI reached its conclusion, spot the cases where the reasoning looks right but the conclusion is wrong, and make judgment calls on the edge cases where the AI is genuinely uncertain.
A practitioner wrote about exactly this shift in his own SOC. He described receiving an alert about a marketing employee downloading 15 GB of data after hours. The AI flagged it as high-risk data exfiltration and recommended account suspension. What the analyst discovered was that the employee was downloading video assets she’d created for a campaign launching the next morning, with manager approval. The AI saw the behavioral deviation and scored it correctly by its own logic. The human saw the business context and overruled it.
That’s the job now. Not “triage the alert.” It’s “evaluate whether the AI understood what actually happened.”
They teach the AI what “normal” looks like in their environment
Every SOC environment is different. The login patterns at a hospital with shift workers look nothing like the login patterns at a software company where everyone works from home. A financial trading firm’s network traffic at 4 AM is normal. A manufacturing plant’s network traffic at 4 AM is not.
The AI doesn’t know this out of the box. It needs to learn it from someone who understands the business. That someone is the analyst.
This is what “SOC AI Trainer” or “Detection Engineer” means in practice. The analyst notices that the AI keeps flagging the third-shift nursing staff’s logins as suspicious because they happen outside “business hours.” She writes a tuning exception. She notices the AI is scoring internal vulnerability scanner traffic as lateral movement. She adjusts the behavioral baseline. She notices the AI is missing a specific pattern of credential abuse that’s unique to their identity provider. She writes a custom detection rule.
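In practice, a tuning exception can be as small as a predicate the pipeline checks before scoring. Here’s a hedged sketch, not any product’s API; the department names and shift window are invented for illustration:

```python
from datetime import time

# Hypothetical tuning exception: third-shift nursing logins are expected
# outside "business hours" and should not raise an off-hours login alert.
SHIFT_START, SHIFT_END = time(22, 0), time(7, 0)  # window wraps past midnight

def is_expected_offhours_login(alert: dict) -> bool:
    """True if this off-hours login matches the documented benign pattern."""
    if alert["type"] != "offhours_login":
        return False
    t = alert["login_time"]  # a datetime.time
    in_window = t >= SHIFT_START or t <= SHIFT_END
    return (alert["user_department"] == "nursing"
            and alert["user_shift"] == "third"
            and in_window)
```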
64% of cybersecurity job listings in 2026 now require AI, ML, or automation skills. That number was negligible three years ago. The market is telling you something: the analyst who can tune the AI is worth several analysts who can only click through a queue.
This is also where the “learn to code” advice becomes relevant, but not in the way most people think. You don’t need to write a machine learning model from scratch. You do need to be comfortable writing Python scripts to automate enrichment, building SOAR playbooks, querying APIs, and understanding what a detection rule is actually testing for. The analyst who can write a playbook that automatically handles the third-shift nursing login pattern, so neither the AI nor any human ever has to look at it again, has 10x the leverage of the analyst who manually closes those tickets every night.
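To put a shape on “automate enrichment, query APIs”: here’s a sketch of an IP enrichment step. The endpoint and response fields are placeholders for whatever reputation service you actually use, not a real API:

```python
import requests

# Placeholder reputation service; swap in your real provider's endpoint.
REP_API = "https://reputation.example.com/v1/ip/{ip}"

def enrich_ip(ip: str, api_key: str) -> dict:
    """Pull basic reputation context for an IP and return a flat summary."""
    resp = requests.get(
        REP_API.format(ip=ip),
        headers={"Authorization": f"Bearer {api_key}"},
        timeout=10,
    )
    resp.raise_for_status()
    data = resp.json()
    return {
        "ip": ip,
        "malicious_votes": data.get("malicious", 0),
        "country": data.get("country"),
        "asn": data.get("asn"),
    }
```

Wire a handful of functions like that into a SOAR playbook and the nursing-login tickets stop existing as human work.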
They hunt for the things AI can’t see
Here’s the part that doesn’t get enough attention. Alert triage was never the interesting part of the job. It was the boring part. The part that caused burnout. The part that made good analysts leave the field.
The interesting work was always: something feels off about this set of events, and I want to pull the thread and see where it goes.
That’s threat hunting. And it’s the part of the job that AI is worst at.
AI is very good at pattern matching. Give it a known attack signature and it will find every instance in your data. Give it a statistical deviation and it will flag it. But ask it to form a hypothesis, something like “I think someone is staging data in an unusual S3 bucket because of a comment I read in a threat report last week and a vague memory of seeing a similar pattern three months ago,” and it has nothing.
Hypothesis-driven investigation requires institutional memory, intuition built from years of seeing weird stuff, and the ability to follow a hunch that doesn’t fit neatly into a detection rule. Microsoft’s description of how threat hunting changes in their agentic SOC model is that hunters use AI to surface anomalies but focus their own time on creative investigation and adversary simulation. The AI does the heavy lifting on data retrieval. The human does the creative reasoning.
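Here’s what that division of labor can look like in practice, using the S3-staging hypothesis above. The human wrote the question; the code just answers it. Log shape and thresholds are invented for illustration:

```python
from collections import defaultdict

def staging_candidates(recent, baseline, min_bytes=5_000_000_000):
    """Flag (user, bucket) pairs with heavy recent writes to a bucket
    the user never touched during the baseline window."""
    usual = defaultdict(set)
    for e in baseline:
        usual[e["user"]].add(e["bucket"])
    written = defaultdict(int)  # (user, bucket) -> bytes written recently
    for e in recent:
        if e["op"] == "PutObject":
            written[(e["user"], e["bucket"])] += e["bytes"]
    return [(user, bucket, total) for (user, bucket), total in written.items()
            if total >= min_bytes and bucket not in usual[user]]
```

The query is trivial. Knowing to ask it is the job.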
The analysts who get good at this become the most valuable people on the team. Not because they can process more alerts per hour, but because they can find the threat that nobody, human or AI, was looking for.
They explain things to people who don’t understand the data
There’s a skill that shows up in almost every “future of the SOC analyst” discussion and it’s always the one people gloss over: communication.
When the AI handles triage, investigation, and initial response, what’s left for humans? A lot of it is translating what happened into language that non-technical people can act on. Explaining to the CISO why this incident matters and that one doesn’t. Writing the post-incident report that the board reads. Briefing the legal team on what data was exposed. Telling the CFO why the AI quarantined her laptop at 3 AM (a story I’ve told before).
This is not a soft skill. It’s a survival skill. The analyst who can write a two-paragraph executive summary that accurately conveys the severity, business impact, and recommended actions of an incident is more operationally valuable than the analyst who can triage 400 alerts a day, because the AI can triage 400 alerts a day and the AI absolutely cannot write a coherent briefing for the board.
ISC2’s workforce study found a global gap of 4.8 million cybersecurity professionals. But the gap isn’t for people who can click through a SIEM. It’s for people who can think, communicate, and make judgment calls under pressure. Those are the same skills that make a good doctor, a good detective, a good military officer. They’re not automatable because they’re not pattern matching. They’re reasoning under uncertainty with incomplete information and real consequences.
The part nobody in leadership wants to hear
If you’re a SOC manager reading this and thinking “great, I’ll automate Tier 1 and save headcount,” you’re making the mistake that every vendor wants you to make.
The SOC doesn’t shrink. The work changes.
When you automate triage, you don’t fire the Tier-1 analysts. You retrain them into detection engineers, threat hunters, AI oversight roles, and incident communicators. If you fire them, you lose the institutional knowledge they built while triaging 300 alerts a day for two years: which alerts are always false positives, which systems generate garbage data, which users always trigger suspicious-looking behavior for legitimate reasons. That knowledge is what makes the AI tuning work. Without it, the AI stays generic and your false positive rate stays high.
A CISO at a European enterprise who automated phishing triage and header analysis said the roles were eliminated within weeks and the security team went through a reorganization. He described the transition as reactive, and said he would have invested sooner in creating new positions. That’s the lesson. Plan the retraining before you deploy the automation, not after.
The Tier-1 analyst who asked me if he should learn to code? He doesn’t need to worry about losing his job. He needs to worry about whether his employer understands that his job is about to become more valuable, not less. And whether they’ll invest in getting him there.
So what does a Tuesday look like in 2027?
Rough picture. The analyst, let’s call him Ravi, starts his shift. He doesn’t open a SIEM dashboard with 400 alerts in the queue. He opens a case management console showing 8 AI-resolved cases flagged for human review, 2 cases where the AI is genuinely unsure and is asking for a judgment call, and 1 active investigation that the threat hunting team started last night.
He spends 30 minutes reviewing the 8 resolved cases. Seven look right. One looks suspicious, not because the verdict is wrong, but because the same user has appeared in three unrelated low-confidence cases this week. He opens a hunting session.
He spends two hours pulling the thread. It turns out the user’s account was compromised through a session token hijack that didn’t trigger any individual high-confidence alert. The AI handled each alert correctly in isolation. The pattern only became visible when a human noticed the same name showing up too often.
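The thing Ravi spotted by eye is also the kind of pattern you can codify once you’ve seen it. A hedged sketch, with case fields and thresholds invented for illustration:

```python
from collections import Counter

def repeat_low_confidence_users(cases, window_days=7, max_conf=0.6, min_hits=3):
    """Surface users who keep appearing in unrelated low-confidence cases."""
    recent = [c for c in cases
              if c["age_days"] <= window_days and c["confidence"] <= max_conf]
    hits = Counter(c["user"] for c in recent)
    return [user for user, n in hits.items() if n >= min_hits]
```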
He writes up the finding. He sends a brief to the SOC manager. He flags the detection gap for the detection engineering team. He updates the AI’s behavioral model to weight repeated low-confidence appearances from the same user more heavily. The AI learns. Next time, it might catch this itself.
That’s the job. Not triage. Judgment.