This blog was written by Roberto Enea, Lead Data Scientist for Fortra.
Security alerts never stop; they flood in, one after another. AI runs quietly in the background, sorting through a plethora of data, making snap decisions, and raising red flags when it finds anomalies. Unfortunately, somewhere else, bad actors are running similar algorithms, monitoring, probing, and learning.
Efficiency (speed and scale) isn’t the danger, nor is the way security teams use these tools. The problem is how attackers exploit them. When efficiency meets risk, this ups the ante. For security and AI teams, the question isn’t whether to adopt AI, it’s how to turn it to your advantage without handing the keys to the adversaries.
Finding the Right Balance
Automation is seductive: it speeds things up and cuts through the clutter, taking the grind off human teams. However, not everything should be handed over. Rare events, anomalies, and unusual edge cases should be saved for human judgment.
Let AI sort the noise, triage the bulk of alerts, and point the way, while people focus on judgment, context, and interpretation.
If you push AI too far into these gray areas, it starts missing signals and this causes blind spots to appear. This causes analysts to chase the wrong alerts, causing frustration with the technology and suspicion of mistakes being made.
The balance isn’t only operational, it is strategic: when it is done right, AI clears the backlog of alerts and analysts act on what matters and make better decisions.
The Threats Are Learning, Too
There’s also the possibility that AI could shift the balance between bad actors and defenders. For instance, agentic AI could lead to the creation of a new type of phishing that we could call “mass-targeted phishing.” An AI agent could map every connection in an organization; employees, friends, relatives, partners, because the agent will be aware of all the relationships of the target.
The attack could be multi-channel, using emails and social media as part of the same campaign. At first, these attacks would shift the balance in favor of the malefactors because traditional detection methods would struggle and could well become obsolete.
These campaigns bypass traditional detection. One channel isn’t enough, which is why threats must be watched across networks, relationships, and behaviors at once. The defenders who ignore this risk more than data: partners, clients, suppliers, in fact everything connected to the network is at stake. AI amplifies both attack and defense, and the scale of the threat is growing.
Turning AI Into a Multiplier
AI can also change the rules for security teams. It moves fast, sifting through logs, scripts, files, and streams, separating duplicates from anomalies in seconds. The unusual alerts are handed to analysts, while the rest get dealt with automatically. Scale is no longer a barrier. Alerts stop being a flood, they become signals worth following.
For teams overwhelmed with thousands of events, the effect is instant. Chaos now turns into clarity. Analysts chase what matters. Patterns emerge before they explode into incidents. Predictive monitoring starts to act like a guide, not just a warning. It can make the difference between containment and a breach that spirals out of control.
Malicious Prompts: The AI Blind Spot
Then there’s the more subtle elements: malicious prompts embedded in macros, prompt injection aimed at AI systems themselves, and the goal: trick the AI into classifying malware as safe. The “Skynet” malware incident of June 2025 is an ideal example: code disguised as benign, AI misled, and threats slipped through.
Protecting against this takes more than technical tools. Guardrails, sanitizing input, and verification are all necessary, but they still fall short. People have to stay in the loop, decisions must be carefully scrutinized, protocols have to be followed, and logs must be watched all the time. Layered together, these controls turn AI from a potential weakness into a real strength.
Building a Robust Framework
Entities facing these risks need more than AI. They need context, insight, and an intelligence backbone. That’s where Fortra fits in. Fortra’s Unified Cybersecurity Platform simplifies operations while improving detection and response.
Fortra Threat Intelligence pulls telemetry from multiple sources to feed AI-driven correlation engines. Patterns appear that people might miss on their own. Teams can prioritize real risk. The blast radius shrinks, while resilience grows. A single agent framework handles deployment, updates, and patching, letting security teams focus on judgment and decision-making instead of logistics.
Fortra Threat Brain takes this further. Here, internal and external signals converge. Machine learning identifies anomalies, and trivial alerts are triaged automatically. Analysts spend their energy on high-value threats, while offensive and defensive intelligence converge to give a complete view of the threat landscape.
Human expertise still matters. Research teams study the tactics adversaries rely on, those subtle moves that AI cannot detect on its own. Community collaboration is also vital, keeping intelligence fresh and actionable.
The combination of automated insight and expert judgment shows that attack chains break before they can escalate into full-blown breaches. AI becomes a multiplier, instead of a single point of failure.
Making AI Work Safely
AI in cybersecurity isn’t optional, but it can be turned against you if left unchecked. Success comes from blending human judgment, automation, and continuous intelligence. Partial automation, layered defenses, and cross-channel awareness are all survival strategies, not optional nice-to-haves.
The takeaway for CISOs and data leaders is to automate where it makes sense. Verify what matters. Watch relentlessly. Choose platforms that integrate intelligence, analytics, and human insight.
AI extends reach; it shortens response times and keeps teams ahead of fast-moving threats, but it only works when paired with governance, strategy, and tools designed for modern cyber complexity.
Fortra provides that framework. Unified, AI-powered, and research-backed. It turns the risk of adoption into a security advantage. Automation, intelligence, and human oversight combine to make AI a strength, not a vulnerability.