We can’t get complacent when it comes to breaches. When people are constantly bombarded with bad news, they eventually become numb to it — it’s just human nature. But once this sort of complacency sets in, that’s when the lasting damage takes hold. That’s why it’s great to have a managed security provider that can help shoulder the load.
An overview of the HAFNIUM attack
On the heels of the infamous breaches earlier this year, another breach is dominating the headlines: the HAFNIUM-led attack that affects on-premises versions of Microsoft Exchange. Fortunately, Microsoft has been aggressively moving customers from on-premises Office and Exchange to O365 and Exchange Online. However, as with every technology transition (much like public cloud adoption), each organization moves at its own pace.
On March 2, 2021, Microsoft publicly disclosed four CVEs (Common Vulnerabilities and Exposures) related to this attack: CVE-2021-26855, CVE-2021-26857, CVE-2021-26858, and CVE-2021-27065. All four relate to remote code execution but differ in their attack methods, covering different phases of the cyber kill chain. Some describe attacks over the network by bad actors trying to gain access, while others describe local attacks carried out once the actors have already penetrated the network.
For many organizations, this is when the story around patching and remediation really began. Depending on how the IT organization deployed its Exchange server, it may have been exposed to one or more of these vulnerabilities. In many ways, March 2 was when the breach became real, and organizations everywhere initiated their patching processes. IT organizations knew instantly whether they were impacted because the email system isn't hidden, obscured, or tucked into a corner of a department or workgroup. It's important to understand that not every organization has a patch management program, let alone an emergency patching process. Vulnerabilities like these are great reasons to get prepared. But that's a topic for a different day.
The story within Alert Logic
For Alert Logic, the story started a few days prior to the public disclosure of the vulnerabilities. Because we monitor our customers' environments 24/7, we are always staying vigilant on their behalf. On February 28, days before the public disclosure, we noticed outbound connections originating from within some of our customers' networks to a malicious IP address. We knew the address was bad because it appeared on a list of known malicious IP addresses that we maintain and constantly update with the latest threat intelligence.
Detecting the breach
Based on our experience, when we see this activity, it’s a clear sign of a breach. Breaches like these occur when the preventative tools that the customer has put up as their first line of defense are insufficient or have been compromised. Our Security Operations Center (SOC) immediately determined that this was a critical issue, and we called our impacted customers. Simultaneously, we began digging into this breach, putting in more instrumentation to identify similar style attacks.
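At its core, this kind of indicator matching compares the destination of each outbound connection against a curated list of known-bad IP addresses. A minimal sketch of the idea in Python (all field names, IPs, and function names here are hypothetical illustrations, not Alert Logic's actual implementation):

```python
# Hypothetical sketch of blocklist-based detection of outbound connections
# to known-bad IP addresses. A real system streams events continuously and
# refreshes the blocklist from live threat intelligence feeds.

KNOWN_BAD_IPS = {"203.0.113.10", "198.51.100.7"}  # documentation-range IPs, illustrative only

def flag_suspicious(connections):
    """Return outbound connection records whose destination is on the blocklist."""
    return [c for c in connections if c["dst_ip"] in KNOWN_BAD_IPS]

events = [
    {"src_ip": "10.0.0.5", "dst_ip": "203.0.113.10"},  # matches the blocklist
    {"src_ip": "10.0.0.8", "dst_ip": "192.0.2.44"},    # benign in this example
]

hits = flag_suspicious(events)
# Each hit would then be escalated to the SOC for triage and customer notification.
```

The value is less in the matching itself than in keeping the blocklist current and having analysts ready to act on every hit.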
At this point in time, we did not know the full scope of the problem, or that it involved the Microsoft Exchange Server. Our primary objective was to contain the problem before any real damage occurred. After the public disclosure, we were able to correlate the breach we detected days earlier with the information contained in the CVEs. We then initiated our emerging threat process and began a series of threat hunts.
Kicking off threat hunts
One purpose of our threat hunts, beyond detecting attacks that have circumvented active security controls, is to identify opportunities to refine the threat content we have and/or need to develop. A successful threat hunt is a big deal: it means we have recreated an attack that evaded the detection and prevention technology in play. Our objective is to achieve the fastest detection and response possible. This is made possible through the processes defined between our threat hunters, threat intel analysts, and SOC, resulting in a continuous improvement cycle of correlation and verification.
We can confirm that a half-dozen of our customers were at high risk from attacks that ranged from basic failed attempts to the deployment of web shells. In some instances, the attackers made it far enough to execute a pivot, meaning lateral movement through the compromised environment after establishing an initial foothold. In all cases, we successfully identified the attacks before any major damage occurred. We do this by monitoring multiple phases of the kill chain and by not limiting ourselves to a single point of detection.
Responding to those affected or at risk
As we worked alongside our impacted customers to help them remediate their environments, we added those four CVEs to our automated scan coverage and built tuned telemetry signatures into our IDS and network analyzers to rapidly identify active attacks. Now, when our customers run a vulnerability scan, these exposures will surface and be highlighted. By leveraging our scan data, we found dozens of other customers who were exposed to these vulnerabilities. This means that we were able to confirm, based on the scan data and the logs that we process, that affected versions of the software were present in the customer environment but had not yet been breached. Once we had the confirmed list of customers, as part of our community defense ethos, we began reaching out to let them know about the situation in their environment and provide guidance on getting to a known-healthy state.
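Scan-based checks like this typically compare a discovered Exchange build number against the minimum patched build for its release line. A simplified sketch, assuming flat per-line thresholds (the build numbers shown are illustrative of the March 2021 security updates and should be confirmed against Microsoft's advisory; real logic must also track each cumulative update line separately):

```python
# Simplified sketch: decide whether a scanned Exchange build predates the
# March 2021 security updates. Thresholds are illustrative; a production
# checker would carry exact per-CU patched builds from the vendor advisory.

PATCHED_BUILDS = {
    (15, 0): (15, 0, 1497, 12),  # Exchange 2013 line (illustrative)
    (15, 1): (15, 1, 2106, 13),  # Exchange 2016 line (per-CU thresholds omitted)
    (15, 2): (15, 2, 721, 13),   # Exchange 2019 line (per-CU thresholds omitted)
}

def is_potentially_vulnerable(build: str) -> bool:
    """True if the build string sorts below the patched threshold for its line."""
    parts = tuple(int(p) for p in build.split("."))
    threshold = PATCHED_BUILDS.get(parts[:2])
    if threshold is None:
        return False  # not an affected release line (or unrecognized)
    return parts < threshold
```

Keyed off scan data this way, exposed environments can be surfaced and notified before an actual breach occurs.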
Be prepared for anything with MDR
While we hope that this is the end of this particular attack, we are realistic and know that there will be many more like it. Fortunately, through our Managed Detection and Response (MDR) service, we have a lot of experience helping our customers both pre- and post-breach. In a pre-breach environment, we can help identify vulnerabilities and cloud configuration issues that can create holes in your defensive posture. In the unfortunate event of a breach, we can help reduce the mean time to detection and accelerate response through our network monitoring, log ingest, and IDS capabilities, both on-prem and in the cloud. By reducing attackers' dwell time, we can address the problem before real damage occurs.
To learn more about the HAFNIUM attack, please read our support center article or head to Microsoft’s blog.