Multi-cloud is expected to be the norm within a year: by 2022, more than 90 percent of enterprises worldwide will rely on a mix of on-premises or dedicated private clouds, multiple public clouds, and legacy platforms to meet their infrastructure needs.
While adopting a multi-cloud strategy can make your business more efficient and agile, it can also have considerable downsides if not implemented carefully. In working with our customers, we typically see organizations make the same handful of mistakes. Here are three of the biggest.
3 Top Mistakes in Multi-Cloud Environments
1. Believing that moving to a cloud environment is going to make you more secure
Part of the problem with multi-cloud environments is that the number of opportunities to make a mistake is almost infinite: a misconfigured account in a cloud management console, misconfigured rules applied to the assets within that environment, or errors in how the environment connects to your actual data center or to another cloud hosting provider. There's a lot of room for error along the way.
One of the most common issues we see as we perform continuous scanning and vulnerability and exposure identification for our customers is something surprisingly basic: missing multi-factor authentication (MFA) on the cloud console.
To put this into perspective: a company called Code Spaces went out of business in 2014 after its entire AWS infrastructure, including customer data, was deleted. There were contributing failures in backups and resilience, but the root cause was a compromised AWS credential with no MFA enabled. They lost it all, and the company went under. Unfortunately, many other companies have since suffered the same way without making headlines.
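The MFA check described above boils down to flagging console users who can log in but have no MFA device enrolled. Here is a minimal sketch of that filtering logic; the record shape and field names are hypothetical, and in practice the inventory would be populated from your provider's identity API (such as an IAM user listing) rather than hard-coded.

```python
# Sketch of an MFA audit over a cloud-console user inventory.
# The record shape is illustrative, not any provider's real API response.

def users_without_mfa(users):
    """Return names of users who have console access but no MFA enrolled."""
    return [
        u["name"]
        for u in users
        if u.get("console_access") and not u.get("mfa_enabled")
    ]

# Hypothetical inventory pulled from an identity API.
inventory = [
    {"name": "alice",  "console_access": True,  "mfa_enabled": True},
    {"name": "bob",    "console_access": True,  "mfa_enabled": False},
    {"name": "ci-bot", "console_access": False, "mfa_enabled": False},
]

print(users_without_mfa(inventory))  # flags only "bob"
```

Service accounts with no console access (like `ci-bot` here) are excluded, since the risk this check targets is interactive console logins protected by nothing more than a password.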
There is an assumption that cloud systems are inherently secure. In reality, security is a shared responsibility between the cloud provider and the customer.
What we see at Alert Logic is that it's not necessarily the cloud itself that gets compromised. It's often the same application stack the customer ran in their data center, lifted into a cloud environment, that becomes the path of entry.
From a security standpoint, any mistakes you make in your data center carry over into the cloud, and you can compound them at the cloud management layer if you don't follow basic security principles. If you're running WordPress or Joomla, it doesn't matter whether it runs in Azure, AWS, GCP, or your own data center: if you don't secure it properly, it will open you up to risk.
2. Trying to achieve visibility using multiple tools
You can’t protect what you can’t see, so it’s important to have visibility across the entire environment. That includes your public and private clouds, SaaS apps, custom web apps, and endpoints.
As an example, many organizations are moving endpoint security higher on their priority list to improve threat detection, now that user endpoints are no longer centralized in offices and servers are spread across all sorts of cloud solutions. But that only addresses part of the challenge. If you use each cloud provider's native tools in isolation, your visibility is limited to that provider, making it virtually impossible to get a complete and accurate picture of a multi-cloud or hybrid environment.
That’s where having a holistic view across the entire environment is imperative.
At Alert Logic, we’ve developed an asset model that standardizes our internal taxonomy across these different cloud environments and on-prem. We don’t care where a workload lives; if it’s running an operating system, applications, and services, it’s an asset. We collect standard metadata about each asset to give you the visibility you need across these different cloud platforms, all through a single console that pulls activity logs and data from multiple sources into a unified dashboard view.
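The idea of a standardized asset taxonomy can be sketched as a common record that provider-specific metadata is normalized into. The field names and input shapes below are purely illustrative, not Alert Logic's actual schema or any provider's real API response; the point is that once everything maps to one model, a single dashboard query works across clouds.

```python
# Sketch of a provider-agnostic asset model. All field names and input
# shapes are hypothetical, chosen only to illustrate normalization.
from dataclasses import dataclass, field


@dataclass
class Asset:
    asset_id: str
    provider: str        # e.g. "aws", "azure", "gcp", "on-prem"
    os: str
    services: list = field(default_factory=list)


def normalize_aws(instance):
    """Map a hypothetical AWS-style instance description onto the common model."""
    return Asset(
        asset_id=instance["InstanceId"],
        provider="aws",
        os=instance.get("Platform", "linux"),
    )


def normalize_azure(vm):
    """Map a hypothetical Azure-style VM description onto the common model."""
    return Asset(
        asset_id=vm["vmId"],
        provider="azure",
        os=vm["osType"].lower(),
    )


# After normalization, one query covers the whole fleet regardless of origin.
fleet = [
    normalize_aws({"InstanceId": "i-0abc", "Platform": "linux"}),
    normalize_azure({"vmId": "vm-123", "osType": "Windows"}),
]
print([(a.provider, a.asset_id, a.os) for a in fleet])
```

Each new platform only needs its own small normalizer; everything downstream (dashboards, correlation, reporting) works against the one shared shape.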
3. Failure to incorporate an integrated approach
We see a lot of customers come to us from the SIEM world. They buy a tool they think will solve all their problems, then realize the tool is just producing more data they don't understand, because they don't have the people and therefore don't have the expertise. Not only do you need tools that are properly deployed, configured, and given the right visibility; equally important, you need people who can look at the output, process it, and execute on it.
From a SOC perspective, we act as interpreters of that solution. Instead of a customer buying a SIEM and trying to read the raw material coming out, it passes through our SOC, whose analysts review it, validate it, and present it with recommendations easy enough to consume that an IT admin with less than five years' experience can act on them. The objective is to lower the barrier to entry for security and get our customers there more quickly. You need technology to do that, you need people to look at the output, and you need a consistent, repeatable process to execute against.
It comes down to three key elements: people, process, and tools. You need all three for a secure multi-cloud environment.
How Alert Logic Can Help
As a managed detection and response (MDR) provider, Alert Logic helps support your multi-cloud strategy. Our SOC provides 24/7 security monitoring by GIAC-certified security analysts using state-of-the-art technology. Our SaaS-based platform easily integrates into your environment and analyzes network traffic and more than 140 billion log messages each day, providing comprehensive coverage across your attack surface. And Alert Logic’s dedicated research team is continually focused on the development of new and innovative technology to maintain pace with the ever-changing threat landscape.
You can learn more about us and get more insights into a secure multi-cloud strategy at the Multi-Cloud Summit.