How to improve cloud provider security: 4 tips

Many IT pros remain concerned with the risk of data loss and leakage in the cloud, according to a new survey from AlgoSec.

Many companies are increasingly looking to the cloud as a more effective and efficient way to manage their applications and other business assets. Ideally, a cloud environment can offer the agility, flexibility, and scalability that a company may not be able to achieve internally. However, the cloud carries its own set of concerns and challenges, several of which were highlighted in a Tuesday report released by security provider AlgoSec.


In a survey commissioned by AlgoSec and conducted by the Cloud Security Alliance, security was the top worry among the 700 IT professionals polled. A full 81% expressed significant concerns about security when moving data to a public cloud platform. The risk of sensitive customer or personal data being lost or leaked was cited as the biggest security fear.

Respondents pointed to specific security concerns when running applications in the public cloud. Those concerns included unauthorized access to cloud-based data, infiltration of more sensitive areas of the network (either in the cloud or on premises), data corruption, outages due to Denial of Service attacks, and the abuse of resources (e.g., cryptomining).

Chart: Respondents' top security concerns when running applications in the public cloud. Image: AlgoSec

Ideally, using a cloud provider should alleviate some of the internal effort involved in managing applications and other assets. But IT pros still need to manage security in the public cloud, and that task carries its own challenges. Proactively detecting misconfigurations and security risks in public cloud platforms was the top obstacle cited in this area.

Respondents also pointed to other public cloud security challenges, including a lack of visibility into the entire cloud estate, compliance and audit preparation, managing both cloud and on-premises environments, managing a multi-cloud environment, and a lack of expertise in cloud-native security.

The survey also posed questions about multi-cloud environments. Yes, using multiple providers reduces the reliance on a single provider. But a multi-cloud environment adds certain challenges as well.

Among the respondents, 66% said they rely on several cloud providers, with 35% reporting that they use three or more providers. To add to the complexity, organizations may use both public and private clouds. A full 55% of those polled said they use a hybrid cloud environment with at least one public and at least one private cloud. Some 35% said they use a combination of a multi-cloud and hybrid cloud environment.

“As companies of all sizes are taking advantage of the value of the cloud with its improved agility and flexibility, they are also facing unique new security concerns, especially when integrating multiple cloud services and platforms into an already complex IT environment,” John Yeoh, global vice president of research for Cloud Security Alliance, said in a press release. “The study findings demonstrate how important it is for enterprises to have holistic cloud visibility and management across their increasingly complex hybrid network environments in order to maintain security, reduce the risk of outages and misconfigurations, and fulfill audit and compliance demands.”

How to improve cloud provider security

To tackle some of the risks and challenges in using cloud providers, AlgoSec served up a few recommendations.

1. Build in security and compliance

Cloud providers now offer native tools for managing security and compliance, many of which help meet specific industry and government regulations. As such, IT pros should avail themselves of these tools.
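
As a minimal sketch of what that can look like in practice, the snippet below uses Python and the boto3 SDK (assuming an AWS environment with credentials already configured) to switch on two of AWS's native offerings: GuardDuty for threat detection and Security Hub for compliance standards. Azure and GCP expose comparable services through their own SDKs.

```python
# Minimal sketch: enable two AWS-native security services via boto3.
# Assumes configured AWS credentials with GuardDuty/Security Hub rights.
import boto3


def enable_native_security(region: str = "us-east-1") -> None:
    guardduty = boto3.client("guardduty", region_name=region)
    securityhub = boto3.client("securityhub", region_name=region)

    # GuardDuty: managed threat detection. An account holds at most one
    # detector per region, so create one only if none exists.
    if not guardduty.list_detectors()["DetectorIds"]:
        detector_id = guardduty.create_detector(Enable=True)["DetectorId"]
        print(f"GuardDuty enabled, detector {detector_id}")

    # Security Hub: aggregates findings and enables default compliance
    # standards (e.g., AWS Foundational Security Best Practices).
    try:
        securityhub.enable_security_hub(EnableDefaultStandards=True)
        print("Security Hub enabled with default standards")
    except securityhub.exceptions.ResourceConflictException:
        print("Security Hub is already enabled")


if __name__ == "__main__":
    enable_native_security()
```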

– Read more

Channel predictions: Opportunities in AI, hybrid cloud, edge and security

Some of the great and the good in the industry have shared their predictions for what they expect in 2020.

How Application Security Is Different In Clouds And Desktops

Before we get started, let’s take a look at this definition: “A language-neutral way of implementing objects that can be used in environments different from the one they were created in.”

And this one: “A platform-independent, distributed, object-oriented system for creating binary software components that can interact.”

Do you think these are describing a service mesh from the 21st century? Well, they are actually definitions of Microsoft DDE (1987) and COM (1993) technologies, respectively. It’s worth noting that a lot of concepts were invented a while ago and have simply been reused under different names to solve new problems.

The real world is merging desktops and servers, desktop applications, web applications and APIs into one unified environment. It seems clear to me, and to a lot of other security experts, that we have already built plenty of technologies to solve similar problems such as interprocess communication, data sharing and data analysis. That makes it natural to assume the security layer should be unified as well: there is no real difference between the attack surface on endpoints, servers and clouds.

It doesn’t matter what the subject or object is during the data transmission process. It could be a Win32 application, a legacy web application, a microservice, a serverless application or something else entirely; the security requirements will be the same. That is because all security controls ultimately aim to protect data. The same is true of other resources, of course, but please let me keep folks who have dealt with unexpected cryptominers and DDoS attacks out of this discussion. Let’s focus on data as the main thing.

Whatever we are building in security, from old-fashioned VLANs and role-based access models to modern pod security policies, east-west communications security, or container isolation strategies, we always use data as a starting point. And data nowadays is everywhere inside a company: in clouds, on servers, desktops, laptops and mobile devices. To make business faster, we need to give widespread access to this data, and that’s not a trend but a survival requirement.

For CISOs and risk management folks, it makes little difference where user data was stolen from. Whether it was a developer machine, a QA environment or a database, it’s all the same from a data perspective.

This idea is fairly similar to the zero trust concept introduced by Forrester analyst John Kindervag back in 2010, which requires verifying every single source (object or subject) in any communication process, at every single stage. According to a 2018 CSO article, “Zero Trust is a security concept centered on the belief that organizations should not automatically trust anything inside or outside its perimeters and instead must verify anything and everything trying to connect to its systems before granting access.”
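
To make that rule concrete, here is a minimal sketch in Python (the HMAC token scheme and the service names are hypothetical): every request must carry a signed identity that can be verified, and the caller's network location plays no part in the decision.

```python
# Minimal zero-trust sketch: nothing is trusted by default, even from
# "internal" addresses; every caller must present a verifiable token.
# The token scheme and all names here are purely illustrative.
import hashlib
import hmac
from typing import Optional

SHARED_KEY = b"rotate-me"  # placeholder; keep real keys in a secret store


def issue_token(identity: str) -> str:
    """Issue a signed token for an already-verified identity (an IdP's job)."""
    sig = hmac.new(SHARED_KEY, identity.encode(), hashlib.sha256).hexdigest()
    return f"{identity}.{sig}"


def verify_token(token: str) -> Optional[str]:
    """Return the caller's identity if the signature checks out, else None."""
    identity, _, sig = token.rpartition(".")
    expected = hmac.new(SHARED_KEY, identity.encode(), hashlib.sha256).hexdigest()
    return identity if hmac.compare_digest(sig, expected) else None


def handle_request(token: str, source_ip: str) -> str:
    # source_ip is deliberately ignored: location confers no trust.
    caller = verify_token(token)
    if caller is None:
        return "403 Forbidden"
    return f"200 OK, hello {caller}"


print(handle_request(issue_token("billing-service"), "10.0.0.5"))      # 200
print(handle_request("billing-service.forged-signature", "10.0.0.5"))  # 403
```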

So the data is similar everywhere, as discussed above. The last difference commonly cited between desktops and servers is the nature of end user behavior: manual versus automated. People with that point of view claim that desktops should be protected in a completely different way simply because they are machines for human operators, unlike servers, which host services for external consumption.

But the reality is different. Automation is now everywhere, and it is completely normal nowadays to automate desktops with robotic process automation (RPA) software, a market in which players such as UiPath, Automation Anywhere and ElectroNeek have flourished, some becoming unicorns. These products automate users’ daily actions, which technically aligns desktops with APIs and applications in terms of user behavior.

So at this point, we have figured out that desktops, laptops, servers and serverless applications are all similar from the data sensitivity and usage perspectives. That’s why similar application security controls and policies are relevant in the modern world.

Application security always starts with the following points:

  1. Data profiling, categorization, prioritization and risk mapping (data authentication).
  2. Attack surface definition and entry/input points inventory.
  3. Application to data and input mapping.

As a result, we will have a map of data types: which applications use which types of data, and which inputs those applications accept. This map allows us to implement basic controls, policies and other restrictions. Again, this is all completely unified and agnostic to platform, application type and runtime.
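
As a rough illustration of what such a map might look like in code (all data categories, sensitivity levels and application names below are hypothetical), note that the same structure describes a Win32 desktop application and a microservice equally well:

```python
# Minimal sketch of a data/application/input map. Categories,
# sensitivity levels and application names are hypothetical.
from dataclasses import dataclass, field


@dataclass(frozen=True)
class DataType:
    name: str
    sensitivity: int                    # e.g., 1 = public ... 4 = regulated
    regulated_by: tuple = ()            # e.g., ("PCI DSS",)


@dataclass
class Application:
    name: str
    platform: str                       # "win32", "web", "microservice", ...
    data_used: list = field(default_factory=list)
    entry_points: list = field(default_factory=list)  # the attack surface


CARD_DATA = DataType("cardholder_data", sensitivity=4, regulated_by=("PCI DSS",))
TELEMETRY = DataType("telemetry", sensitivity=1)

APP_MAP = [
    Application("checkout-api", "microservice",
                data_used=[CARD_DATA], entry_points=["POST /pay"]),
    Application("desktop-dashboard", "win32",
                data_used=[TELEMETRY], entry_points=["file import", "IPC pipe"]),
]

# The same question can now be asked of any platform uniformly:
# which entry points expose the most sensitive data?
for app in APP_MAP:
    top = max((d.sensitivity for d in app.data_used), default=0)
    print(f"{app.name} ({app.platform}): sensitivity {top} via {app.entry_points}")
```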

Once we have a map, the first goal is to build and implement an authentication and authorization strategy. Again, according to the zero trust concept mentioned above, we should not trust anything by default. And to prove they are not just “anyone,” applications must authenticate themselves. – Read more
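
To close with something concrete, here is a minimal, purely hypothetical sketch of that strategy: each application authenticates with its own credential, and authorization against a data category from the map is denied unless explicitly granted.

```python
# Minimal authn/authz sketch. Credentials, application names and data
# categories are hypothetical; a real system would use mTLS certificates
# or a secrets manager rather than an in-memory dict.
import secrets

CREDENTIALS = {"checkout-api": "s3cr3t-a", "desktop-dashboard": "s3cr3t-b"}

# Grants derived from the data map: which application may touch which data.
GRANTS = {
    ("checkout-api", "cardholder_data"),
    ("desktop-dashboard", "telemetry"),
}


def authenticate(app_name: str, credential: str) -> bool:
    expected = CREDENTIALS.get(app_name)
    # Constant-time comparison; an unknown application never authenticates.
    return expected is not None and secrets.compare_digest(expected, credential)


def authorize(app_name: str, data_type: str) -> bool:
    # Zero trust default: no explicit grant, no access.
    return (app_name, data_type) in GRANTS


def access(app_name: str, credential: str, data_type: str) -> str:
    if not authenticate(app_name, credential):
        return "denied: unauthenticated"
    if not authorize(app_name, data_type):
        return "denied: unauthorized"
    return f"{app_name} may read {data_type}"


print(access("checkout-api", "s3cr3t-a", "cardholder_data"))      # allowed
print(access("checkout-api", "s3cr3t-a", "telemetry"))            # unauthorized
print(access("checkout-api", "wrong-secret", "cardholder_data"))  # unauthenticated
```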