Cloud Security: Common Vulnerabilities and Best Practices

Introduction

Today, the cloud is an essential pillar of innovation. Its massive adoption reflects the advantages it offers in terms of flexible data access and storage capacity.

The popularity of the cloud is also partly explained by the increasing complexity of modern systems, supported by technologies such as AI and IoT. These require a dynamic and scalable infrastructure, and cloud infrastructures are ideally suited to meeting these needs.

However, cloud security raises critical questions. The increase in the volume of data stored and the multiplication of services go hand in hand with a growing risk of security breaches.

This article explores the different aspects of cloud security. We begin by presenting the specifics of the various cloud service models, before detailing the associated security challenges and risks. We will then look at the most common vulnerabilities and best practices for securing your cloud infrastructure.

Comprehensive Guide to Cloud Security

Why is Cloud Security Important?

Cloud adoption exposes organisations to unique security risks which, if left unchecked, can have serious consequences for their data, operations and reputation.

The security of data stored in the cloud is critical. This information is very often sensitive: personal data, client information, financial data, etc.

A security breach can expose this data to unauthorised access, leakage or alteration, with potentially serious consequences: financial losses, damage to reputation and loss of client confidence, non-compliance with regulations, etc.

Furthermore, cloud environments are often shared between several clients, which introduces risks linked to data segregation.

A misconfiguration, an exploited vulnerability or human error can enable an attacker to compromise sensitive data hosted on the same server or in the same virtual network.

The cloud is based on complex, interconnected technologies, including APIs, containers and multi-cloud environments.

These components multiply the potential entry points for attackers and make the cloud a prime target.

For example, vulnerabilities in APIs exposed by cloud service providers or poor identity and access management can allow unauthorised access to critical resources.

Cloud service providers operate on a shared responsibility model, where the underlying infrastructure is secured by the provider, but the security of applications, configurations and data remains the responsibility of the client.

This shared responsibility often leads to grey areas and gaps, particularly where companies do not have the expertise to take on their share of responsibility.

Securing cloud environments is not a luxury, but a necessity. As businesses continue to migrate to the cloud, proactive security measures are essential to ensure resilience in the face of increasingly sophisticated threats.

To a large extent, cloud security comes down to configuration. There are many settings to manage, and provider documentation is often dense and hard to follow for people who are not security-savvy.

Nevertheless, proper configuration is vital, because security incidents linked to misconfiguration are the most common. These are generally situations where resources are left publicly accessible or where key components and processes are poorly managed.

As you will have realised, this article is not intended to be exhaustive. The aim here is to present the best practices for securing your cloud infrastructures and to explain the common vulnerabilities exploited by attackers.

We’ll focus on the AWS cloud to illustrate the main risks using concrete examples encountered during our penetration tests.

But before we get to the heart of the matter, a few details on the differences and specificities of the main types of cloud services.

Cloud services come in three main models: IaaS (Infrastructure as a Service), PaaS (Platform as a Service) and SaaS (Software as a Service).

These models are distinguished by the level of abstraction and responsibility assumed by the provider. And each type of service meets different needs in terms of resources, development and services.

IaaS is a model where the Cloud provider offers a complete IT infrastructure. The infrastructure includes hardware resources such as virtual servers, storage and network equipment.

This model is an alternative to managing a physical datacenter. It gives businesses access to resources that can be provisioned or released on demand, and billed according to actual usage. Amazon EC2, for example, allows businesses to lease computing and storage capacity on an as-needed basis, without having to manage the underlying hardware infrastructure.

However, this approach requires companies to retain certain in-house technical skills to manage system environments, configure virtual machines and ensure that their applications run smoothly in the Cloud. In other words, the provider takes charge of the physical infrastructure, but the user is responsible for managing the OS, applications and data security.

PaaS takes simplification a step further by offering not only an underlying infrastructure but also a complete runtime platform for application development and deployment.

By providing an abstraction layer on top of IaaS, PaaS allows developers to focus solely on developing their applications, without worrying about servers, operating systems or database management.

Developers benefit from tools, frameworks and database services, as well as flexible hosting that optimises the development process.

However, this model has its limitations. Dependence on the technologies offered by the PaaS provider creates a degree of rigidity, as migrating to another provider may require applications to be rewritten.

In addition, businesses do not have total control over the underlying infrastructure, which can pose a problem if the provider changes its offering or discontinues certain services. This lack of control and dependency can also represent potential risks for the security and availability of services.

SaaS is undoubtedly the most advanced model in terms of dematerialisation and management by the provider.

Here, the hardware, software infrastructure, applications and data are entirely taken care of by the provider, and the client accesses the software directly from a browser or via an application.

In this model, businesses gain access to the tools they need without having to worry about the underlying infrastructure, updates or server management.
SaaS offers simplified access to applications and rapid scalability, with billing often based on usage or subscription.

However, the lack of total control over the data stored in the provider’s datacenter, as well as over security and data management policies, is a challenge for many businesses.

The issues of confidentiality and protection of sensitive data therefore remain a concern, especially as the user is entirely dependent on the provider for fault management, security and updates.

What are the Most Common Cloud Vulnerabilities and How to Prevent Them?

As mentioned above, misconfigurations are clearly the main cause of cloud security incidents.

They can take many forms: lack of encryption, ports open by default, misconfiguration of access policies, improper identity and access management (IAM), and so on.

Furthermore, whatever the type of cloud infrastructure, Azure, AWS or GCP, the issues are the same. Here, we will use the AWS cloud to illustrate examples of misconfigurations and the associated best practices.

Bucket policy misconfigurations and S3 bucket compromise

The exposure of S3 buckets is a problem that we very often encounter during our AWS pentests.

This problem is critical to the security of AWS environments and can lead to data compromise and, in some situations, to the abuse of third-party services.

Let’s look at it step by step.

What is an AWS S3 Bucket?

An S3 (Simple Storage Service) bucket is a cloud storage unit offered by AWS. It is often used to store files, databases or any type of object.

By default, S3 buckets are private, but errors in their configuration can make them publicly accessible. In most cases, this happens when access policies (Bucket Policies) or ACLs (Access Control Lists) are misconfigured.

Bucket Policies and ACLs: how they work and misconfiguration examples

In AWS S3, Bucket Policies and ACLs (Access Control Lists) are mechanisms used to manage access to buckets and objects.

Although they have related functionalities, they differ in their approach and use cases. Misconfiguration of these mechanisms can lead to significant vulnerabilities.

Bucket Policies

Bucket policies are JSON documents that define permissions at the bucket level (and sometimes for the objects it contains). These bucket policies:

  • Apply to the entire bucket or to a selection of objects.
  • Use a syntax based on IAM (Identity and Access Management).
  • Can include conditions (such as TLS requirements or IP ranges).
  • Support specific principals (IAM users, roles, external AWS accounts, etc.).

Below is an example of a bucket policy that denies all access when the transport is not secure (i.e. requests not made over TLS):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::example-bucket",
                "arn:aws:s3:::example-bucket/*"
            ],
            "Condition": {
                "Bool": {
                    "aws:SecureTransport": "false"
                }
            }
        }
    ]
}
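To make the evaluation logic concrete, here is a toy model in Python (not AWS's actual policy engine): the statement above denies any request whose transport is not TLS.

```python
def is_denied(request_context: dict) -> bool:
    # The statement denies when aws:SecureTransport is "false",
    # i.e. the request was not made over TLS. If the key is absent,
    # a plain Bool condition does not match, so there is no deny.
    return request_context.get("aws:SecureTransport") == "false"
```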

Here is an example of a policy allowing access to a range of IP addresses:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowSharedIPRange",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::example-bucket",
                "arn:aws:s3:::example-bucket/*"
            ],
            "Condition": {
                "IpAddress": {
                    "aws:SourceIp": "203.0.113.0/24"
                }
            }
        }
    ]
}

In this case, the policy allows access to all users (Principal: "*"), but restricts this access to the specified IP range (aws:SourceIp: "203.0.113.0/24").

This type of configuration should be avoided: anyone operating within the authorised IP range can access the objects without authentication, and in a cloud environment it is often easy for an attacker to obtain an address within a given range.

And another example of a policy allowing full public access to the bucket:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::example-bucket",
                "arn:aws:s3:::example-bucket/*"
            ]
        }
    ]
}

The problem here is that this policy allows all users (Principal: ‘*’) to perform any action (s3:*) on the bucket and its objects. This totally exposes the data.
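To illustrate what auditing for this pattern involves, below is a minimal, hypothetical Python helper that flags Allow statements granting unconditional public access. It is a sketch, not a replacement for dedicated access-analysis tooling.

```python
import json

def find_public_statements(policy_json: str) -> list:
    """Return statements that allow public access with no condition.

    Flags statements combining Effect "Allow", a wildcard Principal
    and no Condition block -- the pattern shown in the example above.
    """
    policy = json.loads(policy_json)
    risky = []
    for stmt in policy.get("Statement", []):
        principal = stmt.get("Principal")
        is_public = principal == "*" or (
            isinstance(principal, dict) and principal.get("AWS") == "*"
        )
        if stmt.get("Effect") == "Allow" and is_public and "Condition" not in stmt:
            risky.append(stmt)
    return risky
```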

ACLs (Access Control Lists)

ACLs define permissions on a bucket or object at the level of AWS users or groups, but they are less flexible than Bucket Policies.

They are generally used to manage simple access. The specific features of ACLs are as follows:

  • ACLs operate at object or bucket level.
  • They allow precise permissions (read, write) to be given to predefined groups (such as AllUsers or AuthenticatedUsers).
  • They do not support the conditions or granularity of Bucket Policies.

Below is an example of an ACL allowing read access to a specific IAM user:

aws s3api put-bucket-acl --bucket example-bucket --grant-read id="USER_IAM_CANONICAL_ID"

And another example of an ACL allowing full public access:

aws s3api put-bucket-acl --bucket example-bucket --grant-full-control uri=http://acs.amazonaws.com/groups/global/AllUsers

This means that anyone can add, modify or delete files.

Best practices for avoiding misconfigurations in Bucket Policies and ACLs

The correct configuration of Bucket Policies and ACLs is essential to guarantee the security and access control of data stored in Amazon S3.

To do this, Bucket Policies must be used as the main method. Mixing the two methods can lead to conflicting permissions. Let’s imagine the following configuration:

  • A Bucket Policy that does not grant public access.
  • And an ACL that allows public access.

In this scenario, the ACL can still grant public access that the Bucket Policy never intended. Note that an explicit Deny in a Bucket Policy always takes precedence, but a policy that merely omits a permission does not block a permissive ACL.

To avoid this problem, it is essential to configure Bucket Policies to control access centrally, so as not to be too dependent on ACLs.

And if ACLs are necessary, it makes sense to limit them to authenticated users or specific AWS accounts. We can imagine this configuration:

  • AllUsers for public reading only in controlled cases.
  • AuthenticatedUsers for shared and restricted environments.

Finally, the S3 Block Public Access feature can be used to block public access at the bucket level (it can also be applied account-wide):

aws s3api put-public-access-block --bucket example-bucket --public-access-block-configuration '{
    "BlockPublicAcls": true,
    "IgnorePublicAcls": true,
    "BlockPublicPolicy": true,
    "RestrictPublicBuckets": true
}'
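The same configuration can also be built programmatically. The sketch below only constructs the JSON document; with boto3, this dictionary would be passed to `put_public_access_block`, but no API call is made here.

```python
import json

# The four flags from the CLI example above. With boto3 (not imported
# here), this dict would be passed as the PublicAccessBlockConfiguration
# argument of s3.put_public_access_block(Bucket=..., ...).
config = {
    "BlockPublicAcls": True,
    "IgnorePublicAcls": True,
    "BlockPublicPolicy": True,
    "RestrictPublicBuckets": True,
}

# Serialised form, equivalent to the JSON string passed to the CLI.
cli_argument = json.dumps(config)
```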

Improper identity and access management (IAM)

Improper identity and access management is a frequent source of vulnerabilities in cloud environments.

Misconfigured IAM can allow an attacker to access sensitive resources or compromise the entire cloud environment.

Let’s stay in the AWS cloud environment to explain this problem.

How the IAM system works on AWS

IAM (Identity and Access Management) is the AWS service for managing who can do what on which resources.

It provides tools to control the permissions of users, groups, roles and applications via policies.

Below are the key components of IAM:

  • IAM Users: represent individuals or services requiring access to AWS.
  • IAM Groups: enable collective permissions to be managed by grouping together several users.
  • IAM Roles: provide temporary permissions for entities such as EC2 instances, Lambda functions or external accounts.
  • IAM policies: JSON documents defining permissions (actions, resources and conditions).
  • Access credentials: include API access keys (Access Key ID and Secret Access Key) and passwords for the AWS Console.

IAM misconfigurations and associated security best practices

Principle of least privilege not followed

Granting overly broad permissions to a user or entity is one of the most common mistakes. This includes:

  • Assigning AdministratorAccess policies to users who don’t need them.
  • Allowing wildcard actions (Action: "s3:*") on wildcard resources (Resource: "*").

Below is an example of such a policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "*",
            "Resource": "*"
        }
    ]
}

With such a configuration, a compromised user or service could delete all resources (s3 buckets, database, etc.) and modify security configurations.

In the same way, we also observe common errors in IAM roles, such as:

  • Allowing anyone (Principal: "*") to assume a critical role.
  • Not limiting the use of roles to specific services or accounts.

Below is an example of a role trust policy with this flaw (trust policies are attached to the role itself, so they contain no Resource element):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": "sts:AssumeRole"
        }
    ]
}

Here, with this configuration, any AWS account could take on this role and act as an administrator.

This means that the principle of least privilege must be applied. In other words, only the necessary permissions should be granted to each user, role or service.
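As a hedged illustration, a least-privilege alternative to the wildcard policy above might grant only read access to a single bucket (the bucket name here is hypothetical):

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-bucket",
                "arn:aws:s3:::example-bucket/*"
            ]
        }
    ]
}
```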

Poor management of access keys

API access keys are often used for automation or to interact with AWS via scripts. In practice, they are frequently left unrotated, or stored in unsecured files or publicly versioned code.

It is therefore important to strengthen the management of access keys through:

  • Rotating keys regularly.
  • Using AWS Secrets Manager to store keys securely.
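The rotation rule can be sketched as a simple age check. The 90-day threshold below is an illustrative assumption, and in practice the creation date would come from boto3's `list_access_keys` response (the CreateDate field of each key):

```python
from datetime import datetime, timedelta, timezone

# Illustrative threshold: many organisations rotate keys every 90 days.
MAX_KEY_AGE = timedelta(days=90)

def needs_rotation(create_date, now=None):
    """Return True when an access key is older than MAX_KEY_AGE."""
    now = now or datetime.now(timezone.utc)
    return now - create_date > MAX_KEY_AGE
```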

Lack of multi-factor authentication (MFA)

Not requiring multi-factor authentication for privileged users increases the risk of compromise in the event of password theft.

To counter this risk, you need to:

  • Configure multi-factor authentication as a minimum for users with admin access.
  • Activate MFA for critical roles using IAM conditions:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": "*",
            "Resource": "*",
            "Condition": {
                "BoolIfExists": {
                    "aws:MultiFactorAuthPresent": "false"
                }
            }
        }
    ]
}
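One subtlety worth noting: `BoolIfExists` also matches when the condition key is absent, so requests that carry no MFA context are denied as well. A toy Python model of this evaluation (not AWS's actual policy engine):

```python
def statement_denies(request_context: dict) -> bool:
    # Deny when aws:MultiFactorAuthPresent is "false". With
    # "BoolIfExists", a missing key also matches the condition,
    # so requests without any MFA context are denied too.
    mfa = request_context.get("aws:MultiFactorAuthPresent")
    if mfa is None:
        return True  # key absent -> IfExists condition matches
    return mfa == "false"
```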

API and web application vulnerabilities

Exposed APIs have become a prime target for attackers, as they play a key role in the exchange of data between applications and services.

Poorly secured APIs can have serious consequences, including the leakage of sensitive data, the compromise of systems, etc.

For more details, see our dedicated article: How to strengthen the security of your APIs to counter the most common attacks?

For more information on GraphQL APIs, see: GraphQL API Vulnerabilities, Common Attacks and Security Tips.

Like APIs, web applications can be used as an entry point to compromise your cloud infrastructure.

We have many articles on our blog on this subject; web application security is paramount. And to step outside the AWS framework used in this article, we invite you to read:

Azure Penetration Testing: Objectives, Methodology and Use Cases

In it, we detail an example of the compromise of a cloud infrastructure (in this case Azure) following the exploitation of a web application.

Conclusion

Cloud security is a shared responsibility that requires constant attention to configuration, access management and the protection of critical interfaces such as APIs.

Misconfigurations, from poorly secured S3 buckets to overly permissive IAM policies, expose organisations to major risks, from data leakage to service disruption. These vulnerabilities are often exploited by attackers who take advantage of the slightest flaw to compromise systems or exfiltrate sensitive information.

In this context, it is essential to implement best practices, such as the principle of least privilege, the activation of multi-factor authentication and active monitoring of activities in the cloud environment.

However, these measures are not always enough to guarantee complete security in the face of constantly evolving threats.

Regular cloud penetration tests enable vulnerabilities to be identified before they are exploited. These assessments simulate real attacks to analyse the resistance of your systems and offer concrete recommendations for strengthening their security.

Author: Amin TRAORÉ – CMO @Vaadata