Container security extends to all aspects of the container ecosystem, not just well-known registries such as Docker Hub or those offered by cloud service providers. Securing a container deployment may involve best practices across the developer workspace, continuous integration, build automation, testing frameworks, release automation, and operations tools.
This is the second blog in a series detailing workload best practices.
The first blog, Securing Modern Workloads, is available here.
The Cloud Security Alliance has done phenomenal work defining the cloud controls you should act upon, or at least be aware of, when determining your responsibilities, choosing qualified vendors, or training in-house personnel. One such work is the Cloud Controls Matrix, which highlights the Shared Security Responsibility Model and provides architectural references.
Per Wikipedia, “A workload is the amount of work an individual has to do. There is a distinction between the actual amount of work and the individual's perception of the workload. Workload can also be classified as quantitative (the amount of work to be done) or qualitative (the difficulty of the work).”
It’s the week of Google Cloud NEXT and, as a Google Cloud Technology Partner, we are glad to see our efforts to add Google Cloud Platform (GCP) to the Cavirin family of cloud security products come to fruition. The March 2017 release of Cavirin's platform includes support for continuous security assessment of workloads on GCP, and it marks a major milestone in our company’s vision to provide a consistent security solution across workloads running on multiple cloud providers’ platforms.
With increasing reliance on the cloud, and in many cases on a single cloud service provider, the probability of a widespread (though infrequent) outage grows. On Tuesday, AWS S3 storage experienced a major outage, taking down the back ends of many sites, including Netflix, Slack, and HubSpot, two of which we use at Cavirin. Enterprises that were single-threaded on S3 simply had to wait it out, and though the actual outage lasted only four hours, recovery took many of them the remainder of the day. To give you an idea of the magnitude of the impact, AWS S3 supports over 150K sites and upwards of three trillion data elements. Thousands of tweets questioned whether the Internet had gone down, just as they did last October during the Mirai outage. Compounding the problem, the storage service is shared across multiple AWS availability zones, and though an enterprise may distribute compute across geographies, for practical or cost reasons it may depend on a single storage instance.
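One mitigation for that single-storage dependency is S3 cross-region replication, which keeps a copy of every object in a bucket in a second region. This is our illustration rather than an AWS mandate, and the bucket names and IAM role ARN below are placeholders; a minimal sketch of the replication configuration and the CLI calls that would apply it:

```shell
# Sketch: an S3 cross-region replication configuration. The role ARN and
# destination bucket are placeholders; substitute your own resources.
cat > replication.json <<'EOF'
{
  "Role": "arn:aws:iam::123456789012:role/s3-replication-role",
  "Rules": [
    {
      "Status": "Enabled",
      "Prefix": "",
      "Destination": { "Bucket": "arn:aws:s3:::backup-bucket-us-west-2" }
    }
  ]
}
EOF

# Replication requires versioning on both buckets; applied with the AWS CLI:
#   aws s3api put-bucket-versioning --bucket primary-bucket \
#       --versioning-configuration Status=Enabled
#   aws s3api put-bucket-replication --bucket primary-bucket \
#       --replication-configuration file://replication.json
echo "replication.json written"
```

Replication copies new objects asynchronously, so it limits the blast radius of a regional outage rather than eliminating it; applications still need to be able to fail over to the replica bucket.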
Security benchmarks recommend disabling the DCCP protocol to reduce the attack surface.
The DISA RHEL 6 STIG reads: “Disabling DCCP protects the system against exploitation of any flaws in its implementation.”
The CIS Security Benchmark for Debian 8 reads “The Datagram Congestion Control Protocol (DCCP) is a transport layer protocol that supports streaming media and telephony. DCCP provides a way to gain access to congestion control, without having to do it at the application layer, but does not provide in-sequence delivery. If the protocol is not required, it is recommended that the drivers not be installed to reduce the potential attack surface.”
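As a concrete illustration of the remediation these benchmarks describe, the following sketch prevents the dccp kernel module from loading on a typical Linux system. The file path follows common modprobe conventions; verify it against your distribution's benchmark before applying it:

```shell
# Sketch of the CIS/STIG remediation: keep the dccp kernel module from
# ever loading. CONF_FILE defaults to the conventional modprobe.d path;
# override it when experimenting without root.
CONF_FILE="${CONF_FILE:-/etc/modprobe.d/dccp.conf}"
mkdir -p "$(dirname "$CONF_FILE")"

# "install dccp /bin/true" makes modprobe run /bin/true instead of
# loading the module, so even on-demand loading is blocked.
echo "install dccp /bin/true" > "$CONF_FILE"

# Audit check: confirm the override is in place.
grep "install dccp" "$CONF_FILE"
```

On a live system you would also confirm the module is not currently loaded (`lsmod | grep dccp`); if it was already loaded, unloading it requires `rmmod dccp` or a reboot.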
Cavirin’s solution automates the assessment of these security baselines across your hybrid cloud, continuously protecting you from vulnerabilities arising from misconfiguration and from zero-day vulnerabilities in the default attack surface. Vulnerabilities like these need not concern you if you have already used the solution to detect uncommon network protocols and reduced the attack surface by disabling those not in use. You cannot protect what you cannot see, and Cavirin’s solution gives you security evidence, audit reports, and operational procedures instead of verbal security assurances and recommendations.