The CISO is under immense pressure, expected to manage a dozen or more vendors across perimeter, endpoint, network, application, and data security, while also being an expert on policy and operations. In many cases the hackers have the upper hand, and the human element is still the weak link.

Because of this, more and more enterprises are realizing that what we offer to automate some of this is no longer a nice-to-have; it is a must-have. At the same time, we're able to clearly show our differentiation from the vulnerability assessment (VA) vendors, and we are more versatile than the cloud-only solutions. One of our customers, Cepheid, articulated it best: VA will tell you how many windows and doors you have, and which are open. We take the next step and tell you how to close them. And, if you are so inclined, we'll do the closing.

The API-first architecture of our new Pulsar platform was also a top topic of discussion, with potential ecosystem partners recognizing the need for a unified view of overall security compliance, be it server, endpoint, identity, or vulnerability, across all clouds and containers. If you missed it, check out our Pulsar General Availability PR. In all, it was a more than successful first day for Cavirin's first RSA presence, based on both the quantity and, more importantly, the quality of discussions and demos.

(Breaches photo from SS8 shirt at RSA - thanks!)


First of a multi-part series on the CIS benchmarking process, by Pravin Goyal.

ON CIS BENCHMARKS

What are CIS Benchmarks?

The CIS Security Benchmarks program provides well-defined, unbiased, and consensus-based industry best practices to help organizations assess and improve their security. The Security Benchmarks program is recognized as a trusted, independent authority that facilitates the collaboration of public and private industry experts to achieve consensus on practical and actionable solutions. Because of this reputation, the benchmarks are recommended as industry-accepted system hardening standards and are used by organizations to meet compliance requirements such as PCI and HIPAA.

What is the typical CIS benchmark development process?

CIS Benchmarks are created through a consensus review process driven by subject matter experts. Consensus participants bring perspective from a diverse set of backgrounds such as consulting, software development, audit and compliance, security research, operations, government, and legal. Each CIS benchmark undergoes two phases of consensus review. The first phase occurs during initial benchmark development, when subject matter experts convene to discuss, create, and test working drafts until consensus is reached on the recommendations. The second phase begins after the benchmark has been published, when all feedback provided by the Internet community is reviewed by the consensus team for incorporation in future versions of the benchmark.

What does it take to develop a new benchmark?

It is easy to contribute a new CIS benchmark. Just write to the CIS community program managers with your proposal. The respective program manager will respond and set up a call to understand your proposition and discuss timelines, the project announcement, and project marketing to attract community participants. After some internal approvals, the project is created in around two weeks.

How long does it usually take to develop a new benchmark?

It usually takes around 12 to 24 weeks, depending on the number of participants in the community and the size of the project.

Who else is providing security benchmarks like CIS does?

I would say none. CIS provides the broadest set of benchmarks, covering both software and hardware. These include databases, operating systems, applications, mobile operating systems, firewalls, browsers, office applications, and almost anything else that touches IT. The only other agency that provides a subset of such benchmarks is DISA. Also, vendors sometimes provide security documentation in the benchmark format; for example, VMware publishes a vSphere hardening guide for securing vSphere deployments.

How can we contribute?

Join the existing CIS communities. It is exciting and challenging, and you will get to work with amazing people.

How do we implement CIS benchmarks in our product?

There are two ways to implement CIS benchmarks in a product. The first is to leverage content directly from CIS. The second is to develop your own proprietary content that implements the benchmark's recommendations.
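
To make the second approach concrete, here is a minimal sketch, in Python, of what a home-grown check might look like for a recommendation of the kind found in the CIS Linux benchmarks (ensuring /etc/passwd is owned by root and not writable by group or others). The rule identifier and report format are hypothetical, used only for illustration.

    import os
    import stat

    def check_etc_passwd_permissions(path="/etc/passwd"):
        """Illustrative check in the spirit of the CIS Linux benchmarks:
        the file should be owned by root:root and not writable by group
        or others. The rule ID below is hypothetical, for reporting only."""
        st = os.stat(path)
        mode = stat.S_IMODE(st.st_mode)
        findings = []
        if st.st_uid != 0 or st.st_gid != 0:
            findings.append("not owned by root:root")
        if mode & (stat.S_IWGRP | stat.S_IWOTH):
            findings.append("writable by group or others")
        return {
            "rule": "example-6.1.2",   # hypothetical identifier
            "resource": path,
            "status": "FAIL" if findings else "PASS",
            "details": findings,
        }

    if __name__ == "__main__":
        print(check_etc_passwd_permissions())

The first approach replaces hand-written logic like this with content obtained directly from CIS, keeping only the orchestration and reporting in your own product.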

Tell us a bit about the CIS Docker and CIS Android benchmarks.

Both the CIS Docker and CIS Android benchmarks have fascinating community members, and I had the privilege of working on both as an author. One interesting thing to note is that the CIS Docker benchmark has existed since Docker version 1.6. At that time, few people knew Docker, let alone Docker security, yet the community did an amazing job of documenting 84 security recommendations. That is the power of community. I'll cover Docker and Android in more detail in a future segment.


Docker yesterday released version 1.13, and today we are announcing the release of the CIS Docker 1.13 Benchmark, with Cavirin as a key contributor. The CIS Docker community has worked extremely hard to keep the lag between software availability and security recommendations at nearly zero, a leading example of security guidance shipping concurrently with the implementation it covers.

Download your copy from the CIS website.

The changelog between CIS Docker 1.12 benchmark and CIS Docker 1.13 benchmark is as follows:

Rules added with the Docker 1.13 benchmark

  • 2.19 Encrypt data exchanged between containers on different nodes on the overlay network (see the audit sketch after this changelog)
  • 2.20 Apply a daemon-wide custom seccomp profile, if needed
  • 2.21 Avoid experimental features in production
  • 2.22 Use Docker's secret management commands for managing secrets in a Swarm cluster
  • 2.23 Run swarm manager in auto-lock mode
  • 2.24 Rotate swarm manager auto-lock key periodically

Rules modified from Docker 1.12 benchmark

  • 2.8 Enable user namespace support - Updated Audit Procedure
  • 2.5 Avoid container sprawl - Updated Remediation and Audit Procedure
  • 2.3 Keep Docker up to date - Re-worded

Rules deleted in the Docker 1.13 benchmark

  • 1.2 Use the updated Linux Kernel
  • 1.3 Remove all non-essential services from the host

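To make one of the new recommendations concrete, here is a minimal audit sketch, in Python, in the spirit of rule 2.19: it lists overlay networks that were not created with the encrypted driver option (docker network create --driver overlay --opt encrypted <name>). It assumes the docker CLI is on the PATH and the daemon is reachable; it is an illustration, not the official CIS audit procedure.

    import json
    import subprocess

    def overlay_networks_without_encryption():
        """List overlay networks missing the 'encrypted' driver option,
        roughly in the spirit of rule 2.19. Sketch only; not the official
        CIS audit procedure."""
        ids = subprocess.run(
            ["docker", "network", "ls", "--filter", "driver=overlay", "--quiet"],
            capture_output=True, text=True, check=True,
        ).stdout.split()
        if not ids:
            return []
        networks = json.loads(subprocess.run(
            ["docker", "network", "inspect"] + ids,
            capture_output=True, text=True, check=True,
        ).stdout)
        return [n["Name"] for n in networks
                if "encrypted" not in (n.get("Options") or {})]

    if __name__ == "__main__":
        for name in overlay_networks_without_encryption():
            print("overlay network created without the encrypted option:", name)
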
New additions to the benchmark are easy to understand given the pace of innovation at Docker and the energetic community behind it. But you might be curious why a couple of rules were deleted.

CIS benchmark development is community-consensus driven. Every change to the benchmark is vetted for consistency, technical accuracy and alignment with current requirements in production.

Rule 1.2 has become obsolete because most Linux distributions now ship with a kernel that meets Docker's installation requirements. When Docker began, this was an important check for running production workloads reliably.
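
For context, the audit behind the old rule 1.2 was essentially a kernel version comparison. A minimal sketch of that kind of check is below, assuming the 3.10 minimum that Docker documented for Linux hosts at the time.

    import platform

    # Minimum kernel Docker documented for Linux hosts at the time
    # (an assumption stated for this sketch).
    MIN_KERNEL = (3, 10)

    def kernel_meets_minimum(release=None):
        """Compare the running kernel release (e.g. '4.9.0-3-amd64') against
        the documented minimum, which is what rule 1.2 effectively verified."""
        release = release or platform.release()
        major, minor = (int(p) for p in release.split("-")[0].split(".")[:2])
        return (major, minor) >= MIN_KERNEL

    if __name__ == "__main__":
        print("kernel meets Docker's old minimum:", kernel_meets_minimum())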

Rule 1.3 is typically addressed in the respective CIS Linux benchmarks, so it duplicated guidance found elsewhere and was deleted as well. The CIS Docker benchmark thus provides core security guidance for Docker deployments while eliminating obsolete or duplicated recommendations.

Cavirin Systems automatically scans container workloads against the CIS benchmark. Its agentless discovery mechanism quickly builds an inventory of your Docker host instances and containers and runs a deep inspection against the entire CIS benchmark.

Check us out!


Most people find stories like the Uber snooping lawsuit pretty unsettling. Even if all you have heard is the accusation around Uber's use of "God View," as explained in a recent series of Forbes articles, it is important to know that Uber collected customer and employee information and used it in a manner well outside reasonable use by the standards of California privacy legislation.

“Exhibit A contains customer data collected by Defendant and constitutes Defendant’s confidential, proprietary, and private information about its users — the very existence, content, and form of which are of extreme competitive sensitivity to Defendant in that they demonstrate what data Defendant considers important enough to capture, how that data is stored and organized, and could, individually or in the aggregate, provide Defendant’s competitors with insights into how Defendant views, analyzes and executes certain aspects of its business,” Uber wrote in a court filing.


The Hackers – Time Magazine person of the year runner-up, and what it means for the rest of us

This last week, Time announced its Person of the Year, and as expected, President-elect Donald Trump got the nod. More interesting was the selection of the Hackers as number three. In fact, cybersecurity touches both Donald Trump, the Person of the Year, and Secretary Hillary Clinton, the runner-up, each knee-deep in the conversation and controversy: Trump with his ties to Putin and the attacks against the DNC, and Hillary with her private email server. 2016 also saw terms such as ransomware, malware, and IoT botnets enter water-cooler conversation, and the credit card hacks of the past were eclipsed by an order of magnitude when Yahoo admitted a breach of over 500 million email accounts. Even the Internet itself was not immune, with an October denial-of-service attack cutting off connectivity to many well-known web properties.


The first step in building a secure infrastructure is to understand the threats. Threats are potential events that yield something useful for the attacker. It could be money, it could be bragging rights, or it could just be the pure fun of mutilating the reputation of a business. Threat risk modelling is an essential exercise for categorizing threats and determining strategies to mitigate them. One such threat assessment model is STRIDE.

STRIDE is an acronym for six threat categories as outlined below:

  • Spoofing Identity – An attacker could successfully pass herself off as an authorized user of the system
  • Tampering with Data – An attacker could successfully add, modify, or delete data
  • Repudiation – An attacker could deny having performed an action, and make it impossible to prove otherwise
  • Information disclosure – An attacker could gain access to privileged information
  • Denial of Service – An attacker could make the system unresponsive to legitimate usage
  • Elevation of privilege – An attacker could elevate her privileges beyond those granted

The STRIDE threat model forces you to think about securing your infrastructure from a threat perspective.
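
One lightweight way to put STRIDE to work is to tag each threat you identify with its category and the security property that category violates. The category-to-property mapping below is the standard one; the example threats and mitigations are purely illustrative.

    from dataclasses import dataclass

    # Standard mapping from STRIDE category to the security property it violates.
    STRIDE_PROPERTY = {
        "Spoofing": "Authentication",
        "Tampering": "Integrity",
        "Repudiation": "Non-repudiation",
        "Information disclosure": "Confidentiality",
        "Denial of service": "Availability",
        "Elevation of privilege": "Authorization",
    }

    @dataclass
    class Threat:
        description: str
        category: str    # one of the STRIDE_PROPERTY keys
        mitigation: str  # illustrative, not prescriptive

    threats = [
        Threat("Stolen API token used to impersonate a user", "Spoofing",
               "Short-lived tokens plus multi-factor authentication"),
        Threat("Application account can rewrite its own audit log", "Repudiation",
               "Append-only logs shipped to a central store"),
        Threat("Service account can change host firewall rules", "Elevation of privilege",
               "Least-privilege roles and separation of duties"),
    ]

    for t in threats:
        print("[{} -> violates {}] {}; mitigation: {}".format(
            t.category, STRIDE_PROPERTY[t.category], t.description, t.mitigation))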

