Security auditing of OpenStack releases

I was recently asked some high-level security related questions about OpenStack.  This included questions such as:

  • What cryptographic algorithms are used, and are the algorithms user configurable?
  • What implementations are used for cryptographic functions?
  • How is sensitive data handled?

These are common questions for those evaluating and deploying OpenStack, as they want to verify that it meets their security requirements and to know which security related areas need attention when configuring everything.

Unfortunately, I have no good answer to these questions, as this information isn’t really collected anywhere (unless you want to go code diving).  OpenStack is also much too large for any single person to provide easy answers due to the number of projects involved (by my count, we’re up to 12 integrated projects as of the Icehouse release, not counting Devstack).  That’s a lot of code to review to come up with accurate answers.

The answers to these security questions also change from release to release, as the development teams are always marching forward improving existing features and adding new ones.  If one were to conduct their own audit of all of the integrated projects for a particular OpenStack release, it would quickly be time to start over again for the next release due to the 6-month release cycle.

I feel that the answers to these questions are also invaluable for developers, not just evaluators and deployers.  If we don’t know where our weak points are from a security perspective, how can we hope to improve or eliminate them?  Many projects are also solving the same security related issues, but not necessarily in a consistent manner.  A comprehensive security overview of all OpenStack projects would let us identify areas of inconsistency and duplication, which are prime candidates for improvement.

What form would this information take to be easily consumable for deployers and developers both?  For starters, we would want to see the following information collected in a single place for each project:

  • Implemented crypto – any cryptography directly implemented in OpenStack code (not used via an external library).
  • Used crypto – any libraries that are used to provide cryptographic functionality.
  • Hashing algorithms – What hashing algorithms are used, and for what purpose?  Is the algorithm configurable or optional to use?
  • Encryption algorithms – What encryption algorithms are used, and for what purpose?  Is the algorithm configurable or optional to use?
  • Sensitive data – What sensitive data is handled?  How is it protected by default, and are there optional features that can be configured to protect it further?
  • Potential improvements – What are potential areas that things can be improved from a security perspective?
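
To make the “is the algorithm configurable?” questions above concrete, here is a minimal sketch of how a project might expose its hashing algorithm as an operator-set option.  The option name and function are hypothetical, for illustration only, and not actual Keystone code:

```python
import hashlib

# Hypothetical operator-set option; the name is illustrative and not
# an actual OpenStack/Keystone configuration setting.
CONF_HASH_ALGORITHM = "sha256"

def hash_token(token, algorithm=CONF_HASH_ALGORITHM):
    """Hash a token ID with the configured algorithm."""
    # Reject algorithms this Python build does not guarantee, so a
    # typo in the configuration fails loudly rather than silently.
    if algorithm not in hashlib.algorithms_guaranteed:
        raise ValueError("unsupported hash algorithm: %s" % algorithm)
    return hashlib.new(algorithm, token.encode("utf-8")).hexdigest()
```

An audit entry for code like this would record both the default algorithm (sha256 here) and the fact that the operator can change it.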

So with that said, I went code diving and took a pass at collecting this security information for Keystone.  Keystone seemed like an obvious place to start given its role within the OpenStack infrastructure.  Here is what I put together:
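
As a starting point for this kind of code diving, a simple scan for imports of common crypto libraries can locate the spots worth reviewing by hand.  A minimal sketch (the library list is illustrative, not exhaustive):

```python
import re
from pathlib import Path

# Imports of common Python crypto libraries; illustrative, not exhaustive.
CRYPTO_IMPORT = re.compile(
    r"^\s*(?:from|import)\s+(hashlib|hmac|ssl|Crypto|passlib)\b")

def find_crypto_imports(root):
    """Yield (path, line number, line) for crypto-related imports under root."""
    for path in sorted(Path(root).rglob("*.py")):
        text = path.read_text(errors="ignore")
        for lineno, line in enumerate(text.splitlines(), 1):
            if CRYPTO_IMPORT.match(line):
                yield str(path), lineno, line.strip()
```

The hits still need manual review to answer the questions above; this only narrows down where to look.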

This information would be collected for each project for a specific OpenStack release.  A top-level release page would collect links to the individual project pages.  This could even contain a high-level summary such as listing all crypto algorithms and libraries used across all projects.  Here’s an example that I put together for the upcoming Icehouse release:

My hope is that there is interest in collecting (and maintaining) this security related information from all of the development teams for the integrated projects.  The Keystone page I created can be used to discuss the most useful format, which we can then use as an example for the rest of the projects.  Once an initial pass is done for one OpenStack release, keeping this information up to date as new development lands should not be a very big task.  We would simply need to be vigilant during code reviews to identify when code changes require updates to the wiki pages.  It would also be fairly easy to look over the bug fixes and blueprints when a milestone is reached to double-check whether any security related functionality was changed.

If we get through a successful first pass at collecting this information for all projects, it would probably make sense to have a cross-project discussion or even an in-person security hackfest to go over the results together to work on consistency issues and removing duplication (moving some security related things into Oslo maybe?).  It would be great to get a group of security interested developers from each project together to discuss this at the Atlanta Summit.
