Invisible Work of OpenStack: Security Bugs
Written by Jay Faulkner, Open Source Software Developer.
When you work on software, you’re always waiting for the moment when you get an email or message telling you that there might be a security problem.
This is the first step in a chain of events that ends with a patch and a vulnerability report. Here, I’m going to talk about that chain of events, and what happened for the most recent security bug I dealt with, OSSN-0091.
Identify the bug
First, some context: OpenStack has an entire Vulnerability Management team (VMT) and security project, with well-defined documentation and process to follow for a security issue.
In my role as Ironic Project Team Lead (PTL), I'm the de facto Ironic contact with the VMT. This means that part of my job is shepherding the security bug through this pre-existing process.
The first step, which I alluded to above, is the initial notification that there might be a problem.
This can sometimes be as vague as a public bug report about weird behaviour, or, as in this case, can be a fully fleshed out problem statement by someone who has already done some troubleshooting.
In this case, Julia Kreger, another Ironic contributor, identified an issue where VirtualBMC or Sushy-Tools would strip secrets from a libvirt domain XML configuration.
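The failure mode is easy to reproduce in miniature. The sketch below is my own simplified illustration, not the actual VirtualBMC or Sushy-Tools code: it shows how a rewrite that rebuilds an XML element from a fixed list of known attributes silently drops a libvirt VNC password carried in the `passwd` attribute of `<graphics>`.

```python
import xml.etree.ElementTree as ET

# Hypothetical libvirt domain XML for illustration; in libvirt, the
# 'passwd' attribute on <graphics type='vnc'> holds the VNC password.
DOMAIN_XML = (
    "<domain type='kvm'>"
    "<devices><graphics type='vnc' port='5900' passwd='s3cret'/></devices>"
    "</domain>"
)

# A naive rewrite keeps only the attributes it knows about.
KNOWN_ATTRS = ('type', 'port')

def naive_rewrite(xml_text):
    """Parse the domain XML and re-serialize it, rebuilding <graphics>
    from a fixed attribute list. Anything outside that list, including
    the secret, is lost on the round trip."""
    root = ET.fromstring(xml_text)
    for graphics in root.iter('graphics'):
        kept = {k: v for k, v in graphics.attrib.items() if k in KNOWN_ATTRS}
        graphics.attrib.clear()
        graphics.attrib.update(kept)
    return ET.tostring(root, encoding='unicode')

rewritten = naive_rewrite(DOMAIN_XML)
print('passwd' in rewritten)  # → False: the secret has been stripped
```

Because the stripped XML is what gets written back, the secret is gone from the configuration itself, which is why (as discussed below) a code patch alone could not undo the damage for affected users.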
Fix the bug
With the bug identified, it’s time to go about fixing it. Before we do, however, it is important to get a sense of its severity and impact. And this brings with it an important decision: to embargo or not to embargo.
Embargoing a security issue means repairing it in private, giving operators time to patch before the details are publicly announced.
Making this decision is not free; there is significantly more overhead to fixing an issue in an embargoed fashion because you are unable to use your usual tooling and process. That’s a huge amount of developer time and frustration to pay for secrecy.
The Ironic community determined that this issue was not worthy of an embargo, for a couple of reasons:
- VirtualBMC and Sushy-Tools, where the bug was found, are used for testing purposes only. Therefore the overall number of Ironic users impacted by the bug was minimal.
- The impact of the bug would not immediately be resolved by a patch – due to the nature of the bug, secrets were permanently removed from XML configuration, meaning that a patch was not sufficient to eliminate the issue for users who experienced it.
Because the impact was limited, and patching wouldn't help in this instance, we didn't embargo. Even when a bug is not embargoed, though, we try to maintain discretion while fixing the issue.
For example, we’ll avoid specifically mentioning threat models in commit messages for patches up for review, and leave the bug set to private.
For this bug, we were able to reproduce and resolve the issue in both projects on the same day, and performed a release, but the work was only just beginning.
Announce the fix
Now that we’ve identified the issue, triaged it, and fixed it, we have to announce it. This includes several steps, and a bit of writing.
The first thing we do is to have a CVE number assigned. CVE numbers are universal identifiers that can be used by anyone to uniquely identify a bug.
We do this step first because it can take multiple business days for one to be assigned – approximately two days after we requested it, the bug behind OSSN-0091 was issued CVE-2022-44020.
While waiting for the CVE number to be assigned, we turned our attention to writing a security note.
In OpenStack, there are two types of security announcements:
- An OpenStack Security Advisory (OSSA) is issued when there is a security issue in code that can be fixed simply by upgrading the code in question.
- An OpenStack Security Notice (OSSN) is issued when more action than just a package upgrade is required, such as a reconfiguration of an impacted package.
In our case, we had to issue a notice because impacted VMs would need explicit operator action to re-enable security.
Luckily, writing a security notice is made significantly easier by the templates available and the myriad of older notices that can be used as examples.
Now the process is complete – we identified the bug, fixed the bug, and announced the fix.
My big takeaway from this process is that even an extremely minor security issue can be time-consuming to deal with. I can't imagine navigating it without being able to stand on the shoulders of the years of process and documentation created by the OpenStack security team.
That's why I need to say a big thanks to Jeremy Stanley, who runs the security team, for walking me through this.
Dealing with security bugs, and the associated documentation, is the type of invisible, but important, open source work that the G-Research Open Source Software team prioritises.
Thanks to them for enabling me to do it for Ironic in this case.