NOTE: The following is primarily transcribed narrative from Gunnar Engelbach, captured during a recent Tech Talk at the Rogue Techies Monthly Meeting. Some minor changes—a word or phrase here and there—were made to translate conversational to narrative format, or to clarify a concept. Enjoy!
Gunnar, a member of the ThreatGuard team, began his presentation by clarifying the pronunciation of SCAP: it’s tempting to say “scap” because we’re so used to pronouncing acronyms as words when they seem to form words; however, those in the know pronounce this one “S-CAP.”
NIST operates under the Department of Commerce, which regulates interstate commerce and standards. One example of a NIST-governed standard that is relevant to today’s discussion is AES (the Advanced Encryption Standard), a specification for the encryption of electronic data based on a subset of the Rijndael block cipher [pronounced roughly “rine-dahl”; the name combines its designers’ surnames], developed by two Belgian cryptographers. The atomic clocks that U.S. civilian timekeeping is based upon are also maintained by NIST.
At one point, the Security Division (of NIST) decided they wanted a better way of doing security testing. Their specific interest was IT Policy Compliance but they were looking a little broader than that. And, that’s where SCAP comes from. Peter Mell was the person in charge at the time, so he’s probably the one who decided on the proper pronunciation of the acronym.
SCAP is an umbrella term that encompasses a whole bunch of other standards, and all of these are open standards—free for public use.
Let’s start with the biggie: OVAL, the Open Vulnerability and Assessment Language. OVAL was developed by the MITRE Corporation (based in Bedford, MA and McLean, VA). MITRE is a peculiar kind of corporation called an FFRDC, a Federally Funded Research and Development Center, one of (currently) 42 public-private partnerships which conduct research for the U.S. Government. [NOTE: Other FFRDCs that may be more familiar include the Jet Propulsion Laboratory (JPL), Lawrence Livermore National Laboratory, Los Alamos National Laboratory, Oak Ridge National Laboratory, and Sandia National Laboratories.]
What MITRE does, along with the other FFRDCs, is take government contracts to do research and development and build prototypes. They came up with this spiffy idea called OVAL for automating security tests, looking specifically at vulnerability testing. At the heart of all of this is OVAL, and we could spend days just talking about OVAL: how good and how awful it is.
OVAL is a peculiar combination of a knowledge base of vulnerabilities (or, at least, the attributes used to determine a vulnerability) and the logic for putting it together. Because MITRE developed OVAL, and they are a bunch of academic brainiacs, it’s very mathematical; it’s based on set logic.
And the reason to bring up the fact that MITRE is an FFRDC is that there’s a peculiar rule from the government: the government is not allowed to fund anything that competes with a commercial product. That’s a really hard-and-fast rule, and it’s broken constantly.
Because MITRE is an FFRDC, and all the money they get comes from the government to do research and development, they are not allowed to field a product—or even field-test a product—in an operational environment. They can build a prototype. They can build protocols and standards, or tell you how to build one, but they’re not allowed to do anything else.
Therefore, OVAL, this thing they came up with—using government funds—is an open standard by default; they just can’t do anything else with it.
There was kind of a nifty thing, starting about 2002, when Andrew Buttner was the project lead. What OVAL does is provide a language specific to security, where you define what you’re looking for on a system and how to determine if that particular system is vulnerable to an attack.
Question: Is it like a 4GL?
Gunnar: It started out as a SQL database (a relational database) glued together with some set logic. In order to export it to machines so they can execute it, they decided to represent it in XML. So it’s a programming language expressed as an XML document.
At this point, Gunnar pulled out about 40 printed pages of what looked like a core dump and explained that it was an XML document: the smallest, simplest SCAP data stream he could find. It was actually part of the validation suite for Unix password validation testing; it looks at all of the validations on Unix password settings and does collections and tests on them. Forty (40!) pages for that one test suite, mostly because it is OVAL.
After that, a really bright guy sitting in his cubicle at the NSA, named Neil Ziring, came across OVAL and said, “well, that’s a nifty idea … but there’s something else I can do with that.” So Neil came up with a complementary standard, XCCDF (the Extensible Configuration Checklist Description Format), which is all about policy compliance. At its simplest, it’s a prose document of IT policy compliance. XCCDF is another XML standard; it breaks IT policy compliance into groups and rules, and then, in order to determine whether or not an individual machine complies with a policy rule, it references an OVAL check. So OVAL is the language used to go out and grab information off of the machine and then apply logic to it to determine states: pass | fail | other. XCCDF is another document that sits above it, references it, and carries additional information. For example, an XCCDF document can contain variables and profiles. You can declare that, because this is a classified machine, you have stricter security requirements, such as a password that must be ten characters instead of eight. You define that requirement in XCCDF so that, when OVAL goes to validate it, it looks for ten characters (a value passed in from the XCCDF document that referenced it) as opposed to eight. So it’s customizable.
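The variable-passing idea can be sketched with a toy, heavily simplified XCCDF-like fragment; the element names, IDs, and helper function below are invented for illustration and do not follow the real XCCDF schema:

```python
import xml.etree.ElementTree as ET

# Toy XCCDF-like fragment (simplified, not the real schema): a Value for
# minimum password length is refined per profile and exported to OVAL.
XCCDF = """
<Benchmark>
  <Profile id="unclassified">
    <refine-value idref="passwd_min_len" value="8"/>
  </Profile>
  <Profile id="classified">
    <refine-value idref="passwd_min_len" value="10"/>
  </Profile>
  <Rule id="min-password-length">
    <check system="oval">
      <check-export value-id="passwd_min_len" export-name="oval:var:1"/>
    </check>
  </Rule>
</Benchmark>
"""

def exported_value(profile_id: str) -> dict:
    """Return the OVAL variable bindings a checker would export for a profile."""
    root = ET.fromstring(XCCDF)
    profile = root.find(f"./Profile[@id='{profile_id}']")
    refined = {rv.get("idref"): rv.get("value")
               for rv in profile.findall("refine-value")}
    # Each check-export maps an XCCDF Value onto an OVAL variable name.
    return {exp.get("export-name"): refined[exp.get("value-id")]
            for exp in root.iter("check-export")}

print(exported_value("classified"))    # {'oval:var:1': '10'}
print(exported_value("unclassified"))  # {'oval:var:1': '8'}
```

Selecting the "classified" profile makes the same OVAL check look for ten characters instead of eight, without touching the OVAL content itself.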
Question: Can it also get into special cases, like requiring that a password have special characters, upper/lower case, etc.?
Gunnar: Yes. What makes OVAL unique and useful, and why Neil Ziring picked up on it, comes down to two key features:
- It’s made very specifically as a read-only document; IOW, they did not want to create an attack vector. OVAL interpreters are only allowed to read information from a machine, not to make changes. So no matter what you put in your OVAL document (which is treated like a scripting language), the interpreter that runs it and collects information from the machine can’t actually change anything on that machine. This design principle makes OVAL much less likely to be usable as an attack vector.
- The other thing is: it’s transparent. Because it’s an open standard, everybody gets to look at how definitions are written, and you can actually check the logic that goes into them. Even for something like passwords, it can be really involved. It’s not simply checking a setting in the SAM (Security Account Manager) on Windows; on a Unix machine, it could look at PAM (Pluggable Authentication Modules) and see which authentication module is being used. But because OVAL is transparent, you get to see what it’s doing and how it gets the information. Every step is laid out; you can see that it’s doing the right thing. If it weren’t, somebody out there in the community would speak up.
So that’s how we get to policy compliance. XCCDF is the document that defines the compliance; OVAL is the mechanism used to collect the information and determine whether any particular rule passes or fails. And that’s the compliance part of it.
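The set-logic evaluation at the heart of this can be sketched as a tiny evaluator over an AND/OR criteria tree. This is a simplification of OVAL's criteria/criterion structure; the tuple layout and test names are invented for illustration:

```python
# Minimal sketch of OVAL-style criteria evaluation: a "criteria" node
# combines child results with an operator; a "criterion" is a leaf that
# looks up a collected test result. (Layout invented for illustration.)

def evaluate(node, results):
    kind = node[0]
    if kind == "criterion":            # leaf: collected test result
        return results[node[1]]
    op, children = node[1], node[2]    # interior: combine children
    values = [evaluate(child, results) for child in children]
    return all(values) if op == "AND" else any(values)

# A hypothetical password rule: minimum length AND (special char OR mixed case).
rule = ("criteria", "AND", [
    ("criterion", "min_length_ok"),
    ("criteria", "OR", [
        ("criterion", "has_special_char"),
        ("criterion", "has_mixed_case"),
    ]),
])

collected = {"min_length_ok": True, "has_special_char": False,
             "has_mixed_case": True}
print("pass" if evaluate(rule, collected) else "fail")  # pass
```

Real OVAL also has states beyond true/false (error, unknown, not applicable), but the collect-then-combine shape is the same.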
SCAP, as a broader thing, is intended to cover all of these uses: vulnerabilities, policy compliance, and indicators of compromise (looking for malware on a system). That’s the idea behind SCAP as an umbrella: figure out how much of this can be automated, make it open and publicly available, and create tools for people to do any of these things.
CVE & CVSS
Now, there are a couple of other acronyms that come into this that make it more usable. When you’re talking about the vulnerability area, we get to CVE and CVSS.
CVE = Common Vulnerabilities & Exposures
The CVE List itself is maintained by MITRE; NIST’s NVD (National Vulnerability Database) builds on top of it. Basically, every time somebody finds a vulnerability, they write up a description of it and submit it; then it gets assigned a CVE number which is used as a reference. A CVE number takes the form CVE-yyyy-nnnn, where yyyy is the year and nnnn is a sequential number (at least four digits) within that year. You can actually go to https://nvd.nist.gov/vuln/detail/ followed by the CVE ID and a description will pop up: the platform it affects, what the result is, what the risk is, and any other details about the vulnerability. So that’s a common reference for vulnerabilities, which is much better than anti-virus products from companies like Symantec/Norton giving their vulnerabilities names. The named vulnerabilities are not comparable; they’re quite often talking about the same thing, but you can’t tell from the write-ups because there’s no common reference between them. The CVE becomes that common reference. And a lot of companies, like Microsoft and Red Hat, when they release a vulnerability, will register it and release the CVE number with it, creating a common reference. All of those are embedded within OVAL checks.
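The ID scheme and URL pattern described above can be sketched in a few lines; the helper function name is ours, and the regex allows four or more digits in the sequence number (CVE IDs have allowed more than four since 2014):

```python
import re

# CVE-yyyy-nnnn: a four-digit year, then a sequence number of 4+ digits.
CVE_RE = re.compile(r"^CVE-(\d{4})-(\d{4,})$")

def nvd_url(cve_id: str) -> str:
    """Validate a CVE ID and build its NVD detail-page URL."""
    if not CVE_RE.match(cve_id):
        raise ValueError(f"not a CVE ID: {cve_id}")
    return f"https://nvd.nist.gov/vuln/detail/{cve_id}"

print(nvd_url("CVE-2002-0392"))
# https://nvd.nist.gov/vuln/detail/CVE-2002-0392
```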
Then we come to CVSS.
CVSS = Common Vulnerability Scoring System
Another issue: when you look across vendors, they each have their own way of scoring things (vulnerabilities, viruses, anything else). Some vendors use a 1-100 scale; some use a 1-10 scale; some call it Low | Medium | High | Critical.
CVSS is a mathematical vector based upon the CIA values; CIA stands for Confidentiality | Integrity | Availability. Confidentiality means: how important is it that your information is only visible to people who are allowed to have it, or how devastating would it be if somebody else got to see it? Integrity means: how important is it that data not be altered or edited; if somebody could change your data, would that cause you problems? Availability means: wherever this data or resource is located, how critical is it that it’s always available? Are you OK if it’s down for a day, or must it always be available in real time? That combination of values, plus other factors, determines the criticality of a particular vulnerability.
If a vulnerability is only theoretical, its ability to be used against you is a little lower; but if there are known attacks out there, known tools for attacking it, it’s a much higher risk. Is it something that requires local access? Is it something that can be done remotely? Is it, for example, a privilege escalation; IOW, it only works if somebody is already logged in and then escalates privileges, as opposed to remotely getting full access? All of those go into the CVSS formula as numerical values, and when you plug all these different factors in, you get, through this mathematical formula, a very specific number on a scale of 0 to 10. It is comparable; it makes it very easy to determine that one vulnerability really is worse than another. So it gives you a very reliable ranking system for the criticality of any particular vulnerability. We find so many of them that ranking them and then attacking the important ones is all you can do; you never have time to get them all. The CVSS standard is published by FIRST. As an example, the CVSS v2 vector for vulnerability CVE-2002-0392 is AV:N/AC:L/Au:N/C:N/I:N/A:C, which yields a numeric value of 7.8 using a CVSS calculator.
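The arithmetic behind that vector can be checked directly with the CVSS v2 base-score equations (the metric weights below are the ones published in the v2 specification):

```python
# CVSS v2 base-score equations, with the metric weights from the v2 spec.
WEIGHTS = {
    "AV": {"L": 0.395, "A": 0.646, "N": 1.0},   # Access Vector
    "AC": {"H": 0.35, "M": 0.61, "L": 0.71},    # Access Complexity
    "Au": {"M": 0.45, "S": 0.56, "N": 0.704},   # Authentication
    "C":  {"N": 0.0, "P": 0.275, "C": 0.660},   # Confidentiality impact
    "I":  {"N": 0.0, "P": 0.275, "C": 0.660},   # Integrity impact
    "A":  {"N": 0.0, "P": 0.275, "C": 0.660},   # Availability impact
}

def cvss2_base(vector: str) -> float:
    """Compute the CVSS v2 base score from a vector like AV:N/AC:L/..."""
    m = {k: WEIGHTS[k][v] for k, v in (p.split(":") for p in vector.split("/"))}
    impact = 10.41 * (1 - (1 - m["C"]) * (1 - m["I"]) * (1 - m["A"]))
    exploitability = 20 * m["AV"] * m["AC"] * m["Au"]
    f = 0.0 if impact == 0 else 1.176
    return round((0.6 * impact + 0.4 * exploitability - 1.5) * f, 1)

print(cvss2_base("AV:N/AC:L/Au:N/C:N/I:N/A:C"))  # 7.8
```

For CVE-2002-0392 the impact comes out to about 6.9 (only availability is affected, but completely) and the exploitability to 10.0 (remote, low complexity, no authentication), giving the 7.8 base score.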
Question: Do they typically come up with a test for every CVE?
Gunnar: No. There are other vulnerability databases, like BugTraq, where you get a BID number. So there will be vulnerability tests that reference BIDs for which there is no CVE entry in the NVD. CERT has another database of vulnerabilities. And then, for compliance, there are some others: CCE (Common Configuration Enumeration) is the compliance version of CVE, just as CCSS (Common Configuration Scoring System) is the compliance version of CVSS. As it turns out, nobody uses CCSS, because policy compliance works like this: if you fail one rule, you’ve failed; it doesn’t matter what the level of criticality is.
That said, for all of the places that are under policy compliance (e.g., the military), those requirements go into the STIGs (Security Technical Implementation Guides). This is the guidance the military uses for when you can attach a computer to a military network: you must be in compliance with the STIG first.
STIGs started off as very thick prose guides (we used to refer to them as the Rainbow Books). Recalling the time we attached a Windows NT4 machine to a Top Secret network: the guide that came with it was many inches thick, and we had two machines to attach. DISA sent an experienced guy down to show us how to do it, and it took him all day to attach two machines because the process wasn’t automated. He probably made a few mistakes, even though he was really good at it; and afterwards, if somebody changed anything, we wouldn’t know.
Question: So, that was all just configuring the machine to make sure the settings were proper?
Gunnar: Yes. Security policies, password policies, shutting down extra services.
As an example, the STIG for Windows Server 2008 actually comes to around 450 individual rules (e.g., password length, password history, encryption types, FIPS compliance mode) that have to be checked for just the operating system; hundreds more when you start taking into account the firewall, an Oracle database, and other applications on the machine. The web browser has its own separate set of STIG guidance.
Another acronym: DISA is the Defense Information Systems Agency, kinda like the CTO for the military, for the DOD. DISA is based at Ft. Meade, Maryland, but has multiple facilities around the country, the primary one being a big black square building just outside of Dulles Airport. We basically have five branches of the military underneath the DOD, and they all have their own divisions for doing computer security, intelligence, everything.
Question: What about Space Force?
Gunnar: Yeah. SPACECOM is a command already run by the Air Force, headquartered at Peterson AFB in Colorado Springs, just down the road from Falcon AFB. (Falcon AFB was renamed Schriever AFB in 1998.)
There’s a lot of duplication across the services, so some smart guy suggested consolidating the security activities for all of DOD; that’s where DISA comes in. They set policies for the DOD; they do testing. I talked about the STIGs and what you’re required to do in order to attach a computer to a military network; DISA determines where that guidance comes from. And in fact, in the case of the STIGs, their source is the original vendor: they go back and say, “Hey, it’s your system, have you set it up correctly?” So Microsoft STIGs come from Microsoft. Red Hat produced their own guidance. There’s automated guidance for Mac OS X, AIX, Solaris, HP-UX, Cisco routers … and then there are prose guides for other stuff. A prose guide is a written document that you must follow manually (instead of an automated process you can launch by pushing a button). Some of the restrictions there go back to the OVAL language, which is what we use to collect information in order to decide pass/fail. The way OVAL is made, it’s segmented by functional area: there’s an OVAL test document for passwords, which is huge; a separate one for file systems and Access Control Lists (ACLs); a separate one for registry keys and registry key permissions; etc. Just in the validation test suite, there are about 50 of them, and that’s not the whole suite; that’s only the ones that do validation testing.
Question: You’re talking about so much stuff, even though it’s automated, if you had a normal business, how many staff techies might be involved with dealing with the security part of it?
Gunnar: This particular SCAP type of thing?
Gunnar: It depends on the vendor. With our company, ThreatGuard, we try to make it simple and easy to help you get to stuff really quickly, but we also only work with small- and medium-size networks. For actually running and monitoring it, one person is plenty.
Question: What about setup? Do you guys do the setup?
Gunnar: Nope. They do the installation themselves; it’s very easy to install. It depends on which product you get. We do a desktop product that has to be run manually on every machine you’re checking; that can be pretty labor-intensive. The benefit to that one is that we actually built remediation into it. This breaks the read-only rule, but it turns out that it’s what everybody wants. Because when you’re talking about 300-500 different things you have to change on a machine, and do it right, you’re not going to do that manually. And even if you’re in a domain where you’re doing it via GPO (Group Policy Object), you still have to write the GPO, get it right, and then validate it. We’ve found that, even in a domain using GPOs, a lot of times they don’t get applied. Microsoft SCCM (System Center Configuration Manager) gives you no indication that a GPO wasn’t applied. So you have all these machines that you think had the GPO applied, but it wasn’t done completely, and you don’t know it until you actually get back and test it.
Question: SCAP. It seems like it’s primarily leveraged by vendors to create these products. It doesn’t seem like there are a lot of security people working on them. It’s kinda esoteric.
Gunnar: If we go back to MITRE and their OVAL Engine, they can’t field the OVAL Engine; so the only way this gets out there is if commercial vendors come along and build it. Actually, we were the first one to do that, back in 2002, 2003 time frame.
But there’s a validation program, which I kinda hinted at. NIST runs it, and you can actually go to the NIST website and find the list of validated products: things that have gone through independent lab testing to make sure they have implemented this protocol correctly. I’ve been doing it from the beginning; I helped develop this; it still took us 6 months to get through the lab. It is that awful to do a validation!
So there are about a dozen companies, a dozen products, on the NIST website. The driver behind it is that the DOD requires the use of SCAP for application of the STIGs. The caveat: the DOD funds their own tool for doing it, run by SPAWAR (Space and Naval Warfare Systems Command) out of San Diego, called SCC (SCAP Compliance Checker); so they’re already breaking their own rule about funding competitors. That kinda takes away the DOD as business. Then, around 2006, the OMB (Office of Management and Budget) out of the White House put out a memorandum dictating that all Federal unclassified systems are required to do SCAP validation. Period. So now, every Federal agency is required, on their unclassified systems, to do this, and they use a different standard that started out as FDCC (Federal Desktop Core Configuration) and was later broadened into USGCB (U.S. Government Configuration Baseline). Both are available as SCAP data streams from the NIST website. These are a little less rigid than the STIGs, which lock down a machine more securely; these are intended for unclassified machines, so the machine ends up a little more usable. But the OMB memorandum requires you to do this, and all the commercial companies decided to get in the game, with a guaranteed market; that’s why there’s a list of companies that have gone through the validation process.
Question: What about OpenSCAP?
Gunnar: That’s Red Hat; mostly Linux.
FollowUp: I was trying to find an SCAP scanner I could play with.
Gunnar: The list of validated products can be found at: https://csrc.nist.gov/Projects/scap-validation-program/Validated-Products-and-Modules. To download and play with the ThreatGuard SCAP scanner for free, visit the downloads page of the ThreatGuard site.
One other thing I didn’t touch on is the fact that this is an open standard. You can write your own content, or you can take existing content and edit it for your use; that’s one of the big, powerful things about it. You’re not beholden to whatever Symantec puts out. And a lot of companies don’t want the rest of the world knowing what they’re testing, so they can write their own. The drawback is that it’s really, really hard. This is a bad language; it’s really hard to come up with the expertise to actually build OVAL content. There are two initiatives addressing that issue. One is a group called SACM (Security Automation and Continuous Monitoring) that’s trying to take this suite of standards and make it an international standard through the IETF (Internet Engineering Task Force), and in the process simplify it and make it more accessible; they’re also dealing with some of the management issues. They’ve been going for about five years now, without much progress. And then NIST is working on SCAP version 2, with which they’re trying to do the same thing as SACM.
FollowUp: Right now, it’s on version 1.2, right?
Question: How big is your company?
Gunnar: I’m one of five people.
FollowUp: Is that typical?
Gunnar: No. Our competitors are Symantec, Fortinet, Tripwire, RSA… but some of those aren’t actually our competitors. We sell them a license, and they’re actually our customers.
FollowUp: And those companies do a lot of other things, too.
Gunnar: Yep. And there’s another factor. I mentioned that the government funds a project through SPAWAR called SCC, where they build their own tool and then give it away for free to anybody in the government; automatically, that takes government use out of the process. Well, there’s a second thing: the DOD put together a purchase program and settled on ePO (ePolicy Orchestrator), a McAfee product, as a toolset. They bought an enterprise license for ePO; IOW, everybody in the DOD gets to use ePO for free, so there’s no room in the DOD for anybody else to do this stuff. Thing is: everybody hates that tool so much that they find other budget money to buy something else.
Comment: Hmmmm…sounds like government to me.
Gunnar gracefully traversed the hierarchy of protocol controls and a vast array of acronyms to present a fairly user-friendly Tech Talk on a highly-technical topic that was previously unknown to the whole audience. Great job, Gunnar! Thank you!
Some more acronyms that are associated with SCAP include:
ARF (Asset Reporting Format): A standardized format for exporting the results of an assessment; this is important because it helps support vendor interoperability. It’s also very finicky and resource-intensive (each compliance/vulnerability/whatever scan produces a very large ARF document; multiply that by the number of endpoints in a very large enterprise and you’ve got a data management nightmare).
CPE (Common Platform Enumeration): This was originally intended as a dictionary of all known software and hardware, but in SCAP practice, it doesn’t really get used that way. There is a database of registered CPE identifiers at NIST: https://nvd.nist.gov/products/cpe. In practice, CPE is simply used as an arbitrary tag for automating the applicability of XCCDF content. What this means is that an SCAP-compliant tool can use a small bit of OVAL code and an associated CPE identifier to automatically determine whether any particular SCAP datastream is applicable to each assessed endpoint. So the user shouldn’t have to tell the tool which content to run for each endpoint; the tool is capable of determining that on its own (in fact, this functionality is a required part of the validation process).
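That applicability tagging can be sketched as a component-wise comparison of CPE 2.3 formatted strings, treating "*" as a wildcard. This is a simplification of the full CPE name-matching specification (which has more cases), and the example identifiers are illustrative:

```python
# Simplified CPE 2.3 applicability check: compare formatted strings
# component by component, treating "*" as a wildcard on either side.
# (The real CPE name-matching spec has more cases; this is a sketch.)

def cpe_applies(content_cpe: str, endpoint_cpe: str) -> bool:
    a, b = content_cpe.split(":"), endpoint_cpe.split(":")
    if len(a) != len(b):
        return False
    return all(x == y or x == "*" or y == "*" for x, y in zip(a, b))

# Does RHEL 7 content apply to this endpoint's detected platform?
content = "cpe:2.3:o:redhat:enterprise_linux:7:*:*:*:*:*:*:*"
endpoint = "cpe:2.3:o:redhat:enterprise_linux:7:sp1:*:*:*:*:*:*"
print(cpe_applies(content, endpoint))  # True
```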
Gunnar added that being familiar with SCAP would be a good idea for anyone who administers Red Hat/CentOS systems, because SCAP is the underlying technology used to set and monitor security policy. Anybody who has done a recent installation of Red Hat/CentOS and noticed the security policy configuration section in the Anaconda installer now has a clue what that’s all about.
Written By: Karen
Vetted By: Gunnar
Copyright © 2018, FPP, LLC. All rights reserved.