Anticipating cybersecurity incidents is complex. It begins with identifying vulnerabilities in your mission-critical environment and then mitigating them based on their likelihood of exploitation. The complexity lies in predicting exploitability and deciding which factors to assess. Traditionally, security teams have used the Common Vulnerability Scoring System (CVSS) to determine which Common Vulnerabilities and Exposures (CVEs) to prioritize in their environment. However, this system has drawn considerable criticism for its complexity, inaccuracy, and frequent misuse: its scores tend to be ineffective at helping teams identify and mitigate the CVEs that are most dangerous to their organizations. This has led to the rise of the Exploit Prediction Scoring System (EPSS), which aims to add more value to risk scoring. In this blog, we will break down the ins and outs of EPSS and how it can be leveraged to prioritize vulnerabilities effectively and efficiently without overburdening your already-overwhelmed personnel.
The Exploit Prediction Scoring System, introduced in 2019, is a data-driven, machine-learning model that estimates the likelihood that a software vulnerability will be exploited in the wild. EPSS is governed by the Forum of Incident Response and Security Teams (FIRST), which is responsible for a number of vulnerability scoring protocols. FIRST's goal is to help network defenders better prioritize their vulnerability remediation efforts, ultimately helping overburdened security personnel manage their never-ending stream of vulnerabilities.
Most industry standards are limited in their ability to assess threats. EPSS fills this gap by using current threat information from Common Vulnerabilities and Exposures (CVE), a list of publicly disclosed security flaws, together with real-world exploit data. From this information, the EPSS model produces a probability score between 0 and 1 (0% to 100%). The higher the score, the greater the likelihood that a vulnerability will be exploited. A majority of industry standards, by contrast, like the CVSS model, focus on the inherent characteristics of a vulnerability to determine its severity. Next, we will discuss the key differences and distinct benefits of EPSS versus CVSS.
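As a concrete illustration, FIRST publishes EPSS scores through a public API at api.first.org. The sketch below shows how a query URL might be built and how a response in the API's documented shape could be parsed; the CVE identifiers and score values here are made up for illustration, not live data.

```python
import json
from urllib.parse import urlencode

EPSS_API = "https://api.first.org/data/v1/epss"

def build_epss_query(cve_ids):
    """Build a query URL for FIRST's public EPSS API."""
    return f"{EPSS_API}?{urlencode({'cve': ','.join(cve_ids)})}"

# Illustrative (made-up) response in the API's documented shape:
sample_response = json.loads("""
{"data": [
  {"cve": "CVE-2023-0001", "epss": "0.97", "percentile": "0.999"},
  {"cve": "CVE-2023-0002", "epss": "0.02", "percentile": "0.450"}
]}
""")

def scores_by_cve(response):
    """Map each CVE to its EPSS probability (0-1) that it will be
    exploited in the wild in the next 30 days."""
    return {row["cve"]: float(row["epss"]) for row in response["data"]}

url = build_epss_query(["CVE-2023-0001", "CVE-2023-0002"])
scores = scores_by_cve(sample_response)
```

In a real integration, the URL would be fetched with an HTTP client and the JSON body parsed the same way; because scores are refreshed daily, the lookup should be repeated on the same cadence.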
Developed by the National Infrastructure Advisory Council (NIAC), the Common Vulnerability Scoring System (CVSS) was designed to promote a common understanding of vulnerabilities and their impact. It does so by providing the end user with a composite score representing the overall severity and risk of a vulnerability. CVSS scores are commonly used to calculate the severity of vulnerabilities discovered in one's environment and as a factor in prioritizing vulnerability remediation activities. Although it is one of the most widely used tools for assessing risk, CVSS is not a measure of risk, and security teams can therefore misinterpret and misapply its scores. This has led to a number of criticisms of the system, including:
Vulnerabilities in operational environments tend to be rated too high, because the complexity of those environments can make vulnerabilities difficult to exploit in practice, leading to scoring inaccuracies.
How a vulnerability might be affected by organizational and environmental variations is not taken into account, leading to false positives in vulnerability prioritization.
Supply chain risk is not accounted for.
Without these important contextual factors, it is difficult to deem vulnerabilities identified with CVSS relevant and actionable. EPSS, on the other hand, is a machine-learning model trained against real-world exploitation data and updated daily as new data is collected. It draws on multiple sources, accounting for over 1,100 variables, and attaches measurable metrics to vulnerability profiles, allowing security teams to better address system issues. Essentially, EPSS lets teams prioritize the most pressing vulnerabilities by providing threat information and a probabilistic understanding of threats, while CVSS only states how dangerous a particular vulnerability might be if exploited. Because EPSS focuses on vulnerability prioritization rather than severity prediction alone, it is better suited to helping teams identify and mitigate the CVEs that are most dangerous to their organizations.
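To make the contrast concrete, here is a minimal sketch, using hypothetical CVE identifiers and scores rather than real data, of how the same set of findings ranks two ways: by CVSS base severity alone versus by EPSS exploitation probability.

```python
# Hypothetical findings: each has a CVSS base score (how bad it is
# if exploited) and an EPSS score (how likely exploitation is).
findings = [
    {"cve": "CVE-2024-1111", "cvss": 9.8, "epss": 0.01},
    {"cve": "CVE-2024-2222", "cvss": 7.5, "epss": 0.92},
    {"cve": "CVE-2024-3333", "cvss": 5.3, "epss": 0.64},
]

# CVSS-first ordering: the most severe on paper comes first.
by_cvss = sorted(findings, key=lambda f: f["cvss"], reverse=True)

# EPSS-first ordering: the most likely to be exploited comes first.
by_epss = sorted(findings, key=lambda f: f["epss"], reverse=True)

cvss_order = [f["cve"] for f in by_cvss]
epss_order = [f["cve"] for f in by_epss]
```

In this toy data, the CVSS ordering puts the 9.8-severity finding first even though its exploitation probability is near zero, while the EPSS ordering surfaces the finding attackers are actually likely to use; a real program would blend both signals rather than discard either.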
Traditionally, standard solutions and conventional wisdom guide vulnerability prioritization based on CVSS v3 severity scores, not on exploit likelihood. As a result, the often-overburdened personnel responsible for managing cyber-physical systems (CPS) vulnerabilities expend resources prioritizing vulnerabilities that never have been and never will be exploited. According to findings from a third-party study, CVSS v3-guided prioritization has an average coverage rate of 82.4%. This means the average security team using CVSS v3 scores of "high" or "critical" as their remediation threshold will prioritize 82.4%, and overlook 17.6%, of the actively exploited vulnerabilities in their environment. To avoid wasting resources on vulnerabilities that will never be exploited, organizations should look for a solution that makes it easy to focus on the vulnerabilities that are being, or most likely will be, exploited based on the latest exploitability indicators, both current and predicted.
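The coverage figure above has a simple definition, sketched below with made-up toy data (placeholder CVE names, not the study's dataset): coverage is the fraction of actively exploited vulnerabilities that a prioritization policy flags, and efficiency is the fraction of flagged vulnerabilities that are actually exploited.

```python
def coverage_and_efficiency(flagged, exploited):
    """Coverage: share of exploited CVEs the policy flagged.
    Efficiency: share of flagged CVEs that were actually exploited."""
    flagged, exploited = set(flagged), set(exploited)
    hits = flagged & exploited
    return len(hits) / len(exploited), len(hits) / len(flagged)

# Toy data (placeholder names): the policy flags six CVEs; five were
# actively exploited, and the policy caught four of those five.
flagged = ["CVE-A", "CVE-B", "CVE-C", "CVE-D", "CVE-X", "CVE-Y"]
exploited = ["CVE-A", "CVE-B", "CVE-C", "CVE-D", "CVE-E"]

coverage, efficiency = coverage_and_efficiency(flagged, exploited)
# coverage = 4/5 exploited CVEs caught; efficiency = 4/6 flags useful
```

A CVSS-threshold policy with 82.4% coverage, as in the cited study, simply means this coverage ratio averages 0.824 across the environments measured; the remaining 17.6% of exploited CVEs fall below the threshold and go unaddressed.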
At Claroty, we understand that measuring the severity of any vulnerability can be an incredibly difficult task. That's why we've introduced new enhancements to our vulnerability and risk management (VRM) capabilities. Our VRM offering automatically prioritizes vulnerabilities based on exploitation likelihood by utilizing the Known Exploited Vulnerabilities (KEV) catalog and EPSS. Combining the data points from both sources gives our customers full visibility into the current and probable near-term state of the vulnerabilities posing the greatest risk to their environment. As a result, security teams can prioritize the vulnerabilities threat actors are most likely to leverage 11 times more efficiently, and they are further empowered to make the best decisions when it comes to protecting their most critical assets. As we know, each and every CPS environment is unique; that's why Claroty provides its customers with the ability to quantify their CPS risk posture in the true context of their business.