As a vulnerability specialist, it is your job to discover all known vulnerabilities, and on its own that is already challenging. You may wonder whether you truly provide value to the business if you merely deliver a report of discovered known vulnerabilities, including steps on how to remediate them. Remember, if you have set up the right processes and configured the technology correctly, you are sitting on a pot of gold (high-value data). The logical question, therefore, is: what should you be doing as a vulnerability specialist to deliver true value to the business?
And that brings me to the first question: ‘How do you know you have discovered all known vulnerabilities?’ I deliberately use the term known vulnerabilities, because discovering unknown vulnerabilities is the realm of penetration testing. A known vulnerability is one that is listed in the CVE database.
The trouble with vulnerability management is that you need to know your assets. The obvious place for asset information is the CMDB, but we all know that the CMDB is outdated most of the time. Therefore, we need to actively hunt to discover assets. And although that may sound easy, because you can simply conduct a discovery scan, in fact it is not. Conducting a discovery scan often only provides insight into on-premise assets.
You also need to ask yourself: how do you know you have successfully discovered every active asset in the network? Again, the question itself is easy, but answering it is not. When you conduct a discovery scan, you only know which assets were alive at the moment of the scan. True, this only applies to active discovery scanning, because you reach out to every possible asset address in your network. But what if an asset uses a different protocol than the one the discovery tool is using?
To overcome this limitation, you can use passive discovery techniques. For example, by mirroring network traffic at the right place in the network. Or, with a more data-scientific approach, by combining various data sources such as firewall logs, DHCP leases, DNS records, and authentication data.
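To illustrate the data-scientific approach, here is a minimal sketch in Python that unions the addresses seen by several passive sources and records which source observed each asset. The source names and addresses are hypothetical; in practice each set would be parsed from the corresponding log or export.

```python
def passive_asset_inventory(sources):
    """Union the addresses seen by each passive data source and report
    which sources observed each asset."""
    inventory = set().union(*sources.values())
    coverage = {
        ip: sorted(name for name, ips in sources.items() if ip in ips)
        for ip in inventory
    }
    return inventory, coverage

# Hypothetical extracts from DHCP leases, DNS records, and firewall logs.
sources = {
    "dhcp": {"10.0.0.5", "10.0.0.9"},
    "dns": {"10.0.0.5", "10.0.0.12"},
    "firewall": {"10.0.0.9", "10.0.0.12", "10.0.0.20"},
}
inventory, coverage = passive_asset_inventory(sources)
```

An asset that appears in only one source (here, 10.0.0.20 is seen only by the firewall) is exactly the kind of blind spot this cross-referencing is meant to surface.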
However, the initial question remains unanswered. How do you know you have discovered all active assets in your network? The network architectural team might be able to help you. They own the IP Administration database. As long as you conduct discovery scans (active or passive) in the active ranges of the IP Administration database, you can sort of guarantee you have discovered all active assets at the moment of conducting the discovery activity.
But what about assets hosted somewhere in the cloud?
Most vendors have recognized this and developed products called Attack Surface Management, but in my view that term is misleading. In essence, it is nothing more and nothing less than a cloud asset discovery tool. Only if the vendor also integrates a vulnerability scanning tool into the same solution might you call it an Attack Surface Management tool.
Once you have identified all your assets (remember, an asset can also be a service), it is time to conduct a vulnerability scan. But you can only do this with permission from the asset owner. Cloud providers often don’t allow vulnerability scanning, in order to protect the stability of their environment; however, depending on your contract with the cloud provider, it is sometimes allowed. As a vulnerability specialist, you need to maintain an active and up-to-date administration of the assets you do and do not have permission to scan. Based on this administration, you can calculate the coverage percentage of vulnerability management. This percentage should be tracked as a KPI.
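Calculating that coverage KPI is straightforward once the permission administration exists. A minimal sketch, where the asset names and the permission mapping are hypothetical:

```python
def scan_coverage(permissions):
    """Percentage of known assets for which scanning permission is granted.
    `permissions` maps asset id -> True/False (permission granted)."""
    if not permissions:
        return 0.0
    permitted = sum(1 for allowed in permissions.values() if allowed)
    return round(100 * permitted / len(permissions), 1)

# Hypothetical administration: two on-premise assets we may scan,
# two cloud-hosted assets where the contract forbids it.
permissions = {
    "web-01": True,
    "db-01": True,
    "saas-crm": False,
    "vm-cloud": False,
}
kpi = scan_coverage(permissions)
```

Tracking this number per scanning cycle makes the blind spots in your vulnerability management program visible over time.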
After each vulnerability scanning cycle (remember, this is an ongoing activity), it is time to create the report. You can argue that by submitting the report of discovered known vulnerabilities to the business, you are providing value. But are you? Yes, the asset owner knows how to remediate a discovered known vulnerability. And yes, if the list is limited, they might be able to prioritize it properly. But what if the asset owner owns hundreds of assets, or a considerable number of known vulnerabilities has been discovered?
In my view, as a vulnerability specialist you should, in addition to the report of discovered known vulnerabilities, also provide a reading guide on how to prioritize them. If the estate is large, you can provide further value as a vulnerability specialist through more in-depth analysis.
That brings me to the second question: ‘Which in-depth analyses should you be conducting as a vulnerability specialist?’ When you skim through the list of discovered known vulnerabilities, you can see patterns emerging, usually groups of related vulnerabilities such as SSL/TLS misconfigurations, certificate issues, etc. You can use this knowledge to collaborate with various stakeholders to resolve these known vulnerabilities. A question you might hear often during those collaborations is: ‘I have already implemented security control X. How does this security control help me protect my assets against these discovered vulnerabilities?’ On its own, that is an excellent question from the asset owner, because the asset owner is busy prioritizing the discovered known vulnerabilities.
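Spotting those patterns can be partly automated by bucketing scanner findings on keywords. A sketch with hypothetical findings and keyword lists (a real scanner export would have richer fields to group on):

```python
from collections import defaultdict

def group_findings(findings):
    """Bucket (asset, finding title) pairs into pattern groups
    via a simple keyword match. Keywords are illustrative."""
    patterns = {"SSL/TLS": ("ssl", "tls"), "Certificates": ("certificate",)}
    buckets = defaultdict(list)
    for asset, title in findings:
        label = next((name for name, keys in patterns.items()
                      if any(k in title.lower() for k in keys)), "Other")
        buckets[label].append((asset, title))
    return dict(buckets)

# Hypothetical scanner output.
findings = [
    ("web-01", "TLS 1.0 protocol enabled"),
    ("web-02", "Self-signed certificate in chain"),
    ("db-01", "Weak password policy"),
]
buckets = group_findings(findings)
```

Each bucket then becomes one conversation with the right stakeholder, instead of hundreds of individual findings.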
Depending on how mature a discovered vulnerability is, there might be enough information available in the CVE database to update the temporal score. The temporal score reflects characteristics of a vulnerability that may change over time, and it provides a time-specific assessment of a vulnerability’s exploitability. It is based on the following metrics:
- Exploit Code Maturity: Represents the maturity level of known exploits (e.g., high, functional, or proof-of-concept).
- Remediation Level: Considers the availability of solutions or workarounds to mitigate the vulnerability (e.g., official fix, temporary fix, or workaround).
- Report Confidence: Reflects the confidence level in the vulnerability report and the existence of a known exploit (e.g., confirmed, reasonable, or unknown).
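The temporal score itself is computed by multiplying the base score by a multiplier for each of the three metrics above and rounding up to one decimal. The multipliers below follow the CVSS v3.1 specification; the single-letter keys are the standard vector abbreviations (e.g. `F` = Functional, `O` = Official Fix, `C` = Confirmed, `X` = Not Defined):

```python
import math

# CVSS v3.1 temporal metric multipliers.
EXPLOIT_CODE_MATURITY = {"X": 1.0, "U": 0.91, "P": 0.94, "F": 0.97, "H": 1.0}
REMEDIATION_LEVEL = {"X": 1.0, "O": 0.95, "T": 0.96, "W": 0.97, "U": 1.0}
REPORT_CONFIDENCE = {"X": 1.0, "U": 0.92, "R": 0.96, "C": 1.0}

def roundup(value):
    """CVSS 'Roundup': smallest number with one decimal >= value."""
    return math.ceil(value * 10) / 10

def temporal_score(base, e="X", rl="X", rc="X"):
    """Temporal = Roundup(Base x E x RL x RC)."""
    return roundup(base * EXPLOIT_CODE_MATURITY[e]
                        * REMEDIATION_LEVEL[rl]
                        * REPORT_CONFIDENCE[rc])

# A base 9.8 vulnerability with a functional exploit, an official fix,
# and a confirmed report drops to a temporal score of 9.1.
temporal_score(9.8, e="F", rl="O", rc="C")
```

Note that the temporal score can only stay equal to or drop below the base score; it never increases it.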
However, in my view, it is better to implement a process to update the environmental score, because this allows you to adjust the CVSS score to the specifics of the environment, such as the configuration and importance of the impacted systems and the implemented security controls. This does require a significant understanding of how the environment’s network is organized; in other words, how network traffic is routed from point A to point B. With these insights, as a vulnerability specialist, you can also suggest changes to the network to further reduce the likelihood of exploitation.
But currently there is (as far as I know) not a single product out there that can help you update the environmental score, as each environment is different. If you want to follow a more data-scientific route, you can use tools like R and Python to develop an analytical script, or scripts, that process all relevant data sources and update the environmental score before you generate the report.
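As a starting point for such a script, here is a deliberately simplified sketch. To be clear: the weights below are my own illustrative assumptions, not the official CVSS environmental formula; in a real implementation you would feed in the environment-specific data sources discussed above.

```python
def environmental_priority(cvss_base, asset_criticality,
                           internet_facing, compensating_controls):
    """Illustrative (non-standard) environment-aware adjustment:
    scale the base score by asset importance, exposure, and
    the presence of compensating controls."""
    score = cvss_base
    # Business importance of the impacted system (assumed weights).
    score *= {"low": 0.8, "medium": 1.0, "high": 1.2}[asset_criticality]
    # Exposure: reachable from the internet raises the likelihood.
    if internet_facing:
        score *= 1.2
    # A compensating control (e.g. the asset sits behind a WAF) lowers it.
    if compensating_controls:
        score *= 0.8
    return round(min(score, 10.0), 1)
```

For example, a 7.5 base-score vulnerability on a highly critical, internet-facing asset without compensating controls would be pushed to the maximum priority, while the same vulnerability on a low-value, shielded internal asset would drop well below its base score.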