Reporting on Continuous Agile Application Security
- Joel Van Dyk
- Feb 11, 2021
- 3 min read
In my previous article about Application Security we figured out how to make the process frictionless and integrate it into the Agile CI/CD development process. But the whole point of the exercise is not just to find vulnerabilities and fix them. The coders are not going to be able to fix every vulnerability before they have to go live. This is where the evaluation and judgement of risk comes in.
As an Information Security professional, your main job is to assess and eliminate risk. You are not going to be able to eliminate all risk. CyberSecurity, as John Binkley once told me, is like a roulette wheel: you can’t put your chips on every possibility. Likewise, you cannot eliminate all risk. So you need a way to measure risk firmwide and weigh each risk against the others. That way you eliminate or mitigate the risks that will really hurt you, and leave the rest for when you can more conveniently address them.
We have a good guideline for doing this quantitatively (see Hubbard et al., https://www.amazon.com/How-Measure-Anything-Cybersecurity-Risk-ebook/dp/B01J4XYM16/ref=nodl_). In any firm, the financial risk of running the business is assessed in dollars. You are going to have to do the same. Start with the following steps and it’s easy to be quantitative:
Go to the CWE repository at MITRE (https://cwe.mitre.org): each vulnerability you can’t eliminate in code is due to a coding error, logic flaw, or weakness, and each of these translates into a Common Weakness Enumeration (CWE). For example, SQL injection is CWE-89 (https://cwe.mitre.org/data/definitions/89.html), and that entry also documents how to remediate it.
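As a minimal sketch of that translation step (the finding names and the mapping table here are hypothetical; real SAST tools usually tag each finding with its CWE ID directly):

```python
# Hypothetical mapping from a scanner's finding types to CWE identifiers.
# In practice most SAST tools emit CWE IDs with each finding, so this
# table is only illustrative.
FINDING_TO_CWE = {
    "sql_injection": "CWE-89",   # https://cwe.mitre.org/data/definitions/89.html
    "xss": "CWE-79",
    "path_traversal": "CWE-22",
    "hardcoded_credentials": "CWE-798",
}

def to_cwe(finding_type: str) -> str:
    """Translate a scanner finding type into its CWE identifier."""
    return FINDING_TO_CWE[finding_type]
```

Once every open finding is expressed as a CWE, the scoring and aggregation steps below operate on a single, consistent vocabulary.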
Each CWE is worth a certain value. Here is where some folks don’t get it: how can you assign a number to something like risk? Easily, if you have studied risk and probability. But an easier answer is here: https://www.joelvandyk.com/post/cyberrisk-how-long-is-a-cm-and-why-the-speed-of-light-1 . The speed of light is the same all over the universe, and it doesn’t matter if you call it kilometers, miles, parsecs, or set it to 1 like my field theory professor did. You just have to pick a measure and be consistent.
Add up all the risks. MITRE has an involved formula for this: https://cwe.mitre.org/cwss/cwss_v1.0.1.html . I seldom use more than the basic score and the environmental weighting; the rest of the formula requires a maturity level that takes organizations time to reach.
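A minimal sketch of that simplified aggregation, assuming made-up scores and environmental weights (the full CWSS formula multiplies three subscore groups with many more factors):

```python
def total_risk(findings: list[dict]) -> float:
    """Sum (base score x environmental weight) across all findings.

    This is a deliberate simplification of CWSS: only the basic score
    and an environmental weighting, per the article's recommendation.
    """
    return sum(f["base"] * f["env_weight"] for f in findings)

# Illustrative findings with assumed scores and weights.
findings = [
    {"cwe": "CWE-89", "base": 90.0, "env_weight": 1.0},  # internet-facing app
    {"cwe": "CWE-79", "base": 60.0, "env_weight": 0.5},  # internal-only app
]
total = total_risk(findings)  # 90*1.0 + 60*0.5 = 120.0
```

The environmental weight is where the same weakness scores differently in an internet-facing application than in an internal one.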
Renormalize the risk scores. CWEs are scaled from 0 to 100. As an InfoSec organization you will also have risks from other sources: your CyberSecurity architecture assessment process, your infrastructure vulnerability assessment process (CVEs, which are scaled from 0 to 10), and your threat process (generally based on threat intelligence, which will also be rated with CVEs). To get everything on the same scale, divide all your CWE scores by 10, or multiply all your CVE scores by 10. All you are doing is rescaling, much like when you need to compare dollars to yen.
Convert everything to dollars. Abstract numbers are fine, but a business is run at the C-level and the Board by people who are used to thinking about risk in terms of dollars spent, dollars at risk, and return on investment in dollars. Risk is a percentage of the firm’s revenue at hazard. Assume the revenue of the firm is about $1 billion; 1% of that would be about $10 million. If the maximum risk-loss tolerance for a line of business is $1.5 million (more on how to determine this in another article), then you can assign multiples and fractions of this monetary value based on the risk score of the application(s) that support that business. I usually assign breaks at 20%, 40%, 60%, 80%, and 100%, which gives 5 tiered levels, and then assign those monetary breaks based on the risk score of the application. What you are really doing is another renormalization, as above, this time scaling the risk number to dollars. This is really no different from converting measurements from meters to feet.
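The five-tier mapping can be sketched as follows, assuming a 0–100 application risk score and the $1.5 million tolerance from the running example (the break points and tolerance are the article's illustrative figures, not fixed constants):

```python
import math

def risk_dollars(risk_score: float, max_tolerance: float = 1_500_000) -> float:
    """Map a 0-100 application risk score onto five dollar tiers.

    Tier breaks sit at 20/40/60/80/100% of the line of business's
    maximum risk-loss tolerance ($1.5M in the running example).
    """
    tier = min(math.ceil(risk_score / 20), 5)  # tier 1..5 (0 for a zero score)
    return tier * max_tolerance / 5

risk_dollars(50)   # score in the 40-60 band -> tier 3 -> $900,000
risk_dollars(100)  # top band -> full $1,500,000 tolerance
```

Like the earlier rescaling step, this is pure unit conversion: the ordering of applications by risk is unchanged, only the units become dollars the Board can act on.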
This is what you want to report to the firm. You’ll need to be able to report it by division, line manager, and project, down to which coders are contributing what types, quantities, and qualities of code. This is not meant to rate people against each other; it is a tool to track risk and to help people learn to code more securely. The best tools (see the last article, https://www.joelvandyk.com/post/continuous-agile-application-security ) let you do this.
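In tooling terms, that reporting is a rollup of the same normalized scores along whichever organizational dimension you need. A sketch, with hypothetical field names:

```python
from collections import defaultdict

def rollup(findings: list[dict], level: str) -> dict[str, float]:
    """Aggregate normalized risk scores by an org dimension
    (e.g. "division", "project", or "author")."""
    totals: dict[str, float] = defaultdict(float)
    for f in findings:
        totals[f[level]] += f["score"]
    return dict(totals)

# Illustrative findings; the org fields and scores are assumptions.
findings = [
    {"division": "payments", "project": "api", "author": "alice", "score": 8.5},
    {"division": "payments", "project": "web", "author": "bob",   "score": 3.0},
    {"division": "retail",   "project": "pos", "author": "carol", "score": 6.1},
]
rollup(findings, "division")  # {"payments": 11.5, "retail": 6.1}
```

The same function answers the division-level Board question and the per-coder coaching question, which is what keeps the reporting a learning tool rather than a scoreboard.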
These are the numbers, along with a robust set of KPIs showing the health of the process, that are reported to the rest of the firm and the corporate board.
