RISK vs REWARD
- Joel Van Dyk
- May 4, 2020
- 3 min read
The toughest thing in remediating risk is not finding the vulnerabilities or toting them up, and it's not even persuading the owners to fix them. It's deciding which risks ought to be remediated and in what priority; which must be left because they are too difficult to fix and so must be mitigated instead; and which you will neither fix nor mitigate because the business is taking a risk for a reward. All of this presupposes that you have a way of measuring that risk and adding it up.

Most organizations do this with something they believe is quantitative, but which usually amounts to asking some questions, rating each answer from 1 to 5, adding up the answers, and calling the result red, yellow, or green. The model depends very much on personal judgment, and it winds up looking like the chart above. It cannot easily correct for new discoveries, or answer why a vulnerability is important if it has sat around undiscovered for a long time. Not only is this confusing to everyone, it is confusing to the board, who are used to assessing financial risk in dollars and cents. In short, you don't write an insurance company a check for a "medium" amount of risk; you pay an exact premium. You don't fly your airplane at a "high" altitude; you fly it at 35,000 ft. You measure things exactly so you can compare and manage them exactly.
The better way to do CyberRisk analysis, the way we do science and mathematics, and the way we assess risk in more mature fields like finance or actuarial science, is quantitatively. This is easy enough to do conceptually for CyberSecurity, but not so easy in practice. It is also not easy to convince one's peers and immediate management who are used to the qualitative way. It is better, however, because it is how we measure things in science and how a board manages risk in other areas like finance and business.
Here is one way to do it: every control applied in CyberSecurity has one or several measures (key performance indicators) that show how effective the control is. Each measure is a function of the type of control: a percentage (e.g. how many people complete cyber training successfully), a state (e.g. passwords are complex and change every x days), a value (e.g. the overall CVSS score of the vulnerabilities on your servers), or a statement of existence (e.g. we have a cybergovernance committee or we don't). Each control has a desired outcome (an expectation), and how far you are from that desired value can be measured as a deviation from where the board wants it to be (the risk tolerance). You also have to accept that not every control can sit at its desired value: there isn't an infinite amount of time, money, and resources to spend on risk mitigation. Like a roulette wheel, you are going to have to wager on some outcomes and risk others to be successful.
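The deviation idea above can be sketched in a few lines of code. This is a hypothetical illustration, not a standard formula: the training-completion numbers and the tolerance are made-up examples of a board-set risk tolerance.

```python
def control_score(measured: float, desired: float, tolerance: float) -> float:
    """How far a control sits outside the board's risk tolerance.

    Returns 0.0 when the control is at (or within tolerance of) the
    desired value; larger numbers mean a bigger gap to close or accept.
    """
    gap = desired - measured
    if gap <= tolerance:
        return 0.0
    return gap - tolerance

# e.g. 82% of staff completed cyber training; the board wants 95%
# and will tolerate a 5-point shortfall.
print(round(control_score(measured=0.82, desired=0.95, tolerance=0.05), 2))  # 0.08
```

A state-type or existence-type measure can be mapped onto the same scale (e.g. 1.0 if the committee exists, 0.0 if it doesn't) so every control produces a comparable deviation.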
Each control is then weighted: its outcome score is weighted for relevance and for how much it contributes to your overall risk mitigation. The final weighted score is how much it does or does not contribute to reducing the firm's inherent risk score (more on this later). New vulnerabilities, as they come up, fit into the risk model and overall score according to which control they render less effective and how much you weight them. Below is a simple example of how you can have the risk conversation at a very high level while still having the numbers to back up the banding into green, yellow, and red. There are even more concrete ways to add up the scores that impute the risk to a monetary value.
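The weighting and banding described above can be sketched as follows. The controls, their scores, the weights, and the band thresholds are all illustrative assumptions; in practice the board sets the thresholds as its risk tolerance.

```python
controls = [
    # (name, effectiveness score 0-1 where 1 = fully effective, weight = relevance)
    ("security awareness training", 0.86, 0.2),
    ("password policy",             0.95, 0.1),
    ("server patching (CVSS)",      0.60, 0.5),
    ("cybergovernance committee",   1.00, 0.2),
]

total_weight = sum(w for _, _, w in controls)
weighted_effectiveness = sum(s * w for _, s, w in controls) / total_weight
residual_risk = 1.0 - weighted_effectiveness  # what the controls do NOT cover

def band(risk: float) -> str:
    # Thresholds are a policy choice, not a fixed rule.
    if risk < 0.15:
        return "green"
    if risk < 0.30:
        return "yellow"
    return "red"

print(f"residual risk = {residual_risk:.2f} -> {band(residual_risk)}")
```

A new vulnerability slots in by lowering the score of the control it weakens (here, most likely server patching), which moves the aggregate number and possibly the band.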

Some argue that there is judgement here in how you rate each control and how you weight it. This is a misunderstanding of the mathematics of statistics and of how science is done. Nothing measurable in science has an exact value. Every value is an expectation, is affected by the act of measurement, and carries a fundamental error introduced by the measurement, so the measurer can only be certain to within some percentage of the measurement. Bringing this into the risk model means you can correct for the errors introduced by the measurer and mathematically test multiple outcomes and what they mean for your risk model (i.e. see Monte Carlo analysis; more on this later also). In this way you can also answer the question of how certain you are of the result that you have.
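A minimal Monte Carlo sketch of that idea: treat each control's measured score as uncertain (here, a normal distribution around the measurement), sample many times, and report how often the overall risk stays within tolerance. The distributions, error sizes, weights, and the 0.30 tolerance are all illustrative assumptions.

```python
import random

random.seed(42)  # reproducible illustration

controls = [
    # (measured effectiveness, measurement error std dev, weight)
    (0.86, 0.05, 0.2),
    (0.95, 0.02, 0.1),
    (0.60, 0.10, 0.5),
    (1.00, 0.00, 0.2),
]

TRIALS = 10_000
within_tolerance = 0
for _ in range(TRIALS):
    # Sample each control's "true" effectiveness, clipped to [0, 1].
    eff = sum(min(1.0, max(0.0, random.gauss(m, sd))) * w
              for m, sd, w in controls)
    risk = 1.0 - eff  # weights sum to 1.0 in this example
    if risk < 0.30:   # board tolerance: stay out of the red band
        within_tolerance += 1

print(f"P(risk within tolerance) ~ {within_tolerance / TRIALS:.1%}")
```

The output is exactly the kind of statement a board can act on: not "risk is yellow," but "given our measurement error, we are roughly N% confident risk is within tolerance."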
