
Centralized Controls Assurance

Updated: Jan 10



Another day, another 14-point program inspired by a consulting company, complete with project management, to build a large checklist of controls that is not consistent with any accepted framework and will take a large number of people months to complete, with, you guessed it, lots of consulting work for the consulting company.  It's over-complexity.


Let's get out of the way what you know is coming: in this day and age, this should all be automated.  As many have said, if your job depends on the repetition of tedious work, it can and should be automated so you can work on something more fulfilling.


In fact, whether or not you like the Governance, Risk and Compliance (GRC) tools out there (e.g. Archer), they will do most of this work for you.  They come with any number of control listings already loaded (e.g. NIST, CCM, etc.), cross-mapped into each other.  This used to be a great exercise in VLOOKUP in Excel; it personally kept me busy for a week, and the consulting companies charged a lot for it.
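To make that cross-mapping concrete, here is a minimal sketch of the VLOOKUP-style join in Python. The control IDs are real NIST 800-53 and CCM identifiers, but the mappings shown are illustrative, not an authoritative crosswalk:

```python
# Illustrative cross-map from NIST 800-53 control IDs to CCM control IDs.
# The pairings below are examples only, not an official mapping.
nist_to_ccm = {
    "AC-2": ["IAM-02", "IAM-09"],   # account management -> identity lifecycle
    "AU-6": ["LOG-05"],             # audit record review -> log monitoring
    "IA-2": ["IAM-01"],             # identification & authentication
}

def map_controls(nist_ids):
    """Return the CCM controls covering each NIST control; empty if unmapped."""
    return {cid: nist_to_ccm.get(cid, []) for cid in nist_ids}

print(map_controls(["AC-2", "IA-2", "CM-6"]))
```

A GRC tool ships with thousands of these pairings pre-built, which is exactly the week of Excel work it saves you.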


We need to step back for a minute.  What are we actually trying to accomplish here, other than the generation of a lot of paperwork that an auditor or regulator may or may not look at?  We are trying to provide reasonable assurance to a reasonable person that our cybersecurity controls are in place and functioning properly.


Given that, cybersecurity these days runs in a highly automated, fast-moving, ever-changing environment.  What is configured and working today may be different six, three, or one month later, or even one day later, given the software-driven environments we work in.  Cyber insurance has the same problem.  All you get with the method above is a point-in-time snapshot that has already passed you by by the time you've taken it.


Let's step back again.  Controls are either preventative, detective, or recovery.  Preventative controls are implemented by a set of technical process gates that require the user or process to obtain passage through that gate.


For instance, in order to change the state of process x, you need to effect operation y.  This should be controlled by a decision gate (the diamond symbol on a process chart).  The classic example is an operation such as moving money from one account to another in a transaction program.  In order to do that, you need the permission to flip the gate.  That permission is a role, assigned under role-based access control (RBAC) by an RBAC provisioning program (e.g. SailPoint) and enforced by your Identity Provider (IdP), to which the gate has delegated its identity authority.
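A minimal sketch of that decision gate, with the money-transfer example above. The role names and the in-memory grant table are hypothetical stand-ins for what the provisioning program and IdP would supply:

```python
# Hypothetical role grants, standing in for IdP-asserted claims
# populated by an RBAC provisioning program.
ROLE_GRANTS = {"alice": {"payments:transfer"}, "bob": {"payments:read"}}

def has_role(user, role):
    """Stand-in for checking a role claim asserted by the IdP."""
    return role in ROLE_GRANTS.get(user, set())

def transfer(user, src, dst, amount):
    # The decision gate: the diamond on the process chart.
    if not has_role(user, "payments:transfer"):
        raise PermissionError(f"{user} lacks payments:transfer")
    return {"from": src, "to": dst, "amount": amount}

print(transfer("alice", "acct-1", "acct-2", 100))
```

The point is that the gate itself never stores identities or roles; it only consults what the IdP and provisioning system assert, which is what makes the control centrally checkable.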


So, to check that this control is working, you check that all programs in your environment are running under delegated identity from your IdP (your environment, your security architecture, should enforce this), and that the gate can only make a decision with the right role from your provisioning program.  The control is automated and built into, here comes that a-word again, architecture.  If you are running on a cloud service provider, as most of us are, a lot of this compliance tooling is built right into your cloud control environment, and all you need to do is pull it into your GRC tool via an API.
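That automated check can be sketched as a simple inventory scan. The workload inventory and field names here are illustrative, not any vendor's API; in practice they would come from your cloud provider's compliance feed or your CMDB:

```python
# Illustrative workload inventory; field names are assumptions.
workloads = [
    {"name": "payments-svc", "identity_source": "idp",
     "roles": ["payments:transfer"]},
    {"name": "legacy-batch", "identity_source": "local",
     "roles": ["payments:transfer"]},
]
# Roles actually granted by the (hypothetical) provisioning program.
provisioned = {"payments-svc": {"payments:transfer"}}

def noncompliant(inventory):
    """Flag workloads not using delegated IdP identity or holding
    roles the provisioning program never granted them."""
    findings = []
    for w in inventory:
        if w["identity_source"] != "idp":
            findings.append((w["name"], "not using delegated IdP identity"))
        extra = set(w["roles"]) - provisioned.get(w["name"], set())
        if extra:
            findings.append((w["name"], f"unprovisioned roles: {sorted(extra)}"))
    return findings

for name, issue in noncompliant(workloads):
    print(name, "->", issue)
```

Feed those findings into the GRC tool via its API and the control evidence regenerates itself every run, rather than going stale in a spreadsheet.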


This technical implementation also satisfies many other controls:  access control and authorization (a good part of the NIST AC control family), audit and accountability (a good part of the NIST AU family), and identity and access management (a good part of the NIST IA family of controls).


Rather than belaboring the point, let's just say that the detective control here is enforced by logging, and with a good data scientist you can also plumb the logs for indicators that the control is working, by seeing it invoked by the program and written to its logs.  I have done this in the past, where the absence of those log entries revealed a non-compliant program.  This is part of your continuous control monitoring program.
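A minimal sketch of that detective check. The log format and the `GATE_CHECK` event name are assumptions for illustration; the idea is that a program expected to invoke the gate but never emitting the event gets flagged:

```python
# Illustrative log lines; format and event names are assumptions.
logs = [
    "2025-01-10T12:00:01 payments-svc GATE_CHECK role=payments:transfer result=allow",
    "2025-01-10T12:00:02 reporting-svc REQUEST /summary 200",
]
# Programs expected to exercise the decision gate.
expected = {"payments-svc", "reporting-svc"}

def programs_missing_gate_events(lines, programs):
    """Return programs that never logged a gate check: candidates
    for the continuous control monitoring non-compliance queue."""
    seen = {line.split()[1] for line in lines if "GATE_CHECK" in line}
    return sorted(programs - seen)

print(programs_missing_gate_events(logs, expected))  # -> ['reporting-svc']
```

Run this over a rolling log window and the detective control reports on itself continuously, instead of waiting for the annual sample-based test.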


Let's leave recovery controls aside for now.  Suffice it to say, the tech will help you there also.


And yes, these processes need to be documented so that they aren't dependent on what is in one person's head.  Many of us have another technical solution installed internally to help keep, link, and cross-reference docs:  a wiki.


So, paper and Excel are great for a point-in-time audit.  That is their job.  But they can't keep up with the dynamic environment we have.  You are always going to be behind the present, have gaps, be unable to explain why you haven't implemented one control or another, and be behind the regulatory 8-ball.  On the other hand, when you have an architecture, the architecture will support and fit into the risk framework like two combs meshing into each other.  You can let the technology do the heavy lifting for you, and know which controls are and are not necessary for the enforcement and measurement of risk in your environment.


So, a round of applause and high-fives for those generating all that paper "and doing a lot of work," because this method is a HUGE amount of work for little risk mitigation.  It's not working smarter.


 
 
 


©2025 by Joel M. Van Dyk.
