The GRC Mystery House

Governance, Risk Management, and Compliance – every organization does it.  Opinions vary on what to call it.  Some like the term GRC and some do not.  Some use the term ERM in much the same way I use the term GRC; others may call it something else or not have a name for it at all.

My position is that every organization does GRC.  You will not find an executive in an organization who will tell you they do not govern the organization, do not manage risk, and do not comply with obligations and policies.  The components of GRC are in every organization.  They may be ad hoc, fly-by-the-seat-of-the-pants approaches.  They may be very mature and integrated.  The question is not if you do GRC but how mature your GRC practices are, whether you call it GRC or something else.  GRC, using the only definition in a publicly vetted standard – OCEG’s GRC Capability Model – is “a capability to reliably achieve objectives [governance] while addressing uncertainty [risk management] and acting with integrity [compliance].”

Mature GRC practices involve architecture: a design that integrates and leverages disparate risk and compliance processes, information, and technology.  It is not about a software vendor who provides Enterprise GRC – that may be a component, but it alone does not make GRC mature.  Most organizations have multiple GRC technologies, information stores, documents, and processes.  Sometimes these work together in harmony, producing mature GRC; other times the picture is broken and fragmented, leading to redundancy, inefficiency, and failures.

Most organizations suffer from immature GRC architecture.  They remind me of the Winchester Mystery House in San Jose, California.  This house was built in the 1800s at excessive cost with no overall design or architect.  In fact, it had 38 builders and no blueprint.  In the end it had 160 rooms, 47 fireplaces, 6 kitchens, 10,000 windows, 65 doors that open to a blank wall, 25 skylights in floors rather than ceilings, and 13 abandoned staircases that go up to nothing – or perhaps down to nothing.

This is the reality of immature GRC in many organizations.  The confusion of the Winchester Mystery House is there: 160 different assessment formats; 47 different policy formats; 6 different risk frameworks/taxonomies; 10,000 documents and spreadsheets; 65 risk and compliance management report formats; and 25 different technologies ranging from spreadsheets and custom-built risk software to commercial solutions.  This is a reality for large organizations – one financial services firm I worked with last year on GRC technology strategy mentioned they had thousands of documents and spreadsheets for risk and compliance assessments and various technologies in place.  A hospital chain told me they had over 18,000 highly redundant policies across nearly 30 hospitals, each with its own risk and compliance programs.  An international financial services and insurance firm told me the lines of business were screaming at them because of the number of, and differing formats for, risk and compliance assessments.

To solve this, organizations need to understand the maze of GRC processes, information, and technologies in place and architect an approach that brings greater levels of effectiveness, efficiency, and agility to the business.  Your GRC architecture should align with your enterprise architecture and fit the way the organization operates.

As we look ahead to 2013 – how are you going to make GRC processes more effective, efficient, and agile?

The Titanic: An Analogy of Enterprise Risk

As we close out 2012, let us roll the years back from 2012 to 1912.  One hundred years ago was the disaster of the Titanic.  What can we learn from it today?

I have been told that Captain E.J. Smith stated before the Titanic set sail, “Never in all history have we harnessed such formidable technology. Every scientific advancement known to man has been incorporated into its design. The operational controls are sound and foolproof!”  In fact, the newspapers ran with headlines that stated UNSINKABLE.

What went wrong with the Titanic?  Yes, it hit an iceberg.  What truly went wrong?  The lessons we learn from the Titanic can help us understand and make a case for enterprise risk management today.  I do not claim to be a Titanic historian or expert, but in my limited understanding I have identified the following things that went wrong:

  • Overconfidence.  The strategy and design of the ship led to overconfidence – the first too big to fail.  Not only with the Captain, but with the media/press and everyone else involved.
  • Health and safety.  To my knowledge the Titanic was fully compliant with the health and safety requirements of the day – the fact still remains that there were not enough life preservers and lifeboats for the number of passengers on board.  There was time to get off the boat – but there was no place to go.
  • Design.  I understand that the propeller and rudder were too small for the massive size of the ship, which limited its maneuverability around objects.
  • Quality.  There is speculation that the iron in the rivets was of inferior quality.  The rivets that held the seams of the ship together were weak; when it struck the iceberg, the gash opened and ocean water flooded in.
  • Ignorance.  There were warnings of icebergs in the area that were communicated to the Titanic.  The response from the Titanic was, in effect, SHUT UP, we are tired of hearing about it.
  • Inattention.  It is understood that someone on watch was not paying close attention, which led them straight into disaster.
  • Strategy.  I have read that the Titanic was designed to stay afloat with four compartments flooded.  They were headed toward the iceberg dead on and decided to turn and hit it on the side.  If they had hit it head on, only two compartments would have flooded.  When they hit it on the side, six compartments flooded.

Address any one or two of these bullets and we may never have had the disaster of the Titanic.  Each contributed to the loss and tragedy that history brings us.  Business today is very much like the Titanic.  We manage risks within processes and silos.  In the end we fail to see the interconnectedness and interrelationship of risks across the organization that can lead the organization to disaster.

The Titanic was a complex operation.  Business today is complex but also distributed, and it requires a strong enterprise risk management strategy.  One that sees the big picture but can also get down into the “coal face” of the business.  One that can show the relationships of risk and provide analysis at a strategic view as well as at a specific process or departmental view.  We need to understand the breadth and depth of risk in the context of the strategy and operations of the business.

As you enter 2013 and are finalizing your strategic plans, have you thought about the range of risks to those plans?  How integrated is risk management with strategic planning?  How integrated is risk management with business operations?  Will you be caught by surprise because you failed to see how risk in different parts of the organization can work in concert to bring disaster or failure to meet objectives?  I am anxious to hear your thoughts on risk management.

Improving Policies Through Metrics

Thank you for joining me on this journey through Effective Policy Management. Today we come full circle and bring the effective policy management process to closure.

Let’s review where we have been. The first illustration and roundtable introduced the topic of why policies matter and my Effective Policy Management Lifecycle. Each illustration after that took us through the stages of the lifecycle:

  1. Tracking Change That Impacts Policy
  2. Policy Development and Approval
  3. Policy Communication and Training
  4. Policy Implementation and Enforcement

And now we turn our focus to the final stage: 5—Policy Measurement and Evaluation.

It is unfortunate that many policies are written and then left to slowly rot over time. What was a good policy five years ago may not be the right policy today. Those out-of-date but still existent policies can expose the organization to risk if they are not enforced and complied with.

Effective policy management requires that the policy lifecycle have a regular maintenance schedule. My recommendation is that every policy goes through an annual review process to determine if the policy is still an appropriate policy for the organization. Some organizations rank their policies on different risk levels that tie into periodic review cycles—some annually, others every other year, and others every three years. In my opinion, best practice is for every policy to undergo an annual review.

A system of accountability and workflow facilitates the periodic review process. The policy to be reviewed gets assigned to the policy owner(s) and has a set due date for completion. The decision from this review process will be to retire the policy, keep the policy as it is, or revise the policy to meet the current needs and obligations of the organization.

Policy owners need a thorough understanding of the effectiveness of the policy. This requires that the policy owner have access to metrics on the effectiveness of the policy in the environment. Some of the things that the policy owner will want to look at are listed below (a simple sketch of pulling these metrics together follows the list):

  • Violations. Information from hotline as well as investigation systems to determine how often the policy was violated. The data from these systems indicate why it was violated—lack of awareness, no training, unauthorized exceptions, outright violations.
  • Understanding. Completion of training and awareness programs, policy attestations, and related metrics show policy comprehension. Questions to a helpdesk or compliance department uncover ambiguities in the policy that need to be corrected.
  • Exceptions. Metrics on the number of exceptions that have been granted and the reasons they were granted. Too many exceptions indicate that the policy is inappropriate and unenforceable and needs to be revised.
  • Compliance. At the end of the day the policy needs to be complied with. Any controls that the policy governs and authorizes and the state of those controls is to be reviewed by the policy owner to determine policy effectiveness.
  • Environment. The risk, regulatory, and business environment is in constant change. The policy may have been written to address a state that no longer exists. Changes to the business (e.g., mergers/acquisitions, relationships, strategy), changes to the legal environment (e.g., laws, regulations, enforcement actions), and changes to the external risk environment (e.g., economic, competitive, industry, society, technology) are to be reviewed to determine if the policy needs to change.
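
To make this concrete, below is a minimal sketch in Python of how these metrics might be rolled up into a single review summary for a policy owner.  The field names and thresholds are hypothetical and purely illustrative; they are not drawn from any particular product or standard.

    from dataclasses import dataclass

    @dataclass
    class PolicyMetrics:
        """Hypothetical roll-up of the metrics a policy owner reviews."""
        violations: int             # hotline/investigation cases tied to the policy
        attestation_rate: float     # share of assigned staff who attested (0.0 to 1.0)
        training_completion: float  # share of assigned staff who completed training
        open_exceptions: int        # exceptions currently granted against the policy
        failed_controls: int        # related controls not operating effectively

    def flag_for_revision(m: PolicyMetrics) -> bool:
        """Illustrative thresholds only; each organization would set its own."""
        return (
            m.violations > 10
            or m.attestation_rate < 0.90
            or m.training_completion < 0.90
            or m.open_exceptions > 25
            or m.failed_controls > 0
        )

    # Example: weak attestation and a pile of exceptions flag the policy for review.
    print(flag_for_revision(PolicyMetrics(3, 0.82, 0.95, 40, 0)))  # True

A summary like this does not replace the policy owner’s judgment; it simply puts the violation, exception, training, and control data in one place so the periodic decision to retire, keep, or revise the policy is grounded in evidence.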

When a policy does change it is critical that the organization be able to keep a history of the versions of the policy, when they were effective, and the audit trail of interactions around the policy. The audit trail is used to present evidence of effective policy management and communication, and it includes a defensible history of policy interactions on communications, training, acknowledgments, assessments, and related details needed to show the policy was enforced and operational.
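
As a rough illustration of that record-keeping, here is a minimal sketch in Python of policy versions with an audit trail of interactions.  The structures and field names are my own assumptions for illustration, not any vendor’s data model.

    from dataclasses import dataclass, field
    from datetime import date, datetime
    from typing import Optional

    @dataclass
    class AuditEvent:
        """One interaction with a policy: communication, training, attestation, assessment."""
        timestamp: datetime
        actor: str      # who interacted with the policy
        action: str     # e.g., "communicated", "trained", "attested", "assessed"
        detail: str = ""

    @dataclass
    class PolicyVersion:
        version: str
        effective_from: date
        effective_to: Optional[date] = None              # None means currently effective
        audit_trail: list = field(default_factory=list)  # list of AuditEvent

    @dataclass
    class Policy:
        name: str
        versions: list = field(default_factory=list)     # list of PolicyVersion

        def version_effective_on(self, when: date):
            """Return the version in force on a given date, e.g., when an incident occurred."""
            for v in self.versions:
                if v.effective_from <= when and (v.effective_to is None or when <= v.effective_to):
                    return v
            return None

Keeping every version, its effective dates, and the interactions recorded against it is what allows an organization to show, after the fact, which policy text applied when an incident occurred and what communication and training had taken place by that date.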

What is risk management?

Risk management is maturing, but as it matures it needs to be understood correctly and reminded that it does not rule the roost.

I have three teenage boys (19, 18, and 16).  At times my boys get too big for their britches and need to be reminded what the pecking order is.  It does not mean they are less loved or less valued – they just need to understand context and where they fit.  As with any child becoming an adult, they like to challenge authority: to think that they are in control and operate as the center of the universe.  After all, they know more than Mom and Dad.

My concern with risk management is that many (not all) risk professionals are trying to redefine risk management to make it something broader than it actually is.

There was a great article on risk management published by Harvard Business Review in June 2012, “Managing Risks: A New Framework,” written by strategy guru and balanced scorecard co-creator Robert Kaplan and his colleague Anette Mikes.  The argument is that there are fundamental differences between traditional risk management focused on preventable risks and risk management for strategy and external risks.  What caught my attention was the concluding paragraphs, which stated:

  • “Managing risk is very different from managing strategy. Risk management focuses on the negative—threats and failures rather than opportunities and successes. It runs exactly counter to the “can do” culture most leadership teams try to foster when implementing strategy. . . . Moreover, mitigating risk typically involves dispersing resources and diversifying investments, just the opposite of the intense focus of a successful strategy. . . . For those reasons, most companies need a separate function to handle strategy- and external-risk management. The risk function’s size will vary from company to company, but the group must report directly to the top team. Indeed, nurturing a close relationship with senior leadership will arguably be its most critical task; a company’s ability to weather storms depends very much on how seriously executives take their risk-management function when the sun is shining and no clouds are on the horizon. Risk management is nonintuitive; it runs counter to many individual and organizational biases. . . . Active and cost-effective risk management requires managers to think systematically about the multiple categories of risks they face so that they can institute appropriate processes for each. These processes will neutralize their managerial bias of seeing the world as they would like it to be rather than as it actually is or could possibly become.”

For the record, I completely agree with these statements from Kaplan and Mikes. Risk management is maturing and the organization needs to make a proper place for it.  Just as my sons are looking to the future and going to college – I fully support them and want to see them fulfill what they have been called to do and contribute to society.

There are three lessons that I think risk management needs to learn:

  1. Risk management does not equal strategy management.  I posted an excerpt of the HBR article to several LinkedIn groups to seek perceptions.  The response from some was that “strategic management = risk management.”  This is a mistake. Strategy management is broader than risk management.  Yes, risk management is part of strategy management, but it does not equal strategy management.  My fear is that we are putting the cart before the horse.  To keep it to an equation: “strategy management > risk management,” that is, strategy management is greater than risk management.  The two are not synonyms, though good strategy management will contain risk management.
  2. Risk means there is a downside.  In order to have a risk there has to be potential for a less optimal outcome.  That is where I think ISO 31000 confuses many on the subject of risk and strategy management.  ISO Guide 73 and ISO 31000 define risk as the “effect of uncertainty on objectives.”  A more accurate understanding is that risk is an event or condition that creates a state where undesirable effects may be possible.  Risk management is the act of managing processes and resources to address risk while pursuing reward.  I am all for simple and straightforward definitions, but in this case I think ISO simplifies the definition too far.
  3. Strategic risk management requires different paradigms.  Much of the confusion on risk management is that risk in many organizations was buried in the bowels of the organization.  It was not an executive function.  It has been focused on insurable risks, threats, and hazards.  It was focused on preventable risks.  With growing awareness that we need formalized strategic risk management many have leapt to think that how risk is managed in the depths of the organization is how strategic risk is managed.  They are different – and require different mindsets.

At the end of the day, we need to understand that risk management is maturing.  But risk management from the top-down is not the same as how we have historically understood risk management. How we manage threat and hazard risks is different than how we manage strategic risk.  We have always managed risk as part of strategy – but it is becoming more formalized and needs a real seat at the strategy table.  However, this does not mean that risk rules those gathered at the table.  It is simply part of it.

I am anxious to hear your thoughts on the subject, though before you grill me – I would encourage you to read the HBR article.

Concluding the GRC Analyst Rant

If you have been following my posts, you will know that I created a firestorm of discussion with Rethinking GRC: Analyst Rant, Gartner’s 2012 EGRC Magic Quadrant.  If you go to this link you will see the range of comments – many anonymous – on the topic.

French Caldwell, who continues to be a gracious and friendly nemesis (it is interesting to be able to call someone a nemesis and friend), posted a response on his blog Oh Michael — Your Rant . . . 

October and November got me caught up in a whirlwind of activity and thus I am a month late in responding.  But I owe my followers a response.  Here it is . . .

My point of view is that Gartner and Forrester have an incorrect view of the GRC market.  More effort needs to be put into modeling the variety of niches in the GRC market and focusing on GRC as an architecture that brings different pieces together.  My findings are that 86% of market spending is from organizations looking for GRC software to solve specific issues or enhance department-level processes.  Only 14% of the spending is on what we call Enterprise GRC.  Organizations looking for GRC software often turn to the Gartner and Forrester reports to build their shortlists and find, to their discouragement, that the reports do not provide the detail to make decisions on GRC software specific to their challenges.  Basically, the depth of research provided by Gartner and Forrester in GRC is lacking.  The industry needs GRC technology research that is broader and deeper.  In fairness, French points out that EGRC is just one aspect of his view of the market.  Unfortunately, it is the EGRC MQ that many turn to because they have nothing else that goes into depth in these various niches.

When it comes to a comparison of the Gartner Magic Quadrant and the Forrester Wave – the Wave beats the Magic Quadrant hands down.  The Wave is a more thorough process and the criteria are deeper and published.  Organizations can download a spreadsheet of all of the criteria, the weighting of each criterion, and how the vendors were scored against that weighting.  Full transparency.  But the Forrester GRC Wave does not go into sufficient detail in the domains of GRC technology, and it is not kept current.
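
To illustrate what that kind of transparency makes possible, here is a small sketch of weighted criteria scoring.  The criteria, weights, and scores below are invented for illustration; this shows the general weighted-sum idea behind a published scorecard, not Forrester’s actual model or data.

    # Illustrative only: invented criteria, weights, and vendor scores.
    criteria_weights = {
        "risk management": 0.30,
        "compliance management": 0.25,
        "audit management": 0.20,
        "policy management": 0.15,
        "regulatory change": 0.10,
    }

    vendor_scores = {  # each criterion scored 0 to 5
        "Vendor A": {"risk management": 4, "compliance management": 5, "audit management": 2,
                     "policy management": 3, "regulatory change": 4},
        "Vendor B": {"risk management": 3, "compliance management": 3, "audit management": 5,
                     "policy management": 4, "regulatory change": 2},
    }

    def weighted_total(scores):
        """Weighted sum of per-criterion scores."""
        return sum(criteria_weights[c] * s for c, s in scores.items())

    for vendor, scores in vendor_scores.items():
        print(vendor, round(weighted_total(scores), 2))

When the per-criterion scores and weights are published, a buyer can re-weight the model to match their own priorities (say, heavy on audit and light on regulatory change) instead of relying on a single dot on a graphic.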

French, in his response, told me I overstated the point on transparency of criteria.  Sorry French – I do not see it.  Yes, you give some high-level criteria and weightings – but this is not at the depth Forrester provides in the Wave.  It is so rolled-up and surface level that it is really useless.  It does not go into specific features to look for and how vendors are scored on those features for the areas you bring forth, such as risk management, compliance management, audit management, policy management, and regulatory change management.  Despite some high-level and inconsistent comments in the MQ, the reader gets no idea how vendors rate in each of these GRC technology functional areas.  The reader is clueless as to which vendors are better in policy management than in risk management – or which vendors have more advanced capabilities in these areas.

In fact, the layout of the EGRC MQ completely boggles my mind.  There are nine leaders, and many of those leaders are not leaders across the areas of risk, audit, compliance, regulatory change, and policy management.  As you look at the leaders it is apparent that Gartner is comparing apples and oranges in capabilities – they are not comparing them by the same criteria to get into the Leaders Quadrant.  Only a handful of the nine have robust capabilities across all of these areas – yet they are tagged as leaders.  My only conclusion is that a Leader in the Gartner MQ is a large, stable technology player in the GRC market with a major brand or market momentum behind them – not a leader based on the functionality of their product.  There are some in the Leaders Quadrant that definitely do stand out as Leaders.  However, most would lead only in particular categories of GRC and not across the range of risk, audit, compliance, policy, and regulatory change that French states he is evaluating them by.

French argues that following the two-hour script is justified because vendors have access the rest of the year to argue their points in briefings.  I am sorry, but analysts can determine when to accept or decline a vendor briefing – and those are often short and to the point.  The truth is, some vendors get greater access to Gartner and Forrester because they spend a lot on advisory services.  They can show French how great their solutions are and define the agenda by paying $8,000 to $15,000 a day for analyst time (depending on contract).  The Leaders in the MQ are those that spend a lot of money with Gartner to bring analysts onsite, where they are a captive audience for the breadth and depth of the vendor’s features.  Many in the Leaders Quadrant do this on a quarterly basis.  Meanwhile, smaller vendors get a half-hour or one-hour vendor briefing call once or twice a year because they do not have the budget to engage Gartner or Forrester.  The result is analysts who know the larger vendors’ products more intimately.  The playing field is not even.  I am not accusing French or any analyst of stacking the deck against vendors that do not spend money with them.  I am simply stating that your script process is broken.  The players that spend a lot on advisory time with you have an unfair advantage because they have perhaps a few dozen hours or more of time working with you over the past year to go off script.  To level the playing field, each vendor should have at least four hours of demo time, with some of it able to go off script.  That is what I did at Forrester when I wrote the first two GRC Waves.  I wanted to know the products intimately and give everyone an equal chance.

Gartner states they warn companies not to use the MQ alone to build a short-list of vendors to invite to your RFP party.  They can say this all they want – this is how organizations use the MQ.  As a result the Gartner MQ is broken.  It does not provide the depth and breadth for organizations to make valid decisions on which vendors best meet their needs.  In fact, I feel it misrepresents the vendors – the advantage is given to the larger established vendors that are marked as leaders, many of which do not have the breadth of functionality covering the areas of risk, audit, compliance, policy, and regulatory change that Gartner states they are comparing vendors against.  How are they a leader then?  At least Forrester gives you a lengthy spreadsheet that breaks out capabilities in each of these areas and how vendors scored at the criteria level itself.  Forrester has a more objective and transparent process.  The issue with Forrester is that it is not current – they do not publish the GRC Wave frequently enough.  The issue with both Gartner and Forrester is that there is not enough detail in specific areas of GRC such as risk, audit, and policy to really compare vendors in detail within a GRC technology area, though Forrester provides more detail than Gartner.

The world needs to have the analyst world re-engineered.  Client relationships should be noted so that the reader can understand conflicts of interest (something that Constellation Research Group is doing).  When a vendor is a client spending money with Gartner it should be easy to determine this.  Analyst fees need to come down.  Really, $10,000+ a day for analyst time – that is robbery.  The research process needs to be more transparent to the reader – particularly in vendor comparisons: what detailed criteria were used, what the documented analyst findings were for each criterion, and how each criterion was weighted and scored for each participating vendor.

The technology world needs to be unshackled from the approach and cost of the major analyst firms.

Thank you French for continuing to be an admirable foe and friend.  I wish Gartner provided you a better framework to operate in so you could excel further in GRC research.  I am sorry that you have to defend broken, non-transparent, and ineffective approaches such as the Gartner MQ.

Accepting Nominations for the 2013 GRC Technology Innovation Awards

ANNOUNCEMENT: GRC 20/20 is accepting nominations for the 2013 GRC Technology Innovation Awards.

To nominate a technology solution – please download the form.

The GRC Technology Innovation Awards are to recognize technologies that are revolutionizing Governance, Risk Management, and Compliance (GRC).  Please understand what it is NOT:

  • The purpose of these awards is NOT to recognize how one product has a better feature or feature set than another.
  • It is NOT to recognize competitive differentiators.
  • It is NOT like a Forrester Wave or Gartner Magic Quadrant.

The awards are given to vendors that show something truly unique, game changing, and revolutionary to the GRC space or some aspect of it.  Just another “me too” or “we are better than the rest” type of submission will not cut it and will quickly go to the digital trash bin.  It has to be truly game changing, fresh, new, innovative, and exciting.  It is exactly what it states it is: a TECHNOLOGY INNOVATION award.  We want to award vendors that are thinking outside of the box to boldly take GRC where no vendor has taken GRC technology before.

Please submit nominations by December 17, 2012.  Nomination forms will be reviewed.  If follow-up information is needed – Corporate Integrity will contact you.  Awards will be announced to vendors in January so that coordinated announcements/press releases can go out in late January.  Multiple vendors can receive this award – the only qualification is that you have to convince me (Michael Rasmussen) that it is game changing and innovative.

Please attach to your email submission of this form any relevant screen shots, technical details, product briefs, or even video clips/simulations.

Effective Policy Enforcement Involves Technology

I find that ineffective and unenforced policies are rampant within organizations, and are a thorn in the side of compliance and policy managers.
 
Mismanagement of policy has grown exponentially with the proliferation of documents, collaboration software, file shares, and Websites. Organizations end up with policies scattered across dozens of sites with no defined understanding of what policies exist and how they are enforced. An ad hoc approach to policy management allows anyone to create a document and call it a policy—exposing the organization to unnecessary liability. Policies end up poorly written, out of sync, and out of date; exceptions are not documented; and the organization has no evidence of whether the policy is enforced.
 
Document-centric approaches to policies—ones that lack technology to manage communication and enforcement—are a recipe for disaster. While it appears easy and cheap to just use documents and send them out via e-mail, or post them on a file share or Website, the reality is that ineffective policy management exposes the organization to significant cost.
 
The following is a checklist you can use to understand if your policy management system enables effective policy implementation and enforcement across the policy lifecycle:
  • Provide a consistent policy management framework for the entire enterprise.
  • Manage the policy lifecycle of creation, communication, implementation, monitoring, maintenance, revision, and archiving.
  • Deliver a system to document, approve, monitor, and review exceptions to policies.
  • Provide a consistent format for policy assessments and surveys to gauge compliance and understanding.
  • Integrate eLearning, training quizzes, and attestation.
  • Provide easy access to policies in the right language and format for the audience.
  • Gather and track comments to policies.
  • Map policies to obligations, risks, controls, and investigations so there is a holistic view of policies and metrics.
  • Provide a robust system of records to track who accessed a policy as well as dates of attestation, training, and read-and-understood acknowledgments.
  • Provide a user-friendly portal for policies with workflow, content management, and integration to other systems.
  • Provide a calendar view to see policies being communicated to various areas of the business, and ensure policy communications do not burden employees with too many tasks in any given time period.
  • Provide links to hotlines for reporting policy violations.
  • Publish access to additional resources such as helplines, FAQs, and forms.
  • Enable cross-referencing and linking of related and supporting policies and procedures so users can quickly navigate to what is needed.
  • Create categories of metadata to store within policies, and display documents by category so policies are easily catalogued and accessed.
  • Restrict access to policy documents so readers cannot change them, and sensitive documents are not accessible to those who do not need them.
  • Keep a record of the versions and interactions of each policy so the organization can refer to them when there is an incident or issue to defend the organization or provide evidence for.
  • Maintain accountable workflows to allow certain people to approve policy documents, and move tasks to others with full audit trails.
  • Deliver comprehensive metrics and reporting on the status, implementation, understanding, and enforcement of policies.

Although you may be able to implement a few of these features using a build-your-own or document-centric approach, the cost in training, maintenance, and management time, let alone the legal ramifications due to lack of audit trails, makes it a risky venture for policy management.

I look forward to hearing your thoughts on the role of technology in policy management . . .

Policy Communication in a YouTube Generation

So you wrote a policy—now what? Policies are only effective if you can show that they have been communicated and understood. Having a written policy that nobody knows about is just like having no policy at all. You cannot hold people accountable to a policy until you have made them aware of the policy. Unfortunately, many organizations have scattered approaches to publish and communicate policies.

I am on a mission to refocus organizations on how they approach policy management and communication. Not only are businesses failing in consistent and effective policy development and management, they are also behind the times in how they can communicate policies.

The written policy will always be critical as it explicitly defines, in writing, what is allowed and disallowed. The difficulty is that the written policy document, while necessary, is no longer good enough. We work and live in a YouTube world. Video and interactive content has become critical to every function of the world around us. Much to my disappointment, people do not read as much as they used to. This is complicated by the fact that organizations have employees with varying learning levels and abilities. One of my own sons has struggled with dyslexia throughout his childhood; he is a hard worker but struggles to read.

Question to ponder: How do we ‘effectively’ communicate policies in a world where video and interactive content has become the preference of individuals? In other words, how do we communicate policies to a generation of workers that has been raised on YouTube and interactive content?

We have to make sure policies are communicated and understood. This requires that certain policies have training and interactive learning to ensure individuals understand them. Surveys and testing are an integral part of training to validate that policies are understood. Other mechanisms for communication involve comedy, e-mail reminders, mention at company meetings, policy-related learning activities, and other media. Policies do not have to be boring written documents—they can be written actively and use interactive learning to engage the audience. Even a written document itself can be engaging to read. Proof point: search for Google’s Code of Conduct; it is well written and engaging. Combine this with interactive learning to deliver the message and you have a powerful mechanism to guide behavior in the organization.

Effective policy communication requires that the organization have the ability to communicate and train individuals on policies in a way that is easy to use and accessible. This includes capabilities where:

  1. Any employee (across geographies and abilities) is able to log into a centralized policy system and find all of the policies that relate to their role in the organization.
  2. Policies are written clearly in a consistent template and style that reflects the culture and tone of the organization and in a way that the average reader can understand (use active voice, remove cluttered language, aim for an 8th grade reading level; a rough way to check reading level is sketched after this list).
  3. Tasks for training or acceptance of a policy are clearly communicated, and it is apparent how to ask for clarification if the individual has questions.
  4. Critical policies are to have a video or interactive component in which the policy is explained to the individual. The goal is to leverage interactive content to engage the employee on how to comply with the policy.
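
On the reading-level point in item 2 above, a rough readability check can even be automated.  The sketch below computes an approximate Flesch-Kincaid grade level in Python; the syllable counter is a crude heuristic I am using only for illustration, and dedicated readability tools do this more carefully.

    import re

    def count_syllables(word: str) -> int:
        """Very rough heuristic: count groups of consecutive vowels."""
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    def flesch_kincaid_grade(text: str) -> float:
        """Approximate U.S. school grade level required to read the text."""
        sentences = max(1, len(re.findall(r"[.!?]+", text)))
        words = re.findall(r"[A-Za-z']+", text)
        syllables = sum(count_syllables(w) for w in words)
        return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59

    policy_text = (
        "Employees must report any suspected violation of this policy to their "
        "manager or the compliance helpline. Reports are treated confidentially."
    )
    # Prints the approximate grade level; the goal above is roughly 8th grade or below.
    print(round(flesch_kincaid_grade(policy_text), 1))

A check like this is only a screening aid; an editor still has to judge clarity, but it quickly flags policy language that has drifted far above the target reading level.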

A closing comment: Effective policy communication is a critical component of a strong compliance program. In the Morgan Stanley bribery incident, the U.S. Department of Justice stated that Morgan Stanley had a strong compliance program and was not pursuing further action against the company itself. Part of what Morgan Stanley was able to demonstrate was how often policies and training were completed by employees.

My point is simple—we need the written document, but we also need to make sure people understand it. Let’s not make this a burden for employees. Write clear policies that are accessible and easy to read, and provide the relevant training and interaction to make sure they are understood.

Download the latest GRC Policy Illustration and Roundtable on this topic.

There is an upcoming webinar on this topic this week on October 25th:

This fourth installment in the Policy Management webinar series addresses best practices for distributing policies and determining when and how to provide training.  We often think that once a policy has been formally issued the job is done, but that is far from the truth.  Properly communicating about the availability of the policy is only the start.  Join our panel of experts for a roundtable discussion of the challenges, best practices, and benefits of a well thought out policy communication plan.

 

Rethinking GRC: Analyst Rant, Gartner's 2012 EGRC Magic Quadrant

Yes, the latest Gartner EGRC Magic Quadrant is out and I am left questioning what value it provides.  My first impression is that it is best left for the compost pile, to be used as fertilizer for the garden next spring, and not used by organizations that may rely on it to make misinformed GRC technology decisions.

NOTE: this rant is not a reflection on individual vendors in the EGRC Magic Quadrant.  Though I have issues with how some vendors are represented and placed (good night, one in the leaders quadrant almost never comes up in RFPs), my rant is because of Gartner’s flawed understanding of the market and broken process for doing Magic Quadrants.  If you want my analysis of individual vendors then send me an email or give me a call.

For historical purposes, I first defined and modeled the GRC (governance, risk management, and compliance) market back in February 2002 while at Giga Information Group, soon to be acquired by Forrester Research, Inc.  I published the first two Forrester Waves on GRC.  What is important to note is that the 2nd Wave had four different Wave graphics, as the market was too complex to represent in a single graphic and compare vendors with integrity.  Some solutions were stronger in audit, others stronger in risk, while others were stronger in compliance.  The market has only grown more distributed and complex.  In fairness to Gartner, they recognize this and reference doing a MarketScope next year instead of a Magic Quadrant.

My single greatest issue with the 2012 Gartner EGRC Magic Quadrant is that the Magic Quadrant is very much as it states – MAGIC.  There is no transparency or clarity on how vendors are scored.  It is as if Gartner has a giant Magic Quadrant dartboard and hurls a vendor dart at it to see where it lands – yes, there is some aim involved, but it is not really precise or objective.

The current Magic Quadrant is a mile wide and an inch deep.   I am left asking the question – what practical purpose does it serve?  Right now the graphic itself is misleading.  Those in the upper right quadrant – the leaders quadrant – are often short-listed to RFPs/RFIs but others get very little to no attention even though some have outstanding capabilities and can compete feature for feature with the Leaders.  Then there are those that are not even in the Magic Quadrant that have excellent capabilities, but perhaps they do not have the right revenue or are only operating in a single geography.

The truth is, the MQ does not really help you identify and select GRC vendors that are the right fit for your business.

  • If your need is audit – how do you get a detailed comparison of the audit management features of workpaper management, calendaring, audit planning/scheduling, offline audit capabilities?
  • If your need is compliance – how do you get an understanding of which vendors have the best content, can manage policies and investigations, track regulations, and conduct assessments?
  • If your need is risk management – which vendors support your risk analytics needs?  Some just do heat maps; others do scenario modeling, bow-tie analysis, and Monte Carlo simulations.  Are the risk management features built for risk management at a department level, or can they scale because they have risk normalization and aggregation capabilities?
  • If you need policy management – which vendors support versioning of policies and content management?  Which have integrated learning management systems to deliver courses, and which make you work with external systems?
  • If you need regulatory change management – which vendors integrate with content providers for regulatory content?  Do they truly integrate or do they just take in RSS feeds?  What content do they have in the system itself?  How can this content be effectively mapped to policies and other items in the GRC system?  Is this mapping at a document level or can you map statements or paragraphs across documents?

Even basic information such as deployment models – on-premise, hosted, software as a service – is not transparent in the MQ, at least not consistently.  There are gems of insight that can be gathered from the summaries of the vendors, but what you learn about one vendor you have no way to objectively compare to another, as it is not discussed or measured for the other vendor.

If your need is compliance management (or specific issues of compliance like anti-bribery and corruption), I can tell you how one of the vendors in the challengers quadrant can run circles around nearly everyone in the leaders quadrant – though if you wanted to do offline audits, this vendor should not be in your RFP.  If you want deep functionality in risk management, I can tell you how that same vendor will not perform, while others in the visionaries quadrant excel at risk management and in many cases do it better than those in the leaders quadrant.

I had one major financial services firm tell me that they never want to see a heat map again, as their GRC vendor in the leaders quadrant could not aggregate and normalize risk data properly – it was built as a departmental risk solution and was flawed (in the release they were using) at proper risk normalization and aggregation.

Friends, the Gartner EGRC Magic Quadrant does not give you the objective detail you need to make informed decisions on the vendors to engage in an RFP/RFI, let alone acquire.  It gives you little quips, but not the detail to save you time and money on an RFP/RFI.  In fact, several times this year I have been engaged by organizations after they went through the RFP process using vendors that performed well in last year’s EGRC MQ, only to realize after spending a lot of time and effort that the vendors they looked at were too expensive, did not serve their industry, or did not have the capabilities they needed.

If Gartner made their criteria and grading scale public, then users could dig into the details and see how vendors scored on individual criteria.  If a vendor is not on the MQ, the same criteria could be used to evaluate other vendors objectively.  Forrester discloses their criteria.  You can download an entire spreadsheet of everything Forrester evaluated, how each vendor scored on each item, and what the scale was to score the vendor.  Gartner has never provided anything like this.  So we are left with a lot of subjectivity instead of objectivity.  The issue is that any organization’s understanding of and need for a GRC solution varies from others.  What Gartner has produced is absolutely useless in helping an organization select a vendor for an RFP, as these solutions vary greatly in depth and breadth and there are major areas of functionality that are not revealed objectively in the MQ.

Gartner has a script and gives a vendor a short time period to demo their GRC product to Gartner.  They do not allow you to go off script – I have heard this from multiple vendors frustrated with the process.  A vendor may have an absolutely amazing differentiator but if it is off script you have to kick and scream to get even passing attention.  In other words, Gartner has their rigid view of the GRC capabilities of EGRC vendors and if you approach it differently then you are outside their myopic vision.

I also take issue with how Gartner defines and presents the GRC market.  While they give lip service to a lot of areas of GRC throughout the document, they assume that an EGRC platform is comprised of only the four categories of risk management, audit management, compliance and policy management, and regulatory change management.  I see a much broader definition of the GRC market and define it across 29 categories: with 9 categories being components of enterprise GRC that span across the business and 21 categories being role/function-specific GRC areas.  GRC is a broad market – a macro market – comprised of many micro markets.  EGRC puts several of these micro market segments together into an integrated technology and information architecture platform.  There is not a single vendor that can bring all the components of GRC to your organization.

Gartner states that there are many businesses implementing a single EGRC platform.  My market research tells me that in 80% of GRC buying activity the buying organization is trying to solve specific problems.  Less than 20% have an EGRC strategy, and even those have multiple vendors.  I would state that less than 5% are truly trying to consolidate on one platform.  In fact, one large retailer I spoke to a month back stated they have four GRC platforms (in this case Archer, SAP, SAS, and Enviance).  A defense contractor at the same event stated they had all those platforms plus two more (Thomson Reuters and MetricStream).  A financial services firm I have worked with has four different GRC vendors in their environment (Archer, SAI Global, Mitratech, and Wolters Kluwer).

What it means (a term Forrester uses in their research reports): if you are looking for an objective understanding of how vendors stack up to each other, the Forrester Wave process is much better than the Gartner MQ (though Forrester does not consistently update the GRC Wave, so organizations are often left with out-of-date comparisons).  The MQ is fit for the compost pile.  However, what is really needed is objective comparisons that go deeper than either the Forrester Wave or the Gartner MQ.  If you need audit functionality – here is how the vendors stack up on audit features (objective and open, not hidden).  If you need compliance – here is a detailed comparison of how the vendors compare on compliance features.  If you want to know which vendors support which type of risk modeling – here is a comparison.  That is the vision I am aiming for: objective, open, and straightforward comparisons of feature areas of GRC so organizations do not waste time and money on the vendors they look at.  If you have core requirements that are essential, you should be able to mark those requirements and find which vendors support those features.

 

Accountability and Consistency in Policy Development

In my experience, policy management processes are in disarray when operating autonomously, introducing risk in today’s complex, dynamic, and distributed business environment. The typical organization lacks a structured means of policy development and governance with an inconsistent maze of templates and processes. Inconsistency in policy management means processes, partners, employees, and systems that behave like leaves blowing in the wind. Organizations struggle with policies that are out-of-date, ineffective, and not aligned to business needs. Policy inconsistency opens the doors of liability, as an organization may be held accountable for policy that is not appropriate or complied with.

Organizations require a consistent governance process to develop and maintain policies and procedures. Policies articulate culture; they establish a duty of care, define expectations for behavior, and establish how the organization is going to comply with obligations. Accountability in policy governance is made possible by three policy governance functions:

  1. Policy Lifecycle Management. Policy Lifecycle Management is the process of managing and maintaining policies throughout their effective use within the organization. Implementation of Policy Lifecycle Management requires process and technology that is rich in content, workflow, process, and task management with a robust audit trail.
  2. Policy Management Committee. The Policy Management Committee provides oversight and guidance of policies to ensure policy collaboration across the enterprise and provides the structure and connective tissue to coordinate and drive consistency. It is composed of team members who represent the best interests and expertise of the different parts of the organization.
  3. Policy Manager. An individual should be assigned to the role of Policy Manager to assure accountability across the policy lifecycle to the standards, style, and process defined by the Policy Management Committee.

Critical to the success of policy governance is a “policy on writing policies” supported by a policy style guide and templates. Organizations are not positioned to drive desired behaviors or enforce accountability if policies are not consistent. Policy writing that is wordy and confusing is damaging to the corporate image and costs time and money. Every organization should have a structure in place to provide for clear and consistent policies. A significant shortcoming in policy management is the failure to define a policy style guide. A style guide for policies defines standardized:

  • Taxonomy. Policies are to have a logical relationship to each other following a hierarchical categorization taxonomy.
  • Format. Policies are to have a consistent look and feel. Anyone should be able to see a policy and recognize that it is a corporate policy by the consistent format.
  • Structure. Related to format, policies are to have a consistent structured arrangement of the headings/sections.
  • Language. Policies are to have consistent language. Good policies are written in the active voice and easy to read.
  • Definitions. Terms used in policies are to be used consistently across the organization with a common understanding of what they mean.
  • Process. The style guide should outline roles and responsibilities for writing, editing, and approving policies.

Policy lifecycle management that addresses accountability brings integrity and value to policy management. It provides accountability to policy management processes that are often scattered across the organization. It enables policy management to work in harmony across organization functions, delivering efficiency, effectiveness, and agility. Well-governed and well-written policies aid in improving performance, producing predictable outcomes, mitigating compliance risk, and avoiding incidents and loss.

I look forward to hearing your thoughts on the policy development and approval process . . .

This post is part of a broader roundtable and GRC Policy Illustration that was published by Compliance Week and hosted by OCEG.  The full piece can be accessed at:  Policy Development and Approval

There is also a webinar on this topic and illustration on October 4, 2012.