Concluding the GRC Analyst Rant
If you have been following my posts, you will know that I created a firestorm of discussion on: Rethinking GRC, Analyst Rant, Gartner’s 2012 EGRC Magic Quadrant. If you go to this link you will see the range of comments – many anonymous – on the topic.
French Caldwell, who continues to be a gracious and friendly nemesis (it is interesting to be able to call someone a nemesis and friend), posted a response on his blog Oh Michael — Your Rant . . .
October and November got me caught up in a whirlwind of activity and thus I am a month late in responding. But I owe my followers a response. Here it is . . .
My point of view is that Gartner and Forrester have an incorrect view of the GRC market. More effort needs to go into modeling the variety of niches in the GRC market and into treating GRC as an architecture that brings different pieces together. My findings are that 86% of market spending comes from organizations looking for GRC software to solve specific issues or enhance department-level processes. Only 14% of the spending is on what we call Enterprise GRC. Organizations looking for GRC software often turn to the Gartner and Forrester reports to build their shortlists and find, to their discouragement, that these reports do not provide the detail needed to make decisions on GRC software specific to their challenges. Basically, the depth of research Gartner and Forrester provide in GRC is lacking. The industry needs GRC technology research that is both broader and deeper. In fairness, French points out that EGRC is just one aspect of his view of the market. Unfortunately, it is the EGRC MQ that many turn to, because they have nothing else that goes into depth in these various niches.
When it comes to a comparison of the Gartner Magic Quadrant and the Forrester Wave, the Wave beats the Magic Quadrant hands down. The Wave is a more thorough process, and its criteria are deeper and published. Organizations can download a spreadsheet of all the criteria, the weighting of each criterion, and how the vendors were scored against those weightings. Full transparency. But the Forrester GRC Wave does not go into sufficient detail across the domains of GRC technology, and it is not kept current.
French, in his response, told me I overstated the point on transparency of criteria. Sorry, French – I do not see it. Yes, you give some high-level criteria and weightings, but this is not the depth Forrester provides in the Wave. It is so rolled-up and surface-level that it is effectively useless. It does not go into the specific features to look for, or how vendors score on those features, in the areas you bring forth: risk management, compliance management, audit management, policy management, and regulatory change management. Despite some high-level and inconsistent comments in the MQ, the reader gets no idea how vendors rate in each of these GRC technology functional areas. The reader is clueless as to which vendors are stronger in policy management than in risk management – or which vendors have more advanced capabilities in these areas.
In fact, the layout of the EGRC MQ completely boggles my mind. There are nine Leaders, and many of them are not leaders across the areas of risk, audit, compliance, regulatory change, and policy management. Look at the Leaders and it is apparent that Gartner is comparing apples and oranges in capabilities – they were not measured against the same criteria to get into the Leaders Quadrant. Only a handful of the nine have robust capabilities across all of these areas, yet they are tagged as Leaders. My only conclusion is that a Leader in the Gartner MQ is a large, stable technology player in the GRC market with a major brand or market momentum behind it – not a leader based on the functionality of its product. Some in the Leaders Quadrant definitely do stand out as Leaders. However, most would lead only in particular categories of GRC, not across the range of risk, audit, compliance, policy, and regulatory change that French states he is evaluating them by.
French argues that following the two-hour script is justified because vendors have access the rest of the year to argue their points in briefings. I am sorry, but analysts decide when to accept or decline a vendor briefing – and those briefings are often short and to the point. The truth is, some vendors get greater access to Gartner and Forrester because they spend a lot on advisory services. They can show French how great their solutions are and define the agenda by paying $8,000 to $15,000 a day for analyst time (depending on contract). The Leaders in the MQ are the vendors that spend a lot of money with Gartner to bring analysts onsite, where the analysts are a captive audience for the breadth and depth of their features. Many in the Leaders Quadrant do this on a quarterly basis, while smaller vendors get a half-hour or one-hour vendor briefing call once or twice a year because they do not have the budget to engage Gartner or Forrester. The result is analysts who know the larger vendors’ products more intimately. The playing field is not even. I am not accusing French or any analyst of stacking the deck against vendors that do not spend money with them. I am simply stating that the script process is broken. The players that spend a lot on advisory time with you have an unfair advantage, because they have had perhaps a few dozen hours or more over the past year to go off script. To level the playing field, each vendor should have at least four hours of demo time, with some of it able to go off script. That is what I did at Forrester when I wrote the first two GRC Waves. I wanted to know the products intimately and give everyone an equal chance.
Gartner states that they warn companies not to use the MQ alone to build a shortlist of vendors to invite to their RFP party. They can say this all they want – this is how organizations use the MQ. As a result, the Gartner MQ is broken. It does not provide the depth and breadth for organizations to make valid decisions on which vendors best meet their needs. In fact, I feel it misrepresents the vendors – the advantage goes to the larger established vendors that are marked as Leaders, many of which do not have the breadth of functionality across the areas of risk, audit, compliance, policy, and regulatory change that Gartner states it is comparing vendors against. How are they leaders, then? At least Forrester gives you a lengthy spreadsheet that breaks out capabilities in each of these areas and shows how vendors scored at the criterion level itself. Forrester has a more objective and transparent process. The issue with Forrester is currency – they do not publish the GRC Wave frequently enough. The issue with both Gartner and Forrester is that there is not enough detail in specific areas of GRC, such as risk, audit, and policy, to really compare vendors within a GRC technology area, though Forrester provides more detail than Gartner.
The world needs the analyst world re-engineered. Client relationships should be disclosed so the reader can understand conflicts of interest (something that Constellation Research Group is doing). When a vendor is a client spending money with Gartner, it should be easy to determine this. Analyst fees need to come down. Really, $10,000+ a day for analyst time – that is robbery. The research process needs to be more transparent to the reader, particularly in vendor comparisons: what detailed criteria were used, what the documented analyst findings were for each criterion, and how each criterion was weighted and scored for each participating vendor.
The technology world needs to be unshackled from the approach and cost of the major analyst firms.
Thank you, French, for continuing to be an admirable foe and friend. I wish Gartner provided you a better framework to operate in so you could excel further in GRC research. I am sorry that you have to defend broken, non-transparent, and ineffective approaches such as the Gartner MQ.