Forrester GRC Wave = Tsunami of Confusion
I feel that I am in an alternate reality. This cannot possibly be the real world. Are we living in a DC multiverse where there are different GRC technology realities, and I am just confused because I woke up in the wrong world?
Anyone who has followed me for long knows my frustration with Gartner and the Magic Quadrant (see note at bottom on Gartner)[i]. But now Forrester?
I have long praised Forrester for their Wave approach and methodology (full disclosure: I was a VP and ‘top analyst’ at Forrester from 2001 through 2007 and wrote four Waves, including two GRC Waves). Where Gartner is based on secrets and magic (I guess that is truth in advertising), Forrester discloses every criterion, weighting, and score.
I had only one major issue with the previous Forrester GRC Wave, and I talked to the lead analyst of that report about it last June at a conference we both attended. The issue was that Forrester had a criterion requiring every solution evaluated to be doing $30 million in GRC revenue, and at least one solution, LogicManager, was not. The analyst explained to me that they were grandfathered in. I replied that such an exception should be documented and footnoted in the research report; organizations were being left with a false impression that this vendor is much larger than it is. That solution is a Leader in the new GRC Wave, and Forrester has now dropped the revenue criterion to $15 million, which I still think is a stretch. But that was my only issue with the previous Wave.
Now the 2020 Forrester GRC Wave is released, and I feel that I must be in a different reality. It does not make sense.
Before I get into that, I must state how I loathe two-dimensional representations of winners and losers such as the Forrester Wave and Gartner Magic Quadrant. These graphics have deep underlying assumptions and criteria that make some solutions winners and others losers in a single graphic. For every solution in the current GRC Wave, I can think of situations where it is a good fit. A graphic that makes one vendor the winner and the rest losers leads many down the road to project confusion and often failure. In fact, the last GRC Wave I wrote at Forrester, in 2007, had four different Wave graphics because even 13 years ago the market was too complex to represent in one graphic. It is time for these two-dimensional analyst graphics to die, or at least to be tied to very specific use cases based on the size/complexity and industry of an organization.
Looking at the recently released GRC Wave, my first question is who is this Wave for?
It cannot be a representation of solutions that are delivering true integrated GRC, ERM, or ORM in Fortune 500 companies. The only way the graphic and scoring make sense to me is if it is a GRC Wave for the SMB (small to mid-sized business) market. Perhaps this is the ‘undocumented’ focus of the report, as their comment on ServiceNow, one of the Leaders, is that it is “a good fit for midmarket companies.” Ironically, ServiceNow does have large enterprise clients for ITSM, but I am personally not aware of any large organization using it for a full enterprise/operational risk management program in all its complexity.
This leads to the question . . . who are Forrester’s clients? In my experience, Forrester subscribers have tended to be large global organizations, not the SMB market. So is this Wave a good fit for Forrester’s actual subscribers/readers? I do not believe so.
While I have deep respect for the Leaders in the Wave (they all have their strengths and areas of focus), I cannot come up with any client references I know of where they are truly being used for an enterprise/integrated GRC/ERM/ORM implementation in Fortune 500 companies. Yes, many of the Leaders are in Fortune 500 companies for specific use cases (e.g., audit management, internal controls, ITSM, IT risk management), but I am not aware of any large global organization in the Fortune 500 actually using any of the Leaders for a complex enterprise view of risk that aggregates and normalizes risk across the entire organization (e.g., strategic, operational, financial/treasury, compliance/regulatory, EH&S, IT). I could be wrong, but I talk to a lot of organizations and interact on a lot of RFPs every year in my market research. Forrester does not clarify the scope, and since this is a GRC Wave, one can only assume that a broad focus on enterprise and operational risk is a primary use case.
I do applaud Forrester for their focus on user experience, ease of implementation, cost of ownership, and configurability of the solution, as well as artificial intelligence. These are areas I have carefully defined in GRC 4.0 – Agile GRC, along with the artificial intelligence capabilities coming forth in GRC 5.0 – Cognitive GRC. I have personally experienced a next-generation GRC 5.0 Cognitive GRC platform in my interaction with ING on their GRC Orchestrate project in ING Labs.
If I were a Fortune 500 company looking at this Wave, I would ask the following questions:
- What actual client references can a solution provider deliver that are using the solution for a true enterprise view of risk (not an IT-focused view of risk)?
- You want a solution with a proven track record of tackling the complexities of GRC/ERM/ORM in large global organizations.
- How do these solutions do risk normalization and aggregation (which is ‘table stakes’ for a true enterprise view of risk)?
- Many solutions have a very flat view of risk because they were built for smaller organizations or for a specific department like IT security/risk management. They fail in a complex enterprise implementation. One department’s high risk may be another department’s low risk. Large organizations need both a legitimate department-level view of risk and an enterprise view of risk, in a solution that reconciles the two. To compare apples to apples and not apples to oranges, you need advanced risk aggregation and normalization.
- What are the solution’s capabilities for risk analytics and modeling?
- Too many solutions have a very flat heat-map approach to risk, and that is a recipe for disaster. Large organizations need a variety of risk analysis techniques that require advanced analytics and modeling. You should understand the range of risk analytics and modeling capabilities in the solution (e.g., bow-tie risk analysis, Monte Carlo simulation, decision trees, FAIR, and more).
- How does the solution show risk interrelationships or interconnectedness?
- Risk modeling is complex in today’s dynamic business environment. You cannot depend on a solution that simply allows for a cascading risk hierarchy (e.g., a risk register). Risks have relationships across the hierarchy, and any risk may have many-to-many relationships with other risks in the hierarchy.
- How does the solution support a top-down approach to risk management aligned with objectives?
- The official definition of GRC is that GRC is a “capability to reliably achieve objectives while addressing uncertainty and act with integrity.” Any solution in the GRC space needs to show how it can document and manage the reliable achievement of objectives and manage risk in that context, whether these are strategic entity-level objectives or objectives cascading down to the division, department, process, project, and even asset level. Risk management requires context, and it is the strategy and objectives of the organization that provide the context for risk assessment.
- Does the solution have the data and application architecture to scale?
- Large organizations require a data and application architecture that can scale to their complex environments. This means that the solution needs to be able to address varying complex and distributed organizational structures.
- Does the solution support business process modeling?
- The complex risk and compliance challenges of today require that organizations look for solutions that support business process modeling. The operational resiliency requirements coming out of the UK, GDPR/CCPA, and even the changes in SOX compliance over the past few years require that organizations have the capability to model and document business processes in a risk and compliance context.
- How does the solution do quantitative risk modeling?
- There are functional uses for qualitative risk modeling and reporting, but organizations need to be able to quantify risk. Large organizations require objective, defensible financial numbers attached to risk, not subjective ratings.
- Does the solution truly integrate and support an enterprise view of risk?
- This may seem redundant, but it needs to be emphasized. Can the solution actually deliver a true enterprise view of risk, bringing together disparate risk areas such as strategic risks in context with the wide array of operational risks across operations, third parties, environmental, health and safety, quality, conduct, compliance/ethics, IT risk, and more? This may require integration with a range of other risk and business solutions.
- How does the solution bring together both a top-down and bottom-up view of risk?
- Large organizations need an integrated view of risk that aligns with the objectives and strategy of the organization (top-down) as well as the controls and risks down in the bowels of the organization (bottom-up). Too many solutions focus only on the bottom-up view and, to my previous point, often on only one or a few areas.
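To make the normalization and aggregation question above concrete, here is a minimal sketch, not any vendor's implementation; the department names, rating scales, and exposure weights are all hypothetical. Each department rates risk on its own local scale, the scores are mapped onto a common 0–1 scale, and only then are they aggregated into an enterprise score:

```python
# Sketch of risk normalization and aggregation across departments.
# Departments, scales, and weights are hypothetical illustrations.

def normalize(score, local_min, local_max):
    """Map a department's local risk score onto a common 0-1 scale."""
    return (score - local_min) / (local_max - local_min)

# Each department scores risk on its own scale: (score, min, max, weight)
department_risks = {
    "IT Security": (4, 1, 5, 0.3),     # rated on a 1-5 scale
    "Treasury":    (70, 0, 100, 0.5),  # rated on a 0-100 scale
    "EH&S":        (2, 1, 3, 0.2),     # rated on a 1-3 scale
}

# Per-department scores, now comparable apples-to-apples
normalized = {
    dept: normalize(score, lo, hi)
    for dept, (score, lo, hi, _w) in department_risks.items()
}

# Enterprise view: exposure-weighted aggregate on the common scale
enterprise_score = sum(
    normalize(score, lo, hi) * w
    for score, lo, hi, w in department_risks.values()
)

print(normalized)
print(round(enterprise_score, 3))  # → 0.675
```

The point of the sketch is the failure mode it avoids: without the `normalize` step, Treasury's "70" would swamp IT Security's "4" even though, on its own scale, IT Security is the hotter risk.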
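The many-to-many relationship question can also be sketched: instead of a flat register, model risks as a graph (the risk names below are hypothetical) and walk it to see what any one risk can cascade into:

```python
# Sketch of many-to-many risk relationships as an adjacency map,
# rather than a flat register/hierarchy. Risk names are hypothetical.
risk_links = {
    "Supplier failure":  ["Production delay", "Compliance breach"],
    "Production delay":  ["Revenue shortfall"],
    "Cyber incident":    ["Production delay", "Compliance breach",
                          "Reputation damage"],
    "Compliance breach": ["Reputation damage"],
}

def downstream(risk, links, seen=None):
    """All risks that can be triggered, directly or indirectly, by `risk`."""
    seen = set() if seen is None else seen
    for nxt in links.get(risk, []):
        if nxt not in seen:
            seen.add(nxt)
            downstream(nxt, links, seen)
    return seen

# A supplier failure cascades well beyond its own branch of any hierarchy
print(sorted(downstream("Supplier failure", risk_links)))
```

A pure parent-child register cannot express that "Production delay" is downstream of both "Supplier failure" and "Cyber incident"; a graph does so naturally.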
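Finally, on quantification: a minimal Monte Carlo sketch of the kind of analysis referred to above, using only the standard library. The event probability and lognormal loss parameters are invented for illustration, not calibrated figures:

```python
# Sketch of quantifying one risk in financial terms via Monte Carlo
# simulation. p_event, loss_mu, and loss_sigma are made-up examples.
import random
import statistics

random.seed(7)  # fixed seed so the sketch is reproducible

def simulate_annual_loss(p_event=0.3, loss_mu=11.0, loss_sigma=0.8,
                         trials=50_000):
    """One trial = one simulated year: does the loss event occur,
    and if so, how large is the loss (lognormal severity)?"""
    losses = []
    for _ in range(trials):
        if random.random() < p_event:
            losses.append(random.lognormvariate(loss_mu, loss_sigma))
        else:
            losses.append(0.0)
    return losses

losses = simulate_annual_loss()
expected = statistics.mean(losses)                # expected annual loss
var95 = sorted(losses)[int(0.95 * len(losses))]   # 95th-percentile loss

print(f"expected annual loss: ${expected:,.0f}")
print(f"95th-percentile annual loss: ${var95:,.0f}")
```

This is the difference between a heat-map cell labeled "high" and a defensible statement like "we expect roughly $X per year, with a 1-in-20 year exceeding $Y."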
If you apply criteria around these questions, you will get a completely different ranking of solutions than what Forrester delivers, but you will also find that no one solution is perfect and does everything.
Here are some other thoughts, insights, and experiences on the Forrester GRC Wave:
- Inconsistent criticisms. I do not understand how SAI Global gets called out for having separate platforms under the hood when the dominant ‘Leader,’ Galvanize, has the same thing. SAI Global is working hard, like Galvanize, to bring about a consistent architecture from their acquisitions. But Forrester downplays the issue for Galvanize by referring to ‘modules’ not having the same interface, while SAI Global is criticized for separate applications. The ‘modules’ in Galvanize are separate applications, not modules: the ACL and Rsam products that form Galvanize HighBond are currently different code bases with different user experiences. Galvanize is a great solution, but I find the Wave’s evaluation to be inconsistent.
Forrester gives Galvanize a score of 5 on mobile, yet highlights mobile as an area of weakness in its commentary on Galvanize. Others, like MetricStream, which has some of the largest adoptions of enterprise GRC mobility, get a score of 1.
Next, consider risk and control management, a broad category with many sub-criteria. One of the sub-criteria for the highest score requires a dedicated team to maintain content. Both ServiceNow and MetricStream are criticized in their profiles for using UCF for content, yet ServiceNow still receives the highest score in the category while others do not. On the topic of content: bringing in content from authoritative sources is critical for GRC and could be a range of criteria in itself. Large organizations expect integrations with various content sources. A requirement that a GRC vendor maintain its own content team hardly makes sense except for a few narrow use cases in IT risk, where pre-mapped controls from a couple of common frameworks may be sufficient for the mid-market.
- What are the full GRC capabilities? I am a fan of Workiva; it is doing some great things in internal control management, audit management, and policy management. But Forrester states that “one-third of customers use Workiva’s full GRC capabilities.” What are they measuring? If Forrester means internal control management, then I can agree with that. Workiva states they have 3,400 clients. Forrester scored them across risk and control management, document management, policy management, audit management, IT risk management, third-party risk management, and risk scoring. That would mean that over 1,100 companies are using Workiva for all of these capabilities? This simply is not true. Internal control management they have had for years; other modules in their ‘full’ GRC capabilities are newer. There is no way 1,100 companies are using all the use cases Forrester scored Workiva on. Workiva is doing some great things, but Forrester has the breadth of their use cases wrong.
- Where are the greatest risks organizations face? According to the World Economic Forum and Davos, the most significant risks we face are environmental risks (and, with the current virus threat, health and safety risks). Enablon has moved from a strong position in previous Waves to the back of the pack, yet it is the one solution tackling and managing the most significant risks organizations are facing. Other analysts that understand this, like Verdantix, put Enablon in a clear leader position.
Other analyst firms, like Chartis, that understand the range of financial and non-financial operational risk in large organizations, place IBM and MetricStream as leaders in their most recent market quadrant. RSA scores high in IT risk with Chartis. Galvanize, ServiceNow, and LogicManager do not even appear on the Chartis quadrant as relevant, but this could be because Chartis is focusing on the challenges of large organizations and not the SMB market. I feel the Forrester scoring in the Wave may be heavily weighted toward SMB organizations without clearly stating this, or toward use cases predominantly focused on IT risk/security, which lowers the score and positioning of the systems doing broader enterprise/operational risk management.
- Conflict of interest. Another critical issue I have is that this is an official research report, and conflicts of interest should be documented. I am not stating there was any wrongdoing, but any conflict of interest should be footnoted for the reader. Part of any compliance program (as well as research) is managing and documenting conflicts of interest on anything that can influence bias. The fact that the lead analyst spent six years in a senior role at one of the solution providers being evaluated (the one that ends up being the leader of Leaders) should be documented in the report so readers can take this into account. Any research publication from Wall Street financial analysts would require management of conflicts of interest; the same should be true of industry/technology analysts. There is also the matter of hands-on experience: the lead analyst is intimately familiar with the capabilities of the new leader after working there for six years, while other solutions in the Wave get a 90-minute demo?
- That brings us to sandbox and demos. Forrester requested a sandbox environment to go into and experience each solution. This was provided, but vendors in the Wave report anywhere from no logins at all to just a few minutes of activity actually in the solution. Forrester states that they use the sandbox only to validate things, not for scoring. This is a huge issue. Organizations are investing hundreds of thousands, and in some cases millions, on software and much more on implementation, and the analysts recommending the solutions are not even kicking the tires themselves. One constant criticism of Forrester in this process is the level of due diligence and the response to issues raised in this research; eight vendors have complained about this. How can Forrester claim to have insight by reviewing 80 pieces of functionality in a 90-minute demo? They require a data-populated sandbox, yet audit logs show they do not log in or spend only a few minutes looking at the solution. To make it worse, they allow only 300 characters (not words) to explain each piece of functionality/criterion in their spreadsheet answers on capabilities.
[i] At the heart of it is the fact that Gartner does not disclose any of their criteria, is becoming more dependent on recorded videos than live demos, and does not actually get hands-on with the products. My latest issue with Gartner was the smoke and mirrors of IRM, in which the lead IRM analyst stated that GRC technology has failed and now we have IRM technology, when the IRM MQ covered the same exact technology as GRC. What failed? If Gartner had simply come out and stated that they are now calling GRC by the term IRM, I would not have cared. Call it whatever you want: GRC, ERM, ORM, IRM, ABC, XYZ. What matters is what organizations are doing, not what they are calling it. But Gartner had to say GRC tech failed and promote IRM technology, which was the same exact GRC technology as before. Off to battle I went . . .