The TakeAway: In response to the boom in corporate sustainability ratings, SustainAbility releases the second report in its four-phase “Rate the Raters” project.
Last month, when JustMeans and CRD Analytics released their Global 1000 Sustainable Performance Leaders list (the latest in a raft of corporate sustainability ratings), Ethical Corporation founder Toby Webb dismissed it as “yet another opaque rating system” comparing “apples and oranges.” While this derision isn’t universal (clearly, there’s demand for sustainability ratings), a survey released last week suggests the skepticism runs deep.
SustainAbility and Globescan polled over 1,200 sustainability experts, and found they trust these ratings less than nongovernmental organizations (NGOs) and company employees “to accurately judge a company’s sustainability performance.” This week, SustainAbility followed the survey findings with a report on the ratings phenomenon that addressed the debate over the methods and impacts of disclosure—and how to improve it. Their report joins other efforts to review and reform this burgeoning mini-industry.
First, some background: When I began this work in 1983, you could count on one hand the number of credible corporate responsibility ratings schemes—starting with the Sullivan Principles, the most prominent due to the dominance of the South Africa anti-Apartheid movement. The Sullivan Principles served as the prototype for subsequent ratings, including the MacBride Principles for Northern Ireland, the Valdez (now Ceres) Principles, and others.
Nowadays, it’s practically raining ratings, with many sponsored by magazines such as Newsweek (based on research from MSCI and Trucost), Corporate Responsibility (research by IW Financial) and Corporate Knights (research by Inflection Point Capital Management). Add to that the ratings that serve the sustainable investing community – such as the Dow Jones Sustainability Index and FTSE4Good – and you have a veritable deluge. But even the worst corporate offenders – Exhibit A: BP and the Deepwater Horizon rig explosion – can receive high sustainability scores. What gives?
The credibility question animates this week’s SustainAbility report, part two of its four-phase “Rate the Raters” research program launched last May. Its goal: shed light on the universe of external sustainability ratings, while improving their quality and transparency. The project’s second phase (June to mid-September) concentrated on the current universe of sustainability ratings—including the source of research, issue focus, and whether methodology is disclosed. Key findings include:
- Of the 108 ratings examined, only 21 existed in 2000;
- “Universal ratings” spanning multiple issues, industries, and regions remain the norm, even though “it is difficult or impossible … to make meaningful comparisons across industries because issues manifest themselves differently (in terms of level of importance) for each industry”; and
- Most ratings (over 60 percent) rely on information submitted directly to them, “thereby rewarding companies with the greatest capacity to respond to ratings requests rather than those with the best performance.”
Phase one (April to May 2010) of the project covered the evolution of the ratings agenda over the past ten years, and identified key trends and challenges, such as the wobbly focus on the “economics” leg of the triple bottom line (a term coined by John Elkington, SustainAbility’s founder; the other two legs represent environmental and social concerns), and whether – and how – ratings are moving us toward a sustainable future.
Phase three (October to December 2010) conducts a deeper analysis of a select group of ratings schemes, both to understand how they approach the evaluation of sustainability performance, as well as how they address the challenges above. The final phase (December 2010 – January 2011) turns to the future of sustainability ratings, and how best to meet the demands and needs of ratings producers and users.
The SustainAbility project complements the work of the Global Initiative for Sustainability Ratings (GISR), launched last April. Its goal: to design and disseminate a generally accepted sustainability performance ratings framework. A project of Corporation 20/20, GISR intends to use a “transparent, multi-stakeholder process that uses a Web 2.0 platform which complements existing ratings tools”.
In the end, as the SustainAbility survey concludes, the ratings process is essential and continues to add value. However, to assure credibility, ratings organizations need to improve both the quality and transparency of their ratings process, consistent with widely accepted academic standards.
All (as just posted on the Compass blog):
‘Rate the Raters’ is a terrific idea, but it would be even more powerful if it included rating criteria that actually rate raters in terms of how well they actually rate the sustainability performance criteria of the raters they rate…(ok, is it just me, or does the riddle of ‘how much wood could a woodchuck chuck if a woodchuck could chuck wood’ come rushing to mind here?)…Anyway, what’s conspicuously missing here is a rating-of-raters framework that itself has been rated, especially one that includes ‘sustainability context’ as a criterion for determining whether any such rating system is worth its weight in salt. To my knowledge, there are no such rating systems in use today. Hence, no sustainability rating systems — indexes or what have you – actually tell us anything about the true sustainability performance of the companies they rate. Now would be a good time to call attention to that, IMHO. Perhaps ‘Rate the Raters’ would care to do so.
Mark
Hi Mark – our framework for assessing the ratings is coming in phase 3 – and we will publicly disclose the details.
Happy to discuss more.
Michael
Michael,
Thanks for chiming in on Mark’s question. Can you share any initial thoughts publicly, perhaps generalizations about the framework — or do we need to wait for Phase 3?
Thanks,
Bill
Hey Bill –
We’re still finalizing the framework we will use to assess the ratings, but in brief it will have four categories: Governance and Transparency, Inputs, Research / Ratings Process, and Outputs. There are criteria within each of these; within Inputs, for example, we will examine the sources of raters’ information, how they engage companies in the process, etc.
We’ll make this all public. And again, we are not aiming to rate these raters (so we’re not going to come out and say rater X is #1).
Michael
Michael,
Very interesting — thanks so much for the followup information!
It sounds like Phase Three of SustainAbility’s Rate the Raters project, as you describe it, would provide a framework for rating raters, even though SustainAbility itself isn’t planning to do such evaluative assessment. But others could.
This sounds similar to the goal of the Global Initiative for Sustainability Ratings that grew out of the first Corporation 20/20 Summit on the Future of the Corporation. I would be very interested to hear of any communication between SustainAbility and GISR.
Thanks,
Bill
Pingback: Tweets that mention It’s Raining Ratings! | The Murninghan Post -- Topsy.com
Bill, this post is timely.
I spent the better part of last week trying to understand how to link sustainability ratings to biopharm, and now I am going to have to dig into annual reports and not even pay attention to ratings.
What I have decided is to focus my research on health-related enterprises that report on CSR outcomes. So far I can only find one biopharm company in the UN Global Compact that operates out of a core business strategy for CSR. The company was Sanofi-Aventis, and given the way the database is arranged, I am still not sure it is accurate to say that.
This led me to an important question: do I track industry indexing, or do I simply search for reports that show scaled societal outcomes from approaching sustainability as a core business strategy? And do I read only reports that meet the GRI reporting requirements?
So this is my new Argh track of investigation and maybe a productive exercise to see what is contributing to building sustainable health.
Lavinia,
Thanks so much for expressing your own personal experience with sustainability ratings in trying to assess the actual performance of companies (biopharm in this instance). I find it interesting that you abandoned ratings as a credible source of information — in essence affirming the SustainAbility/Globescan survey.
By the same token, don’t you have to apply the same degree of healthy skepticism to the other information sources you’re using — companies’ own sustainability reports, and participation in The Global Compact? What about NGO reports, which scored a higher level of confidence in the SustainAbility/Globescan survey, but still may warrant a skeptical eye?
I hope this doesn’t exacerbate your ARGH! experience!
Best,
Bill
All:
This is mainly in response to what Michael Sadowski said above. Michael, I’m a tad confused. You say, “we are not aiming to rate these raters”. And yet, you call your program “Rate the Raters”. If not for the purpose of rating the raters, then, what is the real purpose of “Rate the Raters”? And why title it as if it’s something it’s not? Furthermore, why NOT rate the raters? Is there some taboo to doing so?
Next you say, “We’re still finalizing the framework we will use to assess the ratings.” OK, what’s the difference between “assess” and “rate”? Is “assess” a softer form of “rate”? If you assess one rating system as inferior to another, can’t we conclude that the ratings produced by the weaker system are less valid or effective than the other?
Last, the four categories you listed seem to more or less reflect or mirror the categories raters already use. You seem to want to take the status quo as the starting point and go from there. Well, what if the status quo is ill-conceived or flawed? We all know what the starting point is; we don’t need a new report to tell us that.
What seems to be missing here is an analytical framework for evaluating rating systems that transcends the criteria raters are already using to perform their ratings, and which holds them to a higher standard, as it were. Indeed, isn’t the more important question one of what the raters and rankers SHOULD be using in terms of rating criteria, and whether or not they, in fact, are using them?
Here, for starters, is a question I would offer: Are any of the raters actually rating sustainability performance, per se, as they claim? And if you think so, what makes you think so? What actually constitutes a rating of sustainability performance, and is anybody really doing it? Personally, I don’t think so.
Regards,
Mark
Hi Mark – not sure if you have read the phase two report (or even the foreword / exec summary) – this explains what we aim to do and why.
We are developing an analytical framework for assessing how raters do actually rate sustainability performance, and rest assured we will hold them to a high standard. We see very few ratings that actually base their evaluation on the underlying SD agenda – which should be the point in this exercise.
Not sure what you mean by “four categories you listed seem to more or less reflect or mirror the categories raters already use.” As you well know, raters have a suite of criteria they use to evaluate companies (e.g. GHG emissions, investment in renewables, existence of Board-level CR committees, etc.), and we are developing a set of criteria to evaluate how well these raters conduct their evaluation (e.g. how do they gather and verify information, how do they consider sector-specific context and issues, how do they manage conflicts such as selling services to rated companies, etc.). This is akin to what SustainAbility did in Values for Money in 2004 – though now we’re looking beyond SRI research shops. Have a look at that report for the sort of aspects we’re examining.
And thanks for your thoughts.
Michael
Bill,
I formulated a question. This does not mean I abandoned ratings. After I sent out my initial email that you got, I got remarks from more than seven other people, including some of the best of our network.
I keep listing out questions and I keep defining my tasks. My next task after reviewing the UN Global Compact is to review some annual reports. What I am creating at this time is a list of what I learn through my various reviews of data.
I came to an interesting conclusion today about health: impacting health is not something you can look at easily in any database. So I am also thinking about health metrics and other things.
The UN Global Compact groups biopharm with biotech; I don’t believe this includes medical device companies. Then there is an issue on the retail side: seeing how companies like CVS and Walgreens directly impact people’s health, and how (and I have to define that as well). I also pulled up a list from CNN Money of the top biopharm companies. There were 21, of which 2 had merged with other companies. Within the UN Global Compact list, most companies join as country-specific units, e.g. Pfizer US, Merck US. Of the 108 companies listed, I think only 6 were Fortune 500.
I want to review some annual reports next: CSR reports and annual reports for Fortune 500 companies not listed in the UN Global Compact, and then I plan to review the Newsweek 100 list and the Global 100 list. Elaine Cohen identified for me two indexes that list medical and health companies.
I apologize if my entry here was not presented in enough detail to define all my research. It is too early in this project for me to draw conclusions, and along the way I will list my questions to test all my assumptions.
Lavinia
Oops, I should have previewed my comment. The initial email I was referring to is an email I sent out asking for help from people. I have also done some preliminary, early-stage writing and would value input from industry experts and CSR practitioners. Anyone who wishes to connect with me with an offer of help, or a point of view worth considering, can contact me @workecology on Twitter.
Great post Marcy. For the record, when Tony “dismissed” the ranking, he knew nothing about the methodology and underlying data. That would be like “dismissing” a candidate for office without knowing their policies, character, or vision. You might want to think about the motives of sources when citing them.
Martin,
Thanks for weighing in.
As CEO of JustMeans, your explanation answering the critiques from Toby (not Tony) Webb would be very interesting to me — and I imagine to other MurnPost readers and the broader sustainability / CSR community.
How does CRD Analytics avoid the charge of opacity? Is all of the data used in the ratings transparent? If so, please point us to where we can find it.
And how about the apples-to-oranges charge? How do you make clear, relevant comparisons between companies in vastly different sectors?
Looking forward to hearing back — great to see you in the dialogue!
Bill
Hi Bill,
All the data will be available to subscribers on Justmeans by the end of the year. You can easily use the dashboard on Justmeans to sort by sector or industry – roll over the blue links on this page for sector and industry and you can sort by any sector or industry:
http://www.justmeans.com/clientlist?type=insight
Hey Martin,
Thanks for providing this information. Clearly, there’s a ton of work that CRD Analytics has done to put together its ratings, and it is a huge challenge to convey that information efficiently and effectively.
Why did JustMeans and CRD decide on a lag time between launching the ratings and making the underlying data available “by the end of the year” — and only to JustMeans subscribers? This is similar to what has happened with the CRO (or whatever they’re called now) list — they announce the ratings with great fanfare, but delay publication of the methodology (and take it a step further by not disclosing the underlying data). It seems to me that transparency should coincide with the unveiling, so that information is available when there is the greatest demand for it — at the moment of exposure.
There’s also a deeper question that Toby Webb of Ethical Corp raises, and that many others (including me) have addressed in the past: the transparency not only of the methodology and the data, but also the selection process. For public-facing ratings, the ideal is to make it like a scientific experiment, where another researcher could use the same methodology and data to come up with the same results. That isn’t the case for any ratings I know of, because of the inherent subjectivity of the selection process.
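To make that reproducibility ideal concrete, here is a minimal, hypothetical sketch (the companies, indicators, and weights below are invented for illustration, not drawn from any actual rating): if a rater publishes its indicators, its weights, and its aggregation rule alongside the underlying data, anyone can recompute and verify the scores and the resulting ranking.

```python
# Hypothetical, fully transparent rating: every input and the aggregation
# rule are published, so a third party can reproduce the scores exactly.
# All names and numbers are illustrative assumptions.

# Published indicator weights (must sum to 1.0).
INDICATORS = {"ghg_intensity": 0.5, "board_oversight": 0.3, "disclosure": 0.2}

def score(company_data: dict) -> float:
    """Weighted sum of published 0-100 indicator values."""
    return sum(company_data[name] * weight for name, weight in INDICATORS.items())

# Published underlying data for each rated company.
companies = {
    "Acme Corp": {"ghg_intensity": 70, "board_oversight": 90, "disclosure": 60},
    "Globex":    {"ghg_intensity": 85, "board_oversight": 40, "disclosure": 95},
}

# Deterministic ranking: same data + same method = same results, every time.
ranking = sorted(companies, key=lambda name: score(companies[name]), reverse=True)
```

The point of the sketch is not the particular weights but the property: because nothing is proprietary or subjective, the “experiment” can be rerun by anyone, which is exactly what opaque selection processes prevent.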
On this tricky question of transparency, there’s an important distinction between market-oriented ratings, and public-facing ratings. The market-oriented ratings — those sold to clients, for example as an investable index — invariably rely on proprietary methodology, because the rating firm needs a sustainable business model to continue issuing the ratings. I totally understand this need. At the same time, these proprietary ratings now have significant sway in the public marketplace of ideas, creating tension between the need for proprietary methods and the need for transparency to explain why certain choices are made.
For public-facing ratings, the gold standard would be an open kimono, total transparency, like open source code in computer programming. I see Toby Heaps at Corporate Knights making strides toward this degree of disclosure with the Global 100 Most Sustainable Companies list, but even this effort falls short of full transparency, as far as I know.
I would love to applaud JustMeans and CRD — and all other public-facing ratings — for opening up the kimono. Until then, there will continue to be demand for Rate the Raters and the Global Initiative on Sustainability Ratings.
Best,
Bill
Bill,
The delay is merely due to only being able to create so much tech in a finite period of time! We wanted to get things out in time for the reconstitution of the Nasdaq index in mid-November as well. Lots more to come!
I think it’s unreasonable to expect meta-data (i.e. not the indicators themselves, but the algorithms tied to the indicators) to be transparent. As a business model, I think it would be very hard to make that work. Google doesn’t give away its search algorithm, but an ecosystem has developed that helps you understand how to improve your search scores.
Martin,
I hear you on the realities of finite resources — such as time and tech. But I would contend that transparency ranks high enough in importance to time a launch when the data is ready to roll. I acknowledge that reasonable people can disagree with me on this opinion — though I imagine many reasonable people see eye-to-eye with me on this.
On the question of full transparency and business models, I hope I made it clear that I totally understand that market-oriented ratings — for example, those underlying an investment index, such as the NASDAQ OMX CRD Global Sustainability 50 Index — understandably rely on proprietary methodologies and data as part of a sustainable business model, as that’s what investors are paying for. (The fact that those ratings also communicate with the broader public is a separate, thorny issue that I won’t take up right now.)
And I hope I was clear that public-facing ratings, those whose primary goal is to communicate with the public (as opposed to serving as the foundation for an investable index), should be as transparent as possible — ideally, fully transparent. Yes, this hits up against the realities of the need to support these indexes with a sustainable business model. But in order for them to serve their primary purpose of communicating the sustainability performance of companies to the broad public, they will be most credible when they also make themselves fully transparent to their audience.
That, or we need to develop a “certify the raters” scheme, such as exists with many other validation regimens (for example, I’m thinking of the recent work by Verité to set up a process to verify the certifiers in the cocoa supply chain).
Until then, how can the public have full confidence in these ratings?
Thanks again for continuing to engage on these important questions — and for doing such a great job with JustMeans to create a platform for so much information and conversation on sustainability issues.
Bill
Pingback: Weekly Fenton “Good Business” Update « Fenton | Progress Accelerated
Pingback: The Sustainability Ratings Industrial Complex: Breaking the Hold | The Murninghan Post
Pingback: Ceres to Build One Sustainability Rating to Rule Them All | จับตาการค้า WTO & FTA