From Raw Data To Informed Decisions: What We Can Learn From The Financial Sector

November 19, 2012

The complexity of the “data” question is so evident it has become a new cliché. Cliché or not, we have to deal with it. After any given round of wrestling with the question, we can often sense ways forward that are simple, powerful, and counterintuitive. That’s a small consolation en route to the next hard part: execution. Sunand Menon, founder of New Media Insight, covers this range, from the complexity itself to the ways we can act on it, in this entry for Markets For Good.

Approximately US$300 billion in philanthropic giving is distributed annually to more than one million nonprofit organizations in the United States alone. Yet there is no clear way to gauge how well these resources are being used: the available information falls short on transparency, access, quality, and utility.

If the right data were collected and the right performance analytics created, they could help pinpoint the highest performers, leading to better decision-making and more efficient allocation of resources, and ultimately greater value for those in need. Sounds good in theory. But how do we do this?

Take the example of the financial services industry. Information companies serving financial services firms have been successful in collecting, analyzing, and disseminating data, analytics, and research that help investors make better investment decisions. Many of the systems and processes that are readily available and taken for granted in financial services information can also be implemented in the social sector.

Companies like Thomson Reuters, Bloomberg, Standard & Poor’s, Morningstar, and Lipper have thrived by collecting data (no matter how opaque or infrequently generated), developing performance criteria that make sense of that data (however objective or subjective), and distributing the results in a manner that allows for better decision-making. They achieved success by providing value across the spectrum of content services: from “Data”, to “Information” (value-added analytics such as classifications, indices, and ratings), to “Knowledge” (human insights, research, and best practices). And they maintained that success by investing in high-quality, scalable operations, and by building brands that signify independence, accuracy, and reliability.

Interestingly, they have all co-existed while developing different types of performance metrics, some more widely accepted than others. Standard & Poor’s and Thomson Reuters advocate different data classification schemas (“GICS” vs. “TRBC”). Lipper and Morningstar use different fund ratings criteria (“Lipper Leaders” vs. “Star Ratings”). There is rarely one universally agreed criterion.

As long as the metrics are simple and generally representative; as long as they are being used and are helpful to the customer; and as long as they are initially endorsed and socialized by a few key players in order to gain traction, they can succeed.

Ah, you say. But what about all the failings of the financial services industry, for example, the mortgage crisis? Why should we take lessons from an industry that played a key role in the economic crisis we are currently in? How can we avoid such disruptions in the social sector?

You are right. The above is likely not sufficient. In my view, there are at least two other very important considerations – transparency and aggregation.

Many failures seem to occur when there is a lack of transparency. Take the example of the recent ruling by the Federal Court of Australia that S&P “deceived” and “misled” 12 local councils that bought triple-A rated constant proportion debt obligations (CPDOs). According to the Financial Times, the court said a “reasonably competent” rating agency could not have given a triple-A rating to the “grotesquely complicated” securities, and that the agency had published information that was either “false” or involved “negligent misrepresentations”. Even in this failure, there are lessons to be learned.

The takeaway for the nonprofit sector would be to create easily understandable, transparent methodologies that facilitate better apples-to-apples comparisons, and therefore more informed decision-making. And of course, to avoid creating a rating entity that is generally paid by the organizations it rates!
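
To make that takeaway concrete, here is a minimal sketch in Python of what a transparent methodology might look like. Everything in it (the metric names, the weights, the sample figures) is a hypothetical illustration rather than any existing rating system; the point is simply that every input and every step is published, so anyone can recompute, and contest, the score.

```python
# A hypothetical, illustrative rating methodology. The metric names,
# weights, and sample data are assumptions for this sketch, not any
# actual agency's rating system.

# Published weights (they sum to 1.0) that anyone using the rating can see.
WEIGHTS = {
    "program_expense_ratio": 0.5,   # share of spending going to programs
    "reporting_completeness": 0.3,  # fraction of required disclosures filed
    "outcome_evidence": 0.2,        # 0-1 score for documented outcomes
}

def transparent_score(metrics):
    """Weighted average of normalized (0-1) metrics.

    Fails loudly on missing or out-of-range inputs rather than silently
    imputing values; opacity is exactly what this methodology avoids.
    """
    score = 0.0
    for name, weight in WEIGHTS.items():
        value = metrics[name]  # a KeyError on missing data is deliberate
        if not 0.0 <= value <= 1.0:
            raise ValueError(f"{name}={value} is outside the range [0, 1]")
        score += weight * value
    return score

# The same published formula applied to two organizations makes them
# directly comparable: an apples-to-apples comparison.
org_a = {"program_expense_ratio": 0.82,
         "reporting_completeness": 1.00,
         "outcome_evidence": 0.60}
org_b = {"program_expense_ratio": 0.74,
         "reporting_completeness": 0.90,
         "outcome_evidence": 0.80}
print(transparent_score(org_a))  # ~0.83
print(transparent_score(org_b))  # ~0.80
```

Because the weights are published up front, any disagreement shifts from “what does this score mean?” to “are these the right weights?”, which is a far healthier debate to have.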

Aggregation also plays an important role in avoiding financial market disruptions, allowing us to gain multiple viewpoints before deciding. Let’s take the example of a mutual fund. Look at its Lipper rating. Look at its Morningstar rating. Read up about it. Speak to people. Compare its performance against a benchmark index. Form a view, and then make a decision. That’s “Information Complementarity” at work. And it generally works – as long as there is sufficient transparency, and there is the ability to review multiple, aggregated viewpoints.
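
As an illustration of that aggregation step, here is a small Python sketch. The rating sources and scales are invented stand-ins (not actual Lipper or Morningstar outputs); it simply rescales several independent ratings of the same fund onto a common scale and reports both the consensus and the disagreement between sources.

```python
from statistics import mean, stdev

# Hypothetical ratings of the same fund from independent sources, each
# on its own scale. Illustrative stand-ins only.
ratings = [
    ("source_a", 4.0, (1.0, 5.0)),   # a 1-5 star-style scale
    ("source_b", 8.0, (1.0, 10.0)),  # a 1-10 scale
    ("source_c", 0.7, (0.0, 1.0)),   # a 0-1 percentile-style score
]

def rescale(value, low, high):
    """Map a rating onto a common 0-1 scale so sources are comparable."""
    return (value - low) / (high - low)

normalized = [rescale(value, low, high) for _, value, (low, high) in ratings]

consensus = mean(normalized)   # the aggregated viewpoint
spread = stdev(normalized)     # how much the sources disagree

print(f"consensus={consensus:.2f}, spread={spread:.2f}")
# A large spread is itself information: a cue to read further and speak
# to people before deciding, not a verdict on its own.
```

That is “Information Complementarity” in miniature: no single source is authoritative, but the combination, plus the visible disagreement, supports a more informed decision.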

So why hasn’t this approach been adopted in the nonprofit world, and what would it take to do so?

Firstly, there seems to be only lukewarm interest and incentive among nonprofits and funders to build such metrics and infrastructure, unlike in industries such as asset management, where a firm’s success depends critically on demonstrating high performance and low costs. This is said to be slowly changing, since many large foundations are now signaling a desire for increased transparency, efficiency, and performance monitoring. That momentum needs to build further.

Secondly, there seems to be an overly strong emphasis on ensuring that as many stakeholders as possible come together and agree on a set of metrics and taxonomies before officially launching a solution. This can result in protracted discussions and produce a “lowest common denominator” set of metrics that may not be optimal. The nonprofit world could instead convene a group of key influencers (e.g. prominent foundations with a history of interest and research in this area, and subject-matter experts with “gravitas”) to design these metrics, test them, gain feedback, tweak them, endorse them, and then create programs to drive adoption.

These are valuable lessons that could help make the social sector more performance-oriented and effective. The solutions do not have to be perfect; they need to be transparent and good enough that the end user can access the “raw data” and transform it into actionable, “informed decisions”.

  • David Bank

    Good piece, Sunand. I agree that the financial markets have a lot to teach us about the use of data and analytics. Witness the emergence of ESG (environmental, social, governance) data as market signals for outperformance or risk-mitigation (i.e. separate from social impact). I’d argue that proactive “impact” investments generate such signals as well and so will get rolled up into sustainability reporting in a variety of ways. Let’s stoke the competition between your old employer, Thomson Reuters, and Bloomberg to drive impact into the datastream. Toward that end, I just posted “Yellow Brick Road to the Impact Economy” on HuffPost (as well as on ImpactIQ.org); http://impactiq.org/yellow-brick-road-to-the-impact-economy/.

  • Eric J. Henderson

    The following exchange is a re-posting of comments unintentionally made inaccessible due to a site update.

    Response to Phil Buchanan, President, Center for Effective Philanthropy

    Phil, thank you for your comment! I agree wholeheartedly that assessing the performance of constituents in the nonprofit world is in many ways more difficult than in the financial services world, particularly given the complexity of the various programs and the vast range of missions of its constituents.

    In fact, it appears you and I also agree that there is no single metric in either (modern-day) world that defines performance outright, since even in financial services, (qualitative) weightings are increasingly applied to areas such as corporate governance, religious compliance (e.g. Shariah), environmental factors, etc.

    The article above refers to an overall process and mindset. In evaluating “performance”, one might aggregate all the differing metrics that various parties have created, triangulate, and then make a decision. We have found that when there are several different viewpoints, based on different data pools and analyses, they either generally point to the same (or similar) conclusions, or they at least give better insight, which can facilitate more informed decision-making. And that’s not really naïve thinking.

    So that brings us back to the central thesis here: (1) gather whatever data you are able to, no matter how irrelevant it may seem at the time, (2) encourage the building of metrics, no matter how different or controversial they may be, (3) socialize these (segment-specific?) metrics, recognizing full well that “one size does not fit all”, and of course (4) test, tweak, evaluate, and if found useful, drive adoption – all by engaging “key influencer” organizations and thought leaders.
    ….
    From Phil Buchanan, President, The Center for Effective Philanthropy
    November 27, 2012
    Thoughtful post but, as is so often the case, the author looks to financial markets as an analog without adequately acknowledging how much more difficult assessment is in the nonprofit sector. There is no common unit of measurement across very different nonprofits (and there never will be a meaningful one) – no analog to stock appreciation or profitability that would allow us to easily compare the impact of the organization that increases graduation rates to the one that protects natural habitats for wildlife.

    Can we usefully compare organizations going after the same or similar goals? Yes, we can, and this is frequently done in a variety of nonprofit fields, but even that can be difficult since there are many contextual factors at play (such as differences in populations served or the efficacy of other actors working on related problems).
    Measurement of impact in the nonprofit sector is goal-dependent in a way that makes it more complicated than in the corporate world. After all, I can, in the end, compare P&G and Google, very different companies, on their profitability.

    Suggesting that the judgment of performance of nonprofits and the judgment of performance of mutual funds have much in common is naïve. I say this as someone who believes that focusing on assessing and improving nonprofit – and foundation – performance is vitally important. But oversimplifying, or reaching for easy analogs where they don’t exist, does not help anyone.

    I have written about the tendency to look, starry-eyed, to market analogies and “business thinking” on the CEP blog.

    http://www.effectivephilanthropy.org/blog/2012/06/business-thinking/
    Phil Buchanan
    President
    The Center for Effective Philanthropy

