“What is the most accurate measure of ‘hedge fund industry’ returns?”
This is an extremely important question for any investor seeking to evaluate or benchmark the performance of individual managers. Consequently, it’s understandably frustrating when numerous firms report different numbers based on proprietary indices and datasets. In what follows, we break down this question and propose a solution using a modified version of a widely available hedge fund index.
To begin with, ideally we would define precisely what we mean by “hedge fund,” aggregate the returns data of each fund (and managed account), and construct a reasonable methodology to use this data to estimate returns. We’d need to know about every fund out there that falls into our definition, including funds that are now defunct. We’d want assets under management broken down by fee class and, importantly, information on when each dollar was invested and withdrawn (since high-water marks can result in very different returns after drawdowns). We might have to think about returns differently for different types of investors: a $5 million investor might end up with very different net returns than a $5 billion investor.
Clearly, no one has enough data to do this. What we’re left with instead are various forms of “hedge fund indices” that are put together by firms like Hedge Fund Research, Dow Jones Credit Suisse, Bloomberg and others. Each firm has a distinct, but overlapping, pool of funds; there are some moderate differences in how they manage the data and construct the actual indices. Most index providers publish dozens of indices that cover different sectors, geographic regions and, more recently, investment structures. Not surprisingly, each firm touts its own dataset and construction methodology as the most robust and accurate. We have some views on this, but this comparison falls outside the scope of this memo.
What we lay out below are some of the strengths and weaknesses of the three major categories of hedge fund indices: the traditional, non-investable indices of hedge funds; investable versions based on a subset of those managers; and (also non-investable) indices of funds of hedge funds. We use three well-known indices from HFR for this analysis. What we hope to demonstrate is that the non-investable indices of funds of hedge funds, adjusted for fees, provide the most accurate (or least-biased) representation of industry returns. But first, we outline a few issues with hedge fund indices in general.
THREE COMMON ISSUES
Every hedge fund index and database will have its own set of biases. This is, unfortunately, unavoidable. There are three broad issues that affect each of them.
First, reporting outside a fund’s investor base is voluntary. Managers that do report generally have discretion over which funds to report, which share class to highlight, and in which category or sector they are classified. It’s been argued that there is rigorous screening on these points, but we haven’t seen it. What we have seen are funds that report only lower-fee share classes – presumably to show higher returns – and others that show only high-fee share classes – presumably to leave more room to negotiate fees. Also, managers can choose which databases, if any, they report to, so each index will have somewhat different constituents. Some databases permit funds to delay reporting for up to a few months – in other words, a manager can wait until December to decide whether to report a difficult September. Some large funds refuse to report altogether or change their reporting methods (Paulson stopped reporting intra-month to one of the databases during 2011 after the results kept leaking to the press).
The second issue is the attrition rate. Based on the HFR databases, around 15% of funds cease reporting every year, while another 15% or more take their place. Imagine how you’d think about the S&P 500 if 75 companies dropped out every year. We do know that newly added funds tend to have outperformed the index prior to joining, but they quickly gravitate to the mean going forward (to its credit, HFR does not allow those pre-index returns to distort its indices; at least one other index provider still permits this). In other words, the new entrants look a lot like the other funds, so this doesn’t seem to cause serious distortions in the results. It’s much harder to know what happens to the funds that drop out; we assume that the majority do so due to poor performance, and we’ve been able to make some rough statistical estimates on this point.
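As a rough illustration of how this attrition compounds, the sketch below assumes a constant 15% annual drop-out rate – the approximate figure cited above. The constancy of the rate is a simplifying assumption, not data.

```python
# Hypothetical sketch: how a constant annual attrition rate compounds.
# The 15% figure is the rough rate cited above; treating it as constant
# over time is a simplifying assumption.
def surviving_fraction(annual_attrition: float, years: int) -> float:
    """Fraction of an initial fund cohort still reporting after `years` years."""
    return (1.0 - annual_attrition) ** years

for n in (1, 3, 5):
    share = surviving_fraction(0.15, n)
    print(f"after {n} year(s): {share:.0%} of the cohort still reporting")
```

At 15% per year, only about 44% of an initial cohort would still be reporting after five years, which gives a sense of how quickly an index’s constituent pool turns over.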
A third issue is comparability among the indices. As noted above, not all funds report to each index, so the constituents don’t fully overlap. This becomes more of an issue with the sector-specific indices, where the number of funds is lower: in other words, you’ll see less divergence between two industry-wide indices where 1,500 out of 2,000 funds overlap than between two health-care-specific indices where 30 out of 80 overlap. Construction methodologies also differ: some equally weight results, which over-emphasizes the returns of smaller funds (the last time we ran a screen, the average fund in the HFRI database had $40 million in AUM), while others asset-weight. Screening and selection criteria for specific indices may be quite different as well.
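To make the weighting point concrete, here is a minimal sketch with entirely made-up numbers: one large fund and two small funds. The equal-weighted result is pulled toward the small funds, while the asset-weighted result tracks the large one.

```python
# Made-up data: (AUM in $mm, monthly return) for one large and two small funds.
funds = [
    (5_000, 0.005),  # large fund
    (40, 0.020),     # small fund
    (40, 0.020),     # small fund
]

equal_weighted = sum(r for _, r in funds) / len(funds)
total_aum = sum(a for a, _ in funds)
asset_weighted = sum(a * r for a, r in funds) / total_aum

print(f"equal-weighted:  {equal_weighted:.2%}")  # dominated by the small funds
print(f"asset-weighted:  {asset_weighted:.2%}")  # dominated by the large fund
```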
NON-INVESTABLE, INVESTABLE AND FOF INDICES
There are three broad categories of indices. We discuss each of these below.
– Non-investable indices. These are indices of hedge funds whose managers voluntarily report their results. Think of it like the S&P 500, except the companies tell you what they returned every month. When you read that the “hedge fund industry gained x% last month,” it was most likely one of these. Non-investable indices have been criticized for various forms of data bias: backfill, survivorship, construction and others. Suffice it to say that, from what we’ve seen, the construction and reporting of these indices has improved over time. The non-investable HFRI Fund Weighted Composite includes over 2,200 funds and is equally weighted (a clear construction bias issue). Statistically, though, the one bias that really matters is reporting bias: managers can wait some period of time to report a bad month to see if they recover, so, presumably, some managers will have terrible months and simply never report them. How do we know this is a real issue? For the past two years, the “stragglers” have caused downward revisions of the initially reported (“flash”) monthly numbers by 20 bps on average, or 240 bps per annum; to further prove the point, in bad months (May 2010 and July-September 2011) the average downward revision has been over 75 bps. Given that 15% or so of managers drop out every year, and many of them will have had a lousy month or two along the way, this is a real issue. Because of this bias, we estimate that the index results are overstated by 100-150 bps per annum.
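The annualization above is simple arithmetic (20 bps × 12 months); the sketch below reproduces it and also shows the slightly larger compounded figure. Only the 20 bps input comes from the text.

```python
# The 20 bps average monthly flash-to-final downward revision is the
# figure cited above; everything else here is arithmetic.
monthly_revision_bps = 20

simple_annual = monthly_revision_bps * 12  # the 240 bps figure in the text
compounded_annual = ((1 + monthly_revision_bps / 10_000) ** 12 - 1) * 10_000

print(f"simple:     {simple_annual} bps per annum")
print(f"compounded: {compounded_annual:.0f} bps per annum")
```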
– Investable indices. These indices were designed to mimic traditional benchmarks like the S&P 500: liquid, investable and fee-efficient. The HFRX Global Hedge Fund Index is one of these. You can think of it as a highly, highly diversified fund of funds (it has over 200 managers), but without fund-of-funds-level fees. The problem is that this particular index has performed terribly relative to the rest of the industry. Think of an “S&P 200” index that selects the worst companies out of the S&P 500. The best explanation for the underperformance is that managers who agree to be included in the index (and accept its stringent liquidity and other terms) are inferior and therefore desperate for new assets. Another possibility is that, by definition, these managers are focused on liquid markets where excess returns are scarce.
– Fund of funds indices. Also “non-investable,” these indices appear to have the fewest data biases. The HFRI Fund of Funds Index includes over 600 funds. Fund of funds results, by definition, reflect asset-weighted returns, and the results have to include even those funds that might drop out of the non-investable indices (funds of funds don’t let their underlying managers wait three months to decide whether to report a bad month…). Although the drop-out rate is high – remarkably, 70% of the funds of funds that were reporting five years ago are no longer in the HFR database – we don’t see reporting bias as a significant issue. The principal issue is the second layer of fees. Fortunately, this is relatively easy to estimate – 125 bps on average. Consequently, the most accurate approximation of “hedge fund industry” returns is a non-investable fund of funds index with estimated fees added back in. For simplicity, we call this HFI*.
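The HFI* adjustment described above can be sketched as a simple monthly fee add-back. The 125 bps estimate comes from the text; the sample returns, the function name, and the assumption that the fee is spread evenly across months are ours.

```python
ANNUAL_FEE_BPS = 125  # estimated fund-of-funds-level fees, from the text

def add_back_fees(monthly_returns, annual_fee_bps=ANNUAL_FEE_BPS):
    """Approximate HFI*: add the estimated FoF fee back to each monthly return.

    Assumes the annual fee is spread evenly over 12 months.
    """
    monthly_fee = annual_fee_bps / 10_000 / 12
    return [r + monthly_fee for r in monthly_returns]

fof_returns = [0.010, -0.005, 0.007]  # hypothetical monthly FoF index returns
hfi_star = add_back_fees(fof_returns)
print([f"{r:.4f}" for r in hfi_star])
```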
RESULTS OVER 2001-11
The following chart shows the performance of the HFRX Global (HFRXGL), HFRI Fund Weighted Composite and HFRI Fund of Funds indices over the ten-year period ending in December 2011. We’ve also added the FoF index adjusted for estimated fees (HFI*). There are two key points to take away. First, the HFRXGL has persistently underperformed the HFRI Fund of Funds index by around 200 bps per annum; if you add back fund of funds fees to make them comparable, this figure rises to 300-400 bps per annum (vs. HFI*). Second, note that most of the outperformance of the HFRI Fund Weighted Composite occurs after the crisis. During 2009, funds of funds were dealing with a flood of redemption requests, gating issues and other problems. Consequently, they were effectively delevered during the sharp rebound in the markets later that year. The non-investable indices, on the other hand, did not have similar issues and therefore recovered more.
Here we show the same chart over the past three years. We see the same pattern of persistent underperformance of the investable index, although in fairness this underperformance has been less severe over the past few years.
It is clear that investable indices have materially underperformed over time. The question is, why? One important difference is that the funds that comprise investable indices are likely to be restricted to more liquid investments. Many would argue that illiquid investments should garner a return premium over time (endowment model); arguably, if other hedge funds have more of their assets in illiquid securities, this would explain some of the long-term performance differential. While this argument is compelling, we haven’t found a good way to disentangle illiquidity premia from the adverse selection issue described above. Hence, the jury is still out.
The good news is that some index providers are expanding their offerings into new areas, which will help with comparative analysis in the future. For instance, CSFB introduced a non-investable index specifically designed to track managed accounts and similarly liquid vehicles. We hope that these indices will provide valuable information over time on measures like illiquidity premia, and look forward to sharing the results when they become available.