Many analysts have written about the scope and magnitude of estimated biases in hedge fund indices: selection, survivorship, and reporting bias.  As it relates to hedge fund indices, selection bias takes three forms: funds likely to outperform their peers are added, strong performers are added retroactively, or the fund sample is not representative of the overall population.  Survivorship bias arises when only good funds remain in the index, and reporting bias arises when funds selectively report their better numbers.  For the reasons outlined below, the principal issue for the HFRI Fund Weighted Index (HFRIFWI) is a combination of survivorship and reporting bias.

As a starting point, funds that elect to report to the HFRIFWI have, on average, outperformed the index by around 7% per annum over the three years preceding their addition.  Not surprisingly, a manager who has outperformed has more incentive to voluntarily report than one with poor performance.

That said, selection bias is not a significant issue in the HFRIFWI for two reasons.  First, those managers don’t do materially better than the index after they’ve joined; they revert to the mean almost immediately.  Second, HFR correctly does not adjust the prior index returns to reflect new additions.  (Unlike HFR, at least one other index provider — Eurekahedge — revises prior index returns as though each new addition had actually been in the index since inception.  This causes prior monthly numbers – e.g., December 2011 – to drift upward over time and effectively renders the data useless.  HFR hasn’t done this for as long as I’ve followed it, and Hedgefund.net told me they stopped doing this in 2005.)
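
To see the drift mechanism concretely, here is a toy illustration in Python with made-up returns: retroactively adding a strong performer to an equal-weighted average changes a prior month’s published number.

```python
# Toy illustration of backfill drift (made-up returns): retroactively
# adding a strong fund changes a prior month's published index return.
dec_2011 = [0.010, -0.020, 0.005]         # original constituents' Dec-2011 returns
original = sum(dec_2011) / len(dec_2011)

dec_2011_backfilled = dec_2011 + [0.040]  # strong new fund backfilled later
revised = sum(dec_2011_backfilled) / len(dec_2011_backfilled)

print(f"Published Dec-2011: {original:.2%} -> later revised to {revised:.2%}")
# Published Dec-2011: -0.17% -> later revised to 0.88%
```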

HFR’s indices are, however, subject to two forms of data bias that are worth discussing.  First, HFR constructs its monthly indices as an equally weighted average of the monthly returns of the underlying constituents.  The HFRIFWI currently has around 2,200 funds that range in size from $19,000 to over $21 billion, with a median fund size of $45.7 million.  The smallest and the largest are given equal weighting in each month’s results.  This becomes a real issue, obviously, if smaller funds materially outperform over time and skew the results upward.  Interestingly, though, at least as it relates to the HFRIFWI, we don’t find that smaller funds do in fact outperform, so we tentatively conclude that this is not a material issue (at this time).
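
For readers who want the mechanics, here is a minimal sketch of the equal-weighted calculation, using hypothetical funds and returns:

```python
# Minimal sketch of an equal-weighted index calculation (hypothetical data).
# Each month's index return is the simple average of constituent returns;
# fund size plays no role, so a $19,000 fund counts as much as a $21B one.
monthly_returns = {
    "tiny fund ($19K AUM)":   0.031,
    "median fund ($46M AUM)": 0.012,
    "giant fund ($21B AUM)":  0.005,
}

index_return = sum(monthly_returns.values()) / len(monthly_returns)
print(f"Equal-weighted index return: {index_return:.2%}")  # 1.60%
```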

For the HFRIFWI, the more relevant bias is a combination of survivorship and reporting biases caused by the fact that HFR permits any fund in the index to wait up to 90 days to report its results for a given month.  Due to reporting delays, HFR publishes a “flash” estimate around seven days after the end of the month – generally based on 20-25% of the index constituents – and periodically revises that month’s return until it’s “locked down” at the end of 90 days.  The practical effect is that managers who performed well in a given month generally report earlier than those who did poorly, and some who did really poorly never report at all.

To put this in perspective, since 2009 the flash number has been revised downward 63% of the time, by an average of 36 bps, which compounds to an annual downward revision of 4.41%.  The downward revisions tend to be much more pronounced during particularly bad months for hedge funds, likely due to a combination of intentional delays by certain managers and difficulty in settling on marks for some positions.  The following chart shows the month-by-month revisions since last summer and highlights how much more pronounced the downward revisions are in down months (August and September 2011 and May 2012, which won’t be finalized until next week).  This should give you pause when you see the first reported “flash” results come across your Bloomberg screen.
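
The annualized figure is simply the average monthly revision compounded over twelve months, as this quick check shows:

```python
# Compounding a 36 bp average monthly downward revision over a year.
monthly_revision = 0.0036
annualized = (1 + monthly_revision) ** 12 - 1
print(f"Annualized downward revision: {annualized:.2%}")  # 4.41%
```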

What this doesn’t show us is the effect of managers who stop reporting altogether.  Take the following example: a manager who is down 20% in August 2011 decides to delay reporting.  If the fund recovers in September and October, he or she will report August before the end of November and remain in the index.  If it doesn’t recover, however, the manager may never report August or any later month, and the index will be somewhat overstated for that period.  Given that 15% of funds stop reporting during the course of any given year, this is a real issue.
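
A toy example with hypothetical returns shows how the published average drifts upward when the worst performer simply never reports:

```python
import statistics

# Hypothetical August returns; "Fund C" is down 20% and never reports.
august_returns = {"Fund A": 0.02, "Fund B": 0.01, "Fund C": -0.20}

true_average = statistics.mean(august_returns.values())
published    = statistics.mean(r for fund, r in august_returns.items()
                               if fund != "Fund C")

print(f"True average:    {true_average:.2%}")  # -5.67%
print(f"Published index: {published:.2%}")     # 1.50%
```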

We’ve found that the best way to estimate this effect is to compare the results of the HFRIFWI with those of the HFRI Fund of Funds index (HFRIFOF), an equally weighted index of over 600 funds of hedge funds.  The reason is simple: funds must report down months on a timely basis to their investors, and we see this flow through the fund of funds results.  We first assume that both the HFRIFWI and HFRIFOF are reasonably representative of the overall industry, then add back fund-of-funds-level fees (which we ballpark at 125 bps on average) to the HFRIFOF.  The difference between the HFRIFWI and the fee-adjusted HFRIFOF should primarily reflect reporting bias.  In the chart below, the differential over the past five and ten years is between 100 bps and 200 bps per annum.
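
In code, the adjustment is straightforward; the annual returns below are purely illustrative, and only the 125 bps fee figure is the ballpark from above:

```python
# Illustrative comparison (made-up index returns; only the 125 bp fee
# assumption comes from the text).  Adding the fund-of-funds fee layer
# back to the HFRIFOF isolates the reporting-bias differential.
hfrifwi_annual = 0.060    # hypothetical HFRIFWI annual return
hfrifof_annual = 0.035    # hypothetical HFRIFOF annual return
fof_fee_level  = 0.0125   # ~125 bps average fund-of-funds fees

fee_adjusted_fof = hfrifof_annual + fof_fee_level
implied_bias     = hfrifwi_annual - fee_adjusted_fof
print(f"Implied reporting bias: {implied_bias:.2%}")  # 1.25% per annum here
```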

The next question is what this tells us about those funds that stop reporting.  The best we can do is infer how much they underperform the rest of the index during the period immediately following their last reported result.  If we apply a 15% attrition rate and assume that this bias accounts for around 150 bps per annum, then the average departing fund will have underperformed the index by around 10%.  Granted, we don’t know whether this underperformance occurred in a single month or over a longer period, but the magnitude is interesting.  It affirms the view that the industry is fairly ruthless about shooting the wounded.
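
The back-of-envelope arithmetic simply spreads the assumed 150 bps of annual bias across the 15% of funds that drop out:

```python
# Back-of-envelope: implied underperformance of funds that stop reporting.
annual_bias    = 0.0150   # ~150 bps per annum attributed to reporting bias
attrition_rate = 0.15     # ~15% of funds stop reporting each year

implied_underperformance = annual_bias / attrition_rate
print(f"Implied underperformance: {implied_underperformance:.0%}")  # 10%
```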
