Don’t Accept Smart Beta At Face Value

Rich Wiggins

Rich Wiggins makes a habit of taking aim at golden calves. Most recently, he published a criticism of smart-beta strategies in Institutional Investor titled “Smart Beta Is Making This Strategist Sick,” as well as a similarly focused piece on ETF.com titled “Smart Beta’s Foundation Problem.” Previously, he authored “Cloning DFA,” which appeared in the final issue of ETF.com’s own Journal of Indexes, among many other pieces.

Smart beta has been a buzz phrase in investing, especially in the ETF arena, for several years. Even though the majority of asset flows in ETFs go to plain-vanilla passive funds, roughly half of the launches in recent years have been smart-beta funds. Wiggins’ smart-beta article dug into—and took a critical view of—the research that underlies the argument for factors and other strategies, and found it wanting.

ETF.com: Can you give an overview of what the article’s about?

Wiggins: The gist of the article is that a lot of what we read comes from the marketers of these [smart-beta] products. But the point of the article isn't really “Is smart beta good, or is smart beta bad?”


In fact, in the last sentence of the article, I say, "Question everything, even me." It's really more just a reminder that skepticism is the first line of defense, and we should always revisit the facts and do our own work.

For example, the genesis of it was that I’ve watched people get brochures from managers that were value managers, and they would say, "Well, my value loading is blank." And I'd ask, "Did you check it?" And they’d say, "No." Well, what is it you do as an analyst if you don't check it?

The gist is really just to preach skepticism more than it is to say, yeah, smart beta; or, boo, smart beta.

ETF.com: What has the reaction been?

Wiggins: [Mostly] good. A lot of people emailed, saying, "Spot on; love it." And they would have some big quote, saying, "This was my favorite line."

But I was surprised at the number of people who were, well, maybe dissenters of some variety.

ETF.com: Have you come across any further supporting evidence since you wrote the article?

Wiggins: Bill Sharpe has a chapter in one of his textbooks ("Investors and Markets: Portfolio Choices, Asset Prices and Investment Advice," Chapter 8, 2008) that points out some interesting things about the Fama/French framework. He has a table that shows what percent of the market cap is in each of those little boxes, e.g., Big Growth, Small Growth, etc. It turns out that, on average, only 2.06% of the market is in the Small Value bucket.

We're getting all this value premium from just a tiny slice of the sample, which doesn't seem as reliable as if we were getting it from a bigger number—sort of like a small sample size.
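
As a purely illustrative aside (the numbers below are assumptions for the sketch, not Sharpe's or Wiggins' figures), the statistical point is easy to simulate: a premium measured on a sliver of the market behaves like an estimate drawn from a small sample, so it bounces around far more from one simulated history to the next than a premium measured across a broad sample.

```python
# Hypothetical sketch: how precisely can an average "premium" be estimated
# from a tiny slice of the market versus a broad one? All parameters are
# assumptions chosen for illustration, and stocks are (unrealistically)
# treated as independent.
import numpy as np

rng = np.random.default_rng(0)

true_premium = 0.03      # assumed 3% annual premium
stock_vol = 0.20         # assumed 20% annual volatility per stock
n_years = 50             # length of each simulated return history
n_trials = 10_000        # number of alternative histories to simulate

def premium_estimate_spread(n_stocks):
    """Std. dev. of the estimated premium across simulated histories."""
    # With independent stocks, an equal-weight basket of n_stocks has
    # annual volatility stock_vol / sqrt(n_stocks).
    basket_vol = stock_vol / np.sqrt(n_stocks)
    yearly_returns = rng.normal(true_premium, basket_vol,
                                size=(n_trials, n_years))
    return yearly_returns.mean(axis=1).std()

for n_stocks in (20, 1000):   # a sliver of the market vs. a broad sample
    print(f"{n_stocks:>5} stocks: spread of estimated premium "
          f"= {premium_estimate_spread(n_stocks):.4%}")
```

Real stock returns are correlated, so adding stocks helps less in practice than in this sketch, but the direction of the argument holds: the narrower the slice a premium comes from, the less precisely its history pins it down.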

 

ETF.com: Are there any criticisms you see as legitimate considerations?

Wiggins: I don't think so, no. I went to [advisors like] Wes Gray and Corey Hoffstein and asked, "There's nothing wrong in here, right?" The most dangerous statement I make is that, at one point, I say small-cap hasn't worked ever since it was discovered. Even that's true. I don't think there's anything wrong in there where someone could say, "You forgot to carry the two," or whatever.

ETF.com: Amidst all the feedback and reaction, is there anything you're surprised people haven't mentioned, or that you think they have overlooked about the article?

Wiggins: I've definitely trodden on everyone's corns, but I don't think they've overlooked anything. It's just a light and breezy article. There's really nothing heavy in there.

The whole article really just boils down to looking at this in an implementable format, not long/short, because most of us invest long only. And when you go long only, you go to the oldest value fund and ask, does it even win? No. And then you do the same thing with small-cap.

And that's really all the paper did: instead of debating this, let's just look at the longest-running return stream I can get. If you can't beat the S&P 500, what is all this debate about?

 

ETF.com: Vanguard rolled out actively managed factor funds a few weeks ago. Is that something that might address the concerns you raise in your article?

Wiggins: No, it wouldn't address anything. The active/passive debate, there's so much research behind it—like SPIVA. It's almost impossible to find any long-running research that'll say, “Yeah, go active over passive.” I know the current argument against passive, which is that there's really no such thing as a passive investor because even the index has a bias built into it, which is true.

But the problem is that all those biases built into the cap-weighted indexes are actually pretty good. For example, in a recent article on your website, Larry Swedroe goes back and refers to research that says, hey, the number of stocks that really deliver above-average returns is quite small.

It's a good point. That's why cap-weighted indexes do pretty well—because they have a couple of winners and they just keep going. Whereas when you do something like a value fund, if you bought bank stocks in 2009 and they recovered, then unfortunately by 2010 your rebalancing strategy is automatically rebalancing you out of them. You may have caught the bottom, but now you're immediately moving away from it.

There's a lot of good stuff in your old cap-weighted indexes. That's kind of what I was learning as I wrote the article.

ETF.com: Is indexing the way to go?

Wiggins: I think so. And I think the argument that cap-weighted indexes lead you astray when you go into bubbles is true; but for the most part, it's still the best way to go. I would do a cap-weighted index over a smart-beta fund.

And then as in my article, “Cloning DFA,” you can use cap-weighted indexes to capture those same premiums for a fraction of the cost. But I believe in just straight cap-weighted indexing at this point. I keep trying to think of a way to get around it, but I'm like, no, it won't work.

ETF.com: Would you discuss the concept of P-hacking? You reference it in your II article.

Wiggins: Some people call it data mining, which gets at the essence of what's going on, but not quite all of it. P-hacking is the idea that somebody's going to run a regression and they just keep trying different variables until they get something that fits. And you're not allowed to do that.

When they first came up with P-scores and P-values, there were two competing groups of mathematicians. They didn't like each other; they were competing and they didn't work together. But ironically, that's the approach we got—it's sort of this hybrid.

If you run a regression, and you're going to keep trying a bunch of variables and then looking to see how good the fit is, that means that your P-score—which tells you whether or not it's significant—is totally worthless, because it was designed for a one-test environment.

I can say I'm going to predict the grades of the people at Naropa University. And the first variable's going to be how many hours they studied, and the second variable's going to be what their high-school GPA was. Then you run it and that P-score is good. But if I keep trying stuff and going back and forth, then that's cheating; that's P-hacking.
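
To make that concrete, here is a minimal sketch (hypothetical data, not anything from Wiggins' article): the outcome below is pure noise, so no predictor genuinely explains it, yet hunting across 50 candidate variables and keeping the best fit almost always turns up at least one that looks "significant" at the usual 0.05 level.

```python
# A minimal sketch of the multiple-testing problem behind P-hacking.
# The "grades" are pure noise, so no predictor truly explains them; the
# point is that trying many predictors and keeping the best p-value
# manufactures apparent significance.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

n_students = 100     # hypothetical sample (echoing the grades example above)
n_tries = 50         # number of candidate predictors we keep trying

grades = rng.normal(size=n_students)          # outcome: random noise
best_p = 1.0
for _ in range(n_tries):
    x = rng.normal(size=n_students)           # another meaningless predictor
    slope, intercept, r, p_value, stderr = stats.linregress(x, grades)
    best_p = min(best_p, p_value)

print(f"Best p-value after {n_tries} tries: {best_p:.4f}")
# With 50 independent tests at the 5% level, the chance of at least one
# false positive is 1 - 0.95**50, roughly 92%.
print(f"Chance of at least one spurious 'hit': {1 - 0.95**n_tries:.0%}")
```

That arithmetic is the whole point: a p-value computed this way was designed for a one-test environment, so once you have searched over many variables it no longer means what it appears to mean.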

There’s a related concept, which is HARKing—hypothesizing after the results are known—that's looking at the data's output and then coming up with a theory to fit the data. That totally doesn't work either. Those are two concepts within evidence-based investing, and I think the average guy just doesn't realize that P-hacking and HARKing are widespread.

ETF.com: How would you sum your article up in just a few words?

Wiggins: The goal was just to serve as a caution to people who follow that siren [song of] sales patter from people selling products that claim to see patterns in data, especially as people use the term “big data” more and more. I have two other words for you: be careful.

Richard Wiggins is a thought leader on APViewpoint, which is an online community maintained by Advisor Perspectives. He was previously a senior consultant at Summit Strategies Group and chief investment strategist at Citizens First Bancorp. He is a past president of the CFA Society of Detroit and a periodic contributor to Barron's, Institutional Investor, the Journal of Indexes and other practitioner journals. Wiggins is an author/contributor/abstractor to the CFA Digest and past member of the Council of Examiners, which authors the Chartered Financial Analyst Exam.

© Copyright 2017 ETF.com. All rights reserved