September 20, 2024

This report is available to subscribers only. Inman Intel is the data and research arm of Inman, providing insights and market intelligence into the residential real estate and proptech businesses. Subscribe now.

One of the most widely cited measures of U.S. home prices has come under fire in recent weeks, as criticism from an upstart firm fuels a broader discussion about which data concepts the industry should track and how.

In a public post in late January, Parcl Labs cast doubt on the S&P CoreLogic Case-Shiller Index and captured the imagination of real estate data insiders. The index is a monthly price tracker considered by many to be the go-to source for house price trends.

This is not the first time such a widely cited industry measure has been scrutinized, and it won’t be the last.

To understand these arguments, Intel looked at what Case-Shiller and similar models are trying to achieve, how to interpret them, and the blind spots that other sources are increasingly trying to fill.

Read the full report below to learn more.

The Origin of the “Gold Standard”

For decades, real estate professionals have acknowledged the many problems with simply tracking raw home prices.

One of the biggest problems? The mix of homes that sells in one period may not look like the mix that sells in the next. A sudden spike in mortgage rates, for example, could push more buyers toward lower price tiers without putting much downward pressure on prices within any given tier.

This is one of the problems that the Case-Shiller index is designed to solve.

Economists Karl Case, Robert Shiller and Allan Weiss developed the index in the late 1980s. It is based on the concept of “repeat sales”: rather than tracking the prices of whatever homes happen to sell in a given period, the index tracks how the prices of individual homes change between their own successive sales.
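For readers who want to see the mechanics, here is a minimal sketch of a Bailey-Muth-Nourse-style repeat-sales regression, the framework Case and Shiller refined, written in Python with made-up sale pairs. Each home’s log price change between two sales is regressed on dummies for the sale periods, and the coefficients trace out a constant-quality index. This is illustrative only; the actual S&P CoreLogic Case-Shiller methodology layers interval weighting, value weighting and other adjustments on top.

    # Minimal repeat-sales index sketch (illustrative data, not real sales)
    import numpy as np

    # Each pair: (period of first sale, price, period of second sale, price)
    pairs = [
        (0, 300_000, 2, 330_000),
        (0, 450_000, 3, 495_000),
        (1, 250_000, 3, 270_000),
        (1, 600_000, 2, 615_000),
    ]

    n_periods = 4
    X = np.zeros((len(pairs), n_periods - 1))   # period 0 is the base (index = 100)
    y = np.zeros(len(pairs))

    for i, (t1, p1, t2, p2) in enumerate(pairs):
        y[i] = np.log(p2) - np.log(p1)          # log price change for one home
        if t1 > 0:
            X[i, t1 - 1] = -1.0                 # -1 in the first-sale period
        if t2 > 0:
            X[i, t2 - 1] = 1.0                  # +1 in the second-sale period

    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    index = 100 * np.exp(np.concatenate(([0.0], beta)))
    print(np.round(index, 1))                   # price level per period, base = 100

Because each observation compares a home to itself, a shift in the mix of homes sold from one period to the next does not, by itself, distort the result.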

This is far from the only measure formulated in this way. The LoanPerformance Home Price Index is another CoreLogic data series that, like the FHFA House Price Index, uses repeat-sales pricing techniques.

In a blog post discussing the relationship between assessed values and home price changes, FHFA’s Justin Contat and Daniel Lane wrote that repeat-sales indexes are the industry “gold standard” because they hold quality constant and are less affected by sampling variation than the mean or median.

Case-Shiller’s national home price index is more than a simple up-and-down indicator; over time, it has become a benchmark for housing and the nation’s overall economy. The index and its subsets covering major metro areas are key tools for policymakers and investors when making decisions.

While many point to one of its potential drawbacks, the roughly two-month lag in its data, they generally also point to another time-based factor behind its popularity: multi-decade time series with rigorously tested methodologies do not appear every day.

“What Case and Shiller proposed is really the gold standard for price changes in the real estate market,” Edward Glaeser, an economics professor at Harvard University, said in The New York Times’ obituary for Karl Case. “It has the advantage of being both transparent and reliable.”

Shaking a Fist at the King

As it has every month for years, the S&P CoreLogic Case-Shiller Index came out on the last Tuesday of February. Like clockwork, it was making headlines within seconds.

But it was another headline, published a few weeks earlier, that caused a stir in the data and research community by calling into question decades of accepted standards for price tracking.

That bold shot across the bow came in a January article from Parcl Labs, one of a growing number of data providers challenging the institutional order that sets its clock by metrics like Case-Shiller releases.

An S&P Global spokesperson declined to respond in detail to a request for comment on the post, instead directing Intel to the Case-Shiller methodology page.

Parcl Labs has capitalized on the pandemic-era shift toward a more digital real estate mindset, offering investors the opportunity to bet on markets rather than on physical property. It focuses on producing daily valuations and tracking trending behavior, a move Parcl believes adds a novel layer of information to real estate pricing and analysis.

Parcl’s article, written by co-founder Jason Lewris and vice president of strategy Lucy Ferguson, argued that Case-Shiller “lacks practicality for the modern real estate market.”

The list of concerns they raised about Case-Shiller was long and included the following:

  • A two-month lag. In recent years, a growing number of data providers have begun giving clients daily updates rather than quarterly or monthly reports. Parcl’s post argues that this trend leaves Case-Shiller, which publishes on a two-month delay, further behind the curve than ever.
  • Only some single-family repeat sales are used to measure home value changes. In addition to excluding new homes, co-ops and condos, the Case-Shiller method discards any pair of sales that occur within six months of each other. A 2022 Parcl study claims that, as a result of these exclusions, the Case-Shiller 10-city composite home price index misses 42% of sales in the 10 largest metropolitan statistical areas.

  • Discounts of as much as 50% on older or low-turnover homes. While Case-Shiller doesn’t necessarily exclude older homes or homes with long gaps between sales, the method’s weight adjustments significantly reduce their influence (see the sketch below the chart). Examining recent transactions in San Francisco, Parcl concluded that a majority of the sales feeding the metro index are discounted, some by as much as 45%.
  • The MSA boundaries used in the 10- and 20-metro-area indices are too broad. People may live in greater New York, Boston or Chicago, but as with all real estate, supply, demand and value are local dynamics. The chart below, for example, illustrates the gap in performance between the San Francisco metropolitan area and the city itself.

Chart by Parcl Labs
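The exclusion and weighting rules described above are easier to reason about as a filter applied to sale pairs. Below is a rough Python sketch of how such rules shrink and reweight the pool of transactions feeding a repeat-sales index. The six-month cutoff and property-type exclusions come from the critique above; the helper names and the specific downweighting formula are illustrative assumptions, not S&P’s or Parcl’s actual method.

    # Sketch of how exclusion and interval-weighting rules affect sale pairs.
    # The six-month exclusion mirrors the rule described above; the
    # downweighting curve is an assumption for illustration only.
    from dataclasses import dataclass

    @dataclass
    class SalePair:
        property_type: str      # "single_family", "condo", "co_op", "new_home"
        months_between: int     # months between the two sales

    def pair_weight(pair: SalePair) -> float:
        """Return the influence (0 to 1) a sale pair would carry in the index."""
        if pair.property_type != "single_family":
            return 0.0                        # condos, co-ops, new homes excluded
        if pair.months_between < 6:
            return 0.0                        # likely flip: pair dropped entirely
        # Hypothetical downweighting: longer gaps carry less weight, floored at 0.5.
        return max(0.5, 1.0 - 0.002 * pair.months_between)

    pairs = [
        SalePair("condo", 40),
        SalePair("single_family", 3),
        SalePair("single_family", 24),
        SalePair("single_family", 240),       # 20-year gap between sales
    ]
    for p in pairs:
        print(p.property_type, p.months_between, "->", round(pair_weight(p), 2))

Under these assumed rules, the condo and the quick resale carry no weight at all, while the home that last traded 20 years ago counts for roughly half of a recent repeat sale, which is the kind of discounting Parcl highlights.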

Single Source of Truth?

While Lewris believes Case-Shiller is an imperfect source, he still sees uses for it: namely, helping the Parcl Labs team sharpen its understanding of specific market conditions and of how the index’s many followers use it.

Lewris wrote in a recent blog post that the Parcl team has tried to recreate the Case-Shiller method as closely as possible in order to help “predict” its readings ahead of each release.

“This report provides insight into how the market for single-family, repeat-sale homes that do not fall under the definition of flipping is evolving,” Lewris wrote.

Each time a Case-Shiller reading is released, Parcl provides a post-mortem analysis of how close its prediction came. December was roughly in line with most months: Parcl’s estimates were generally very close, even if there were some deviations in direction.

Ultimately, though, Parcl Labs’ goals are as audacious as those of any industry information provider: its stated mission is to create a new global standard for residential real estate pricing and analysis, primarily by creating a single source for home valuations.

The idea is elegant in concept but daunting in practice: rather than a patchwork of systems, servers and access points, a single system that integrates, queries, aggregates and disseminates the data. Neither the concept nor the pursuit of such a repository is new, and it remains to be seen whether Parcl (or another upstart data provider) will convince the industry that it has cracked the code.

However, some experts note that relying on different sources to deliver different data products has worked well for decades. In their view, if something isn’t broken, there’s no need to fix it.

“We use the FHFA series, which is a repeat-sales model, and we like it. But Case-Shiller has been proven, and I don’t think it’s broken yet,” Zonda chief economist Ali Wolf said. “Parcl is doing something new and different, and their data is valuable. But that doesn’t mean Case-Shiller is wrong or irrelevant.”

Email Chris LeBarton