Vehicle Safety Not Easily Defined

By Jeremy Anwyl May 18, 2011


It wasn't so long ago that a popular adage in the auto business was that “safety doesn’t sell.” I am not sure how true that ever was, but it is certainly not the case today. Consumer wants obviously vary, but for many shoppers, safety is the most important attribute influencing their purchase decision. Or, stated more precisely, they would like to be able to use safety as a reason to buy. That is actually harder to do than you might think.

The truth is that safety is not easy to define. Many consumers fall back on crash test scores as a default measure of safety. But when you talk with these same consumers, what they are really looking for is a way to gauge their odds. That breaks down into three measures: how likely a given vehicle is to be involved in an accident; how likely the driver is to be injured in an accident; and how likely the driver is to die in an accident.
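To make those three measures concrete, here is a minimal sketch (in Python) of how the rates could be computed per model if the underlying data existed. The record layout and every number below are invented for illustration; this is not Edmunds or NHTSA data.

```python
# Minimal sketch: the three per-model rates a shopper would want.
# All figures below are invented placeholders, not real data.

registered_vehicles = {"Model A": 250_000, "Model B": 180_000}

# Hypothetical per-model tallies for one year of accident reports.
accident_counts = {
    "Model A": {"accidents": 4_100, "injuries": 1_300, "fatalities": 18},
    "Model B": {"accidents": 2_900, "injuries": 1_150, "fatalities": 26},
}

for model, counts in accident_counts.items():
    fleet = registered_vehicles[model]
    accident_rate = counts["accidents"] / fleet                   # chance of being in an accident
    injury_rate = counts["injuries"] / counts["accidents"]        # chance of injury, given an accident
    fatality_rate = counts["fatalities"] / counts["accidents"]    # chance of death, given an accident
    print(f"{model}: accident {accident_rate:.2%}, "
          f"injury|accident {injury_rate:.2%}, fatality|accident {fatality_rate:.2%}")
```

The point of the sketch is simply that each question needs a different denominator: accident likelihood is measured against the whole fleet on the road, while injury and fatality likelihood are measured against accidents that actually occurred.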

It seems like it would be simple enough to provide these answers, right? But what qualifies as an injury? A small scrape? A broken bone? Should they be rated the same? And what about passengers? Should they be included? And pedestrians or cyclists?

In a perfect world, we could look at real-world data and, with a bit of analysis, create a set of ratings that would help consumers evaluate vehicles against these questions. Unfortunately, the world is not perfect, and that is certainly true of accident data. For starters, it is collected at the state level, and each state has its own way of doing things, which introduces all sorts of variation into the data. Additionally - and this one is hard to believe in an age of instant updates on the Internet - it takes years (seriously) for the states to make the data they collect available. (Right now, 2009 is the most recent.)

Partially offsetting this, the National Highway Traffic Safety Administration (NHTSA) spends federal money to build its own census of traffic fatalities - the Fatality Analysis Reporting System (FARS). This is useful, and it means that fatality data is, statistically speaking, the most reliable. For injury data, NHTSA manages another database, the General Estimates System (GES). It estimates national injury rates, but it is based on a fairly small sample. Curiously, when we use the GES data to project a total fatality count, we end up about 20 percent below the actual total. Don’t ask me why.
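As a rough illustration of that cross-check - projecting a national total from a weighted sample and comparing it with a census-style count - here is a hedged sketch. The sampling weights and totals are made up for the example; they are not actual GES or FARS figures.

```python
# Sketch: project a national fatality total from a weighted crash sample,
# then compare it with a census-style count. All numbers are invented.

# Each sampled crash record carries a weight = how many national crashes it represents.
sampled_crashes = [
    {"weight": 480.0, "fatal": False},
    {"weight": 310.0, "fatal": True},
    {"weight": 290.0, "fatal": False},
    {"weight": 505.0, "fatal": True},
    # ... many more records in a real sample
]

projected_fatal_crashes = sum(rec["weight"] for rec in sampled_crashes if rec["fatal"])

census_fatal_crashes = 1_020.0  # placeholder standing in for a census-style count

shortfall = 1 - projected_fatal_crashes / census_fatal_crashes
print(f"Projected: {projected_fatal_crashes:,.0f}  Census: {census_fatal_crashes:,.0f}  "
      f"Shortfall: {shortfall:.0%}")
```

When the weighted projection lands well below the census count, as in the article's 20 percent anecdote, it is a signal that either the sample or its weights is missing something.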

If you want a count of overall accidents, good luck. Even insurance companies - which you would think have this data - have to estimate it, because many accidents go unreported. So there are gaps, delays and variations in how accidents are reported. There is a final issue as well: the reports lack granularity. The best example is incomplete vehicle descriptions. Many states refuse to release Vehicle Identification Numbers (VINs). But even when they do, it is impossible to take a VIN and come up with a complete vehicle description, especially when it comes to options - including safety features - installed in the vehicle.

Let’s say you wanted to study the real-world effect of optional safety equipment. Or, say, whether increased sales of navigation systems actually increase (or decrease) accidents. Impossible. (At least today.) Not to be deterred, we decided to work with the data that is available. The idea was to determine how far we could get in answering a consumer’s three key safety questions: likelihood of an accident, likelihood of injury and likelihood of fatality.

(Chart: Edmunds safety analysis sample disposition funnel.)

The first limitation is that a vehicle’s platform has to have been unchanged since 2009 - the most recent year for which data is available. We also set a cut-off of at least 10,000 sales per year. (Below this the sample gets unstable.) Finally, the vehicle actually had to have been involved in reported accidents, or there would be no data. For the current model year, this takes us from 341 total models to 168 that can be included in the study.
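That funnel is really just three successive filters. Here is a sketch of the logic in Python; the model records and field names are hypothetical, and only the three criteria themselves (platform unchanged through the data year, at least 10,000 annual sales, at least one reported accident) come from the text above.

```python
# Sketch of the sample-disposition funnel: candidate models -> models usable in the study.
# The records and field names are hypothetical; only the three criteria are from the article.

models = [
    {"name": "Model A", "platform_last_changed": 2007, "annual_sales": 85_000, "reported_accidents": 412},
    {"name": "Model B", "platform_last_changed": 2010, "annual_sales": 40_000, "reported_accidents": 150},
    {"name": "Model C", "platform_last_changed": 2006, "annual_sales": 6_500,  "reported_accidents": 31},
    {"name": "Model D", "platform_last_changed": 2005, "annual_sales": 22_000, "reported_accidents": 0},
]

DATA_YEAR = 2009           # most recent year of accident data available
MIN_ANNUAL_SALES = 10_000  # below this the sample gets unstable

usable = [
    m for m in models
    if m["platform_last_changed"] <= DATA_YEAR   # platform unchanged since the data year
    and m["annual_sales"] >= MIN_ANNUAL_SALES    # enough volume for a stable sample
    and m["reported_accidents"] > 0              # at least some real-world accident data
]

print(f"{len(usable)} of {len(models)} models survive the funnel:",
      [m["name"] for m in usable])
```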

To make this interesting, let’s compare our analysis of real-world data with NHTSA and Insurance Institute for Highway Safety (IIHS) crash test ratings. Working backwards, let’s first try to correlate fatality rates with test scores. In the charts below, the size of each bubble shows the relative size of the sample. A perfect correlation would put all the bubbles on a diagonal from the bottom left to the upper right.
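For readers curious how the agreement behind a bubble chart like the ones below might be quantified, here is one plausible approach - a correlation between test scores and real-world rates, with each model weighted by its sample size. This is my own sketch with invented numbers, not the actual Edmunds methodology.

```python
import numpy as np

def weighted_corr(x, y, w):
    """Pearson correlation of x and y, with each point weighted by sample size w."""
    w = np.asarray(w, dtype=float) / np.sum(w)
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    mx, my = np.sum(w * x), np.sum(w * y)
    cov = np.sum(w * (x - mx) * (y - my))
    return cov / np.sqrt(np.sum(w * (x - mx) ** 2) * np.sum(w * (y - my) ** 2))

# Invented example: crash test scores (higher = better) vs. fatality rates
# (higher = worse), weighted by each model's accident sample size.
# A strongly negative value would mean better test scores track lower fatality rates.
test_scores    = [5, 5, 4, 4, 3, 3]
fatality_rates = [0.8, 1.1, 1.0, 1.4, 1.6, 2.1]   # e.g. fatalities per 100 accidents
sample_sizes   = [900, 400, 650, 300, 500, 250]

print(f"Weighted correlation: {weighted_corr(test_scores, fatality_rates, sample_sizes):+.2f}")
```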

(Chart: NHTSA and IIHS ratings vs. Edmunds fatality rates.)

For fatalities, the correlations are not bad, especially for larger vehicles. Interestingly, correlations are better for NHTSA than for IIHS ratings. Next, let’s look at injuries. Here the correlation is not so good.

(Chart: NHTSA and IIHS ratings vs. Edmunds injury rates.)

Finally, let’s look at accident rates.

(Chart: NHTSA and IIHS ratings vs. Edmunds accident rates.)

Here the correlation is poor. This shouldn’t be surprising, as crash tests measure what happens in an accident, not whether one will happen. You might think that comparing crash test scores with accident rates makes no sense. Statistically I would agree, but practically it makes an important point.

Here’s a real-world example. Let’s say a shopper is looking for a vehicle for their teenager who is about to head off to college. (We get this situation all the time.) The shopper expresses the task as, “I have a kid going to college, what is the safest vehicle for them to drive?” Seems a reasonable request. And it would be, if vehicles caused accidents. The fact is that drivers cause more than 90 percent of accidents. They make mistakes, they drive drunk, they text, they tend to their kids, etc.

When asked about the safest vehicle, I usually suggest buying a slightly less expensive vehicle and investing the savings in an advanced driving course. From a probabilities perspective, that is the safest way forward. By the way, most shoppers are shocked by this suggestion. Somehow, somewhere, consumers are getting the message that drivers don’t make mistakes or are not a factor in transportation safety.

This exercise points out two other limitations of crash test ratings. The first I can’t actually confirm (the methodology is not fully transparent), but it looks like the ratings do not try to incorporate probabilities. In other words, if front-end collisions are the most likely, are the test results weighted to emphasize front-end scores? This matters because consumers are looking for scores and ratings that reflect what is most likely to be their experience.

The second limitation is that the ratings are relative. Subcompacts are rated against other subcompacts; full-size sedans are rated against other full-size sedans. Back in the real world, let’s say a shopper wants to compare a Chevrolet Cruze and a Chevrolet Traverse. Both have 5 stars, so both should be equally safe, right? Not necessarily. What our exercise served to remind us is that larger, heavier vehicles tend to be safer. I realize that there are other reasons to buy smaller vehicles, but crash test scores create the erroneous impression that small vehicles are just as safe as large vehicles. Not so.

This is not to say I have anything against crash test ratings. I think they are an excellent example of how progress can be made towards a goal without resorting to heavy-handed regulation or legislation. Because there is some flexibility in how good scores can be achieved, the automakers can experiment, looking for the best solutions.

Car companies care greatly about how their vehicles stack up. Crash test scores have had a huge impact - increasing rollover strength, accelerating the adoption of air bags and so on. But we should be honest and also acknowledge what they are not. They provide bragging rights to the manufacturers. They do not help consumers estimate their likely experience. For that, we need to work hard to improve the timeliness, consistency and granularity of real-world accident data.
