How We Test & Rank Cars

An inside look at the testing procedures that power Edmunds' industry-leading car reviews.

By Jonathan Elfalan | Updated March 1, 2022

How We Rank Cars

Edmunds has a Rankings page for every type of vehicle — SUV Rankings, Truck Rankings and so on. In simplest terms, these pages group vehicles into competitive segments and rank them based on their Edmunds Ratings. What's an Edmunds Rating? It's a score on a 10-point scale that sums up our testing team's exhaustive evaluation of the vehicle in a single number.

Behind that number are eight rating categories that cover all aspects of today's vehicles, from Driving and Comfort to Tech and Value. Once we've evaluated the vehicle and scored it in each category, we do the math and record its Edmunds Rating, which determines where the vehicle sits within its Rankings segment.
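The article doesn't publish the weighting behind "we do the math," so the sketch below assumes a simple equal-weighted average purely for illustration; the actual Edmunds formula and weights may differ, and only four of the eight category names appear in the text.

```python
# Hypothetical sketch of rolling eight category scores (each 0-10) into
# a single Edmunds Rating. Equal weights are assumed; the real weighting
# is not published in this article.

def overall_rating(scores, weights=None):
    """Weighted average of per-category scores on a 10-point scale."""
    if weights is None:
        weights = {c: 1.0 for c in scores}   # assumed equal weighting
    total_w = sum(weights[c] for c in scores)
    return round(sum(scores[c] * weights[c] for c in scores) / total_w, 1)

# Only Driving, Comfort, Tech and Value are named in the article;
# the other four category names here are placeholders.
scores = {"Driving": 8.5, "Comfort": 8.0, "Tech": 7.5, "Value": 8.0,
          "Category5": 7.0, "Category6": 7.5, "Category7": 8.0, "Category8": 8.5}
print(overall_rating(scores))  # 7.9
```

A vehicle's Edmunds Rating is then simply its position-determining score within its Rankings segment.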

So that's how we rank, but how about how we test? First of all, we should note that instrumented testing — meaning performance metrics and other measurements recorded at our test track — is only one part of the Edmunds Rating process. We also spend many hours driving and exploring each vehicle in the real world to figure out what it's like to live with. Nonetheless, our instrumented testing program makes us one of the few publications in the country that publish proprietary track numbers. It's worth taking some time, then, to explain how exactly we get those numbers when it's time to test.

How We Test Cars

Burnouts, powerslides, closed roads, racetracks — that's what people think the Edmunds test crew does all day, right?

Not always. Like most of you, we also drive our test cars mundane miles through suburban sprawl, inch forward on hopelessly jammed freeways and go on grocery runs.

But behind every story, supporting and quantifying every editor's driving impressions, you'll find what we call "the numbers." They're the backbone of our credibility as a completely independent and unbiased car-shopping resource. The numbers are proprietary — we record them ourselves using our own staffers and equipment at a private facility. And there's never a thumb on the scales for one automaker or another. We report the results exactly as they happened, because our mission is to equip the shopper with the best possible information during the decision-making process.

The numbers are familiar to anyone who has spent any amount of time reading car reviews. Acceleration time from zero to 60 mph; quarter-mile acceleration (elapsed time and speed); emergency braking distance from 60 mph to a dead stop; lateral acceleration, or "grippiness," around a skid pad.

Yet producing a trustworthy set of numbers for a test vehicle is anything but simple. That's why we take our time to get it right, and we don't skip over the hard parts, either. It requires an unrelenting insistence on standardized procedures, accurate measurement equipment, controlled conditions, dedicated testing locations and highly skilled drivers with a knowledgeable and consistent approach.

We're all about the numbers. Here's how Edmunds' Vehicle Testing Team powers the most trusted reviews in the business.

Pre-Flight Check

Since our track tests require maneuvers at the limit of vehicle performance, our test cars go through a standard check-in procedure before they even turn a wheel. We inspect the cars for anything untoward that might compromise safety. Fluid levels are verified and, if necessary, topped off. The lug nuts on the wheels are set to the proper torque. Tire pressures are adjusted to the recommended cold inflation pressure found on the door placard and in the owner's manual. Even the track gets attention, as the courses are cleared of debris and the marking cones are set out.

There's also a weigh-in to determine the "as-tested" curb weight. The Society of Automotive Engineers (SAE) defines curb weight as the mass of a car without driver, passengers or cargo but with a full complement of fluids — including a full tank of fuel. So every vehicle we test has a full tank of the recommended fuel before it rolls onto our Longacre portable digital scales for a reading of total weight and weight distribution.
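The weigh-in arithmetic itself is straightforward: four corner-scale readings sum to the as-tested weight, and the front pair's share of the total gives the weight distribution. The numbers below are hypothetical, not from any actual test.

```python
# Sketch of the weigh-in: total as-tested weight and front/rear
# distribution from four corner-scale readings (hypothetical values).

corners = {"LF": 980, "RF": 960, "LR": 760, "RR": 750}  # pounds per corner

total = sum(corners.values())
front = corners["LF"] + corners["RF"]
front_pct = 100 * front / total

print(f"As-tested weight: {total} lb")
print(f"Weight distribution: {front_pct:.1f}% front / {100 - front_pct:.1f}% rear")
```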

Could we get more impressive numbers with a near-empty tank? Certainly, but that would be a departure from accepted test practices and would dilute the real-world value of our results. After all, testing ain't racing. And once you start down that path, it becomes too easy to rationalize other questionable weight-reduction tactics, like removing floor mats, emptying the glove compartment and ditching the jack and spare tire. We don't play the game that way. In fact, if a test vehicle comes to us with a cargo cover, cross bars for the roof rack or headphones for the rear entertainment system, we leave it all in.

Our as-tested curb weights are nearly always higher than a manufacturer's published curb weight. That's because published curb weights typically represent a base model with few extras, while many of our test cars come equipped with popular options. As a result, our test weight and measured performance are closer to those of the typical example you might see on a dealer's lot.

We derive our straight-line measurements from data recorded by Racelogic's VBOX, an instrument-grade GPS-based data logger used by automotive engineers and even professional race teams because its 100-hertz sampling rate improves accuracy. This self-contained unit fits on the front passenger seat and talks to as many as 18 satellites to track a vehicle some 12,600 miles below. A head-up display for the VBOX is suction-cupped to the windshield to give instant feedback. A compact flash-memory card holds our precious data until it can be processed.
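A 100-hertz logger produces a speed sample every hundredth of a second, so a time like 0-60 mph falls between two samples and has to be interpolated. The sketch below shows one plausible way to do that with linear interpolation; the trace is synthetic and this is not Racelogic's actual algorithm.

```python
# Sketch: extracting a 0-60 mph time from a 100 Hz speed trace by
# linearly interpolating between the samples that bracket 60 mph.
# Synthetic constant-acceleration data; not the VBOX's real method.

def time_to_speed(times, speeds_mph, target=60.0):
    """Linearly interpolate the time at which speed first reaches target."""
    for i in range(1, len(speeds_mph)):
        if speeds_mph[i] >= target:
            t0, t1 = times[i - 1], times[i]
            v0, v1 = speeds_mph[i - 1], speeds_mph[i]
            return t0 + (t1 - t0) * (target - v0) / (v1 - v0)
    return None  # target speed never reached in this trace

# Synthetic trace: constant 12 mph-per-second acceleration, 100 Hz samples
times = [i / 100 for i in range(551)]
speeds = [12.0 * t for t in times]
print(round(time_to_speed(times, speeds), 2))  # 5.0
```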

Braking Tests

Acceleration and braking tests are the first order of business once a vehicle is cleared for takeoff. Quarter-mile and 0-60 results are the most talked about aspects of performance, so we always want to make sure they're in the bag in case the weather turns sour. But braking tests — namely, panic stops from 60 mph and 30 mph to simulate a real-world emergency — nevertheless precede acceleration runs on the day's schedule. We've found that slowing a car after a series of quarter-mile speed runs can stress the brakes, so to ensure that we don't inadvertently corrupt the braking tests, we conduct them first.

Our policy is to test on a surface that complies with Federal Motor Vehicle Safety Standards (FMVSS) for brake certification testing, so the surface we use is smooth, flat asphalt. We never test on concrete and we don't use drag strip launch areas coated with sticky, exotic rubber. In general, the Edmunds brake test is a simple matter of accelerating to 70 mph or so, coasting until velocity drops to 62 mph and then stomping on the brake pedal to fully engage the antilock braking system (ABS) until everything comes to rest.

Historically, non-ABS cars required far more finesse to engage the brakes firmly and hold the car at the point of impending wheel lockup throughout the stop, a skill that our test drivers still possess in case it's needed. But the reality of modern cars is that ABS has completely taken over, and that's a very good thing for vehicle safety. The question, then, is just what happens when you slam on the brakes at speed, which most drivers will never do outside of an actual panic-stop situation.

Stops are performed about two minutes apart until we reach the point where the stopping distance begins to trend irrevocably longer. Some performance cars can make a dozen such stops with no degradation, while others show signs of stress after just three or four. To determine the point at which serious brake fade begins, we factor in the number of stops and the magnitude of any performance fall-off, and then take into account sensory cues like changing pedal feel and even the smell of the brakes. The stopping distance we publish comes from looking at the 60-0 and 30-0 slices of the data and selecting the shortest runs.
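The bookkeeping behind the published figure can be sketched in a few lines: the shortest stop is the headline number, and a simple distance-trend check can flag possible fade. The article's actual fade judgment also weighs pedal feel and smell, which no script captures, and the distances and threshold below are hypothetical.

```python
# Sketch of the braking-run bookkeeping described above. The published
# figure is the shortest stop; a distance-trend check flags possible
# fade. Distances and the 5 ft threshold are hypothetical examples.

def summarize_stops(distances_ft, fade_threshold_ft=5.0):
    """Return (shortest stop, fade-suspected flag) for a series of runs."""
    best = min(distances_ft)
    # Flag fade if the final run is meaningfully longer than the best one.
    fading = distances_ft[-1] - best > fade_threshold_ft
    return best, fading

runs_60_to_0 = [121.5, 119.8, 120.2, 123.0, 127.6]  # feet, hypothetical
best, fading = summarize_stops(runs_60_to_0)
print(f"Published 60-0 distance: {best} ft; fade suspected: {fading}")
```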

Acceleration Tests

Acceleration tests follow as soon as the brakes cool. While we're waiting, let's review our fuel policy. Normally, we use the minimum required fuel octane for our test runs, and if a manufacturer recommends a higher grade "for best performance," we'll use that. The only exception occurs when 93 octane is recommended, a grade of gasoline that isn't available here in California or in many other states. Fortunately, cars that present this problem are few in number, and all of them list 91 octane as the minimum requirement, a fuel we can readily obtain. If the runs come out a little slower than a manufacturer's claim, so be it. It's the manufacturer's fault if it optimizes an engine for a grade of fuel that isn't widely available.

All of the published acceleration data — the quarter-mile time and trap speed plus the 0-60 time and all of the intermediate speeds — come from the best single run. In other words, we don't mix and match. But getting that best run requires a skilled driver who can feel out the ideal launch technique that optimizes speed, yet doesn't damage the car. This is no easy task due to the wide variety of vehicle configurations, engine outputs, transmission types, gear ratios and tires in the marketplace. And then there are traction control systems, launch control and a variety of sport settings to wade through.

When testing a car with many adjustments, we'll make a first pass with all the settings at default, ignoring the electronic enhancements to establish a baseline number. After that, more aggressive settings are engaged to see if they represent any real improvement.

Among vehicles with manual transmissions, some are quicker with moderate wheelspin at launch, while others are quicker with none at all. Some cars like a little clutch slip; others require instant clutch engagement and lots of drivetrain shock. Trial and error is the only way to be certain which technique is best.

Vehicles equipped with automatic transmissions don't require as much imagination. If it yields a better test result, we'll brake-torque the vehicle by revving the engine at the start line while it's held in check by the brake. Often, a simple brake-off, throttle-on technique is best, since the electronic stability control in many cars won't allow simultaneous brake/throttle application.

Automated manual transmissions respond to a similar brake-torque technique, even though they typically lack a torque converter and the clutch pack could be damaged. Even so, simultaneous brake/throttle application on this kind of transmission can occasionally result in quicker acceleration, and we use it under the assumption that most of these cars have drivetrain protection that will keep the practice from destroying the clutch pack. It's an unrealistic technique for real-world driving, but some cars are considerably slower without it.

A Few Words About Rollout

The term "rollout" might not be familiar, but it comes from the drag strip. The arrangement of the timing beams for drag racing can be confusing, primarily because the 7-inch separation between the "pre-stage" and "stage" beams is not the source of rollout. The pre-stage beam, which has no effect on timing, is only there to help drivers creep up to the starting position. Rollout comes from the 1-foot separation (11.5 inches, actually) between the point where the leading edge of a front tire "rolls in" to the final staging beam — triggering the countdown to the green light that starts the race — and the point where the trailing edge of that tire "rolls out" of that same beam, the triggering event that starts the clock. A driver skilled at "shallow staging" can therefore get almost a free foot of untimed acceleration before the clock officially starts. In effect, the car gets a rolling start of 3-5 mph, and the 0.3 second it typically takes to cover that distance never appears in the elapsed time (ET).
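The quoted figures check out with back-of-envelope physics, assuming constant acceleration off the line (a simplification; real launches aren't constant). A car that runs 0-60 mph in 5.0 seconds averages 12 mph per second:

```python
# Back-of-envelope check of the rollout figures above, assuming
# constant acceleration (a simplification of a real launch).

import math

ROLLOUT_M = 11.5 * 0.0254            # 11.5 inches of rollout, in meters
accel = (60 * 0.44704) / 5.0         # 60 mph (in m/s) over 5.0 s -> m/s^2

t_rollout = math.sqrt(2 * ROLLOUT_M / accel)   # time to cover the rollout
v_rollout = accel * t_rollout                  # speed crossing the beam

print(f"Untimed rollout: {t_rollout:.2f} s, crossing at {v_rollout / 0.44704:.1f} mph")
```

The result, roughly 0.33 second and 4 mph, lines up with the 0.3 second and 3-5 mph cited above.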

We believe the use of rollout for quarter-mile timed runs is appropriate, as this test is designed to represent an optimum drag strip run that a car owner can replicate at a drag strip. In the spirit of consistency, we also follow NHRA practice when calculating quarter-mile trap speed at the end of the run. So we publish the average speed over the final 66 feet of the quarter-mile run, even though our VBOX can tell us the instantaneous speed at the end of the 1,320-foot course, which is usually faster.
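The gap between the two trap-speed conventions is easy to see with a toy model. The sketch below uses a synthetic constant-acceleration run, which is not how a real car accelerates through the quarter-mile, but it shows why the 66-foot average comes out slightly lower than the instantaneous speed at the stripe:

```python
# Sketch of the NHRA-style trap speed: average speed over the final
# 66 feet of a 1,320-foot run versus the instantaneous speed at the
# stripe. Constant-acceleration trace; purely illustrative.

import math

FPS_PER_MPH = 5280 / 3600  # feet per second in one mph

def time_at_distance(d_ft, accel_fps2):
    """Time to cover d feet from rest at constant acceleration (ft/s^2)."""
    return math.sqrt(2 * d_ft / accel_fps2)

ACCEL = 16.0  # ft/s^2, synthetic constant acceleration

t1 = time_at_distance(1320 - 66, ACCEL)
t2 = time_at_distance(1320, ACCEL)

trap_fps = 66 / (t2 - t1)            # average speed over the last 66 ft
instant_fps = ACCEL * t2             # instantaneous speed at the stripe

print(f"Trap speed (66-ft avg): {trap_fps / FPS_PER_MPH:.1f} mph")
print(f"At the stripe:          {instant_fps / FPS_PER_MPH:.1f} mph")
```

In this toy run the averaged trap speed is about 138 mph against roughly 140 mph at the stripe, consistent with the article's note that the instantaneous figure is usually faster.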

On the other hand, the use of rollout with 0-60 times is inappropriate in our view. For one, 0-60-mph acceleration is not a drag-racing convention. More importantly, it's called ZERO to 60 mph, not 3 or 4 mph to 60 mph, which is what you get when you apply rollout. While it is tempting to use rollout in order to make 0-60 acceleration look more impressive by 0.3 second, thereby hyping both the car's performance and the apparent skill of the test driver, we think it's cheating.

Nevertheless, some car magazines and some automobile manufacturers use rollout anyway — and fail to tell their customers. We've decided against this practice. We publish real 0-60 times instead, only mentioning the rollout times if they happen to be relevant to the story, as with the 2021 Tesla Model S Plaid, for example.

Weather-Corrected Data

Correction factors are another source of controversy in vehicle testing. Weather conditions vary from day to day, which affects performance for internal combustion engines. As a consequence, acceleration times can be effectively compared only if the results are adjusted to a set of standard atmospheric conditions. The most widely recognized correction factors are those the SAE specifies within its horsepower measurement procedure.

SAE correction factors have undergone a revision or two over the years, and it is our policy to use the one contained in the most recent horsepower measurement procedure, SAE J1349. Turbocharged engine performance is not corrected by this standard, because modern turbocharged engines with electronic controls essentially produce and optimize their own atmosphere. As for electric vehicles, no correction is warranted, since electric motors don't consume air.
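For the curious, a commonly cited form of the J1349-style power correction factor is sketched below. This is a simplification: the full standard covers details such as deriving dry air pressure from humidity and its friction-power assumptions, and how Edmunds applies the factor to acceleration times (rather than engine power) is a separate, unpublished step.

```python
# Hedged sketch of an SAE J1349-style power correction factor, using
# the commonly cited form with reference conditions of roughly 99 kPa
# dry air pressure and 25 C intake air temperature. Simplified; the
# full standard includes steps this omits.

import math

def j1349_correction(dry_pressure_kpa, intake_temp_c):
    """Approximate SAE J1349 correction factor for observed engine power."""
    return (1.176 * (99.0 / dry_pressure_kpa)
            * math.sqrt((intake_temp_c + 273.0) / 298.0) - 0.176)

# Hypothetical hot, low-pressure test day: 97 kPa dry air, 32 C intake
cf = j1349_correction(dry_pressure_kpa=97.0, intake_temp_c=32.0)
print(f"Correction factor: {cf:.3f}")
print(f"240 hp observed -> {240 * cf:.1f} hp corrected")
```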

Meanwhile, the weather data we use for the correction calculations comes from a Novalynx WS-18 portable weather station we set up at the track. It records ambient temperature, wind speed and direction, barometric pressure and relative humidity at five-minute intervals throughout the day.

Handling Tests

Our primary handling test is conducted on a skid pad with a 100-foot radius. On our circular skid-pad course, the driver puts the inside tires as close as possible to the line that marks the circumference of the circle without drifting wide. We measure and report the average lateral acceleration a car can sustain for a full 360-degree circuit of the course rather than the often-fleeting instantaneous lateral Gs.
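The skid-pad number itself is just circular-motion arithmetic: lateral acceleration equals speed squared divided by the radius, expressed in units of g. The radius below is the article's 100 feet; the speed is a hypothetical example.

```python
# Lateral acceleration on a circular skid pad: a = v^2 / r, in g.
# Radius is the article's 100-foot course; the speed is hypothetical.

FPS_PER_MPH = 5280 / 3600  # feet per second in one mph
G_FPS2 = 32.174            # standard gravity in ft/s^2
RADIUS_FT = 100.0

def lateral_g(speed_mph):
    """Sustained lateral acceleration (in g) at a given speed on the pad."""
    v_fps = speed_mph * FPS_PER_MPH
    return v_fps ** 2 / (RADIUS_FT * G_FPS2)

print(f"{lateral_g(35.0):.2f} g at 35 mph on a 100-ft-radius circle")
```

Holding 35 mph on a 100-foot-radius circle works out to roughly 0.82 g of sustained grip.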

Few cars do well with hair-raising tail-out drifts, but we switch off stability control systems to the greatest extent possible anyway, because most such systems intervene so early and often that they interfere with the ability of our drivers to feel the balance of the car.

Very few runs are required to find the limit, and that's a relief because tire degradation occurs quite rapidly if too many runs are made. To avoid the tire issue as much as possible, we alternate between clockwise and counterclockwise laps and try not to make more than two circuits in each direction. The fastest clockwise and counterclockwise passes are averaged for the final lateral G value.

What Happens Next?

At the end of the day, we end up with a little less rubber, an exhausted crew and a pile of data. Back at the office, the data is given a final once-over and the reportable runs are selected. Using a time stamp, we match up our weather data with the selected (best) acceleration run, and apply the appropriate correction. The final results are ultimately calculated and recorded along with the driver's comments. Although our vehicle testing at the track is only one component of the Edmunds Rating process, it's a vital one, giving us the proprietary, unbiased results that we need to be fully confident in our findings.

Accordingly, whenever you read one of our expert reviews on Edmunds, you'll find nuggets of information from our day at the test track. Are the numbers everything? Not remotely. But they're the objective foundation on which Edmunds' peerless vehicle testing process is built.

And we're all about them.

Postscript: The Numbers Through Time

When magazines began testing cars in the 1950s, there was a second person riding in the car who would start a set of stopwatches with a master lever, then click them off one-by-one as the speedometer reached 60 mph, when the quarter-mile mark was crossed and when the speedometer reached 100 mph. When it came time to measure braking distances, a .22-caliber bullet would blast a hole in a can of chalk to draw a measuring line on the pavement.

Jump ahead a decade or so, when cars still had substantial chrome bumpers, and a bicycle wheel or so-called "fifth wheel" was bolted to the rear bumper of the car to record the number of its rotations and thus distance covered over time. Alignment, tire pressure and surface irregularities were all challenges.

Another decade passed and the radar gun made its debut. Sitting atop a tripod (or even worse, suction-cupped to the inside of the windshield), it bounced radar waves off the vehicle (or something in the distance) and, when analyzed by a computer, the data generated a bunch of neat graphs. But the system was also susceptible to pervasive interference and required error-inducing assumptions and lots of data smoothing.

Finally, the U.S. government put a bunch of machines in orbit above the Earth, and our VBOX system talks to as many as 18 satellites to track a vehicle some 12,600 miles below. Our data is now far more precise than ever before.

So, test gear has gone from being inside the car, to being bolted to the outside of the car, to standing behind the car, to now sending microwaves from space to tell you how fast a car is. How far we've come, eh?

Meet the Vehicle Testing Team