You're correct.
I stopped reading CR about 30 years ago for the reasons I posted. The source I posted is strictly automotive, so they leave the toasters and TVs to CR.
I don't want auto reviews from a publication that was reviewing breakfast cereal in its last issue.
JD Power's sample size is less than 60,000 vehicles. Statistically, it is far less likely to produce credible long-term results than CR's survey, which includes over 1,000,000 responses. JD Power might be basing its ratings for less popular vehicles on just a few samples. Out of 60,000, how many responses do you think they get for Jaguar?
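The sample-size point can be made concrete. Here is a quick sketch of how the margin of error for an estimated problem rate widens as a brand's response count shrinks; the 20% problem rate and the response counts are made-up illustrations, not JD Power's or CR's actual figures:

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% normal-approximation margin of error for a proportion."""
    return z * math.sqrt(p * (1 - p) / n)

# Assume 20% of owners report a problem (invented number).
problem_rate = 0.20

# A high-volume brand vs. a niche brand (invented response counts).
for brand, n in [("high-volume brand", 8000), ("niche brand", 40)]:
    moe = margin_of_error(problem_rate, n)
    print(f"{brand}: n={n}, rate={problem_rate:.0%} +/- {moe:.1%}")
```

With 8,000 responses the interval is under a percentage point wide; with 40 it is over twelve points wide, which is why ratings built on a handful of responses per model are hard to trust.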
JD Power also accepts money from automakers. There is at least the possibility of bias - just like advertisers in magazines.
As for CR - the same people who test toasters don't do cars.
I have not found CR to be reliable either. Many of the top picks turn out to be junk. This is true whether it is a car, computer, dishwasher, paint... whatever. Some are actually good picks, but lots of them are very poor choices. Personally, I think you are better off choosing by flipping a coin.
This is ridiculous. CR is not perfect. But applying objective criteria to evaluating consumer goods is a sound thing to do - and so much better than just picking something off the shelf.
You will increase the odds of buying something "not crappy" if it has been tested scientifically by a body that accepts no advertising. They are unaffected by fanboy behavior.
I have not found CR to be reliable either. Many of the top picks turn out to be junk. This is true whether it is a car, computer, dishwasher, paint... whatever. Some are actually good picks, but lots of them are very poor choices. Personally, I think you are better off choosing by flipping a coin.
I disagree.
I find CR auto rating to be the closest to my experience.
People like to knock them, but there is no better compilation with more survey returns than CR's.
CR does have much larger sample sizes. But if you ask a badly worded question, you will end up with flawed results no matter how many people you ask. And CR's question is very badly worded. They have people report problems "you considered serious." Some people report rattles. Others don't report a transmission replacement because the warranty covered it. If a respondent doesn't like a car for some reason, they'll report every last thing. If they like it (perhaps because they've been hearing for decades how reliable the make is), they might consider even a major problem not serious, and honestly not report it.
In short, CR's survey wording opens the door wide for whatever biases respondents might have.
Beyond this, CR's survey is once a year. Few people have memories good enough to accurately report on car problems that occurred more than a few months ago, if that. Also, with time problems seem less serious. So the memory effect compounds the impact of the badly worded question.
An annual survey plus a lengthy lag between the survey and when results are released means that CR's results are always old. The stats posted by the OP as news are based on a survey conducted nine months ago.
Finally, CR presents results in a way that is potentially misleading. We get no actual stats, only vague dots. The information in the OP--reliability by brand--is pointless. You don't buy an entire brand of cars. You buy a single model. If you can get reliability information for a model, there's no reason to have it for a brand unless you're trying to oversimplify reality. There's often a lot of variation by model within a brand. Base your decision on a brand's reliability score, and you could easily end up with an unreliable model.
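The brand-versus-model point is easy to show with arithmetic. Using invented repair rates for a hypothetical three-model brand, the brand average looks middling while one model is far worse:

```python
# Hypothetical repairs per 100 cars per year, purely for illustration.
model_rates = {"Model A": 8, "Model B": 10, "Model C": 45}

brand_average = sum(model_rates.values()) / len(model_rates)
print(f"Brand average: {brand_average:.0f} repairs per 100 cars per year")

# The averaged score hides the outlier:
worst = max(model_rates, key=model_rates.get)
print(f"Worst model: {worst} at {model_rates[worst]} per 100 cars per year")
```

A buyer who trusted the brand average of 21 and happened to pick Model C would see more than double that repair rate.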
I started my own survey because of these and other flaws in CR's approach. We ask a question that is as objective as possible. We survey quarterly, not annually. We post the actual repair frequencies, not just vague dots. Our current stats cover through the end of September. With the February update, they'll cover through the end of 2012--making them about eight months ahead of CR.
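As an illustration of the kind of statistic described above (actual repair frequencies rather than dots), here is a minimal sketch of turning survey responses into repairs per 100 cars per year. The response records are invented, and this is not the poster's actual methodology or code:

```python
# Each tuple: (repairs reported, days the car was followed in the survey).
# Invented data: ten cars, two repairs, most followed for a full year.
responses = [
    (1, 365), (0, 365), (0, 365), (1, 365), (0, 365),
    (0, 365), (0, 365), (0, 365), (0, 365), (0, 365),
]

# Normalize by exposure: total car-years observed, not just car count.
car_years = sum(days for _, days in responses) / 365.0
repairs = sum(r for r, _ in responses)
rate_per_100 = 100 * repairs / car_years

print(f"{repairs} repairs over {car_years:.1f} car-years "
      f"= {rate_per_100:.0f} repairs per 100 cars per year")
```

Normalizing by car-years rather than car count matters because respondents join and leave a rolling quarterly survey at different times.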
I bet the vast majority of Ford's problems have to do with MyFordTouch... lots of software bugs that need to be sorted out, plus their touchscreens are slow to respond. Basically you're forced to use voice commands, which is problematic if kids are screaming in the car or your wife or mistress is nagging you or yapping on the phone.
Something some of you haven't realized is that the CR ratings are not part of their reviews. The ratings are provided by automobile owners who answer the surveys mailed to CR subscribers. It's much like when you read the "star" ratings from consumers who purchase products at Amazon, B&H, and all the stores online. In other words, if the ratings are skewed, then automobile owners are not providing factual information.
This is ridiculous. CR is not perfect. But applying objective criteria to evaluating consumer goods is a sound thing to do - and so much better than just picking something off the shelf.
You will increase the odds of buying something "not crappy" if it has been tested scientifically by a body that accepts no advertising. They are unaffected by fanboy behavior.
Yes, it would make sense if it were true. However, I simply do not believe their claims that they are truly objective and receive no compensation. Some of their picks make no sense whatsoever.
Using survey results as empirical data is misleading at best. They need to be taken for what they are: data revealing owner satisfaction with perceived reliability.
I would use the term "reliability" loosely too. I have seen cars rank low on surveys (JD Power, CR, etc) and be labelled as unreliable, when the main negative response ends up being for reasons of bad ergonomics, or poor gas mileage (when it should have been expected), or other issues which are not related to what most people term "reliability."
CR does have much larger sample sizes. But if you ask a badly worded question, you will end up with flawed results no matter how many people you ask. And CR's question is very badly worded. They have people report problems "you considered serious." Some people report rattles. Others don't report a transmission replacement because the warranty covered it. If a respondent doesn't like a car for some reason, they'll report every last thing. If they like it (perhaps because they've been hearing for decades how reliable the make is), they might consider even a major problem not serious, and honestly not report it.
In short, CR's survey wording opens the door wide for whatever biases respondents might have.
Beyond this, CR's survey is once a year. Few people have memories good enough to accurately report on car problems that occurred more than a few months ago, if that. Also, with time problems seem less serious. So the memory effect compounds the impact of the badly worded question.
An annual survey plus a lengthy lag between the survey and when results are released means that CR's results are always old. The stats posted by the OP as news are based on a survey conducted nine months ago.
Finally, CR presents results in a way that is potentially misleading. We get no actual stats, only vague dots. The information in the OP--reliability by brand--is pointless. You don't buy an entire brand of cars. You buy a single model. If you can get reliability information for a model, there's no reason to have it for a brand unless you're trying to oversimplify reality. There's often a lot of variation by model within a brand. Base your decision on a brand's reliability score, and you could easily end up with an unreliable model.
I started my own survey because of these and other flaws in CR's approach. We ask a question that is as objective as possible. We survey quarterly, not annually. We post the actual repair frequencies, not just vague dots. Our current stats cover through the end of September. With the February update, they'll cover through the end of 2012--making them about eight months ahead of CR.
In many ways, I agree with you. I'd like to know, for example, whether 2008 Ford Fusions have been prone to problems with rear brake rotors. But that requires enormous detail in the surveys, and it requires highly attentive, motivated survey respondents. I think there are very few of those overall, and they might not be representative of the general population. These people would be enthusiasts.
I am a car enthusiast in general. I work on my own vehicles. There are currently seven vehicles in my family, soon to be six. But completing a survey once a year for CR is pretty easy. It is not hard to remember, even with seven vehicles, whether a vehicle had a repair I consider serious. "Serious" is subjective, of course. Some people will consider it serious that a cup holder broke. Others, especially those who are passionate about their car, might not think an ABS repair is serious because of the sophistication of the car and that it "goes with the territory." I think when you smash all of this together, CR's results are reasonable and can be used with some credibility when choosing a car. It isn't good enough to plan ahead for maintenance in specific areas.
I think the typical CR responder can remember "I had to get the air conditioning fixed" or "the radio was broken" or "the brakes squealed." They won't even look at the bill from the garage to see what parts were replaced.
Something some of you haven't realized is that the CR ratings are not part of their reviews. The ratings are provided by automobile owners who answer the surveys mailed to CR subscribers. It's much like when you read the "star" ratings from consumers who purchase products at Amazon, B&H, and all the stores online. In other words, if the ratings are skewed, then automobile owners are not providing factual information.
This is very true. Some very highly rated cars that perform well in their tests end up not recommended because of subpar reliability. Examples are some BMW and Mercedes models.