Consumer Reports
Wow...
I couldn't agree more with crazyphud. I'm currently looking at CR's 2004 "best and worst cars", and I have to say... it seems straightforward. In spite of what some here seem to believe, CR does, in fact, break down (in percentage terms) which aspects of which cars are trouble spots. The RED dot: less than 2% reported trouble. The BLACK dot: more than 14.8%. I recognize that in our world of non-stop marketing, people will (and do) have preconceived notions... but some folks are fooling themselves if they think for a moment that subscribers to CR don't commonly drive GM products. I should add that CR regularly notes "insufficient data" when, from their million returned car surveys, they apparently don't meet some minimum threshold of collected data.
Having said the above, I'm quick to note the system is *not* perfect... and, for once, I may even agree with SteveC: you have to initially "place the stake somewhere". CR has been an exceptionally reliable source for my car-buying needs... much more so than, say, Aunt Martha, who loved her Chevy Vega... for all 22,600 miles.
Originally Posted by CrazyPhuD,Oct 27 2005, 05:55 PM
*sigh* this is the kind of article that makes all scientists look bad.
It's one of the most poorly reasoned articles I've ever seen. Why? Where does he present real (statistically significant) data to back up any of his 'theories'? He made no effort to prove them beyond a few isolated statistics (with no reference to who collected them or how). One of my biggest complaints is his use of random web comments to support his arguments. You just can't do that.
Who are these people that he quotes? Why should I believe anything they say is accurate? Are they experts? Are they relatively unbiased? Do they support their beliefs with facts or with assumptions? The answer is: we have no idea. Who couldn't find 1000 different opinions and viewpoints on any number of topics? If you want to construct an argument, you can always find random people to support it. In any scientific article you always cite your sources. Why? Because it allows readers to examine those sources and determine for themselves whether they are believable. This 'author' fails to do that, and as such all the testimonials must be regarded with the utmost suspicion. Considering that very few, if any, of the testimonials make an attempt to back up their beliefs with facts, you cannot assume they are anything but opinions. What do opinions mean in a scientific article? Unless they come from an established expert in the field, they mean nothing.
The truly sad thing is that the author uses the very methods he bitches that Consumer Reports uses. What is the sampling rate for the 'quotes' that he makes? Think he chose 20 or so consumer experiences out of the how many millions on the web? That's well under a 0.002% sample. Wasn't he bitching about Consumer Reports getting a response rate of 6-12%? He also threads together a very few individual experiences to attempt to draw a conclusion. Are 20 experiences a statistically significant number? Not to me, but wasn't he claiming that Consumer Reports too often uses a statistically insignificant sample? Shouldn't he hold himself to the same standard he wishes to hold Consumer Reports to?
The reality is there is little evidence presented to suggest that the conclusions he draws are anything more than theories. Considering the lack of evidence and the potential for conflict of interest, you cannot reliably believe the article. There are just too many questions and not enough answers. That's unfortunate, because it would be an interesting topic to explore.
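For what it's worth, the sampling fractions argued over in the quote above are easy to sanity-check. A minimal sketch, assuming a purely illustrative population of 5,000,000 web comments (the actual total is unknown, and the 20-quote count is the poster's own estimate):

```python
def sampling_fraction(sampled: int, population: int) -> float:
    """Return the sampled share of a population as a percentage."""
    return 100.0 * sampled / population

# ~20 quoted experiences out of an assumed 5 million web comments.
quotes_share = sampling_fraction(20, 5_000_000)
print(f"quoted-comment sample: {quotes_share:.5f}%")  # 0.00040%

# Compare with the 6-12% survey response rate attributed to CR in the quote.
cr_low, cr_high = 6.0, 12.0
print(f"CR's rate is roughly {cr_low / quotes_share:,.0f}x to "
      f"{cr_high / quotes_share:,.0f}x larger")
```

Even at 1 million total comments, 20 quotes is exactly 0.002%, so the "well under 0.002%" figure holds whenever the population is in the millions.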
Finally, many of his issues are not statistical at all. He points out that CR doesn't address topics such as which engine or transmission was the problematic one (I have on occasion seen it mentioned in the text, but never noted in the charts). They don't clearly define "major failure" (what each of us considers a major failure differs). They chose to address sibling cars by grouping them. The author claims (and others have mentioned) that sometimes otherwise identical cars would receive significantly different quality ratings on aspects that were identical between the two. That, at least, should raise some questions about the analysis. I think it was the author's intent to illustrate examples of how CR's methods could present skewed data, rather than to say "if I had the data, here is how I would normalize and quantify it." If nothing else, it would help explain the significant difference between J.D. Power's three-year data and CR's.
Given the influence CR's reviews have on the buying public I do think they should have statistically valid methods. Funny, I hate Bose, I think their products are junk, but I actually can see how they could have made a claim against CR for a bad speaker review. CR has enough sway that it could significantly hurt Bose if their product was reviewed badly (which in general I think it deserves to be). However, if CR said we are going to tell people this is the best or worst speaker based on 'some questionable method' then I think Bose is correct in demanding a fair and reasonable test. Being an audio lover this isn't always easy. Just like with cars, personal preference comes into play.
Originally Posted by Slamnasty,Oct 27 2005, 11:28 AM
Palmateer said that as a veiled shot at an opinion of CR I posted in another thread.
Originally Posted by Palmateer,Oct 27 2005, 11:29 AM
It would be helpful if those commenting on this thread admitted in advance whether they actually read Consumer Reports before giving their opinion on its merits.
BTW, do you have a Japanese car or a European car?

Just curious.
Originally Posted by The Hoth,Oct 28 2005, 06:08 AM
BTW, do you have a Japanese car or a European car?
Just curious.
Originally Posted by Palmateer,Oct 28 2005, 11:21 AM
The Audi hasn't been that reliable.
Just foolin' with ya. My Audi experience was a mixed bag: a nice car, but always in the shop with some issue or another. It really pissed me off. I'm over it now, though.