09-29-2011, 11:47 AM



Location: Washingtonville
2,506 posts, read 1,951,870 times
Reputation: 441


Quote:
Originally Posted by Nozzferrahhtoo
Welcome to Science. That is how it works. Nearly all of science is about attempting to falsify claims.

And that suggests bias. If you are dead set on disproving something, how will that not affect the tests? With Randi, he hires his own crew to do the testing; don't you think he would make sure to have people who share his viewpoint?

09-29-2011, 04:51 PM



7,811 posts, read 5,117,155 times
Reputation: 2973


Quote:
Originally Posted by raison_d'etre
And that suggests bias.

Not at all. It suggests well thought out constraints. It has nothing to do with bias and everything to do with working a methodology which minimises the effect of human bias and error.
Also, nothing in science can ever be proven 100%, so science is forced to falsify claims rather than prove them. It is the only system that works. So it is not bias; it is simply the choice of the best working system.
Nothing in science is ever proven 100%, except in the special case of mathematics, which is different. Instead, what happens is that people create hypotheses which explain facts. Each hypothesis is tested, tested and tested again, and if it passes every test it gets called a "Theory": the highest accolade one can get in science.
So again, it has nothing to do with bias. It has only to do with going with the best methodology available.
When we see people talk about bias in this system, all we see is someone who knows nothing about the scientific method. I strongly urge you to read about it. The scientific method, how we arrived at it, and why it works the way it does is actually a very interesting topic to learn about.
Check it yourself if you doubt me, however. No science is considered 100% proven. You will find that a failure to falsify is one of the strongest methods of establishing a "Theory" in science. Falsification is almost a holy word in science, if one were to be so crass as to apply the term.

10-17-2011, 05:20 AM



7,811 posts, read 5,117,155 times
Reputation: 2973


Quote:
Originally Posted by Nozzferrahhtoo
So you keep claiming. I will withhold judgement until I get the actual title of the magazine and article so I can read it myself. In fact feel free to buy it and transcribe the article in question here. Email me a scan of the receipt and I will refund you double the cost of the purchase.

I thought that my offer to refund double the cost was a generous one indeed. You would have made a 100% profit merely by buying the magazine, transcribing one part of it, posting it here, and emailing me a scan of the receipt.
I question, therefore, the lack of a result from this offer. Did you forget, or could it be... shock horror... that the magazine and the article within it simply did not exist?

10-17-2011, 06:53 AM



2,031 posts, read 2,323,242 times
Reputation: 1375


Quote:
Originally Posted by raison_d'etre
And that suggests bias. If you are dead set on disproving something, how will that not affect the tests? With Randi, he hires his own crew to do the testing; don't you think he would make sure to have people who share his viewpoint?

Testing for falsification is not being 'dead set on disproving' any more than proof-checking a solved mathematical problem is being 'dead set on disproving' the work, or trying the doorknob to make sure you locked the door behind you is being 'dead set on disproving' that it is locked.
They're all just checks, attempts to verify.

10-17-2011, 09:02 AM



Location: Vermont
10,127 posts, read 10,762,346 times
Reputation: 13572


Quote:
Originally Posted by raison_d'etre
Call it what you want. Until you have experimented and experienced it firsthand, I don't think you can make an assessment. Reading about something and actually discovering it for yourself are very different things.

Firsthand experience is not the same as experimentation. As many of us are fond of saying, anecdote is not the singular of data.
You presumably are familiar with the placebo effect. It is one of many reasons that personal testimonials, which consist of a single person's report of a personal experience, or that person's interpretation of what happened, are among the least reliable types of evidence. If you think the results of experiments are wrong and unreliable because they don't agree with some personal experience you had, you're no more rational than Jenny McCarthy and the rest of the antivaxxer crowd.

10-17-2011, 09:33 AM



Location: Vermont
10,127 posts, read 10,762,346 times
Reputation: 13572


Quote:
Originally Posted by raison_d'etre
And that suggests bias. If you are dead set on disproving something, how will that not affect the tests? With Randi, he hires his own crew to do the testing; don't you think he would make sure to have people who share his viewpoint?

If you're not aware of this, you should know that every time Randi accepts a challenge, he and the person making the claim agree on the test protocols, standards for evaluating the claim, and every other aspect of the process. In order to be accepted, the proponent of the claim must agree that the test protocol they have agreed to is a valid test of the claim they are trying to prove.
The reasons should be obvious: first, there's a million dollars at stake and the two contestants need to be clear on what conditions will result in the award of the million dollars (an elementary principle of contract law); second, it ensures that if the claim fails and the claimant later argues that the test was biased in some way, Randi can point out publicly that the claimant had every chance in the world to specify the terms of the challenge and agreed in advance to accept the results as valid.
Think of any claim you might want to make. If you believe that it is objectively verifiable, and you agree with another person on the test conditions that would prove or disprove your claim, and you agree to submit to the test under those test conditions, and the test proceeds under exactly the test conditions you agreed to, what possible basis would you have for later disputing a failed result?

10-17-2011, 10:30 AM



Location: OKC
5,426 posts, read 5,611,668 times
Reputation: 1760


BOXCAR'S GUIDE TO RESEARCH METHODOLOGY
There is perhaps a little confusion on the role of falsification in the experimental method. I want to point out what the role of falsification is, and why it is used.
First, it should be understood that not all experiments require falsification. It is generally used only when one has a sample (instead of a census) and wants to show that the results obtained aren't just a spurious artifact of an unusual sample, but instead that the difference in the sample reflects an actual difference found throughout the wider population being studied. So what does all that mean?
Imagine I wanted to prove that men in the U.S. are taller than women in the U.S.
If we had a complete census (that is to say, the heights of every man and woman in the U.S.), then falsification would not be necessary. We could simply compute the average height of both sexes and determine whether U.S. men are indeed taller than U.S. women.
But what if we don't have a complete census? In that case we take a smaller, random sample of men and women and see if the men in the sample are taller than the women in the sample. But this leads to a problem: what if by mere chance we just happened to draw a strange sample, and the strange sample we drew doesn't accurately reflect the height difference across the overall U.S. population?
This is where the role and purpose of falsification comes into play. The role of falsification is to show that the difference we have found in the sample is not the spurious result of chance, but instead reflects an actual difference across the entire population we wish to study.
When I collect my sample and find that the men are on average 4 inches taller than the women, there are two possibilities:
Possibility 1 is that the difference in height reflected in the sample is actually representative of the difference in height across the overall U.S. population. That is called the hypothesis.
Possibility 2 is that the difference in height reflected in the sample is a product of chance, just an artifact of the strange sample we drew, and there is in fact no actual difference in height across the U.S. population. This is called the null hypothesis.
The role of falsification is to determine whether the null hypothesis is false. If so, we are left only with possibility 1: that our hypothesis is true.
So how do we determine if the null hypothesis is false? Through a statistical tool called probability theory, which determines the likelihood of a strange draw based on the number of people in the sample and the variation in the distribution of the sample.
That sounds pretty complicated, and maybe it is. But I'll try to explain it the best I can.
If in my example my sample included only 2 participants (which we indicate as N=2), then even if men and women were the same height in the U.S. (the null hypothesis), there would be a 50/50 chance that we would happen to find a taller man than woman in our sample. With a sample that small, we cannot reasonably falsify the null hypothesis.
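That 50/50 point is easy to check with a quick simulation. A minimal sketch, assuming purely for illustration that both sexes are drawn from the same normal height distribution (i.e. the null hypothesis is true), with made-up numbers (mean 66 inches, standard deviation 4):

```python
import random

random.seed(42)

# Null hypothesis is true by construction: one "man" and one "woman"
# are drawn from the SAME height distribution (illustrative numbers).
trials = 100_000
man_taller = 0
for _ in range(trials):
    man = random.gauss(66, 4)
    woman = random.gauss(66, 4)
    if man > woman:
        man_taller += 1

# Even with no real difference, the man is taller about half the time,
# so an N=2 sample tells us essentially nothing.
print(man_taller / trials)
```

With only one person per group, "the man was taller" happens by chance about 50% of the time, which is exactly why such a sample cannot falsify the null hypothesis.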
On the other hand, if we had 1,000 men and 1,000 women in our sample, then we can be more certain that the height difference reflected in our sample actually reflects the height difference across the entire U.S. If we found a large difference in heights after looking at 1,000 men and 1,000 women, that difference is probably real and not just a spurious sample we happened to draw by chance.
Probability theory is used to determine whether the null hypothesis has been disproven. Based on the sample size and the distribution of the results, it can tell us what percentage of the time a certain outcome would occur from a random sample.
By convention, most science requires at least 95% certainty that we did not arrive at our conclusion by chance (a significance threshold of p < .05). If we can look at our sample size (and the distribution of the results) and determine that chance alone would have produced a sample with those kinds of results less than 5% of the time, we conclude that the difference we have seen is real (the hypothesis) and we reject the possibility that there is no difference in the population and our sample difference is the result of chance (the null hypothesis). (It should be noted that some disciplines require more conservative thresholds before rejecting the null hypothesis, typically p < .025 or p < .005.)
Did all that help? Probably not, but I'll recap anyway:
When we are trying to determine whether the results of our sample are a product of chance or reflect an actual difference in the population, we use probability theory to determine how often chance alone would produce the difference we have discovered. If we determine that there is less than a 5% chance that the outcome was spurious, we reject the null hypothesis and conclude that the results of our sample reflect the overall population we are interested in.
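The whole recipe can be sketched end to end with a permutation test, one simple way to estimate how often chance alone would produce an observed difference. The heights below are simulated for illustration, with a real 4-inch gap deliberately built in; all the specific numbers (means, standard deviations, sample sizes) are assumptions, not data from any real study:

```python
import random
import statistics

random.seed(1)

# Simulated samples of 100 each; men built 4 inches taller on average.
men = [random.gauss(70, 3) for _ in range(100)]
women = [random.gauss(66, 3) for _ in range(100)]
observed = statistics.mean(men) - statistics.mean(women)

# Under the null hypothesis, the "man"/"woman" labels are interchangeable,
# so reshuffling the pooled heights shows what differences chance alone
# produces. The p-value is the fraction of shuffles at least as extreme
# as the observed difference.
pooled = men + women
extreme = 0
reps = 2000
for _ in range(reps):
    random.shuffle(pooled)
    diff = statistics.mean(pooled[:100]) - statistics.mean(pooled[100:])
    if diff >= observed:
        extreme += 1
p_value = extreme / reps

# A p_value below .05 means we reject the null hypothesis.
print(round(observed, 2), p_value)
```

With N=100 per group and a genuine 4-inch gap, virtually no random relabeling matches the observed difference, so the p-value comes out far below the .05 threshold and the null hypothesis is rejected, just as the recap above describes.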
