Yeah, their best result, by far, was on mortality: p = 0.09. Go through that article and find a p-value for any of their other results that comes close. You cannot.
But no matter; this is what meta-analysis is for. Their data will be included with the dozens of other studies showing a mortality reduction, and the pooled p-value will fall even further.
Already, the way they dismissed that data point while holding up everything else lets me know their original intent behind the study.
Oh, and did they define 'severe' disease in an objective manner? And did they define 'mild' disease? These things can easily be done sloppily to get 'statistically insignificant' data; mortality is harder to fudge, and it was their most statistically significant data point by far.
I'm not a statistician, but... a LOWER p-value is what you would want to see, not a higher one, to indicate a strong rejection of the null hypothesis.
A large p-value (> 0.05) indicates weak evidence against the null hypothesis, so you fail to reject the null hypothesis.
That's exactly what I'm saying.
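For readers following along, that definition is easy to check numerically. A minimal sketch using a one-sided exact binomial test; the coin-flip numbers are made up for illustration, nothing here comes from the trial:

```python
# Hypothetical sketch: what a p-value actually measures, shown with a
# one-sided exact binomial test (standard library only).
from math import comb

def binomial_p_value(successes, n, p0=0.5):
    """P(X >= successes) assuming the null hypothesis rate p0 is true."""
    return sum(comb(n, k) * p0 ** k * (1 - p0) ** (n - k)
               for k in range(successes, n + 1))

# e.g. 14 heads in 20 flips of a supposedly fair coin: under the null
# there is roughly a 5.8% chance of a result at least this extreme,
# so "not significant" at the (arbitrary) 0.05 cutoff.
p = binomial_p_value(14, 20)
```

The p-value is the probability of data at least this extreme *given* the null hypothesis, which is why a smaller value counts as stronger evidence against it.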
p = 0.09. It's above 0.05, but not by much. Check their other p-values.
p = 0.30 for progression to severe disease. Terrible p-value.
p = 0.68 for the rate of progression to severe disease. Basically a worthless value.
Out of all their data, their best result was on mortality.
What's even more puzzling is that they dismiss it without questioning their study methodology to see where the discrepancy could have arisen. That tells me they're biased.
So, nearly double the cutoff needed to even be in the discussion is "not much"? OK.
Is the p-value the only metric by which to evaluate? Seems to me the statisticians hired to do the analysis also didn't like the fact that there were "only" 13 deaths to use as a data set. Right?
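On the 13-deaths point: event counts that small genuinely limit what any test can resolve. A rough simulation sketch of that limitation; the 2% vs. 4% death rates and 300-per-arm sizes are invented for illustration, not taken from the trial, and the test used is a one-sided Fisher exact test:

```python
# Hypothetical sketch: with only a handful of deaths, even a real
# halving of mortality is often "not statistically significant".
import random
from math import comb

def fisher_one_sided(a, b, c, d):
    """P(treatment deaths <= a) with all margins fixed (null: no effect).
    Table rows: [treatment deaths, treatment survivors],
                [control deaths,   control survivors]."""
    n1, n2, m = a + b, c + d, a + c
    total = comb(n1 + n2, m)
    return sum(comb(n1, k) * comb(n2, m - k) for k in range(a + 1)) / total

def simulated_power(p_trt, p_ctl, n_per_arm, reps=1000, alpha=0.05):
    """Fraction of simulated trials whose one-sided p falls below alpha."""
    hits = 0
    for _ in range(reps):
        a = sum(random.random() < p_trt for _ in range(n_per_arm))
        c = sum(random.random() < p_ctl for _ in range(n_per_arm))
        if fisher_one_sided(a, n_per_arm - a, c, n_per_arm - c) < alpha:
            hits += 1
    return hits / reps

random.seed(1)
# ~6 vs ~12 expected deaths per arm: the real effect is detected only
# part of the time, while the false-positive rate stays near alpha.
power_real = simulated_power(0.02, 0.04, n_per_arm=300)
power_null = simulated_power(0.04, 0.04, n_per_arm=300)
```

Under these assumed numbers, a trial this size is simply underpowered for mortality, which is one way p = 0.09 can coexist with a real effect.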
A p-value is a probability at the end of the day. p = 0.09 means that, if the null hypothesis were true, there would be about a 9% chance of seeing a result at least this extreme.
p < 0.05 is an arbitrary cut-off (95% confidence).
It doesn't just have to do with sample size but also distribution.
The problem is, as you said, you're not a statistician. These values might as well be Greek to you, so you're left with sloppy comments. Until I pointed it out, you didn't even realize what the p-values of their results were, or what they signified.
The issue I have here is that they have relatively strong results on reduction in mortality and garbage results on progression to severe disease, and its rate. A normal, unbiased researcher would do a mea culpa: either re-run the study so the results can be reconciled, or do some retrospective on why a strong signal appears for mortality but nowhere else (hint: were their methods for detecting and tracking severe disease sloppy?).
Since they didn't do that, and were happy to keep repeating "not statistically significant, not statistically significant," that lets me know they had an agenda. This is why they dismiss their strongest result.
Like I said, if you disagree, take it up with the folks who ran and analyzed the trial.
I'll defer to the folks who earn a living in that field over an obviously biased anonymous internet poster.
Have a great night, PineTree!
No need to disagree. Their mortality findings will be included in all the IVM meta-analyses, and pooled p-values will be recomputed across this study together with the other studies in the grouping.
Their summary conclusion is for people who don't understand data. Those of us who do just go to the data.
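That pooling step is mechanical. A minimal sketch of fixed-effect (inverse-variance) meta-analysis on log risk ratios; the three study effect sizes below are invented purely to illustrate how same-direction "non-significant" results can pool into a significant one, and are not taken from any real trial:

```python
# Hypothetical sketch: fixed-effect (inverse-variance) pooling of
# log risk ratios from several small, individually inconclusive studies.
from math import sqrt, erf, log

def two_sided_p(z):
    """Two-sided p-value for a standard-normal test statistic z."""
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

def pooled(estimates):
    """estimates: list of (log_effect, standard_error) pairs."""
    weights = [1 / se ** 2 for _, se in estimates]
    est = sum(w * e for (e, _), w in zip(estimates, weights)) / sum(weights)
    se = 1 / sqrt(sum(weights))
    return est, two_sided_p(est / se)

# Three made-up studies, each pointing toward lower mortality
# (risk ratios 0.60-0.70) but each individually above p = 0.05.
studies = [(log(0.60), 0.30), (log(0.70), 0.25), (log(0.65), 0.28)]
individual_ps = [two_sided_p(e / se) for e, se in studies]  # all > 0.05
pooled_est, pooled_p = pooled(studies)                      # falls below 0.05
```

The pooled standard error shrinks as studies are added, which is why consistent direction matters more to a meta-analysis than any single study clearing the 0.05 line.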