Consuming fruits, vegetables, legumes, nuts, fish and whole-fat dairy products is key to lowering the risk of cardiovascular disease, including heart attacks and strokes. The study also found that a healthy diet can be achieved in various ways, such as including moderate amounts of whole grains or unprocessed meats.

The World Health Organization estimates that ~18 million people died from cardiovascular disease in 2019, representing 32% of all global deaths; 85% of those deaths were due to heart attacks and strokes. Researchers at the Population Health Research Institute and their global collaborators analyzed data on 245,000 people in 80 countries, drawn from multiple studies.

[–] pglpm@lemmy.ca 5 points 1 year ago* (last edited 1 year ago) (2 children)

P-value-based methods and statistical significance are flawed: even when they are used correctly (e.g. the stopping rule decided beforehand, the various "corrections" for the number of datapoints, non-Gaussianity, and so on), one can get results that are "statistically non-significant" yet clearly significant in every common-sense meaning of the word, and vice versa. There is a steady stream of literature – with mathematical and logical proofs – dating back to the 1940s pointing out the in-principle flaws of "statistical significance" and null-hypothesis testing. The editorial of the American Statistical Association gives an extensive list.
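
To make the point concrete, here is a minimal sketch (my own illustration, assuming Python with numpy and scipy; the effect sizes and sample sizes are arbitrary). With an enormous sample, a practically negligible effect comes out "statistically significant"; with few datapoints, a large effect often does not:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Negligible effect (0.02 standard deviations), enormous sample:
# the p-value comes out tiny, i.e. "statistically significant".
a = rng.normal(0.00, 1.0, size=200_000)
b = rng.normal(0.02, 1.0, size=200_000)
print("negligible effect, n = 200000 per group:", stats.ttest_ind(a, b).pvalue)

# Large effect (0.8 standard deviations), tiny sample:
# the p-value often lands above 0.05, i.e. "not significant",
# even though the effect is big in every common-sense meaning.
c = rng.normal(0.0, 1.0, size=12)
d = rng.normal(0.8, 1.0, size=12)
print("large effect, n = 12 per group:", stats.ttest_ind(c, d).pvalue)
```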

I'd like to add: I'm saying this not because I read it somewhere (I don't like unscientific "my football team is better than yours" discussions), but because I personally sat down and patiently went through the proofs and counterexamples, and the (almost non-existent) counter-proofs. That's what made me change methodology. This is something that many researchers who use "statistical significance" have not done.

[–] pwacata@beehaw.org 5 points 1 year ago* (last edited 1 year ago) (1 children)

This is interesting and something I've not heard of - can you recommend a starter link for someone with a basic stats background? I had some in undergrad, but this sounds like a topic that could get very tinfoil-hat-y if not searched carefully and with good context.

[–] pglpm@lemmy.ca 8 points 1 year ago* (last edited 1 year ago) (1 children)

There's still a lot of debate around this topic. It's obviously difficult for people who have used these methods for the past 60 years to simply say "I've been using a flawed method for 60 years" – although, in the end, that's how science works. The problem, moreover, is twofold: the method has built-in flaws, and on top of that it's often misused.

Some starters:

What's sad is that these discussions easily end in political or "football-team"-like debates. But the mathematical and logical proofs are there, for those who care to go and read them.

[–] pwacata@beehaw.org 3 points 1 year ago (1 children)

Thanks, I appreciate it - looks like I've got some bedtime reading for a while :)

[–] pglpm@lemmy.ca 1 points 1 year ago
[–] vin@lemmynsfw.com 1 points 1 year ago (1 children)

Ah, I thought you were talking about p-values - which are just a simple metric and get a bad rep from being used for statistical significance. Statistical significance certainly is trash.

[–] pglpm@lemmy.ca 1 points 1 year ago

Yes, I'm talking about p-values. Statistical "significance" is based on p-values.
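
As a minimal sketch of that relationship (my own illustration, assuming Python with numpy and scipy): the p-value is a tail probability computed under a null hypothesis, and "statistical significance" is nothing more than a comparison of that number against a conventional cutoff.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
control = rng.normal(0.0, 1.0, size=50)
treated = rng.normal(0.3, 1.0, size=50)

# The p-value: probability of a t-statistic at least this extreme,
# computed under the assumption that the two groups have equal means.
p = stats.ttest_ind(control, treated).pvalue

# "Statistical significance" is just thresholding that number.
alpha = 0.05
print(f"p = {p:.3f}, 'statistically significant': {p < alpha}")
```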