Kaidi Kang
@kaidikang.bsky.social
87 followers 67 following 12 posts
Assistant professor at Wake Forest University School of Medicine | PhD in Biostatistics from Vanderbilt University 🎓 | Kendo player 🤺 https://kaidik.github.io/
Reposted by Kaidi Kang
wakeforeststats.bsky.social
👏 Huge congratulations to Assistant Prof @sarahlotspeich.bsky.social Lotspeich for her Societal Impact Award from the @cwstat.bsky.social! Dr. Lotspeich is being recognized for her outstanding efforts and impact on social justice through her collaboration, leadership, and partnerships.
Reposted by Kaidi Kang
vandyatvandy.bsky.social
Don't throw out your data though! If you model separate between- and within-subject effects, you'll see that they're different! Brain-behavior associations across individuals are different from changes within individuals. You can model those separately in your own data without any fancy stats!
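A minimal sketch of the person-mean-centering idea described above, on simulated longitudinal data (all variable names and effect sizes here are hypothetical, chosen only to illustrate that the between- and within-subject slopes can differ):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical longitudinal data: 50 subjects, 4 visits each.
n_sub, n_vis = 50, 4
subj = np.repeat(np.arange(n_sub), n_vis)

# Simulate a brain measure x whose between-subject association with
# behavior y (slope +1.0) differs from its within-subject one (-0.5).
x_lvl = rng.normal(0, 1, n_sub)             # stable subject level
x_dev = rng.normal(0, 1, (n_sub, n_vis))
x_dev -= x_dev.mean(axis=1, keepdims=True)  # visit-to-visit change
x_dev = x_dev.ravel()
x = x_lvl[subj] + x_dev
y = 1.0 * x_lvl[subj] - 0.5 * x_dev + rng.normal(0, 0.1, n_sub * n_vis)

# Person-mean centering: split x into between- and within-subject parts.
x_between = np.array([x[subj == s].mean() for s in range(n_sub)])[subj]
x_within = x - x_between

# OLS with separate slopes recovers the two distinct effects;
# a single pooled slope on x would blur them together.
X = np.column_stack([np.ones_like(x), x_between, x_within])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)  # slopes near +1.0 (between) and -0.5 (within)
```

No mixed-model machinery is required for the decomposition itself: each subject's mean carries the between-subject signal, and the deviations from that mean carry the within-subject signal.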
Reposted by Kaidi Kang
vandyatvandy.bsky.social
Tidbit that you can implement in your own analyses: If you collect longitudinal data, you can improve efficiency by throwing out half your data! WTF?! If you use baseline data only ("1st") versus a common longitudinal analysis ("All"), the effect size decreases! Read on! @kaidikang.bsky.social @meharpist.bsky.social
kaidikang.bsky.social
If they are accurate measurements (i.e., the participant has extreme cognitive ability on that task), then I would not consider them outliers, even if their values are extreme. (#2)
howardchiu.bsky.social
How should we handle outlier data points, given that one of the recommendations was that “individuals scoring at the extremes on a testing scale or battery ('phase I') could be prioritized for subsequent brain scanning ('phase II')”? Are we looking for data that is extreme but not too extreme?
kaidikang.bsky.social
Thanks for the good question. If an outlier is caused by technical measurement error (i.e., the measure is not capturing what it is intended to capture; e.g., the participant was falling asleep during the task), it should be excluded from the analysis. (#1)
kaidikang.bsky.social
It was my greatest pleasure to work with the amazing editor @meharpist.bsky.social, the excellent reviewers, and the awesome editorial team at Nature on our paper!! Thrilled to share our research!
Reposted by Kaidi Kang
kaidikang.bsky.social
Thank you, Arielle!
kaidikang.bsky.social
Improving the replicability of BWAS is undoubtedly a complex challenge w/ no one-size-fits-all solution. Nonetheless, we hope our work offers overarching guidance to enhance the reliability of small-scale BWAS, considering their sample size constraints and specific research objectives! 🤗
kaidikang.bsky.social
Thank you @roselynechauvin.bsky.social and @ndosenbach.bsky.social for this awesome Nature News & Views on our work!
roselynechauvin.bsky.social
Great conversations with @ndosenbach.bsky.social while preparing this Nature News & Views on the new Kang et al. Ready to have the same with everyone here. Thoughts? What other practice should be investigated to help BWAS reproducibility?
www.nature.com/articles/d41...
Design tips for reproducible studies linking the brain to behaviour
Sampling schemes for reproducible brain-wide association studies.
Reposted by Kaidi Kang
kaidikang.bsky.social
Thank you, Dr. Fair! It was a really rewarding experience for me to collaborate with your team from #MIDB! @drdamienfair.bsky.social @tervoclemmensb.bsky.social @bart-larsen.bsky.social Thank you for all the effort and expertise you put into this work; they truly made it better!
kaidikang.bsky.social
Hi. Just joined bluesky 😋