Erin Campbell
@erinecampbell.bsky.social
120 followers 99 following 49 posts
erinecampbell.bsky.social
as in: don't send them research statements / CVs, just start showing up at faculty meetings?
Reposted by Erin Campbell
ianhussey.mmmdata.io
Proposal for how to fix family-wise error rates.

For every uncorrected p value you must add an extra letter to the claim.

“Eating chocolate maaaaaaaaay be associated with lower rates of stroke”
erinecampbell.bsky.social
At #SNL2025? Go check out this poster with @zedsehyr.bsky.social
Poster "Does sensory and linguistic experience shape ERP signatures in deaf signers?". Poster displays images of ERP traces and topo plots. 
Research goal: Determine whether canonical ERP components are modality-neutral or shaped by language and sensory experience. Participants were 22 deaf ASL signers and 26 hearing English speakers. EEG was recorded while participants completed 6 tasks from the ERPCORE battery.

• Deaf signers show a larger, more bilateral N170 for object recognition.
• Deaf signers show a typical but more negative N2pc.
• Deaf signers show a typical but smaller P3 with variable timing.
• Deaf signers show a smaller and bilateral LRP.
• Deaf signers show a smaller ERN than hearing speakers.
• Deaf signers show a typical but smaller N400.
• The sign N400 is similar to the word N400, marked by a later onset.

Overall, canonical ERPs were observed in both groups, with group differences in distribution, timing, and amplitude.
Reposted by Erin Campbell
mehr.nz
psych departments post a faculty job that has nothing to do with AI challenge
Reposted by Erin Campbell
bostonaruban.bsky.social
Bicycles Deliver the Freedom that Auto Ads Promise.
Reposted by Erin Campbell
thomasp85.com
I am beyond excited to announce that ggplot2 4.0.0 has just landed on CRAN.

It's not every day we have a new major #ggplot2 release, but it is a fitting 18th-birthday present for the package.

Get an overview of the release in this blog post and be on the lookout for more in-depth posts #rstats
ggplot2 4.0.0
A new major version of ggplot2 has been released on CRAN. Find out what is new here.
www.tidyverse.org
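(Not from the announcement, just for context: a minimal sketch of the grammar-of-graphics pattern ggplot2 provides, using the built-in mpg dataset; nothing here is specific to the 4.0.0 release.)

# Minimal ggplot2 example with the mpg dataset that ships with the package.
# Illustrative only; not tied to any feature new in 4.0.0.
library(ggplot2)

ggplot(mpg, aes(x = displ, y = hwy, colour = class)) +
  geom_point() +
  labs(
    title = "Engine displacement vs. highway mileage",
    x = "Displacement (L)",
    y = "Highway miles per gallon"
  )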
Reposted by Erin Campbell
bergelsonlab.bsky.social
lil blogpost ab a cool paper led by the amazing @erinecampbell.bsky.social thnx to NSF. also features a coda ab the very active threat to science funding we’re facing (cc @standupforscience.bsky.social) & a ht to @sciencehomecoming.bsky.social’s good work. thnx @infantstudies.bsky.social!
infantstudies.bsky.social
New on the Baby Blog - From the mouths of babes: Saying the (im)perceptible by Elika Bergelson @bergelsonlab.bsky.social #EarlyYears #DevPsySky #PsychSciSky #science #research #infantstudies infantstudies.org/from-the-mou...
Text: Latest from the Baby Blog - From the mouths of babes: Saying the (im)perceptible by Elika Bergelson against an image background of three infants giggling together on the left corner and a toddler drawing a picture of a globe on the right
erinecampbell.bsky.social
Thanks for putting this together! May I join?
Reposted by Erin Campbell
asommese.bsky.social
Data collection is underway!

We're using mobile eye tracking to study action comprehension in dogs. Django is helping us understand how dogs see and interpret our actions — more coming soon! 🐶👁️ #Science #DogResearch #CognitiveScience

Thanks @asab.org for funding this project!
Reposted by Erin Campbell
jofrhwld.bsky.social
I was finally able to make these visualizations of what "light", "dark" and other modifiers do to colors jofrhwld.github.io/blog/posts/2...
erinecampbell.bsky.social
ooooh, can you share the % that end up being real / still-valid addresses?
Reposted by Erin Campbell
ravenscimaven.bsky.social
NASA is more than rockets and moonwalks. NASA is behind much of our everyday technology. From space discovery, to Air Jordans, to CAT scans, NASA has played a role. We get it all on less than a penny of every federal dollar. Now their science may be gutted by 50%.
#NASADidThat
Reposted by Erin Campbell
jenna-m-norton.bsky.social
At the Science Fair for canceled grants, I had the privilege of speaking with @naomicaselli.bsky.social. Her team (which includes deaf researchers) was making breakthroughs to better identify & address language deprivation — when the Trump administration terminated their grant.
Jenna Norton and Naomi Caselli in front of Naomi’s poster at the science fair for canceled grants.
Reposted by Erin Campbell
naomicaselli.bsky.social
Okay y’all, gather round for a chat. It’s been a roller coaster, and I thought I’d share what we’ve learned. 🧵 (1/16)
bsky.app/profile/luck...
luckytran.com
BREAKING: Scientists are staging a “science fair” in the lobby of a Congressional building to tell elected officials about the critical knowledge the US will lose because their research grants have been canceled.
Reposted by Erin Campbell
mehr.nz
samuel mehr @mehr.nz · Jun 23
every year my lab does a re-read + edit of our Handbook, a documentation resource for how we do science

this year we also updated our Public Handbook, an open-access version for folks wanting to improve their own docs

it's at handbook-public.themusiclab.org and available for noncommercial re-use
screenshot of our public handbook
Reposted by Erin Campbell
shutupmikeginn.bsky.social
its amazing how chatgpt knows everything about subjects I know nothing about, but is wrong like 40% of the time in things im an expert on. not going to think about this any further
Reposted by Erin Campbell
tsay.bsky.social
As we age, we move slower and less precisely—but how much, exactly?

We analyzed one of the largest datasets on motor control to date—2,185 adults performing a reaching task.

Findings
• Reaction time: –1.2 ms/year
• Movement time: –2.3 ms/year
• Precision: –0.02°/year

tinyurl.com/f9v66jut

1/2
Reposted by Erin Campbell
bergelsonlab.bsky.social
those of us staring down the @bucld.bsky.social #BUCLD50 deadline feel seen.
jmxpearson.bsky.social
Posted on the bulletin board in the department:
A picture of a bedraggled cat captioned “i fear i have edited my own soul down to 500 words.”
Reposted by Erin Campbell
meredithschmehl.com
I'm happy to share a new paper from my PhD research!

Exciting work about how visual info helps us process sound, and an example of federal funding that benefits all of us - from your health to community health.

doi.org/10.1152/jn.0...

With @jmgrohneuro.bsky.social & Jesse Herche

Thread: 🧵👇

1/10
Graphical abstract from the Journal of Neurophysiology with 3 panels.

The first panel is titled "Task" with the text: "Record responses in the inferior colliculus (IC) while monkeys perform a multi-modal localization task." A diagram depicts a behavioral task requiring a subject to fixate their eyes on a central light, wait for a target (auditory, visual, or both at a single location) to appear, and then move their eyes to the location of the target.

The second panel is titled "Local Field Potential (LFP)" with the text: "Visually-evoked responses to both fixation and target lights." Two figures show the average local field potential (LFP) from multiple recording sites over time during a trial, showing a response that deviates from the pre-stimulus baseline in response to the fixation light (left figure) and visual targets (right figure).

The third panel is titled "Single-Unit Spiking Activity" with the text: "Visually-induced modulation of auditory responses even when the visual spiking response is weak." Two figures follow. The first is a peri-stimulus time histogram (PSTH) from one neuron, showing the response to a visual, auditory, and audiovisual target over time. The second is a bar plot quantifying the first, showing that the audiovisual response has a lower firing rate than the auditory response, despite the visual response for this neuron being near zero.

Below the 3 main panels of the graphical abstract is a footer with the logo of the American Physiological Society and the Journal of Neurophysiology.
Reposted by Erin Campbell