Jeff Zemla
@jeffzemla.bsky.social
120 followers 270 following 15 posts
Asst Prof of Psychology @ Syracuse University. Interested in memory, reasoning, aging, Alzheimer's, computational modeling.
jeffzemla.bsky.social
#gpt5 #openai Hype: overstated. I asked gpt for a crossword in the style of NYT. Failure on so many levels.
jeffzemla.bsky.social
Is Prolific overrun with AI bots like MTurk? 300 participants in the 10 minutes since it was posted seems faster than usual...
jeffzemla.bsky.social
They mention POs several times (who have absolutely nothing to do with indirects) but leave out the DFAS, who negotiates indirects at NIH on behalf of taxpayers. I would rather have accountants negotiate indirects than a dude and his vibes about whether indirects are too high
jeffzemla.bsky.social
It'd be nice to see a real discussion of this, but the article ignores the negotiation process (with real budgets and audits), regulations (esp at public Us), and positive externalities. Instead they make up figures and say "I don't know where the $ goes." That's not a productive way to lower indirects.
jeffzemla.bsky.social
6/ 3) Don't trust, verify. For things that matter, carefully review every single line of AI generated code. Step through with a debugger. There are lots of great IDEs that make this easy by using diffs to integrate AI generated code with your own (Cursor, Windsurf, Copilot)
jeffzemla.bsky.social
5/ 2) Context helps. AI is great at generating code that runs. Problems are often introduced in the translation of ideas. This is reduced by providing context. In research, this may mean giving AI your manuscript or a few papers that carefully describe the procedures you are trying to implement
jeffzemla.bsky.social
4/ 1) Consider the costs of a mistake. In data analysis, costs are large. Retractions, reputational costs, money, etc. But if there's a mistake in my class demo? Not a big deal.
jeffzemla.bsky.social
3/ A lot of people tell me they don't trust AI to code correctly. Good, you shouldn't! I've seen it make mistakes on much simpler tasks. But keep a few things in mind:
jeffzemla.bsky.social
2/ I did this to replace paid solutions that are less versatile for psychology instruction. I think it is an example of how instructors can use AI to revamp their classes in ways that are traditionally onerous (costing time or money).
jeffzemla.bsky.social
1/ I built a classroom polling tool using AI with zero coding in about 30 minutes. It allows me to present multiple choice or numeric judgements to students in a lecture, with built-in counterbalancing to demo experiments, and then display results in real time.

youtu.be/k45YidHH_lU
Classroom polling tool created by AI
jeffzemla.bsky.social
Obviously not a surprise that it's happening, but this is more than speculation
jeffzemla.bsky.social
@caseynewton.bsky.social This should be discussed on your podcast. Grok instructed not to tattle on Musk and Trump
www.reddit.com/r/ChatGPT/co...
jeffzemla.bsky.social
I don't care what people do with their cars, but increases in supply of the used car market hurt the new car market. So if the goal is to affect Tesla D2C sales, it seems like a plausible strategy if enough people actually followed through?
Reposted by Jeff Zemla
sgadarian.bsky.social
This but for academic papers. A lot of criticism of work is simply, “why didn’t they focus on some other variable/process that I think is important”. Taking papers on their own terms first is much more interesting and valuable.
emilystjams.bsky.social
"Why wasn't the film this?" is a really common criticism of, like, every movie and always has been, but I encourage you to look past it to figure out what the movie is doing and engage with that. You don't have to like it! You just can't assume you've defeated the film in intellectual combat.
jeffzemla.bsky.social
Mitigation strategies like attention checks might only make things worse, because smart bots are more likely to pass these than humans. For those who use online participant pools - do you have plans to move away from them?
jeffzemla.bsky.social
Are the days of online participant pools numbered? Creating an AgentGPT-like bot to complete surveys for passive income does not seem very difficult. Bots have always been an issue on platforms, but they have largely been dumb bots: easily detectable in the data or by platforms like Prolific.
Reposted by Jeff Zemla
pniedent.bsky.social
Tell your undergrads: PREP at UW offers mentored research, data science training and prof. development for undergraduates from groups historically underrepresented in psych/neurosci. 28 May–9 August 2024. Application window opens 1 November 2023. @spspnews.bsky.social @affectscience.bsky.social