Ethan Mollick's late 2025 AI guide: Use Pro models for fewer hallucinations. Choose model/mode deliberately. Prompt engineering matters less now. Also, paid versions are opted out of data training.
The evolving nature of AI has called into question the relevance of our roles and workflows. A role shift is needed: we should be supervising AI and validating its results.
Great overview of how ChatGPT and LLMs are changing scholarly research. Beyond literature reviews, GenAI is already reshaping peer review and research integrity. The next frontier? Society engaging with research through AI agents. scholarlykitchen.sspnet.org/2025/10/13/t...
AI bots attacking library systems could push users away from libraries and toward AI vendors themselves - compounding the problem of people stopping at AI overviews. Kate Dohe's Scholarly Kitchen post: libraries need urgent investment to survive.
The GAMER checklist (ebm.bmj.com/content/earl...) is a great guide for researchers using GenAI in writing. Along with the GAIDeT framework and statement generator, it forms a trio for responsible AI use in research writing. GAIDeT Generator: panbibliotekar.github.io/gaidet-decla...
I find the idea that AI tools can lower the "price" of research interesting. Taking the argument further, quality might be expected to be lower (or more variable). The oft-ignored aspect of AI is that it is a leveler, helping ESL researchers and ECRs the most.
Three takeaways from Ian Mulvany on AI & publishing: Publishers must engage with AI and license content. Some think AI could eliminate publishers entirely. Human cognition is like token processing—but LLMs operate at impossibly large scales.