krishnakumarv.bsky.social
@krishnakumarv.bsky.social
Science communication needs to be a core publisher competency. The opportunity: communication ecosystems around research - summaries, podcasts, explainers. Shift from gatekeeper to amplifier.

scholarlykitchen.sspnet.org/2026/02/06/g...

#ScienceCommunication
Guest Post — Why Science Communication Must be the Next Competitive Edge for Scholarly Publishers - The Scholarly Kitchen
Today's guest bloggers assert that the future of scholarly publishing depends on mastering science communication with the same rigor that global consumer brands apply to marketing.
scholarlykitchen.sspnet.org
February 14, 2026 at 10:01 AM
Three Substacks worth following for scholarly publishing:

James Butcher's Journalology
journalology.substack.com

Helen King's PubTech Radar
pubtechradar.substack.com

And mine - STM Innovation Brief - inspired by the two above.
chachoch.substack.com
Journalology | James Butcher | Substack
Journalology collates and analyses scholarly publishing news. Click to read Journalology, by James Butcher, a Substack publication with thousands of subscribers.
journalology.substack.com
February 14, 2026 at 3:04 AM
AI training may not be the big revenue source for publishers. The opportunity is "inference" - charging when AI uses content to answer queries.

scholarlykitchen.sspnet.org/2026/01/29/g...

#AI #ScholarlyPublishing
Guest Post — AI Isn’t Going to Pay for Content … Part Two: The Path Forward - The Scholarly Kitchen
Today’s post paves a clear path forward in making AI work for publishers in the brave new agentic world.
scholarlykitchen.sspnet.org
February 13, 2026 at 12:00 PM
Anthropic's Super Bowl ads mocking ChatGPT's advertising decision are worth watching. "Deception," "Betrayal," "Treachery" - and "Treachery" resonates most with academia.

Tagline: "Ads are coming to AI. But not to Claude."

youtu.be/3sVD3aG_azw?...

#AI #SuperBowl
Is my essay making a clear argument?
Ads are coming to AI. But not to Claude. Keep thinking. Read more about why: https://www.anthropic.com/news/claude-is-a-space-to-think
youtu.be
February 13, 2026 at 6:00 AM
Prism (OpenAI) and Papers (Opennote) aren't just writing tools - they capture "cognitive signal": how researchers think, not just what they write.

Publisher workflows must adapt.

robotscooking.substack.com/p/prism-open...

#AIinResearch #OpenAI
Prism: OpenAI’s Play for Scientific Cognition
Why a free LaTeX editor isn’t about LaTeX at all
robotscooking.substack.com
February 12, 2026 at 12:01 PM
5% AI disclosure rate across 49,000 JAMA submissions. Maybe we're asking the wrong question. We should perhaps ask how AI was used and how accuracy was evaluated. Writing is thinking - outsourcing it entirely means something gets lost.

blogs.lse.ac.uk/impactofsoci...

#AIinResearch #ResponsibleAI
What we lose when we outsource scientific writing - LSE Impact
Writing is often said to be thinking. Based on a study of scientific writing practices, this post analyses what is lost when academic writing is outsourced.
blogs.lse.ac.uk
February 12, 2026 at 4:18 AM
The centre of gravity of global science has been moving East for 20 years, and the shift is accelerating. China has challenges, but most researchers there are doing legitimate work. Support them, don't isolate them.

www.nature.com/articles/d41...

#SciencePolicy #GlobalScience
The US is quitting 66 global agencies: what does it mean for science?
The United States is leaving some of the world’s oldest and most influential scientific networks involved in biodiversity research, climate science and conservation. Affected organizations tell…
www.nature.com
February 6, 2026 at 1:01 PM
A Science paper found that in 13% of special issues, guest editors authored over a third of the papers. It makes a strong case that publishers aren't enforcing their own rules. Isn't it time for automated editor-integrity checks as well?

www.science.org/content/arti...

#ResearchIntegrity
Some guest editors pack special issues with their own articles
Thousands have penned more than one-third of a journal issue, raising conflict-of-interest concerns
www.science.org
February 5, 2026 at 11:30 AM
Reviewers found "technobabble" in the same manuscript submitted to different journals. The author's profiles vanished when questioned. The big point: we need to distinguish legitimate AI use from outright fabrication.

retractionwatch.com/2026/01/15/t...

#ResearchIntegrity #AIinResearch
Technobabble papers by professor and editor under scrutiny
After we reached out to Eren Öğüt, his profiles at Google Scholar, ORCID and Frontiers’ Loop all vanished. The reviewer, a neuroscientist in Germany, was confused. The manuscript on her screen, des…
retractionwatch.com
February 4, 2026 at 2:30 PM
STM report on publisher efforts for research integrity mentions three pillars: capacity (teams and tech), practice (standards and protocols), and coordination (shared infrastructure). Dare I say the latter two need more attention?

stm-assoc.org/new-report-d...

#ResearchIntegrity #Publishing
New Report Documents Publisher Investment in Research Integrity Infrastructure - STM Association
THE HAGUE, Netherlands (January 13, 2026) – A new report, released today, offers the first collective look at the range of approaches scholarly publishers are deploying to tackle threats to research…
stm-assoc.org
February 4, 2026 at 9:01 AM
Should AI write peer reviews? ACS Nano's editors say no.

I agree where the AI use is irresponsible. Yet the reviewer shortage is real, and responsible AI is needed to scale up.

Also, the trust relationship between humans in peer review can't be transferred to machines.

pubs.acs.org/doi/10.1021/...

#AIinResearch
Peer Review and AI: Your (Human) Opinion Is What Matters
pubs.acs.org
February 3, 2026 at 7:30 AM
1 in 12 retracted papers could've been flagged from critical tweets. Yes, social media is a cesspool. But why not screen it for hidden integrity signals anyway? Shoot holes in my argument.

www.nature.com/articles/d41...

#ResearchIntegrity
Critical social-media posts linked to retractions of scientific papers
Online discussions can catch errors or fraud in articles that can be missed in peer review.
www.nature.com
February 2, 2026 at 2:01 PM
After ICLR, NeurIPS submissions now show AI fingerprints: 100 hallucinated references in 51 papers. arXiv is instituting endorsements for first-time submitters.

Along with researcher education, we need reliable tech to guide, not just detect and penalize.

www.theregister.com/2026/01/22/n...
AI conference's papers contaminated by AI hallucinations
100 vibe citations spotted in 51 NeurIPS papers show vetting efforts have room for improvement
www.theregister.com
February 2, 2026 at 6:00 AM
Publishers want AI disclosure. Authors mostly don't—fear of rejection, confusing guidelines, many don't realize Copilot counts. Detection tools won't fix this. Shouldn't we normalise responsible AI use rather than criminalise all AI use?

scholarlykitchen.sspnet.org/2026/01/27/w...

#ResponsibleAI
Why Authors Aren't Disclosing AI Use and What Publishers Should (Not) Do About It - The Scholarly Kitchen
Only a negligible percentage of authors seem to actually be disclosing their AI use. Here's why I think that's the case.
scholarlykitchen.sspnet.org
February 1, 2026 at 1:55 PM
Ask AI how to think, not what to think. Work with AI to figure out how to do things, not just ask it for answers. Human oversight remains key. Collaborative partnership, not unsupervised delegation.

www.nature.com/articles/d41...

#ResponsibleAI
AI can spark creativity — if we ask it how, not what, to think
Studies aiming to maximize human creativity demonstrate that people work best when buoyed up by others who show them new ways to innovate.
www.nature.com
January 29, 2026 at 2:02 PM
COPE says retraction notices can name individuals if investigations confirm responsibility. Meanwhile, institutions rarely uphold misconduct findings, and researchers find ways around accountability. The balance between scientific credit and responsibility needs more attention.
www.nature.com/articles/d41...

#ResearchIntegrity #COPE
Credit in research goes hand in hand with responsibility
Trust in science needs researchers, journals and institutions to correct the scientific record quickly and transparently when errors are found.
www.nature.com
January 29, 2026 at 4:00 AM
Three years ago ChatGPT changed everything. Now users are getting "Claude-pilled." Local file access, agentic workflows, subagents. I've been using Claude Code daily and it's the same shift we felt in 2022. If you haven't tried it, now is the time.

www.wsj.com/tech/ai/anth...

#ClaudeCode #AgenticAI
www.wsj.com
January 28, 2026 at 2:03 PM
The APC debate keeps heating up. NIH wants to crack down, CSP opened their books at $2,600 per article, EMBO proposes competitive grants for journals. Ginny Herbert's question - are we even measuring the right value?

scholarlykitchen.sspnet.org/2026/01/15/g...

#OpenAccess #APC #ScholarlyPublishing
Guest Post — Open Scholarship is Poised to Create More Value than Ever, but Are We Ready? - The Scholarly Kitchen
Today's guest blogger observes how advances in technology create unprecedented opportunities in open scholarship, and asks: Can incentive structures keep up?
scholarlykitchen.sspnet.org
January 28, 2026 at 6:00 AM
AI in peer review is mainstream, and the evidence keeps growing. Stanford launched a free Agentic Reviewer. Elsevier's LeapSpace went live with Emerald, IOP, NEJM, and Sage integrations. Purpose-built tools will prevail.

www.researchinformation.info/news/elsevie...

#AIinPeerReview #ScholarlyPublishing
Elsevier launches 'research-grade AI-assisted workspace' - Research Information
Publishers and societies join LeapSpace at launch, with 18+ million full-text research articles and books included
www.researchinformation.info
January 27, 2026 at 2:02 PM
AI-assisted scientists publish more, get cited more, advance faster. A study shows AI narrows research focus to established areas. We talk about AI bias in training data. This is AI bias in research direction and it should not be ignored.

www.nature.com/articles/d41...

#AIinResearch #AIBias
AI tools boost individual scientists but could limit research as a whole
Analyses of hundreds of thousands of papers in the natural sciences reveal a paradox: scientists who use AI tools produce more research but on a more confined set of topics.
www.nature.com
January 27, 2026 at 4:00 AM
Academia may have failed Wikipedia for too long, but AI is now taking its content without attribution. Wikimedia has inked licensing deals with big AI companies, as Reddit has. Curated research content needs such deals as well.

www.nature.com/articles/d41...

#Wikipedia #ContentLicensing #ScholarlyCommunication
The academic community failed Wikipedia for 25 years — now it might fail us
Artificial-intelligence systems are feeding on Wikipedia without giving back, and academic indifference is threatening the survival of what is arguably the most widely used reference work on the…
www.nature.com
January 26, 2026 at 2:02 PM
"Here is the mathematical logic of the spirit: If love is the quality of attention we pay something other than ourselves and hate is the veil of not understanding ourselves..." (1/2)
How to Be an Instrument of Kindness in a Harsh World: George Saunders on Unthinking the Mind, Unstorying the Self, and the 3 Antidotes to Your Suffering
Here is the mathematical logic of the spirit: If love is the quality of attention we pay something other than ourselves and hate is the veil of not understanding ourselves, then loving the world mo…
www.themarginalian.org
January 26, 2026 at 6:23 AM
Anthropic and OpenAI are pushing into healthcare. Google just pulled AI Overviews for some medical queries after accuracy concerns. The opportunity is real, but so is the risk.

www.nbcnews.com/tech/tech-ne...

#AIinHealthcare #Claude #MedComm
Anthropic joins OpenAI's push into health care with new Claude tools
www.nbcnews.com
January 26, 2026 at 4:00 AM
Haseeb Irfanullah proposes 100% AI-reviewed preprints are the future. aiXiv is already accepting AI-authored/reviewed papers. 53% of reviewers use AI. The shift is happening faster than governance but human oversight remains essential.

tsp.scione.com/cms/fulltext...

#Preprints #PeerReview
100% AI-Reviewed Preprints are the Future of Open Research
tsp.scione.com
January 25, 2026 at 2:00 PM
Back after a few weeks! Can you retract a paper from an LLM? Probably not - but filtering via metadata at the app level could help. I agree.

www.the-geyser.com/can-you-retr...

#ResearchIntegrity #AIinResearch
Can You Retract from an LLM?
Atomized, tokenized, and weighted, papers may not be addressable anymore
www.the-geyser.com
January 25, 2026 at 6:50 AM