Koki Ikeda
@kokiikeda.bsky.social
Research fellow at Meiji Gakuin University in Tokyo.
https://sites.google.com/view/dlpsychology/home
Reposted by Koki Ikeda
I had intended to post something about this new Google DeepMind paper that appeared yesterday in Nature, but the press coverage has added to what there is to say. So this is a long 🧵
www.nature.com/articles/s41...
Advancing regulatory variant effect prediction with AlphaGenome - Nature
AlphaGenome, a deep learning model that inputs 1-Mb DNA sequence to predict functional genomic tracks at single-base resolution across diverse modalities, outperforms existing models in variant effect...
www.nature.com
January 30, 2026 at 9:47 AM
Reposted by Koki Ikeda
If you tell an AI to convince someone of a true vs. false claim, does truth win? In our *new* working paper, we find...

‘LLMs can effectively convince people to believe conspiracies’

But telling the AI not to lie might help.

Details in thread
January 20, 2026 at 3:00 PM
Reposted by Koki Ikeda
An attempt to express how I principally use LLMs.

Rotating the Space: On LLMs as a Medium for Thought
sbgeoaiphd.github.io/rotating_the...
January 16, 2026 at 5:16 PM
Reposted by Koki Ikeda
How complex should network models be?

🚨 In our latest paper we quantify (if and) when higher-order interactions are informative versus reducible to pairwise structure without losing functional signal (e.g., diffusion behavior).

👉 www.nature.com/articles/s41...

1/
January 15, 2026 at 1:36 PM
Reposted by Koki Ikeda
What if animals emerged by installing a new biological operating system that repurposed what already existed, much like the rise of the smartphone? Here's our new paper in @embojournal.org @ibe-barcelona.bsky.social @melisupf.bsky.social @sfiscience.bsky.social link.springer.com/article/10.1...
January 15, 2026 at 11:00 AM
Reposted by Koki Ikeda
We keep saying: "AI will handle the boring stuff, and humans will supervise." But the problem is that as AI reliability improves, it becomes really hard to motivate a human to conscientiously monitor it.

In a new WP with Gerard Cachon, we describe the "human-AI contracting paradox."
January 7, 2026 at 4:35 PM
Reposted by Koki Ikeda
📄 Measuring Intrinsic Dimension of Token Embeddings (2025) arxiv.org/abs/2503.02142
📄 Do We Really Need All Those Dimensions?
An Intrinsic Evaluation Framework for Compressed Embeddings (EMNLP 2025) aclanthology.org/anthology-fi...
Measuring Intrinsic Dimension of Token Embeddings
In this study, we measure the Intrinsic Dimension (ID) of token embedding to estimate the intrinsic dimensions of the manifolds spanned by the representations, so as to evaluate their redundancy quant...
arxiv.org
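The papers above measure the intrinsic dimension (ID) of token embeddings. Neither paper's code is reproduced here, but the general idea can be sketched with the TwoNN estimator (Facco et al.), a standard ID method that may differ from the papers' exact approach: for each point, take the ratio of its second- to first-nearest-neighbor distance, which follows a Pareto law whose shape parameter is the intrinsic dimension.

```python
import numpy as np

def two_nn_id(X: np.ndarray) -> float:
    """Estimate intrinsic dimension via the TwoNN method.

    For each point, mu = r2 / r1 (distances to the two nearest
    neighbors). Under the TwoNN model, mu ~ Pareto(d), so the
    maximum-likelihood estimate of d is N / sum(log(mu)).
    """
    # Full pairwise Euclidean distance matrix (fine for small N).
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(D, np.inf)          # exclude self-distances
    r = np.sort(D, axis=1)[:, :2]        # two nearest-neighbor distances
    mu = r[:, 1] / r[:, 0]               # neighbor distance ratios
    return len(mu) / np.log(mu).sum()    # Pareto shape MLE

# Points on a 2-D plane embedded in 10-D ambient space:
rng = np.random.default_rng(0)
X = np.zeros((800, 10))
X[:, :2] = rng.uniform(size=(800, 2))
print(two_nn_id(X))  # close to 2, far below the ambient dimension 10
```

Applied to real token embeddings, an ID far below the embedding width is exactly the redundancy signal the first paper quantifies.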
January 3, 2026 at 10:21 PM
Reposted by Koki Ikeda
A new, long paper on evolution - natural induction - split into 2:

royalsocietypublishing.org/rsfs/article...

royalsocietypublishing.org/rsfs/article...

@RichardWatson90 and Tim Lewens
Evolution by natural induction
Abstract. It is conventionally assumed that all evolutionary adaptation is produced, and could only possibly be produced, by natural selection. Natural ind
royalsocietypublishing.org
December 19, 2025 at 11:49 PM
Reposted by Koki Ikeda
I know some of you have strong views on LLMs and might not agree with me on this, but if you genuinely value diversity in academia, e.g., in welcoming neurodivergent researchers and non-native speakers, then I think you should acknowledge the positive influence AI can have in fostering inclusivity.
On the plus side, LLMs also reduce barriers for non-native speakers, facilitate the discovery of prior literature, and remove traditional signals of scientific quality such as language complexity. www.science.org/doi/10.1126/...
December 29, 2025 at 10:28 AM
Reposted by Koki Ikeda
Happy to share my new paper w/ @cgershen.bsky.social, just published at @royalsocietypublishing.org Interface!

Open Access🔓: royalsocietypublishing.org/rsif/article...

Instead of proposing a new theory, we offer a synthesis in theoretical biology. Want to know more? Read the full thread./1 👇🧵
Closing the loop: how semantic closure enables open-ended evolution?
Abstract. This study explores the evolutionary emergence of semantic closure—the self-referential mechanism through which symbols actively construct and in
royalsocietypublishing.org
December 22, 2025 at 3:19 PM
Reposted by Koki Ikeda
Benchmarks from historians show that AI transcription of handwriting is now better than human transcription, and a very cheap model is as good as people.

There are now massive troves of documents that could be made available for research that would have been impossible or prohibitive to transcribe before.
December 18, 2025 at 4:31 PM
Reposted by Koki Ikeda
🚨 What if evolution is the “law”… and networks are the machines that do the work?

In this paper (just published) I try to formalize how living systems are non-equilibrium, information-processing, adaptive matter. With a great biological flavor! 🧪🌐🌍🧬🦠

👉 iopscience.iop.org/article/10.1...

🧵 1/
December 16, 2025 at 9:04 AM
Reposted by Koki Ikeda
Centuries of ontological dualisms (some still permeating recent literature) have cast goal-directedness as something mystical. 🔮

It’s time to naturalize this concept and unpack its relationship to agency!/1

#complexitycat 😼

www.complexitycat.org/posts/goal-d...
Review The meaning and origin of goal-directedness: a dynamical systems perspective
Goal orientation is perhaps one of the most intriguing corollaries of living systems. Can we naturalize a concept that for centuries has been treated as something beyond reductionist explanations? Tod...
www.complexitycat.org
November 28, 2025 at 2:33 PM
Reposted by Koki Ikeda
Beyond networks: Toward adaptive models of biological complexity
It discusses how standard network models miss key aspects of brain complexity, with some more radical points toward the end.
I wrote the paper with younger researchers in mind, who may be more open to new ideas :-)
#neuroskyence
doi.org/10.1016/j.pl...
December 8, 2025 at 6:13 PM
Reposted by Koki Ikeda
We have no idea what alien intelligences are like (or whether they even exist). What little we know about what counts as “alien” or “intelligent” is put in stark relief if AI is truly the most exotic ‘intelligence’ we can imagine, given that these systems are a mirror of the human niche.
December 14, 2025 at 5:58 PM
Reposted by Koki Ikeda
I recommend reading this piece as well as her book. One thing the piece made me think about is that behaviour geneticists do not really study many traits that are a good fit for the Waddington landscape metaphor in the picture.
December 6, 2025 at 1:56 PM
Reposted by Koki Ikeda
Let me tell you a story. Perhaps you can guess where this is going... though it does have a bit of a twist.

I was poking around Google Scholar for publications about the relationship between chatbots and wellness. Oh how useful: a systematic literature review! Let's dig into the findings. 🧵
December 5, 2025 at 10:35 PM
Reposted by Koki Ikeda
Thoughtful review with some good recent historical perspective on the ongoing paradigm shift that is radically changing the way we think about what brain areas do.

www.nature.com/articles/s41...
How distributed is the brain-wide network that is recruited for cognition? - Nature Reviews Neuroscience
Both localized and distributed views on the functional organization of the brain have been put forward. In this Perspective, Rosen and Freedman examine the degree to which these two views account for ...
www.nature.com
December 4, 2025 at 5:56 PM
Reposted by Koki Ikeda
🚨 New in Nature+Science!🚨
AI chatbots can shift voter attitudes on candidates & policies, often by 10+pp
🔹Exps in US, Canada, Poland & UK
🔹More “facts”→more persuasion (not psych tricks)
🔹Increasing persuasiveness reduces "fact" accuracy
🔹Right-leaning bots=more inaccurate
December 4, 2025 at 8:43 PM
Reposted by Koki Ikeda
Networks are #complex and their dynamics often look chaotic. But we can reconstruct latent spaces where their behavior becomes strikingly regular, revealing functional organization across biology, society and technology.

How? 👉 rdcu.be/eSsqn 🧪🧠🦠🧬🌐

Kudos to Andrea, @dzanc.bsky.social & Sebastiano!
December 2, 2025 at 10:18 AM
Reposted by Koki Ikeda
It's easy to see shoddy research as a bad actor problem. But if AI slop like this can make it through editors and peer reviewers, it means there are systemic problems at work. And I'd argue that at least part of the problem is the overwork culture in academia: pressure to do more while caring less.
"Runctitiononal features"? "Medical fymblal"? "1 Tol Line storee"? This gets worse the longer you look at it. But it's got to be good, because it was published in Nature Scientific Reports last week: www.nature.com/articles/s41... h/t @asa.tsbalans.se
November 28, 2025 at 5:18 PM
This weekend, Sunday 11/30, I will give a talk at the Human Behavior and Evolution Society of Japan titled "A Theory of Persuasive AI: The Attractor Hypothesis of Belief Systems." See below for details.
sites.google.com/view/dlpsych...
Deep Learning Psychology - HBES-J 2025
Human Behavior and Evolution Society of Japan 2025 Conference, oral presentation: A Theory of Persuasive AI: The Attractor Hypothesis of Belief Systems
sites.google.com
November 25, 2025 at 7:49 AM
Reposted by Koki Ikeda
“We can no longer trust that survey responses are coming from real people.”
A Researcher Made an AI That Completely Breaks the Online Surveys Scientists Rely On
“We can no longer trust that survey responses are coming from real people.”
www.404media.co
November 17, 2025 at 8:15 PM