Can Demircan
@candemircan.bsky.social
56 followers 260 following 8 posts
phd student in Munich, working on machine learning and cognitive science
Reposted by Can Demircan
taylorwwebb.bsky.social
LLMs have shown impressive performance in some reasoning tasks, but what internal mechanisms do they use to solve these tasks? In a new preprint, we find evidence that abstract reasoning in LLMs depends on an emergent form of symbol processing arxiv.org/abs/2502.20332 (1/N)
Emergent Symbolic Mechanisms Support Abstract Reasoning in Large Language Models
Many recent studies have found evidence for emergent reasoning capabilities in large language models, but debate persists concerning the robustness of these capabilities, and the extent to which they ...
Reposted by Can Demircan
mirkothm.bsky.social
Every experience is unique 🌟 light shifts, angles change, yet we recognize objects effortlessly. How do our minds do this? And (how) do they differ from machines? In our new preprint with @ericschulz.bsky.social, we review human generalization and compare it to machine generalization: osf.io/k6ect
Reposted by Can Demircan
lucaschubu.bsky.social
In previous work we found that VLMs fall short of human visual cognition. To make them better, we fine-tuned them on visual cognition tasks. We find that while this improves performance on the fine-tuning task, it does not lead to models that generalize to other related tasks:
Reposted by Can Demircan
marcelbinz.bsky.social
We are currently building the largest, cross-domain data set of human behavior as part of an open collaborative project. Contributions of any form are welcome, but especially experiments with meta-data from developmental, cross-cultural, or clinical studies.

More details: github.com/marcelbinz/P...
candemircan.bsky.social
Lastly, we found that previously established alignment methods do not consistently yield better results compared to non-aligned baselines.
candemircan.bsky.social
Several other factors were important for alignment, such as model size, how separated class representations were, and intrinsic dimensionality.
candemircan.bsky.social
We found that this cannot be fully attributed to pretraining data size in additional analyses.
candemircan.bsky.social
CLIP-style models predicted human choices the best across the tasks, suggesting multimodal pretraining is important for representational alignment.
candemircan.bsky.social
We tested humans on reward and category learning tasks using naturalistic images, where the underlying functions were generated using the THINGS embedding.
candemircan.bsky.social
Alignment is more than comparing similarity judgments! How well do pretrained neural networks align with humans in few-shot learning settings? Come check our poster #3904 at #NeurIPS on Wednesday to find out
candemircan.bsky.social
tried messaging you, but the app says you cannot be messaged
Reposted by Can Demircan
mgarvert.bsky.social
🚨Join our team! We’re hiring a PhD student in Cognitive & Clinical Neuroscience 🧠 🎓 at @uni_wue & @UKW_Wuerzburg! Explore mechanisms of decision-making in healthy people & Parkinson’s using new deep brain stimulation methods. German & English required. Apply by 20 Dec! 🌟🎄
Details: shorturl.at/IcNa0