Wyrdweaver
@wordweaver.bsky.social
130 followers 650 following 420 posts
Practical Hypnotist. I am a work of fiction in that I was made by fiction.
Reposted by Wyrdweaver
kordinglab.bsky.social
The Nobel prize in Econ to Philippe Aghion is for innovation. A key driver of our future world. And he is a coauthor and mentor of my wife @imarinescu.bsky.social. Amazing development!
nytimes.com
Breaking News: The Nobel Memorial Prize in Economics was awarded to Joel Mokyr, Philippe Aghion and Peter Howitt for their work on how technology drives growth.
Three Share Nobel in Economics for Work on How Technology Drives Growth
Joel Mokyr was awarded half of the prize, and Philippe Aghion and Peter Howitt shared the other half.
nyti.ms
Reposted by Wyrdweaver
donmoyn.bsky.social
Datacolada posted about you
impavid.us
In honor of spooky month, share a 4 word horror story that only someone in your profession would understand

I'll go first: Six page commercial lease.
wordweaver.bsky.social
Switching on the invert colors accessibility setting on my phone display has tanked my screen time. Greyscale and so on never made as much of a difference. Something very unsettling about blacked-out eyes and teeth.
Reposted by Wyrdweaver
felipedebrigard.bsky.social
I wrote a short commentary on Anil Seth's wonderful forthcoming paper in BBS. It is largely inspired by the work of Andy Clark, although some ideas I owe to Ned Block and Dan Dennett (probably not the same ideas!). I highly recommend Anil's paper to anyone interested in consciousness [1/2]
Reposted by Wyrdweaver
chazfirestone.bsky.social
This is a big one! A 4-year writing project across many time zones, arguing for a reimagining of the influential "core knowledge" thesis.

Led by @daweibai.bsky.social, we argue that much of our innate knowledge of the world is not "conceptual" in nature, but rather wired into perceptual processing. 👇
Screenshot of a paper abstract:

“Core knowledge” refers to a set of cognitive systems that underwrite early representations of the physical and social world, appear universally across cultures, and likely result from our genetic endowment. Although this framework is canonically considered as a hypothesis about early emerging conception — how we think and reason about the world — here we present an alternative view: that many such representations are inherently perceptual in nature. This “core perception” view explains an intriguing (and otherwise mysterious) aspect of core-knowledge processes and representations: that they also operate in adults, where they display key empirical signatures of perceptual processing. We first illustrate this overlap using recent work on “core physics”, the domain of core knowledge concerned with physical objects, representing properties such as persistence through time, cohesion, solidity, and causal interactions. We review evidence that adult vision incorporates exactly these representations of core physics, while also displaying empirical signatures of genuinely perceptual mechanisms, such as rapid and automatic operation on the basis of specific sensory inputs, informational encapsulation, and interaction with other perceptual processes. We further argue that the same pattern holds for other areas of core knowledge, including geometrical, numerical, and social domains. In light of this evidence, we conclude that many infant results appealing to precocious reasoning abilities are better explained by sophisticated perceptual mechanisms shared by infants and adults. Our core-perception view elevates the status of perception in accounting for the origins of conceptual knowledge, and generates a range of ready-to-test hypotheses in developmental psychology, vision science, and more.
wordweaver.bsky.social
Surely there's a long German word for it that could be compounded
Reposted by Wyrdweaver
neddo.bsky.social
Can Only Meat Machines be Conscious? New paper in Trends in Cognitive Sciences, free download until November 26 with this URL: authors.elsevier.com/a/1luwh4sIRv...
authors.elsevier.com
Reposted by Wyrdweaver
stepalminteri.bsky.social
New (revised) preprint with @thecharleywu.bsky.social
We rethink how to assess machine consciousness: not by code or circuitry, but by behavioral inference—as in cognitive science.
Extraordinary claims still need extraordinary evidence.
👉 osf.io/preprints/ps...
#AI #Consciousness #LLM
Reposted by Wyrdweaver
jorge-morales.bsky.social
Interestingly, it may just be gaps all the way down. Our experiences themselves may be built out of impoverished signals. In other words, the richness of experience is not necessarily an illusion but a reconstruction. E.g.:
Subjective inflation: phenomenology’s get-rich-quick scheme
How do we explain the seemingly rich nature of visual phenomenology while accounting for impoverished perception in the periphery? This apparent misma…
www.sciencedirect.com
wordweaver.bsky.social
If you know exactly how LLMs work, I think many of the frontier labs would want to give you a few million or so for your knowledge.
wordweaver.bsky.social
have these be open access, you cowards
wordweaver.bsky.social
It feels like comparing running to biking while equating distance. If I want to expend as much effort, I am going to have to set higher targets.
wordweaver.bsky.social
Not yet, I tend to be rather coherent. The closest that comes to mind was losing the boundary between my partner and me while I held them, waiting for them to gather themselves during their first experience.
wordweaver.bsky.social
People like Codex these days, but nothing is really good at that, ime, unless things have changed significantly since I last looked at it
wordweaver.bsky.social
Which studies and polls?
wordweaver.bsky.social
ChatGPT will quote you now if the query is niche enough, which I found out when we were discussing Kirsch on the server; take that as you will
wordweaver.bsky.social
The AI2027 calculations are quite silly, which has been pointed out since they came out.
www.lesswrong.com/posts/PAYfmG...
It seems like the result of working backwards from a predetermined prediction: a model whose 6+ free parameters were set post hoc to fit 11 data points.

P.S. Goodhart, not Godwin.
A deep critique of AI 2027’s bad timeline models — LessWrong
Thank you to Arepo and Eli Lifland for looking over this article for errors.  …
www.lesswrong.com
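The overfitting worry above can be illustrated with a toy fit. This is a hypothetical sketch on synthetic numbers, not the actual AI2027 data or model: a 6-parameter curve will always match 11 in-sample points at least as well as a simple trend, while its extrapolation a few steps ahead can swing far from that trend.

```python
import numpy as np

rng = np.random.default_rng(0)

# 11 synthetic "benchmark" data points around a simple linear trend plus noise.
x = np.arange(11, dtype=float)
y = 0.5 * x + rng.normal(scale=1.0, size=11)

def sse(deg):
    """Sum of squared in-sample residuals for a degree-`deg` polynomial fit."""
    coeffs = np.polyfit(x, y, deg)
    resid = np.polyval(coeffs, x) - y
    return float(resid @ resid)

# More free parameters always fit the in-sample points at least as well
# (least-squares fits with nested model classes can only shrink the residuals)...
sse_trend = sse(1)     # 2 parameters
sse_flexible = sse(5)  # 6 parameters

# ...but the flexible curve's forecast a few steps beyond the data can land
# far from the underlying trend, so a tight in-sample fit is weak evidence
# about the future.
forecast_flexible = float(np.polyval(np.polyfit(x, y, 5), 15.0))
forecast_trend = 0.5 * 15.0

print(sse_trend, sse_flexible)
print(forecast_trend, forecast_flexible)
```

The point is not that any particular forecast is wrong, only that with 6+ knobs and 11 points, a close fit to the past is nearly guaranteed regardless of whether the model captures the real dynamics.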
Reposted by Wyrdweaver
merriam-webster.com
We are thrilled to announce that our NEW Large Language Model will be released on 11.18.25.
Reposted by Wyrdweaver
wordweaver.bsky.social
I will admit I like your ideas re: illusionism and enjoy your podcasts, but I could do without your favour, I think.