Lionel Yelibi @ neurips 2025
@spiindoctor.bsky.social
Research Scientist. Houston, TX.
Research interests: Complexity Sciences, Matrix Decomposition, Clustering, Manifold Learning, Networks, Synthetic (numerical) data, Portfolio optimization. 🇨🇮🇿🇦
Pinned
Weekend project on signal separation: How can we isolate a weak, nonlinear signal when it's mixed with a dominant, linear one?
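One standard way to frame this (a minimal sketch of a generic approach, not necessarily the weekend project's method; the toy data and amplitudes are my own assumptions): fit and subtract the dominant linear component with least squares, then look at the residual, where the weak nonlinear signal survives.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 500)

# Dominant linear trend plus a weak nonlinear (sinusoidal) signal.
linear = 5.0 * t
weak = 0.2 * np.sin(2 * np.pi * 7 * t)
y = linear + weak + 0.01 * rng.standard_normal(t.size)

# Project out the linear part with an ordinary least-squares fit;
# the residual is then dominated by the weak nonlinear signal.
A = np.vstack([t, np.ones_like(t)]).T
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
residual = y - A @ coef

# The residual correlates strongly with the hidden signal.
corr = np.corrcoef(residual, weak)[0, 1]
print(round(corr, 2))
```

This works because the sinusoid is nearly orthogonal to the linear basis over the window; when the nonlinear component leaks into the fit, more careful basis choices or iterative schemes are needed.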
Reposted by Lionel Yelibi @ neurips 2025
New study by @iaciac.bsky.social and co-authors, published in npj Science of Food, models global cuisines as networks of ingredient pairings, revealing unique culinary signatures and patterns, with AI models able to identify a cuisine from just a few recipes.
www.nature.com/articles/s41...
The networks of ingredient combinations as culinary fingerprints of world cuisines - npj Science of Food
www.nature.com
November 26, 2025 at 12:12 AM
They were never gone.
November 26, 2025 at 2:21 AM
Reposted by Lionel Yelibi @ neurips 2025
Yale University | Postdoc and PhD fellowships on linguistics, cognitive science, and AI - Application open on a rolling basis
📆 Nov 25, 2025
Overview of the Position: The Yale Department of Linguistics seeks candidates for a Postdoctoral Associate in Computational Linguistics, who would work under the guidance of Professor Tom McCoy. Applicants...
rtmccoy.com
November 24, 2025 at 2:55 PM
Reposted by Lionel Yelibi @ neurips 2025
Our team just released a comprehensive and accessible review of Signed Networks — two years in the making! Theory, methods, applications, all in one place. Feedback welcome.
arxiv.org/abs/2511.17247
Signed Networks: theory, methods, and applications
Signed networks provide a principled framework for representing systems in which interactions are not merely present or absent but qualitatively distinct: friendly or antagonistic, supportive or confl...
arxiv.org
November 24, 2025 at 10:08 AM
Just made quiet posters my main tab, the discover tab sucks.
November 24, 2025 at 5:05 AM
Reposted by Lionel Yelibi @ neurips 2025
Being Interdisciplinary feels like practicing non-attachment. Different disciplines come in & out of focus in waves, each a whole world. Engaging with philosophy gives access to different ontologies than engaging in neuroscience or AI. This helps us evaluate each field from within & outside itself.
November 23, 2025 at 11:35 PM
Actually no. You need confirmation. With Grok we have witnessed it. With other models you actually have to do this work instead of jumping to conclusions, which may be baseless.
If one model is transparently manipulated, you should assume the others are manipulated — just more skillfully.

Grok is sloppy about it.
Other companies are subtle about it.
The only difference is competence, not intent.
November 24, 2025 at 1:56 AM
Totally random, but now that I'm touching on Sturm-Liouville theory, learning about operators, and seeing the connection with the Schrödinger equation, I'm much more mature about linear algebra than I was in my 2011 undergrad quantum mechanics course. I feel like they throw a lot at students in that course 🫠
November 23, 2025 at 6:17 PM
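For anyone rusty on the connection the post mentions: the time-independent Schrödinger equation is itself a Sturm-Liouville problem. A sketch of the correspondence:

```latex
% General Sturm-Liouville form:
-\frac{d}{dx}\!\left(p(x)\,\frac{dy}{dx}\right) + q(x)\,y = \lambda\, w(x)\, y
% Time-independent Schrodinger equation:
-\frac{\hbar^2}{2m}\,\frac{d^2\psi}{dx^2} + V(x)\,\psi = E\,\psi
% Match: p(x) = \hbar^2/2m (constant), q(x) = V(x), w(x) = 1, \lambda = E.
% The Hamiltonian is thus a self-adjoint Sturm-Liouville operator, which
% is why its eigenvalues E are real and its eigenfunctions form an
% orthogonal basis.
```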
Reposted by Lionel Yelibi @ neurips 2025
I had a popular account with a valuable audience and my Twitter payout was $80 a month-ish, to the point where I disabled monetization instead of uploading my ID. Payouts are only material if you live in a developing country, so “guy in Nigeria posting right-wing Amerislop” has taken over the site.
Twitter pays people based on engagement (views, retweets, comments, etc). It appears that many MAGA accounts are based abroad and they use AI technology to generate low-effort rage bait.

My guess is that this will get worse as AI tech improves. For instance, fake videos of minorities doing crime.
November 23, 2025 at 3:31 PM
The discover tab on this app has way too much politics
November 23, 2025 at 8:46 AM
During my MSc I experimented with genetic algos and simulated annealing. Genetic algos were so slow, which is why I'm confused whenever they're mentioned alongside neural networks, given how scalable gradient-based optimization has been. I'm also a bystander in that arena, so...
Evolutionary Algorithms for optimizing LLM weights

Gradient descent and backpropagation have a lot of problems, alignment becomes a nightmare. Evolutionary algos fix this, but they don’t scale

A recent paper, EGGROLL, makes it computationally feasible to do now

www.alphaxiv.org/abs/2511.16652
November 23, 2025 at 8:41 AM
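For reference, the basic evolution-strategies update being discussed: estimate a search direction from random perturbations of the weights instead of backprop. This is a generic ES sketch, not the EGGROLL method from the linked paper; the toy loss and hyperparameters are my own.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss(w):
    # Toy objective with a known minimum at w = 3 everywhere.
    return float(np.sum((w - 3.0) ** 2))

w = np.zeros(10)
sigma, lr, pop = 0.1, 0.05, 64  # noise scale, step size, population

for _ in range(300):
    eps = rng.standard_normal((pop, w.size))       # random perturbations
    rewards = np.array([-loss(w + sigma * e) for e in eps])
    rewards -= rewards.mean()                      # baseline subtraction
    # ES gradient estimate: perturbations weighted by their rewards.
    w = w + lr / (pop * sigma) * eps.T @ rewards

print(round(loss(w), 3))
```

The catch the thread points at: each update needs `pop` full forward evaluations, and the gradient estimate's variance grows with dimension, which is exactly why naive ES struggles at LLM scale.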
You don't have to worry about where I am tweeting from.
November 23, 2025 at 8:00 AM
Need more tech crypto and finance bros on this platform. Badly.
November 23, 2025 at 1:26 AM
Reposted by Lionel Yelibi @ neurips 2025
Factor Learning Portfolio Optimization Informed by Continuous-Time Finance Models

Sinong Geng, Houssam Nassif, Zhaobin Kuang, Anders Max Reppen, K. Ronnie Sircar

Action editor: Reza Babanezhad Harikandeh

https://openreview.net/forum?id=KLOJUGusVE

#portfolio #finance #financial
November 21, 2025 at 5:18 AM
Reposted by Lionel Yelibi @ neurips 2025
One thing about PCA/embeddings/political-leaning that should get more attention is the role of zero or “the origin”. It’s often special in a way that depends upon how you do the embedding.

This post is a good example of that.

Once you accept that the origin is special, then….
November 20, 2025 at 6:28 PM
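To make the point concrete: in PCA the "origin" is wherever your preprocessing puts it, usually the data mean. A toy sketch (my own example, not from the thread) showing how centering changes the leading component:

```python
import numpy as np

rng = np.random.default_rng(0)
# Elongated 2-D cloud: most variance along axis 0.
X = rng.standard_normal((200, 2)) @ np.diag([3.0, 1.0])
X += np.array([0.0, 50.0])  # shift the cloud far from the raw origin

def top_component(M):
    # Leading right singular vector = first principal axis of M.
    _, _, vt = np.linalg.svd(M, full_matrices=False)
    return vt[0]

v_centered = top_component(X - X.mean(axis=0))  # origin at the mean
v_raw = top_component(X)                        # origin stays at zero

print(np.round(np.abs(v_centered), 2), np.round(np.abs(v_raw), 2))
```

With centering the top axis tracks the variance (axis 0); without it, the axis mostly points at the mean offset (axis 1). Embeddings that skip or alter centering inherit exactly this sensitivity to where zero sits.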
Timeline full of people losing their minds because markets are tanking.
November 20, 2025 at 7:31 PM
Reposted by Lionel Yelibi @ neurips 2025
𝗗𝘂𝗻𝗰𝗮𝗻 𝗪𝗮𝘁𝘁𝘀 & 𝗦𝘁𝗲𝘃𝗲𝗻 𝗦𝘁𝗿𝗼𝗴𝗮𝘁𝘇 will give their first-ever 𝗷𝗼𝗶𝗻𝘁 𝗸𝗲𝘆𝗻𝗼𝘁𝗲 at NetSci 2026! Their groundbreaking work has shaped how we understand networks, & this session will be a highlight of NetSci’s 20th anniversary.
Call for abstracts: tinyurl.com/3tbj2v83
Call for satellites: tinyurl.com/42sru6kz
November 20, 2025 at 7:22 PM
Reposted by Lionel Yelibi @ neurips 2025
What people are actually trying to say: Natural data distributions live near a low-dimensional tame geometric object. Neural networks can represent functions whose images are also tame. PS: This implies learning is possible because both sides reside in the same o-minimal universe.
Find someone who loves you more than ML people love saying manifold
November 20, 2025 at 2:46 AM
Slowly learning that if you want to be a theoretical physicist it's not a bad idea to befriend a pure mathematician.
November 19, 2025 at 7:46 PM
I will probably attend my first netsci next year.
🚨 𝗡𝗲𝘁𝗦𝗰𝗶 𝟮𝟬𝟮𝟲 𝗿𝗲𝗴𝗶𝘀𝘁𝗿𝗮𝘁𝗶𝗼𝗻 𝗶𝘀 𝗻𝗼𝘄 𝗼𝗽𝗲𝗻!
Secure your spot at the flagship conference of the Network Science Society and take advantage of the early bird registration special - www.netsci2026.com/registration
📅 June 1–5, 2026 | Hyatt Regency, Cambridge, MA
Join us as we celebrate 20 years of NetSci!
November 19, 2025 at 4:24 PM
I am really excited to try @waymo.bsky.social in Houston! Over the past 1-2 years I've seen Waymo and Cruise cars drive around downtown. It's good to have confirmation.
November 19, 2025 at 4:14 PM
Reposted by Lionel Yelibi @ neurips 2025
What happens when a network is neither perfectly ordered nor completely random? Watts & Strogatz's 1998 "small-world" insight: add a few random shortcuts to a local network and you keep the clustering but gain short paths.
𝙎𝙩𝙖𝙮 𝙩𝙪𝙣𝙚𝙙, 𝙮𝙤𝙪 𝙢𝙖𝙮 𝙨𝙚𝙚 𝙢𝙤𝙧𝙚 𝙨𝙢𝙖𝙡𝙡-𝙬𝙤𝙧𝙡𝙙 𝙣𝙚𝙩𝙬𝙤𝙧𝙠𝙨 𝙞𝙣 𝙩𝙝𝙚 𝙉𝙚𝙩𝙎𝙘𝙞 𝙥𝙧𝙤𝙜𝙧𝙖𝙢 𝙨𝙤𝙤𝙣…
November 19, 2025 at 3:51 PM
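The Watts-Strogatz recipe above can be sketched without any graph library (the parameters n, k, and the rewiring probability are toy choices of mine): build a ring lattice, rewire a small fraction of edges, and watch path lengths collapse while clustering stays high.

```python
import random
from collections import deque

def ring_lattice(n, k):
    # Each node links to its k nearest neighbors on each side.
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(1, k + 1):
            adj[i].add((i + j) % n)
            adj[(i + j) % n].add(i)
    return adj

def rewire(adj, p, rng):
    # Rewire each edge with probability p to a random new endpoint.
    n = len(adj)
    for i in list(adj):
        for j in list(adj[i]):
            if j > i and rng.random() < p:
                choices = [m for m in range(n) if m != i and m not in adj[i]]
                if choices:
                    m = rng.choice(choices)
                    adj[i].discard(j); adj[j].discard(i)
                    adj[i].add(m); adj[m].add(i)
    return adj

def avg_clustering(adj):
    total = 0.0
    for i, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            continue
        links = sum(1 for a in nbrs for b in nbrs if a < b and b in adj[a])
        total += 2.0 * links / (k * (k - 1))
    return total / len(adj)

def avg_path_length(adj):
    # BFS from every node; averages over reachable pairs.
    total, pairs = 0, 0
    for s in adj:
        dist, q = {s: 0}, deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

rng = random.Random(0)
n, k = 200, 4
lattice = ring_lattice(n, k)
c0, l0 = avg_clustering(lattice), avg_path_length(lattice)
small_world = rewire(ring_lattice(n, k), 0.05, rng)
c1, l1 = avg_clustering(small_world), avg_path_length(small_world)
print(l1 < l0, c1 > 0.3)  # shortcuts shrink paths; clustering stays high
```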
Shocker
The Epstein Files bill that passed today allows Pam Bondi to withhold or redact ANY material.
November 19, 2025 at 2:22 PM
Reposted by Lionel Yelibi @ neurips 2025
Stop by this afternoon to chat about community structure in the 🪰connectome!

📍ZZ2/PSTR368.03
November 18, 2025 at 4:39 PM
Who is Olivia Nuzzi and why is the timeline full of tweets about her
November 18, 2025 at 2:22 PM