🔥囧Robert Osazuwa Ness囧🔥
osazuwa.bsky.social

Probabilistic machine learning, causal inference, language models. Teach at http://Altdeep.ai & @Northeastern, work at @MSFTResearch.
newsletter.altdeep.ai/p/my-book-is... The connection between genAI and causality is obvious, but I could never find any good learning material that made the connection.

So I wrote a book
My Book Is Out! Why I Wrote It and How You Can Help
Bridging the Gap Between Deep Learning and Causal Inference—A Code-First Approach
newsletter.altdeep.ai
February 24, 2025 at 10:57 PM
Glad to hear this! Was hoping the 2nd chapter primer would hit but wasn't sure.
Happy to see that @osazuwa.bsky.social's book Causal AI from Manning is shipping my way. I have the ebook and the 2nd chapter Primer is like a "missing manual" connecting the lofty glossary worlds of stats books and things my pedestrian mind can grasp. www.manning.com/books/causal...
Causal AI
Build AI models that can reliably deliver causal inference. How do you know what might have happened, had you done things differently? Causal AI gives you the insight you need to make predictions...
www.manning.com
February 24, 2025 at 10:13 PM
My team at @MSFTResearch is seeking an intern interested in task-specific distillation of #largelanguagemodels. Join us! Apply now: jobs.careers.microsoft.com/global/en/jo... #AIInternship
November 29, 2023 at 10:39 PM
My team at MSR is hiring an intern to explore the intersection of structured probabilistic reasoning and LLMs, and generative AI in general. Touches on causal reasoning, Bayesian modeling, and probabilistic ML. Join us! jobs.careers.microsoft.com/global/en/jo... #AIResearch #Internship
November 29, 2023 at 10:14 PM
Reposted by 🔥囧Robert Osazuwa Ness囧🔥
I forget if I've already shared this but I'm so obsessed with this paper from the Toms (McCoy & Griffiths):

arxiv.org/abs/2305.14701
Modeling rapid language learning by distilling Bayesian priors...
Humans can learn languages from remarkably little experience. Developing computational models that explain this ability has been a major challenge in cognitive science. Bayesian models that build...
arxiv.org
November 1, 2023 at 8:59 PM
Anyone know of any work that evaluates the relationship between LLM prompting strategies and generalizability? E.g., if you apply a bunch of prompting hacks to ramp up accuracy on a benchmark, are you sacrificing that prompt's ability to generalize to new settings?
November 1, 2023 at 12:41 AM
I got #COVID19. I have one toddler. We're a two-parent household with external help, so I can self-quarantine and just wait to stop feeling shitty while my wife does the heavy lifting.

My heart weeps for parents who had to do this alone, especially during the pandemic.
October 17, 2023 at 11:09 AM