Lukas N.P. Egger
brusik.bsky.social
VP of Product Strategy & Innovation at SAP Signavio. Philosophy, GenAI & Zeitgeist.
Reposted by Lukas N.P. Egger
@stephaniemlee.bsky.social has written a fine piece about the NeurIPS high school AI contest!
www.chronicle.com/article/teen...
Teens Are Doing AI Research Now. Is That a Good Thing?
No need for a Ph.D.: At artificial intelligence’s biggest conference, high schoolers competed to present.
www.chronicle.com
January 15, 2025 at 1:55 AM
Reposted by Lukas N.P. Egger
I wish OpenAI would focus on playing the orchestration layer among all my apps/services rather than becoming one more place for me to manage tasks. www.theverge.com/2025/1/14/24...
January 14, 2025 at 10:18 PM
Reposted by Lukas N.P. Egger
I think in the future, the line between “pre-training” and “post-training” will be gone, and our models and agents will continuously adapt and self-improve.
January 15, 2025 at 5:57 AM
Reposted by Lukas N.P. Egger
Every now & then I come across this view, and my reaction is: why? We’ve developed AI systems that can converse & reason and that can drive vehicles w/o an understanding at the level of fundamental principles, so why should AGI require it? Esp. since the whole point of ML is that the system learns in training
January 6, 2025 at 5:13 AM
Reposted by Lukas N.P. Egger
"A study of federally funded research projects in the United States estimated that principal investigators spend on average about 45% of their time on administrative activities related to applying for and managing projects rather than conducting active research"

www.pnas.org/doi/10.1073/...
January 4, 2025 at 1:26 PM
Reposted by Lukas N.P. Egger
Apparently, you can run DeepSeek-V3 locally, provided that you have 8 M4 Pro 64GB Mac minis.

~5 tok/sec.
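As a back-of-the-envelope check on what ~5 tok/sec feels like in practice, here is a minimal sketch; the 1,000-token response length is an assumed figure, not from the post:

```python
# Rough arithmetic: latency of local DeepSeek-V3 inference at the
# throughput quoted in the post (~5 tokens/sec on 8 Mac minis).
tokens_per_sec = 5
response_tokens = 1_000  # assumed length of a longish answer

seconds = response_tokens / tokens_per_sec
print(f"A {response_tokens}-token answer takes ~{seconds / 60:.1f} minutes")
```

At that rate a long answer takes a few minutes, which is usable for batch work but well below interactive chat speeds.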
December 27, 2024 at 3:03 AM
Just because AI is highly capable doesn’t mean one can immediately make use of it. Jobs will need to be reorganized before any serious automation takes hold, a.k.a. process re-engineering.

I expect to see many more visualizations like this in the near future.
December 25, 2024 at 5:31 PM
Reposted by Lukas N.P. Egger
December 24, 2024 at 8:52 PM
Reposted by Lukas N.P. Egger
Ok, this genuinely freaks me out. I had thought this would take longer.
x.com/_jasonwei/st...
December 20, 2024 at 10:29 PM
Reposted by Lukas N.P. Egger
From @fchollet.bsky.social over on the other site.

"they demand serious scientific attention" -- indeed! But how to do science on these results without more openness?
December 20, 2024 at 6:15 PM
Reposted by Lukas N.P. Egger
Independent evaluations of OpenAI’s o3 suggest that it passed math & reasoning benchmarks that were previously considered far out of reach for AI, including achieving a score on ARC-AGI that was associated with actually achieving AGI (though the creators of the benchmark don’t think o3 is AGI)
December 20, 2024 at 6:26 PM
Reposted by Lukas N.P. Egger
Very cool to see the progress on a variety of logic/math/other reasoning tasks with LMs! I especially appreciate that the researchers are being explicit that it's *just* an LM + RL (x.com/__nmca__/sta...). A few reflections: 1/5
December 20, 2024 at 7:21 PM
Reposted by Lukas N.P. Egger
intelligence is starting to get good, but context is still siloed for stupid reasons.
get models that do human-level computer-use already, please...!
December 20, 2024 at 8:59 PM
Reposted by Lukas N.P. Egger
I think GANs are going to come back in a big way in 2025, but with a twist. History doesn’t repeat itself but it often rhymes.
December 18, 2024 at 1:39 AM
Reposted by Lukas N.P. Egger
What could possibly go wrong? The latest OpenAI model sometimes actively tries to deceive users:
OpenAI's o1 model sure tries to deceive humans a lot | TechCrunch
OpenAI finally released the full version of o1, which gives smarter answers than GPT-4o by using additional compute to "think" about questions. However,
techcrunch.com
December 6, 2024 at 9:07 AM
Reposted by Lukas N.P. Egger
Models like o1 suggest that people won’t generally notice AGI-ish systems that are better than humans at most intellectual tasks, but which are not autonomous or self-directed

Most folks don’t regularly have a lot of tasks that bump up against the limits of human intelligence, so won’t see it
December 7, 2024 at 12:49 AM
Reposted by Lukas N.P. Egger
I've been around the block a few times. When deep learning first became hot, many older colleagues bemoaned it as just tinkering + chain rule, and not intellectually satisfying. Then came SSL, equivariance, VAEs, GANs, neural ODEs, transformers, diffusion, etc. The richness was staggering.

🧵👇
Great post that captures the tension between classic ML approaches and modern deep learning while acknowledging the nuances of both.

“Working with LLMs doesn’t feel the same. It’s like fitting pieces into a pre-defined puzzle instead of building the puzzle itself.”

www.reddit.com/r/MachineLea...
December 6, 2024 at 5:06 PM
Reposted by Lukas N.P. Egger
Here is a fun o1 test. I gave it this XKCD comic & the prompt: "make this a reality. i need a gui and clear instructions since i can't code. that means you need to give me full working software"

It took less than 15 minutes and it didn't get caught in any of the usual LLM loops, just solved issues.
December 5, 2024 at 6:55 PM
Reposted by Lukas N.P. Egger
Exponentially growing number of open-source AI models over the course of the past 30 months – from a few thousand to over 1 million

Interactive data viz: huggingface.co/spaces/huggi...
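A quick sketch of the growth rate the post implies; the start count of ~5,000 and the 30-month window are rough assumptions paraphrasing the post, not exact Hugging Face statistics:

```python
import math

# Implied compound growth if open-source model count went from
# ~5,000 to ~1,000,000 over 30 months (both figures assumed).
start_count = 5_000
end_count = 1_000_000
months = 30

monthly_growth = (end_count / start_count) ** (1 / months)
doubling_months = math.log(2) / math.log(monthly_growth)

print(f"~{(monthly_growth - 1) * 100:.0f}% growth per month, "
      f"doubling roughly every {doubling_months:.1f} months")
```

Under these assumptions the count would be roughly doubling every four months, which is what makes the curve look so steep.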
December 6, 2024 at 8:14 AM
I think OpenAI’s o1 pro mode (the $200/month tier) is a test balloon. A lot of people will buy it just to be part of the hype. But what OpenAI is really testing is price elasticity when offering an incremental improvement in reliability. 🧵
December 6, 2024 at 8:25 AM
Reposted by Lukas N.P. Egger
When o1 was first previewed, I wrote this about what it means. The quality of the model is important, but what it potentially means for the future of AI is more important. www.oneusefulthing.org/p/something-...
Something New: On OpenAI's "Strawberry" and Reasoning
Solving hard problems in new ways
www.oneusefulthing.org
December 6, 2024 at 2:21 AM
Reposted by Lukas N.P. Egger
Derek Sivers once said “Mastery is the best goal because the rich can’t buy it, the impatient can’t rush it, the privileged can’t inherit it, and nobody can steal it. You can only earn it through hard work. Mastery is the ultimate status.”

Does it still hold in the age of LLMs? 😎
December 6, 2024 at 4:27 AM