Ksenia Se / Turing Post
@turingpost.bsky.social
86 followers 3 following 130 posts
Founder of the newsletter that explores AI & ML (https://www.turingpost.com) - AI 101 series - ML techniques - AI Unicorns profiles - Global dynamics - ML History - AI/ML Flashcards Haven't decided yet which handle to maintain: this or @kseniase
turingpost.bsky.social
• The amount of work an LLM can handle on the same hardware is growing even faster than the improvements in model density or chip power alone.

That's why researchers suggest focusing on improving "density" instead of just aiming for bigger and more powerful models.
turingpost.bsky.social
Here are the key findings from the study:

• Costs to run models are dropping as they are becoming more efficient.
• The release of ChatGPT accelerated the density growth of new models by up to 50%!
• Techniques like pruning and distillation don’t necessarily make models more efficient.
turingpost.bsky.social
Estimating the effective parameter size:

It's a two-step process (sketched in code below):

- Loss Estimation: Links a model's size and training data to its accuracy
- Performance Estimation: Uses a sigmoid function to predict how well a model performs based on its loss.
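
A minimal sketch of that two-step pipeline in Python. All constants are made-up placeholders (the paper fits them on a series of reference models), and effective_params simply inverts the two fitted curves:

```python
import numpy as np
from scipy.optimize import brentq

# Hypothetical fitted constants, for illustration only; the paper
# estimates these from a series of reference models.
A, ALPHA = 406.4, 0.34        # parameter term of the loss scaling law
B, BETA = 410.7, 0.28         # data term
E = 1.69                      # irreducible loss
C, GAMMA, L0 = 1.0, 5.0, 2.0  # sigmoid performance curve

def loss(N, D):
    # Step 1 (loss estimation): link size N and training tokens D to loss
    return A / N**ALPHA + B / D**BETA + E

def performance(L):
    # Step 2 (performance estimation): sigmoid mapping loss to a score
    return C / (1.0 + np.exp(GAMMA * (L - L0)))

def effective_params(score, D):
    # Invert step 2: the loss a reference model needs for this score...
    target_loss = L0 + np.log(C / score - 1.0) / GAMMA
    # ...then invert step 1: the reference size that reaches that loss
    return brentq(lambda N: loss(N, D) - target_loss, 1e6, 1e14)

N_actual, D = 2.4e9, 1e12  # e.g. a 2.4B model trained on 1T tokens
score = 0.36               # its measured benchmark score (made up)
N_eff = effective_params(score, D)
print(f"effective size ~ {N_eff:.3g} parameters")
```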
turingpost.bsky.social
Scaling law:

The density of a model is the ratio of its effective parameter size to its actual parameter size.

The higher this ratio, the more efficient the model: an effective size larger than the actual size means the model outperforms a reference model of the same parameter count.
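
In symbols (notation assumed here, with N-hat the effective and N the actual parameter count):

```latex
\rho(M) = \frac{\hat{N}(M)}{N(M)}
```

Continuing the sketch above, the hypothetical 2.4B model's density would be N_eff / 2.4e9.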
turingpost.bsky.social
Why is density important?

A higher-density model delivers better results without needing more resources: it reduces computational costs, fits devices with limited resources like smartphones, and avoids unnecessary energy use.
turingpost.bsky.social

Interestingly, they found a trend they call the Densing Law:

The capacity density of LLMs doubles approximately every 3.3 months, meaning that newer models are getting much better at balancing performance and size.

Let's look at this more precisely:
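
As a back-of-the-envelope formula (the doubling period is the paper's trend; the one-year figure is just arithmetic):

```latex
\rho_{\max}(t) \propto 2^{\,t/T}, \quad T \approx 3.3\ \text{months}
% over one year: 2^{12/3.3} \approx 12\times the capacity per parameter
```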
turingpost.bsky.social
Reading about scaling laws recently, I came across an interesting point:
Focusing on the balance between a model's size and performance matters more than simply aiming for larger models.

Tsinghua University and ModelBest Inc propose the idea of "capacity density" to measure how efficiently a model uses its size.
turingpost.bsky.social
2. AI system from World Labs, co-founded by Fei-Fei Li:

Transforms a single image into interactive 3D scenes with varied art styles and realistic physics. You can explore, interact with elements and move within AI-generated environments directly in your web browser

www.youtube.com/watch?v=lPYJ...
World Labs Unveils AI System That Transforms Single Images into Interactive 3D Worlds
YouTube video by Maginative
www.youtube.com
turingpost.bsky.social
1. GoogleDeepMind's Genie 2

Generates 3D environments with object interactions, animations, and physical effects from one image or text prompt. You can interact with them in real-time using a keyboard and mouse.

Blog: deepmind.google/discover/blo...

Our example: www.youtube.com/watch?v=YjO6...
turingpost.bsky.social
An incredible shift is happening in spatial intelligence!

Here are the 2 latest revolutionary world models, which create interactive 3D environments:

1. GoogleDeepMind's Genie 2

2. AI system from World Labs, co-founded by Fei-Fei Li

Explore more below 👇
turingpost.bsky.social
In our new AI 101 episode we discuss:

- FM concepts for optimizing the path from noise to realistic data
- Continuous Normalizing Flows (CNFs)
- Conditional Flow Matching
- How FM differs from diffusion models (sketched in code below)

Find out more: turingpost.com/p/flowmatching
Topic 20: What is Flow Matching?
Explore the key concepts of Flow Matching, its relation to diffusion models, and how it can enhance the training of generative models
turingpost.com
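
To make the objective concrete, here is a minimal sketch of Conditional Flow Matching with linear (rectified-flow-style) paths in PyTorch. VelocityNet and the toy batch are illustrative stand-ins, not anything from the article:

```python
import torch
import torch.nn as nn

class VelocityNet(nn.Module):
    """Stand-in velocity field v_theta(x_t, t); any net taking (x, t) works."""
    def __init__(self, dim=2, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, x_t, t):
        return self.net(torch.cat([x_t, t], dim=-1))

def cfm_loss(model, x1):
    """Conditional FM with linear paths: x_t = (1-t)*x0 + t*x1,
    target velocity u = x1 - x0; regress v_theta onto u."""
    x0 = torch.randn_like(x1)      # noise endpoint
    t = torch.rand(x1.size(0), 1)  # t ~ U[0, 1]
    x_t = (1 - t) * x0 + t * x1    # point on the conditional path
    u = x1 - x0                    # constant target velocity
    return ((model(x_t, t) - u) ** 2).mean()

# One toy training step on made-up 2-D "data"
model = VelocityNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x1 = torch.randn(256, 2) * 0.5 + 1.0
loss = cfm_loss(model, x1)
opt.zero_grad(); loss.backward(); opt.step()
print(f"CFM loss: {loss.item():.4f}")
```

Sampling then just integrates dx/dt = v_theta(x, t) from t=0 (noise) to t=1 (data), e.g. with a few Euler steps.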
turingpost.bsky.social
What is Flow Matching?

Flow Matching (FM) is used in top generative models like Flux, F5-TTS, E2-TTS, and MovieGen, with state-of-the-art results. Some experts even say that FM might surpass diffusion models👇
turingpost.bsky.social
INTELLECT-1 by Prime Intellect

INTELLECT-1 is a 10B open-source LLM trained over 42 days on 1T tokens across 14 global nodes. It leverages the PRIME framework for exceptional efficiency (400× bandwidth reduction).

github.com/PrimeIntelle...
turingpost.bsky.social
MultiFoley by Adobe Research

MultiFoley is an AI model that generates high-quality sound effects from text, audio, and video inputs. Cool demos highlight its creative potential.

arxiv.org/abs/2411.17698
turingpost.bsky.social
ShowUI by Show Lab, NUS, Microsoft

ShowUI is a 2B vision-language-action model tailored for GUI tasks:

- UI-guided token selection (33% fewer tokens)
- interleaved streaming for multi-turn tasks
- a 256K training dataset
- 75.1% zero-shot grounding accuracy

arxiv.org/abs/2411.17465
turingpost.bsky.social
OLMo 2 by Allen AI

OLMo 2 is a family of fully open LMs with 7B and 13B parameters, trained on up to 5 trillion tokens.

allenai.org/blog/olmo2
turingpost.bsky.social
Alibaba’s QwQ-32B

It impresses with strong math, coding, and reasoning scores, ranking between Claude 3.5 Sonnet and OpenAI’s o1-mini.

- Optimized for consumer GPUs through quantization
- Open-sourced under the Apache 2.0 license, with open weights

huggingface.co/Qwen/QwQ-32B...
Qwen/QwQ-32B-Preview · Hugging Face
We’re on a journey to advance and democratize artificial intelligence through open source and open science.
huggingface.co
turingpost.bsky.social
Amazing models of the week:

• Alibaba’s QwQ-32B
• OLMo 2 by Allen AI
• ShowUI by Show Lab, NUS, Microsoft
• Adobe's MultiFoley
• INTELLECT-1 by Prime Intellect

🧵
turingpost.bsky.social
Boundless Socratic Learning with Language Games, Google DeepMind

This framework leverages recursive language-based "games" for self-improvement, focusing on feedback, coverage, and scalability. It suggests a roadmap toward scalable AI via autonomous data generation and feedback loops.
arxiv.org/abs/2411.16905