Justin Salamon
@justinsalamon.bsky.social
330 followers · 130 following · 15 posts
Head of Sound Design AI Research at Adobe. Machine learning and signal processing for audio & video. Musician. He/him. www.justinsalamon.com
justinsalamon.bsky.social
FLAM is trained jointly on instance (global) and frame-wise (local) objectives.

The secret sauce: a memory-efficient, calibrated frame-wise objective with logit adjustment to address spurious correlations, such as event dependencies and label imbalance, during training.
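A rough sketch of the idea in PyTorch: a CLAP-style global contrastive term plus a frame-wise binary term whose logits are shifted by a label prior during training (standard logit adjustment). All names, shapes, and the 50/50 weighting are assumptions, and FLAM's memory-efficient implementation is not reproduced here.

```python
# Sketch only: joint instance (global) + frame-wise (local) objective with
# logit adjustment. Hypothetical shapes/names, not the actual FLAM code.
import torch
import torch.nn.functional as F

def joint_loss(audio_emb, text_emb,      # (B, D) pooled (instance-level) embeddings
               frame_logits,             # (B, T) per-frame logits for each paired prompt
               frame_labels,             # (B, T) binary frame-level targets
               log_prior_odds,           # (B,)  log-odds of each prompt's event in the training data
               tau=0.07, alpha=0.5):
    # Instance (global) objective: symmetric contrastive loss over the batch.
    sim = audio_emb @ text_emb.t() / tau                    # (B, B)
    targets = torch.arange(sim.size(0), device=sim.device)
    loss_global = 0.5 * (F.cross_entropy(sim, targets) +
                         F.cross_entropy(sim.t(), targets))

    # Frame-wise (local) objective with logit adjustment: shift logits by the
    # label prior during training and drop the shift at inference, so raw
    # outputs stay calibrated despite imbalanced / co-occurring events.
    adjusted = frame_logits + log_prior_odds.unsqueeze(1)   # (B, T)
    loss_local = F.binary_cross_entropy_with_logits(adjusted, frame_labels.float())

    return alpha * loss_global + (1 - alpha) * loss_local
```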
justinsalamon.bsky.social
Enter FLAM: Frame-Wise Language-Audio Modeling.

A model trained to produce a calibrated likelihood for *any* text prompt.

FLAM outperforms prior self-supervised models on both closed-set and open-set SED, while preserving strong retrieval and zero-shot classification accuracy
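Why calibration pays off downstream, sketched with a hypothetical `model.frame_probs` API (not the released FLAM interface): one fixed threshold turns per-frame probabilities into event segments for *any* prompt.

```python
# Hedged sketch: with calibrated frame probabilities, a single threshold
# yields (onset, offset) detections regardless of the text query.
# hop_s is an assumed frame hop, not FLAM's actual setting.
import numpy as np

def detect_events(frame_probs, threshold=0.5, hop_s=0.04):
    """Turn per-frame probabilities (T,) into (onset, offset) segments in seconds."""
    active = frame_probs >= threshold
    edges = np.diff(active.astype(int), prepend=0, append=0)  # rising/falling edges
    onsets = np.where(edges == 1)[0]
    offsets = np.where(edges == -1)[0]
    return [(on * hop_s, off * hop_s) for on, off in zip(onsets, offsets)]

# Hypothetical usage:
#   probs = model.frame_probs(audio, "dog barking")   # calibrated, in [0, 1]
#   detect_events(probs)  # same 0.5 threshold, whatever the prompt
```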
justinsalamon.bsky.social
Our goal is for the model to detect *any* sound via free-form text queries.

"So use CLAP", some of you will say.

The problem is its output likelihoods are not calibrated for different prompts :(

That's OK for ranked retrieval, but for detection it's a no-go.
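A toy illustration of the problem, with made-up numbers (illustrative only, not measured from CLAP): within each prompt the ranking is correct, but the score scale shifts between prompts, so no single detection threshold works.

```python
# Per prompt, the matching clip scores higher (ranking works). Across
# prompts, the scales differ, so one global threshold fails (detection breaks).
scores = {
    "dog barking":    {"clip_with_dog": 0.38,   "clip_without": 0.21},
    "glass breaking": {"clip_with_glass": 0.19, "clip_without": 0.07},
}

threshold = 0.25  # tuned so "dog barking" detections look perfect...
for prompt, clips in scores.items():
    for clip, s in clips.items():
        print(f"{prompt!r:18} {clip:18} score={s:.2f} detected={s >= threshold}")

# The positive "glass breaking" clip (0.19) falls below a threshold that
# works for "dog barking": fine for ranking, broken for detection.
```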
justinsalamon.bsky.social
Sound Event Detection (SED) models, i.e. models that find sounds in audio/video recordings, are typically constrained to a predefined "closed" set of sounds, like in this (old!) model below for urban sound detection.

It has some applications, but it doesn't address general purpose sound search.
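For contrast, a minimal sketch (PyTorch; names, shapes, and the class list are illustrative) of the classic closed-set formulation: a frame-wise multi-label classifier over a fixed vocabulary chosen at training time.

```python
# Closed-set SED sketch: one output unit per known class, nothing else.
import torch
import torch.nn as nn

CLASSES = ["car_horn", "dog_bark", "jackhammer", "siren"]  # fixed vocabulary

class ClosedSetSED(nn.Module):
    def __init__(self, n_mels=64, hidden=128, n_classes=len(CLASSES)):
        super().__init__()
        self.rnn = nn.GRU(n_mels, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)  # one logit per known class

    def forward(self, mel):                 # mel: (B, T, n_mels)
        h, _ = self.rnn(mel)
        return self.head(h).sigmoid()       # (B, T, n_classes) frame activations

# A sound outside CLASSES has no output unit at all, hence "closed set": the
# model can't be queried for anything it wasn't trained to label.
```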
justinsalamon.bsky.social
I think we finally cracked it? FLAM can detect *any* sound via text prompts

arXiv (ICML'25): arxiv.org/abs/2505.053...
demos: flam-model.github.io

Led by Yusong Wu, with @tsirif.bsky.social, Ke Chen, Cheng-Zhi Anna Huang, Aaron Courville, @urinieto.bsky.social, and @pseeth.bsky.social
justinsalamon.bsky.social
Generative Extend in Premiere Pro just won *five* awards at NAB 2025, including the NAB Show Product of the Year award! SODA, our group, created the audio GenAI model that powers the audio extensions in the feature. Couldn't be more proud of the team!
w/ @urinieto.bsky.social @pseeth.bsky.social
justinsalamon.bsky.social
Generative Extend just released in Premiere Pro! Use GenAI to extend your video *and audio* clips for a perfectly timed edit.

The audio model was built by our team, the Sound Design AI (SODA) group at Adobe Research w/ @pseeth.bsky.social and @urinieto.bsky.social 🙌

www.youtube.com/watch?v=_Bv5...
Generative Extend | Premiere Pro 2025 Updates | Adobe Video
justinsalamon.bsky.social
We didn't expect this... our Sketch2Sound demo video has gone viral on IG with more than 5.2 million views 🤯

Amazing job @hugofloresgarcia.bsky.social @pseeth.bsky.social @urinieto.bsky.social

I should've done my hair...
www.instagram.com/reel/DEEBRhd...
justinsalamon.bsky.social
Really cool to see our AI for bioacoustics work in the MIT Technology Review! BirdVox was a fantastic project that, beyond posing a fascinating research problem, introduced me to the world of bird watching and deepened my appreciation of the natural world.
www.technologyreview.com/2024/12/18/1...
AI is changing how we study bird migration
After decades of frustration, machine-learning tools are unlocking a treasure trove of acoustic data for ecologists.
Reposted by Justin Salamon
jonathanleroux.bsky.social
@waspaa.com is on Bluesky!
Big changes are coming for WASPAA 2025!
waspaa.com
WASPAA is moving, for the first time in its almost 40-year history. Stay tuned for the announcement of the new venue!
justinsalamon.bsky.social
Sketch2Sound is out!

Takes a text prompt + vocal (or sonic) imitation and generates sound effects that perfectly match the energy and dynamics of your voice.

It's an extremely intuitive (and fun!) way to create SFX that are perfectly timed to your video.

Led by @hugofloresgarcia.bsky.social 👏
hugofloresgarcia.bsky.social
new paper! 🗣️Sketch2Sound💥

Sketch2Sound can create sounds from sonic imitations (i.e., a vocal imitation or a reference sound) via interpretable, time-varying control signals.

paper: arxiv.org/abs/2412.08550
web: hugofloresgarcia.art/sketch2sound
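For the "interpretable, time-varying control signals" mentioned in the quoted post, here is a hedged sketch using librosa to pull loudness, brightness, and pitch curves from a vocal imitation. These are standard features; the exact signals and hop settings used in Sketch2Sound may differ.

```python
# Sketch: extract time-varying control curves from a vocal imitation.
import librosa
import numpy as np

def control_signals(path, sr=22050, hop=512):
    y, sr = librosa.load(path, sr=sr)
    S = np.abs(librosa.stft(y, hop_length=hop))
    loudness = librosa.amplitude_to_db(S.mean(axis=0))                 # energy envelope (dB)
    brightness = librosa.feature.spectral_centroid(S=S, sr=sr,
                                                   hop_length=hop)[0]  # centroid (Hz)
    f0, _, _ = librosa.pyin(y, fmin=librosa.note_to_hz("C2"),
                            fmax=librosa.note_to_hz("C6"),
                            sr=sr, hop_length=hop)                     # pitch (Hz, NaN = unvoiced)
    return loudness, brightness, f0

# These curves would then condition a text-to-audio generator so the output
# SFX follows the energy and dynamics of the imitation.
```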
justinsalamon.bsky.social
I'm hiring a Research Engineer for my team at Adobe Research.

Full details via the link in the post below; DM me if interested.
justinsalamon.bsky.social
📢 Audio AI Job opportunity at Adobe!

The Sound Design AI Group (SODA) is looking for an exceptional research engineer to join us in building the future of AI-assisted audio and video creation.

Strong ML background, GenAI experience a plus.

Details: adobe.wd5.myworkdayjobs.com/external_exp...
justinsalamon.bsky.social
Here's another example of work from our group:

MultiFoley, a Video-to-Audio model that generates perfectly synced audio for video at 48 kHz and supports multimodal conditioning.

More on MultiFoley here: bsky.app/profile/czya...
justinsalamon.bsky.social
New model from the SODA group at Adobe and UMich!

MultiFoley generates perfectly synced audio for video at full 48 kHz and supports multimodal conditioning.

You can define the generated sound via a text prompt, an example SFX, or audio you want to extend.

Led by our intern @czyang.bsky.social 👇
czyang.bsky.social
🎥 Introducing MultiFoley, a video-aware audio generation method with multimodal controls! 🔊
We can
⌨️Make a typewriter sound like a piano 🎹
🐱Make a cat meow like a lion roars! 🦁
⏱️Perfectly time existing SFX 💥 to a video.

arXiv: arxiv.org/abs/2411.17698
website: ificl.github.io/MultiFoley/
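A sketch of the multimodal conditioning interface described above, as a plain dataclass. Field names and shapes are hypothetical, not MultiFoley's actual API; see the linked paper and site for the real implementation.

```python
# Hypothetical conditioning bundle: one generator, three ways to steer it.
from dataclasses import dataclass
from typing import Optional
import numpy as np

@dataclass
class FoleyCondition:
    video_frames: np.ndarray                      # (T, H, W, 3) frames to sync to
    text: Optional[str] = None                    # e.g. "a lion roaring"
    reference_sfx: Optional[np.ndarray] = None    # 48 kHz example sound to imitate
    audio_to_extend: Optional[np.ndarray] = None  # partial 48 kHz track to continue

# The model conditions on whichever fields are present and outputs 48 kHz
# audio aligned to the video.
```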