Jun-Yan Zhu
@junyanz.bsky.social
1.3K followers · 140 following · 3 posts
Assistant Professor at Carnegie Mellon University, leading the Generative Intelligence Lab. Understanding and creating pixels. All the code and models are available at http://github.com/junyanz.
Reposted by Jun-Yan Zhu
jamestompkin.bsky.social
The AI for Content Creation workshop is kicking off today at #CVPR2025 - Grand Ballroom A1 - @magrawala.bsky.social Kai Zhang (Adobe), Charles Herrmann (Google), Mark Boss (Stability AI), Yutong Bai (UC Berkeley), Cherry Zhao (Adobe), Ishan Misra (Meta) and @jonbarron.bsky.social ! See you soon!
Reposted by Jun-Yan Zhu
jamestompkin.bsky.social
AI for Content Creation workshop @ #CVPR2025 - Grand Ballroom A1 - 4pm - panel on "Open Source in AI and the Creative Industry" - with @magrawala.bsky.social (Stanford), Cherry Zhao (Adobe), Ishan Misra (Meta) and @jonbarron.bsky.social (Google) - go go!
junyanz.bsky.social
[2/2] Work led by @avalovelace.bsky.social, @kangledeng.bsky.social, Ruixuan Liu, and CMU faculty Changliu Liu and Deva Ramanan. LegoGPT is a small first step towards generative manufacturing of physical objects. The current version is limited to a 20×20×20 grid, 21 object categories, and simple brick types.
junyanz.bsky.social
[1/2] We've released the code for LegoGPT. Our autoregressive model generates physically stable and buildable designs from text prompts by integrating physics laws and assembly constraints into LLM training and inference.

Code: github.com/AvaLovelace1...
Website: avalovelace1.github.io/LegoGPT/
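To make the constrained-decoding idea above concrete, here is a minimal sketch (not the released LegoGPT code) of rejection-style autoregressive generation: a proposed brick is accepted only if it stays inside the build volume, does not collide with already placed bricks, and is supported from below. The model interface, brick format, and support check are simplifying assumptions for illustration.

```python
# Minimal sketch of constraint-checked autoregressive brick generation.
# NOT the released LegoGPT code: brick format, model API, and checks are hypothetical.

GRID = 20  # build volume assumed to be 20 x 20 x 20


def in_bounds(brick):
    x, y, z, w, d = brick  # position plus a w x d x 1 footprint (hypothetical format)
    return 0 <= x and 0 <= y and 0 <= z < GRID and x + w <= GRID and y + d <= GRID


def footprint(brick):
    x, y, z, w, d = brick
    return {(x + i, y + j, z) for i in range(w) for j in range(d)}


def collides(brick, placed):
    cells = footprint(brick)
    return any(cells & occ for _, occ in placed)


def supported(brick, placed):
    x, y, z, w, d = brick
    if z == 0:
        return True  # resting on the baseplate
    below = {(x + i, y + j, z - 1) for i in range(w) for j in range(d)}
    return any(below & occ for _, occ in placed)


def generate(model, prompt, max_bricks=50, max_tries=20):
    placed = []  # list of (brick, occupied_cells)
    for _ in range(max_bricks):
        for _ in range(max_tries):
            # Hypothetical API: the model proposes the next brick given the prompt
            # and the bricks placed so far.
            brick = model.sample_next_brick(prompt, [b for b, _ in placed])
            if in_bounds(brick) and not collides(brick, placed) and supported(brick, placed):
                placed.append((brick, footprint(brick)))
                break
        else:
            break  # no valid brick found within the budget; stop early
    return [b for b, _ in placed]
```

In the actual system, stability is verified with a physics-based analysis rather than the simple support test used here; see the released code for details.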
Reposted by Jun-Yan Zhu
reveai.bsky.social
Reve Image is our first step towards world-class image generation — and you can experience it for free today 🌜
Reposted by Jun-Yan Zhu
jamestompkin.bsky.social
The AI for Content Creation workshop at #CVPR2025 is accepting paper submissions. ai4cc.net Deadline: March 21st, 2025, midnight PST. 4-page extended abstracts, 8-page papers, and previously published work (ECCV, NeurIPS, even CVPR)! Many topics 📷📹🎬🎲✒️📃🖼️👗👔🏢 - come spend the day with us!
AI4CC 2025
ai4cc.net
Reposted by Jun-Yan Zhu
nupurkmr9.bsky.social
Can we generate a training dataset of the same object in different contexts for customization? Check out our work SynCD, which uses Objaverse assets and shared attention in text-to-image models to do exactly that.
cs.cmu.edu/~syncd-proje...
w/ Xi Yin, @junyanz.bsky.social, Ishan Misra, and Samaneh Azadi
SynCD: Generating Multi-Image Synthetic Data for Text-to-Image Customization
cs.cmu.edu
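As a rough illustration of the shared-attention idea (not the SynCD implementation), the snippet below lets each of N images of the same object attend to the keys and values of all N views, which encourages a consistent object identity across the generated set. The tensor shapes and attention interface are assumptions for the sketch.

```python
# Rough sketch of shared self-attention across N views of the same object.
# Shapes and interface are illustrative; this is not the SynCD code.
import torch
import torch.nn.functional as F


def shared_self_attention(q, k, v):
    """q, k, v: (N, heads, tokens, dim) for N images of the same object."""
    n, h, t, d = k.shape
    # Concatenate keys/values from all N images so each query attends to every view.
    k_shared = k.permute(1, 0, 2, 3).reshape(1, h, n * t, d).expand(n, -1, -1, -1)
    v_shared = v.permute(1, 0, 2, 3).reshape(1, h, n * t, d).expand(n, -1, -1, -1)
    return F.scaled_dot_product_attention(q, k_shared, v_shared)
```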
Reposted by Jun-Yan Zhu
aaronhertzmann.com
One day, walking home, I noticed a dry cleaners across the street. "Was that always there?" I thought. A little Googling revealed that it had been on my street longer than I have.
Here's a blog post on why we often miss what's right in front of us. #visionscience
aaronhertzmann.com/2024/05/09/i...
The Illusion of Awareness: Why We See Much Less Than We Think We Do
aaronhertzmann.com
Reposted by Jun-Yan Zhu
elliottwu.bsky.social
Excited to bring the 5th CV4Animals Workshop to #CVPR2025

We welcome submissions in 2 tracks:
1) unpublished work of up to 4 pages
2) papers published within the last 2 years

Submit by Mar 28 & join us with amazing speakers in Nashville:
www.cv4animals.com
🦒🪼🐬🐿️🦩🐢🦘🦜🦥🦋

@cvprconference.bsky.social
Reposted by Jun-Yan Zhu
ruihangao.bsky.social
3D content creation with touch!

We exploit tactile sensing to enhance geometric details for text- and image-to-3D generation.

Check out our #NeurIPS2024 work on Tactile DreamFusion: Exploiting Tactile Sensing for 3D Generation: ruihangao.github.io/TactileDream...
1/3
Reposted by Jun-Yan Zhu
maxwelljones14.bsky.social
I created a Hugging Face Space for my current work, PairCustomization. You can choose from a set of pretrained LoRAs trained with our method and run inference with our novel style guidance (a generic sketch follows below):
huggingface.co/spaces/pairc...

I demoed this at #SIGGRAPHASIA2024 and it went great! :)
3/3
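For readers who prefer to run things locally, here is a generic diffusers-style sketch of loading a pretrained style LoRA and sampling with it. It is not the demo's code: the base model id, LoRA path, and scale are placeholders, and the paper's style guidance is not reproduced here.

```python
# Generic sketch: load a style LoRA into an SDXL pipeline and sample with it.
# NOT the PairCustomization demo code; paths and values are placeholders.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# Hypothetical LoRA checkpoint trained with the method; swap in one from the Space.
pipe.load_lora_weights("path/to/pair_customization_lora")

image = pipe(
    "a photo of a dog in watercolor style",
    cross_attention_kwargs={"scale": 0.8},  # LoRA strength (illustrative value)
).images[0]
image.save("stylized.png")
```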
junyanz.bsky.social
Check out Maxwell et al.'s recent SIGGRAPH Asia paper on model customization with a single image pair. The code is available at github.com/PairCustomiz...
Reposted by Jun-Yan Zhu
jbhuang0604.bsky.social
Introducing Generative Omnimatte:

A method for decomposing a video into complete layers, including objects and their associated effects (e.g., shadows, reflections).

It enables a wide range of cool applications, such as video stylization, compositions, moment retiming, and object removal.
Reposted by Jun-Yan Zhu
shiryginosar.bsky.social
I am recruiting exceptional PhD students & postdocs with adventurous souls for my 💫new TTIC AI lab💫! We aim to understand intelligence, one pixel at a time, inspired by psychology, neuroscience, language, robotics, and the arts. Apply: www.ttic.edu/studentappli...

sites.google.com/ttic.edu/ope...