@Autodesk • Fellow @StanfordHAI • Prev: MSc CS @ETH
Lightweight, interpretable, and editable.
We then formulate scene synthesis and editing as next-token prediction. (2/8)
We train SG-LLM via SFT+GRPO, the first approach to apply preference alignment with verifiable rewards for 3D scene synthesis. (3/8)
Test-time scaling with Best-of-N shows further potential to improve preference alignment on this task! (7/8)
Work done by myself and the amazing @ir0armeni.bsky.social
Paper: arxiv.org/abs/2506.02459
Website: respace.mnbucher.com
Code: github.com/GradientSpac...
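The Best-of-N idea from (7/8) can be sketched in a few lines. This is a minimal, generic sketch: `generate` and `reward` are hypothetical stand-ins for the paper's SG-LLM sampler and its verifiable reward, not the actual implementation.

```python
import random

def best_of_n(generate, reward, n=8, seed=0):
    """Draw n candidates and keep the one with the highest reward.

    generate: callable taking an RNG and returning one candidate
              (stand-in for sampling a scene from the model)
    reward:   callable scoring a candidate (stand-in for a
              verifiable reward, e.g. a constraint checker)
    """
    rng = random.Random(seed)
    candidates = [generate(rng) for _ in range(n)]
    return max(candidates, key=reward)

# Toy usage: "scenes" are random floats, reward prefers larger values.
best = best_of_n(lambda rng: rng.random(), reward=lambda s: s, n=8)
```

At test time this trades extra compute (N forward passes) for better alignment, with no further training required.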