Work done by Jongmin Jung, DongMin Kim, Sihun Lee, Seola Cho, and Dasaem Jeong @ MALer Lab, Sogang Univ, Hyungjoon Soh, and Irmak Bukey and Chris Donahue (@chrisdonahue.com) at CMU🥳
By training a model to generate audio tokens from a given score image, the model learns how to read notes from the score image. This led our model to break the SOTA for OMR! The reverse direction also works for AMT, though the gain was not as significant as for OMR.
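To make the idea concrete, here is a minimal sketch (not our actual code) of what "generate audio tokens from a score image" looks like as a seq2seq objective: a small CNN encodes the score image into a feature sequence, and a Transformer decoder autoregressively predicts discrete audio codec tokens. All module names, sizes, and the token vocabulary below are assumptions for illustration only.

```python
# Hypothetical sketch of a score-image -> audio-token seq2seq model.
# Architecture details, sizes, and the audio token vocabulary are assumptions.
import torch
import torch.nn as nn

class ScoreToAudioTokens(nn.Module):
    def __init__(self, vocab_size=1024, d_model=256, nhead=4, num_layers=3):
        super().__init__()
        # CNN encoder: turns the score image into a sequence of patch features.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, d_model, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.token_emb = nn.Embedding(vocab_size, d_model)
        # Transformer decoder: predicts the next audio token given the image features.
        self.decoder = nn.TransformerDecoder(
            nn.TransformerDecoderLayer(d_model, nhead, batch_first=True),
            num_layers,
        )
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, score_image, audio_tokens):
        # score_image: (B, 1, H, W); audio_tokens: (B, T) discrete codec ids.
        feats = self.cnn(score_image)                  # (B, d, H', W')
        memory = feats.flatten(2).transpose(1, 2)      # (B, H'*W', d)
        tgt = self.token_emb(audio_tokens)             # (B, T, d)
        T = tgt.size(1)
        causal = torch.triu(torch.full((T, T), float("-inf")), diagonal=1)
        out = self.decoder(tgt, memory, tgt_mask=causal)
        return self.head(out)                          # (B, T, vocab_size)

model = ScoreToAudioTokens()
logits = model(torch.randn(2, 1, 128, 512), torch.randint(0, 1024, (2, 100)))
# Training would minimize next-token cross-entropy over the audio codec ids.
```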
Score videos are slideshows of audio-aligned score images. Although they don't include any machine-readable symbolic data, we thought these score image-audio pairs could be used to learn each modality, because they share the same semantics in the (hidden) symbolic music domain.
Music exists in various modalities, and translation between them covers important MIR tasks:
Score Image → Symbolic Music: OMR
Audio → MIDI: AMT
MIDI → Audio: Synthesis
Score → Performance MIDI: Performance Rendering
Audio → Music Notation: Complete AMT
🎶Now a neural network can read a scanned score image and generate performance audio end-to-end😎 I'm super excited to introduce our work on Unified Cross-modal Translation between Score Image, Symbolic Music, and Audio. Why does it matter, and how did we make it? Check the thread🧵