VAT-SNet: A Convolutional Music Separation Network Based on Vocal and Accompaniment Time Domain Features
Xiaoman Qiao , Min Luo , Fengjing Shao , Yi Sui , Xiaowei Yin and Rencheng Sun
This page presents the results of using our model, VAT-SNet, to separate vocals and accompaniment in music. In the table below, the first column is the input mixed music, the second column is the separated vocals, and the third column is the separated accompaniment. All of the music clips below were randomly selected from the 200 clips in MIR-1K that were not used for training.