Kohei Asai, Wataru Nakata, Yuki Saito, Hiroshi Saruwatari
The University of Tokyo
Code: https://github.com/koacai/geneses
Paper: https://arxiv.org/abs/2601.18456
Real-world audio recordings often contain multiple speakers and various degradations, which limit both the quantity and quality of speech data available for building state-of-the-art speech processing models. Although end-to-end approaches that concatenate speech enhancement (SE) and speech separation (SS) to obtain a clean speech signal for each speaker are promising, conventional SE-SS methods struggle with complex degradations beyond additive noise. To this end, we propose Geneses, a generative framework that achieves unified, high-quality SE-SS. Geneses leverages latent flow matching to estimate each speaker's clean speech features using a multi-modal diffusion Transformer conditioned on self-supervised learning (SSL) representations of the noisy mixture. We conduct experimental evaluations using two-speaker mixtures from LibriTTS-R under two conditions: additive noise only and complex degradations. The results demonstrate that Geneses significantly outperforms a conventional mask-based SE-SS method across various objective metrics while remaining highly robust to complex degradations. Audio samples are available on this demo page.
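For readers who want the core idea in code, below is a minimal sketch of the conditional flow matching objective the abstract describes: a flow predictor is trained to regress the velocity between Gaussian noise and clean-speech VAE latents, conditioned on SSL features of the degraded mixture. All module names (flow_predictor, vae, ssl_encoder) are hypothetical placeholders, and the linear (rectified-flow-style) probability path is an assumption; see the paper for the actual training recipe.

```python
# Minimal sketch of latent conditional flow matching (assumptions: linear
# interpolation path, one target speaker per loss call; all module names
# below are hypothetical placeholders, not the authors' implementation).
import torch

def cfm_loss(flow_predictor, vae, ssl_encoder, mixture, clean_speech):
    """One training step of latent flow matching.

    mixture:      degraded multi-speaker waveform, shape (B, T)
    clean_speech: target clean waveform for one speaker, shape (B, T)
    """
    with torch.no_grad():
        x1 = vae.encode(clean_speech)        # target clean-speech latents
        cond = ssl_encoder(mixture)          # SSL features of the noisy mixture
    x0 = torch.randn_like(x1)                # sample from the Gaussian prior
    t = torch.rand(x1.shape[0], device=x1.device)
    t_ = t.view(-1, *([1] * (x1.dim() - 1))) # reshape t for broadcasting
    xt = (1 - t_) * x0 + t_ * x1             # point on the linear path
    v_target = x1 - x0                       # constant target velocity
    v_pred = flow_predictor(xt, t, cond)     # MM-DiT velocity estimate
    return torch.mean((v_pred - v_target) ** 2)
```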

Inference pipeline of Geneses. Flow matching is performed on the latent representations of a pre-trained VAE.
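As a rough illustration of this pipeline, the sketch below integrates the learned velocity field with a plain Euler solver, starting from Gaussian noise in the VAE latent space and conditioning on SSL features of the degraded mixture; decoding the final latents yields one speaker's estimated clean waveform. The solver choice, step count, and all module names (vae, ssl_encoder, flow_predictor, latent_shape) are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of the inference pipeline in the figure, assuming a simple
# Euler ODE solver; module names are hypothetical placeholders.
import torch

@torch.no_grad()
def separate_and_enhance(flow_predictor, vae, ssl_encoder, mixture, steps=32):
    """Estimate one speaker's clean latents from a degraded mixture,
    then decode them to a waveform with the pre-trained VAE."""
    cond = ssl_encoder(mixture)               # condition on the noisy mixture
    x = torch.randn(1, *vae.latent_shape)     # start from Gaussian noise
    dt = 1.0 / steps
    for i in range(steps):
        t = torch.full((x.shape[0],), i * dt, device=x.device)
        v = flow_predictor(x, t, cond)        # predicted velocity field
        x = x + dt * v                        # Euler step toward clean latents
    return vae.decode(x)                      # estimated clean waveform
```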

Architecture of the flow predictor.
Audio samples: two-speaker mixtures from LibriTTS-R under the two evaluation conditions.

Condition 1: additive noise only

Degraded Mixture Speech
Original - Speaker 1
Original - Speaker 2
Estimated - Speaker 1
Estimated - Speaker 2
Estimated - Speaker 1
Estimated - Speaker 2

Condition 2: complex degradations

Degraded Mixture Speech
Original - Speaker 1
Original - Speaker 2
Estimated - Speaker 1
Estimated - Speaker 2
Estimated - Speaker 1
Estimated - Speaker 2
