
During the first half of 2021 I worked on two separate papers about synthesizers, which were accepted to DAFx20in21 and ISMIR2021 respectively. Both deal with the “synthesizer sound matching” task: finding the synthesizer parameters that best recreate a given sound. “Synthesizer sound matching” is a term I made up, since “parameter estimation” seemed odd for sounds that weren’t made with the same synthesizer. The terminology in earlier work varies: “synthesizer parameter estimation”, “automatic synthesizer programming”, and “tone matching” (for matching static spectra).

Quality Diversity for Synthesizer Sound Matching (DAFx20in21)

Paper Website

This paper applied Quality-Diversity (QD) algorithms, a relatively new family of evolutionary computation methods, to find a diverse set of high-performing solutions to the sound matching problem. You can read more about QD algorithms here.
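
To make the idea concrete, here is a minimal sketch of MAP-Elites, the canonical QD algorithm, in a parameter-search setting like sound matching. The `fitness` and `behavior` functions are hypothetical stand-ins (e.g. similarity to the target sound, and perceptual descriptors of the rendered audio); this is not the exact algorithm from the paper.

```python
import random
import numpy as np

def map_elites(fitness, behavior, n_params, bins=10, iterations=10_000):
    """Minimal MAP-Elites sketch: keep the best solution found in each cell
    of a discretized behavior space, so the archive stays diverse AND good.
    `fitness` scores a parameter vector (higher is better); `behavior` maps
    it to descriptors in [0, 1] (e.g. brightness of the rendered sound)."""
    archive = {}  # behavior cell -> (fitness, parameter vector)
    for _ in range(iterations):
        if archive:
            # Mutate a randomly chosen elite from the archive.
            _, parent = archive[random.choice(list(archive))]
            child = np.clip(parent + np.random.normal(0, 0.1, n_params), 0, 1)
        else:
            # Bootstrap the archive with a random patch.
            child = np.random.rand(n_params)
        f = fitness(child)
        desc = np.asarray(behavior(child))
        cell = tuple(np.clip((desc * bins).astype(int), 0, bins - 1))
        # Replace the cell's elite only if the new solution is better.
        if cell not in archive or f > archive[cell][0]:
            archive[cell] = (f, child)
    return archive
```

Unlike a plain optimizer, the result is not one best patch but an archive of patches that all approximate the target while differing along the chosen behavior descriptors.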

This work used RenderMan, a fantastic library that allows for faster-than-realtime rendering of VST synthesizers, but the recently released DAWDreamer might be a better option.
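
For reference, rendering a patch offline with DAWDreamer looks roughly like this. This is a sketch based on its documented Python API; the plugin path and MIDI note are placeholders.

```python
import dawdreamer as daw

SAMPLE_RATE = 44100
engine = daw.RenderEngine(SAMPLE_RATE, 512)  # 512-sample processing block

# Placeholder path: point this at an actual VST on your machine.
synth = engine.make_plugin_processor("synth", "/path/to/Synth.vst3")
synth.add_midi_note(60, 100, 0.0, 2.0)  # note, velocity, start (s), duration (s)

engine.load_graph([(synth, [])])  # the synth takes no audio inputs
engine.render(3.0)                # renders 3 seconds, faster than realtime
audio = engine.get_audio()        # numpy array of shape (channels, samples)
```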

Synthesizer Sound Matching with Differentiable DSP (ISMIR2021)

Paper Website

This work is based on the idea that “if a synthesizer were implemented using Differentiable DSP, it would probably be easier to match sounds with it”, since the loss could be computed directly on the synthesizer output. We implemented a simple synthesizer in PyTorch, with conventional controls like filter cutoff and oscillator shape, and compared the performance of a sound matching network under multiple training strategies.
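
As a toy illustration of why differentiability helps (this is not the synthesizer from the paper): with even a single sine oscillator implemented in PyTorch, a spectral distance between the output and a target is differentiable with respect to the synth parameters, so they can be fitted by gradient descent.

```python
import math
import torch

class TinySynth(torch.nn.Module):
    """Toy differentiable synth: one sine oscillator whose frequency and
    amplitude are learnable parameters."""
    def __init__(self, sample_rate=16000):
        super().__init__()
        self.sample_rate = sample_rate
        self.freq = torch.nn.Parameter(torch.tensor(330.0))
        self.amp = torch.nn.Parameter(torch.tensor(0.5))

    def forward(self, n_samples):
        t = torch.arange(n_samples) / self.sample_rate
        return self.amp * torch.sin(2 * math.pi * self.freq * t)

def spectral_loss(pred, target, n_fft=1024):
    """L1 distance between magnitude spectrograms; gradients flow back
    through the DSP into the synth parameters."""
    win = torch.hann_window(n_fft)
    P = torch.stft(pred, n_fft, window=win, return_complex=True).abs()
    T = torch.stft(target, n_fft, window=win, return_complex=True).abs()
    return (P - T).abs().mean()

# Hypothetical target: a 440 Hz sine we try to recover by gradient descent.
sr = 16000
t = torch.arange(sr) / sr
target = 0.8 * torch.sin(2 * math.pi * 440 * t)

synth = TinySynth(sr)
opt = torch.optim.Adam(synth.parameters(), lr=3e-2)
for _ in range(1000):
    opt.zero_grad()
    loss = spectral_loss(synth(sr), target)
    loss.backward()
    opt.step()
```

In the paper we train a sound matching network to predict the parameters rather than optimizing each sound from scratch like this, but the underlying mechanism of backpropagating a loss through the synthesizer is the same.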

I plan to keep expanding on this idea, perhaps by adding modules for broader synthesis capabilities, or by finding other ways to assist the user in the sound design process.
