Daichi Laboratory's Synth1 is a virtual synthesizer based on the Nord Lead 2 and is the most downloaded synth plug-in of all time.
Click the floating synth above (or here), and an AI will generate a zipped bank of 128 presets for Synth1.
Here's an example of what the output can sound like. All the synth sounds (excluding percussion) in this track were presets generated by the AI.
Synth1 was one of the first plugins I downloaded when starting to make music. Its versatility lies in its simplicity, and in a massive collection of user-generated presets. As a CS person interested in the fusion of machine learning and music, who could ask for a better dataset?
This project was heavily inspired by Nintorac's This DX7 Cartridge Does Not Exist. Make sure to check that out also!
First, download Synth1 and load it into your Digital Audio Workstation (DAW). There are tons of tutorials online on how to load presets into Synth1; here's one that does a good job of showing the process.
The model is still being worked on so there's a chance that the outputs might be strange.
If a preset produces no sound, I've found that the filter frequency is sometimes set so low that nothing gets through. Turning that knob up usually brings the sound back.
As for the weird noises, some of the presets it trained on were FX patches, which aren't very melodic in nature.
Right now, the generated presets don't make full use of all the features in Synth1 (such as FM, effects, and the arpeggiator). I limited the scope to keep the project feasible, so now that this release is out, the next step is to use the full spectrum of Synth1 parameters.
I would also like to add the ability to generate certain types of sounds on command, such as bells or basses.
I used a Generative Adversarial Network (GAN), which essentially pits two neural networks against each other for our own benefit.
The AI is made up of two parts: a generator and a discriminator. The discriminator learns to detect whether a given preset is real or fake, while the generator learns to fool the discriminator. Over time, the discriminator gets better at spotting fakes, and the generator gets better at producing them. By the end, the generator produces pretty convincing fakes, which are then sent to you!
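To make the adversarial setup concrete, here's a minimal sketch of a GAN training loop in plain NumPy. Everything here is illustrative rather than the actual model: a real Synth1 preset has far more than eight parameters, the networks in the project are much larger, and the "real" data below is just a stand-in distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

N_PARAMS = 8    # hypothetical size; a real Synth1 preset has far more knobs
NOISE_DIM = 4
LR = 0.1

# Stand-in "real" presets: knob values clustered around a typical position.
def sample_real(n):
    return np.clip(rng.normal(0.7, 0.05, (n, N_PARAMS)), 0, 1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator: noise vector -> preset parameters in [0, 1] (one linear layer).
Gw = rng.normal(0, 0.5, (NOISE_DIM, N_PARAMS))
Gb = np.zeros(N_PARAMS)

# Discriminator: preset -> probability it is real (logistic regression).
Dw = rng.normal(0, 0.5, (N_PARAMS, 1))
Db = np.zeros(1)

def generate(z):
    return sigmoid(z @ Gw + Gb)

def discriminate(x):
    return sigmoid(x @ Dw + Db)

batch = 32
for step in range(2000):
    # --- Discriminator step: push D(real) -> 1 and D(fake) -> 0 ---
    z = rng.normal(size=(batch, NOISE_DIM))
    fake = generate(z)
    real = sample_real(batch)
    d_real, d_fake = discriminate(real), discriminate(fake)
    grad_logit = np.concatenate([d_real - 1, d_fake])   # dLoss/dlogit
    x = np.concatenate([real, fake])
    Dw -= LR * x.T @ grad_logit / (2 * batch)
    Db -= LR * grad_logit.mean()

    # --- Generator step: push D(fake) -> 1 (i.e. fool the discriminator) ---
    z = rng.normal(size=(batch, NOISE_DIM))
    fake = generate(z)
    g_logit = discriminate(fake) - 1                    # dLoss/dlogit
    g_x = g_logit * Dw.T * fake * (1 - fake)            # chain rule through G
    Gw -= LR * z.T @ g_x / batch
    Gb -= LR * g_x.mean(axis=0)

# Sample one "fake" preset from the trained generator.
preset = generate(rng.normal(size=(1, NOISE_DIM)))[0]
```

The two update steps alternate on every iteration; that tug-of-war is the whole trick.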
The names are generated by a Recurrent Neural Network (RNN) conditioned on the synth parameters. There are a lot of duplicates, however; further training and model finagling are needed.
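Here's a rough sketch of how parameter-conditioned name sampling works: the synth parameters seed the RNN's hidden state, and the network then emits one character at a time until it picks an end token. The alphabet, sizes, and weights below are all hypothetical (the weights are untrained, so the output is gibberish), and this is not the project's actual architecture, just the mechanism.

```python
import numpy as np

rng = np.random.default_rng(1)

CHARS = "abcdefghijklmnopqrstuvwxyz "   # hypothetical tiny alphabet
VOCAB = len(CHARS) + 1                  # +1 for an end-of-name token
END = VOCAB - 1
N_PARAMS = 8                            # illustrative, not the real count
HIDDEN = 32

# Untrained weights; in the real project these come from training.
Wp = rng.normal(0, 0.3, (N_PARAMS, HIDDEN))   # params -> initial hidden state
Wx = rng.normal(0, 0.3, (VOCAB, HIDDEN))      # previous char -> hidden
Wh = rng.normal(0, 0.3, (HIDDEN, HIDDEN))     # recurrence
Wo = rng.normal(0, 0.3, (HIDDEN, VOCAB))      # hidden -> next-char logits

def sample_name(params, max_len=12):
    """Sample a preset name one character at a time, conditioned on params."""
    h = np.tanh(params @ Wp)            # the synth parameters seed the RNN
    x = np.zeros(VOCAB)                 # start token: all zeros
    out = []
    for _ in range(max_len):
        h = np.tanh(x @ Wx + h @ Wh)
        logits = h @ Wo
        p = np.exp(logits - logits.max())   # softmax over next characters
        p /= p.sum()
        idx = rng.choice(VOCAB, p=p)
        if idx == END:
            break
        out.append(CHARS[idx])
        x = np.zeros(VOCAB)                 # one-hot encode the chosen char
        x[idx] = 1.0
    return "".join(out)

name = sample_name(rng.uniform(0, 1, N_PARAMS))
```

Because the parameters determine the initial hidden state, similar-sounding presets tend to get similar names once the model is trained.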
If you want a more technical explanation, check out this additional blog post I wrote.
There's a collection of 25,000 presets that floats around the internet; who originally compiled it, I don't know. This is the data I used for training.
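Turning that collection into training data mostly means reading each preset file into a fixed-length vector of parameter values. I haven't verified the exact .sy1 layout here, so the loader below assumes a simple plain-text body of `index,value` lines and skips anything else (such as header lines); treat it as a sketch, not a spec.

```python
import io
import numpy as np

def load_preset(text, n_params=3):
    """Parse one preset's text into a parameter vector.

    Assumption (unverified): the file body consists of `index,value`
    lines; any line that doesn't match that shape is skipped.
    """
    vec = np.zeros(n_params)
    for line in io.StringIO(text):
        parts = line.strip().split(",")
        if (len(parts) == 2 and parts[0].isdigit()
                and parts[1].lstrip("-").isdigit()
                and int(parts[0]) < n_params):
            vec[int(parts[0])] = int(parts[1])
    return vec

# Hypothetical preset text: a name-style header line plus three parameters.
example = "name=Example Lead\n0,0\n1,64\n2,127\n"
vec = load_preset(example)
```

Stacking one such vector per file gives the matrix the GAN trains on (after normalizing each parameter to a common range).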
The code for this project is on GitHub.
I also have a Ko-fi if you'd like to throw me a couple of dollars; it would be greatly appreciated and lets me keep working on cool things like this!
If you have any more questions, feel free to shoot me an email at info@thispatchdoesnotexist.com