Tensynth: AI Software Synthesizer

Tensynth (Tensor Synth) is a differentiable subtractive synthesizer that uses neural networks to map audio or text to synthesis parameters. The synthesis engine is implemented as differentiable DSP (DDSP), so gradients flow from the output audio back through oscillators and filters, enabling end-to-end training.

In plain terms: Imagine a synthesizer as a box full of knobs and switches that shape the sound. Usually, you have to turn those knobs yourself. Tensynth uses AI so you don’t have to: play a sound (or describe it in words, like “warm bass” or “bright lead”), and the system figures out which knob settings would produce something like that. So you can “play” the synth by playing another sound into it, or by typing what you want—useful for musicians, sound designers, or anyone who wants to explore sounds without learning all the technical controls.
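The "gradients flow from the output audio back through the oscillator" idea can be illustrated with a toy sketch. This is not Tensynth's actual engine or API, just a minimal, hypothetical example of the DDSP principle: because the oscillator is built from PyTorch ops, a loss computed on the rendered audio produces gradients with respect to the synthesis parameters (the "knobs"), which is what makes end-to-end training possible.

```python
import torch

# Illustrative DDSP sketch (not Tensynth's real code): a sine oscillator
# whose parameters are learnable tensors, so gradients from an
# audio-domain loss reach the "knobs" directly.

sample_rate = 16_000
t = torch.arange(1600, dtype=torch.float32) / sample_rate  # 100 ms of time

# Learnable synthesis parameters -- the knobs a user would normally turn.
freq = torch.tensor(220.0, requires_grad=True)  # oscillator pitch (Hz)
amp = torch.tensor(0.5, requires_grad=True)     # output gain

# Differentiable oscillator: plain tensor math, so autograd can trace it.
audio = amp * torch.sin(2 * torch.pi * freq * t)

# A target sound we want to imitate ("play a sound into the synth").
target = 0.8 * torch.sin(2 * torch.pi * 220.0 * t)

# Loss is computed on the output audio, not on the parameters...
loss = torch.mean((audio - target) ** 2)
loss.backward()  # ...yet gradients flow back through the oscillator.

print(amp.grad is not None, freq.grad is not None)  # True True
```

In a real system the parameters would come from a neural network conditioned on audio or text, and the loss would typically be a perceptual one (e.g. a multi-resolution spectral loss) rather than raw waveform MSE, but the gradient path is the same.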

To reduce the learning curve and make synthesis more transparent, Tensynth pairs sound with real-time 3D visuals that illustrate key concepts—oscillators, filtering, modulation, and dynamics—so beginners can understand what’s happening while experienced producers can move from idea to usable sound faster. The project is being developed using PyTorch for machine learning, alongside JUCE and Max for Live for prototyping and integration.

***

Jakob Visic is a Toronto-based multidisciplinary designer, developer, and music producer who builds tools and experiences where sound, interaction, and visuals meet. With Croatian roots and a background spanning creative coding, UX, and electronic music production, he’s drawn to projects that make complex creative workflows feel playful, fast, and intuitive. Away from the studio and screen time, you’ll find him DJing, prototyping games and audiovisual pieces, catching live shows, lifting weights, and avoiding new interests.