# two-minute-week
c
Week 2 Video of my live coding project overview, focusing on the Synthesizer currently. I'm new to audio programming, so this has been an interesting journey.
❤️ 1
👍 2
I wanted to say a couple of things in the video, but didn't remember to mention them:
• The code is generating what might well be thought of as 'SynthDefs' in SuperCollider. If you think of how you build a synthesizer in that tool, and then add the ability to generate a nice UI for controlling it, then that's the direction I'm going in initially.
• The second, more complex synth I show is modelled on the 'AudioKit Synth One' open source synthesizer. I basically read through their DSP code and figured out how to make my synthesizer have the same layout. This is a great way to troubleshoot my units and check that I'm getting good quality audio to match an existing synthesizer. I've found a lot of bugs this way.
What I've learned is that somewhere between generating sine waves, modulating them, and outputting sound, there are lots of tips and tricks to making it sound nice. I'm a GPU programmer, not an audio guy, so that's been very helpful. That said, it is sometimes interesting how GPU solutions to problems often have parallels in audio!
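To make the 'SynthDef plus generated UI' direction in the first point a little more concrete, here is a hedged sketch of what a declarative node-and-parameter description could look like in C++. Everything here (Node, Param, the graph layout) is a hypothetical illustration, not the project's actual API:

```cpp
#include <memory>
#include <string>
#include <vector>

// A parameter a generated UI could expose as a slider.
struct Param { std::string name; float value; float min; float max; };

// A graph node: named, with parameters and upstream connections.
struct Node
{
    std::string name;
    std::vector<Param> params;
    std::vector<std::shared_ptr<Node>> inputs;
};

int main()
{
    // Oscillator -> low-pass filter -> output, built as data a UI generator could walk.
    auto osc = std::make_shared<Node>(Node{ "Oscillator", { { "frequency", 440.0f, 20.0f, 20000.0f } }, {} });
    auto lpf = std::make_shared<Node>(Node{ "LowPass",    { { "cutoff",   1000.0f, 20.0f, 20000.0f } }, { osc } });
    auto out = std::make_shared<Node>(Node{ "Output",     {}, { lpf } });
    return 0;
}
```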
m
looks really nice! are you planning to add some scripting layer like lua to have a more dynamic way to define them other than c++?
once you generate the sound, what do you use to play it? supercollider? something else?
c
Hi @Mariano Guerra, I guess I didn't say it, but I already have some experimental scripting, and I'm intending to try the language Fe as a simpler solution. That's what the synth is for (it replaces SuperCollider as the audio generator). The next part is driving the synth with note events. I have 2 approaches - firstly, the built-in Orca I showed in my last video; secondly, a pattern-based language I've been experimenting with. I need to pull all these pieces together and make a unified demo - I think that will help clarify. Think 'single executable' with no dependencies like SuperCollider, etc.
👍🏽 1
i
Can we nerd-out for a sec on that FFT view? I love the per-channel idea. Was that inspired by anything in particular?
❤️ 1
(My favourite audio tool, perhaps my favourite software tool of all, is the EQ8 in Ableton Live, which is a parametric EQ with an excellent GUI and a fairly nice spectrum behind it. So it always excites me to see people doing cool things with EQ GUIs or FFT displays)
c
Because I need visualisations to pretty much understand anything hard, I needed the FFT to help me understand the synth and check for problems (it has already shown an issue with my triangle wave tables). I solved the problem of polyphony by making my data flow pins have multiple channels - so effectively every unit will generate data for all input channels. That meant that the multiple visualisations just fell out of the design (but note that the visualiser node is attached into the graph before a 'flattening' node, which combines the polyphony into a single channel before widening it back to stereo!). Hope that makes sense @Ivan Reese!
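For illustration, here is a minimal sketch of what that 'flattening' step could look like: sum every per-note channel into one mono buffer, then widen it back out to a stereo pair. The names and types are assumptions, not the project's actual code:

```cpp
#include <cstdint>
#include <vector>

using Channel = std::vector<float>;  // one block of samples for one note

// Combine all per-note channels into mono, then duplicate to stereo (left, right).
std::vector<Channel> FlattenToStereo(const std::vector<Channel>& noteChannels, uint32_t frameCount)
{
    Channel mono(frameCount, 0.0f);
    for (const auto& ch : noteChannels)
        for (uint32_t i = 0; i < frameCount; i++)
            mono[i] += ch[i];          // collapse the polyphony into a single channel

    return { mono, mono };             // widen back to stereo
}
```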
💡 2
i
That's an incredibly cool idea. If I understand it right, it's as though all your nodes are doing multiple channels in parallel? That feels a bit like SIMD for visual programming. I'm going to have to think about this a lot more. I think this has very interesting ramifications.
c
Yes, that's right. Imagine the user is playing C major; 3 notes. Those 3 notes go into the Oscillator node as control signals. The oscillator then outputs 3 arrays, one per note, each containing, say, 512 audio samples. The 'output' pin of the oscillator now has 3 internal data streams, so each unit can operate on the data as a whole, and the next graph node can understand this data and do the right thing. Imagine that you have 2 oscillators tied into the same note input. Now you have 2 units emitting the same 3 channels. If you want to combine the oscillators for each note, the mixer has no trouble distinguishing the pairs of streams to combine, because each note channel has been tagged with the Id of the note (see the mixer sketch after the filter example below).
m_spOutData->MatchChannelInput(*pFlowData);      // make the output channel layout match the input
auto& outChannels = m_spOutData->GetChannels();
auto& inChannels = pFlowData->GetChannels();

// Run the Moog ladder low-pass filter sample-by-sample on every note channel.
for (uint32_t ch = 0; ch < pFlowData->GetNumChannels(); ch++)
{
    for (uint32_t i = 0; i < maud.genFrameCount; i++)
    {
        sp_moogladder_compute(maud.pSP, m_vecMoogLadder[ch], &inChannels[ch].data[i], &outChannels[ch].data[i]);
    }
}
Above is a simple example of a low-pass filter. It first reads the input channels and, in this case, ensures that the output channels match them, before applying the effect.
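To make the note-tagging idea from the earlier message concrete, here is a minimal sketch of how a mixer could pair up and sum the streams that belong to the same note. The types and names (TaggedChannel, MixByNote) are assumptions for illustration, not the project's actual API:

```cpp
#include <cstdint>
#include <unordered_map>
#include <vector>

// One data stream on a pin, tagged with the note it belongs to.
struct TaggedChannel
{
    uint64_t noteId;
    std::vector<float> data;   // one block of samples
};

// Two oscillators feed the mixer; channels with the same note id are summed together.
std::vector<TaggedChannel> MixByNote(const std::vector<TaggedChannel>& oscA,
                                     const std::vector<TaggedChannel>& oscB)
{
    std::unordered_map<uint64_t, TaggedChannel> mixed;
    for (const auto* src : { &oscA, &oscB })
    {
        for (const auto& ch : *src)
        {
            auto& out = mixed[ch.noteId];
            out.noteId = ch.noteId;
            out.data.resize(ch.data.size(), 0.0f);
            for (size_t i = 0; i < ch.data.size(); i++)
                out.data[i] += ch.data[i];   // sum the pair of streams for this note
        }
    }

    std::vector<TaggedChannel> result;
    for (auto& kv : mixed)
        result.push_back(std::move(kv.second));
    return result;
}
```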
I doubt this is a new approach, but it seemed the most sensible to me. I thought about potentially splitting the directed graph into separate instances for each note, but that felt like it was going to get complicated really quickly.
I also considered threading issues, but this actually works really well from that point of view. In my performance analysis, I've noticed that most of the work is in the wave table sampling. Since all those units are just sitting there generating arrays of data for all the held-down notes, they can effectively be run in parallel if necessary.
Above is a profile of the graph I showed with several notes playing. The blocks on the right are the FFTs for each note, in worker threads. The blocks on the left are the oscillators. Each oscillator is generating all samples for all notes, but you can see that they run sequentially since they are on the same audio thread. I'll probably move them to workers (it isn't hard), but I don't want to optimise up front until I have a better feel for things; premature optimization is the root of all evil, as you know 😉
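As a rough illustration of that last point, here is a minimal sketch of handing each oscillator's block generation to a worker via std::async. The types and names are assumptions, not the project's scheduler, and a real implementation would likely reuse a thread pool rather than launching tasks per audio block:

```cpp
#include <future>
#include <vector>

struct OscillatorNode
{
    // Fills this node's output channels for the current audio block.
    void Generate() { /* wave table sampling for every held-down note */ }
};

// Run every oscillator's block generation on a worker, then wait for all of them
// so downstream nodes (mixer, filter, FFT) see complete data.
void GenerateAllInParallel(std::vector<OscillatorNode>& oscillators)
{
    std::vector<std::future<void>> tasks;
    tasks.reserve(oscillators.size());

    for (auto& osc : oscillators)
        tasks.push_back(std::async(std::launch::async, [&osc] { osc.Generate(); }));

    for (auto& t : tasks)
        t.wait();
}
```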