Here is my weekly update. Apart from some work on the internals to better represent time events, I spent some time writing a little visualiser to help me understand scheduling. I hope it will also end up being a nice additional tool for the end user to see what's going on. I really need some full days to work on some of the more interesting problems; an hour or so a day (my usual schedule) is not cutting it at the moment 😦
When used in the audio code, the graph runs on the audio thread (the same one on which the sound card requests a new buffer). The notes generated from Orca, the Music Language, etc. do run on a separate thread, but the notes are turned into PCM audio by the graph on the soundcard thread.
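To sketch how note events might cross from the generator thread to the audio thread without locks, here is a minimal single-producer/single-consumer ring buffer in C++. The `NoteEvent` fields, the queue shape, and all names are my assumptions for illustration, not the engine's actual types:

```cpp
#include <array>
#include <atomic>
#include <cstddef>
#include <optional>

// Hypothetical note event; the fields are assumptions, not the engine's.
struct NoteEvent {
    int pitch;      // e.g. a MIDI note number
    float velocity; // 0..1
};

// Single-producer/single-consumer ring buffer: the language thread pushes,
// the audio thread pops, and neither side ever takes a lock.
template <std::size_t N>
class SpscQueue {
    std::array<NoteEvent, N> buf_{};
    std::atomic<std::size_t> head_{0}; // next slot to write (producer side)
    std::atomic<std::size_t> tail_{0}; // next slot to read (consumer side)

public:
    bool push(const NoteEvent& e) {
        auto head = head_.load(std::memory_order_relaxed);
        auto next = (head + 1) % N;
        if (next == tail_.load(std::memory_order_acquire))
            return false; // full: the producer drops or retries
        buf_[head] = e;
        head_.store(next, std::memory_order_release);
        return true;
    }

    std::optional<NoteEvent> pop() {
        auto tail = tail_.load(std::memory_order_relaxed);
        if (tail == head_.load(std::memory_order_acquire))
            return std::nullopt; // empty: nothing scheduled yet
        NoteEvent e = buf_[tail];
        tail_.store((tail + 1) % N, std::memory_order_release);
        return e;
    }
};
```

The audio thread would drain this queue at the start of each graph step, so note delivery never blocks buffer generation.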
The UI is indeed on a separate thread. I like lock-free programming, so the UI is detached from the audio as much as possible. Some special nodes have 'real time' sections designed to run on the audio thread, and 'UI' sections for display purposes. It's up to the nodes that do this to manage shared state correctly, which localises the problem somewhat. An example of that is the ADSR curve, which has blue dots running along it representing note events.
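A minimal sketch of that 'real time' / 'UI' split inside one node, assuming the shared state is a single value (say, the envelope level driving the blue dot) so a lone atomic is enough and no lock is needed. The class and member names are hypothetical:

```cpp
#include <atomic>

// Sketch of a node with a 'real time' section and a 'UI' section.
// Assumes the only shared state is one float; names are hypothetical.
class AdsrNode {
    // Written by the audio thread, read by the UI thread. A single word
    // of state needs no lock, just an atomic.
    std::atomic<float> display_level_{0.0f};

public:
    // 'Real time' section: called from the audio thread once per buffer.
    void compute(float level_at_end_of_buffer) {
        // ... generate the envelope samples for this buffer here ...
        display_level_.store(level_at_end_of_buffer,
                             std::memory_order_relaxed);
    }

    // 'UI' section: called from the UI thread when drawing the curve.
    float displayLevel() const {
        return display_level_.load(std::memory_order_relaxed);
    }
};
```

With richer shared state (a whole curve, say) a node would need something stronger, such as a double buffer with an atomic publish, but the principle of keeping the coordination local to the node is the same.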
None of this is perfect yet, it is still a work in progress, and I tend to jump around filling in gaps as I see fit!
For the question on the LFO, it is probably best to think about what a graph step is in my engine. When the sound card requests a buffer of, say, 500 stereo samples, the graph is run. Any notes due for scheduling are updated in the 'instrument' node, and then the output node 'pulls' 500 samples. Each node then processes 500 audio samples in its 'compute' step, which is an atomic operation. For example, the oscillator's compute step might build 500 sine samples into the buffer from the wavetable. Each node has to handle multiple channels, and doesn't know up front what it will receive. This is the data-flow part. The LFO is no different. Suppose it is feeding an oscillator to modulate the frequency: the LFO's data-flow pin is connected to the oscillator's modulation data-flow pin. At run time, the LFO generates its 500 samples when the oscillator evaluates the modulation input; the oscillator then generates its own 500 samples and combines them with the incoming data. If you look at the frequency analyser in my last video, you will see the separate channels of audio. That node simply looks at what is connected to it and displays all the channels it finds.
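The pull model described above can be sketched roughly like this, simplified to a single channel with hypothetical node names (the real engine handles multiple channels and wavetables; this just shows the LFO being pulled through the oscillator's modulation pin before the oscillator fills its own buffer):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

constexpr double kTwoPi = 6.283185307179586;

// Minimal pull-model sketch: each node fills a buffer on demand.
// Single channel only; names are assumptions, not the engine's.
struct Node {
    virtual ~Node() = default;
    virtual void compute(std::vector<float>& out) = 0; // fill out.size() samples
};

struct Lfo : Node {
    double phase = 0.0;
    double inc = kTwoPi * 2.0 / 44100.0; // 2 Hz at 44.1 kHz
    void compute(std::vector<float>& out) override {
        for (auto& s : out) {
            s = static_cast<float>(std::sin(phase));
            phase += inc;
        }
    }
};

struct Oscillator : Node {
    Node* modulation = nullptr; // data-flow pin, fed by the LFO
    double phase = 0.0, baseHz = 440.0, depthHz = 5.0, rate = 44100.0;
    void compute(std::vector<float>& out) override {
        std::vector<float> mod(out.size(), 0.0f);
        if (modulation)
            modulation->compute(mod); // pull: the LFO runs first
        for (std::size_t i = 0; i < out.size(); ++i) {
            out[i] = static_cast<float>(std::sin(phase));
            phase += kTwoPi * (baseHz + depthHz * mod[i]) / rate;
        }
    }
};
```

A graph step is then just the output node calling `compute` on its input with a 500-sample buffer, and the pull cascades back through the chain.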
05/18/2020, 6:13 AM
I've now watched your 1st video and it's much clearer, awesome concept!
05/18/2020, 7:57 AM
How come your Orca schedules events in the future? I thought it generally executes in the 'now' - are you running it "ahead of time" and delaying the output? Doesn't that mess with user interaction?
05/18/2020, 9:23 AM
Hi @s-ol, indeed, I am scheduling into the future, but not by much. Orca does schedule immediately, but those generated notes are effectively played after a small delay, when they arrive at whatever synth you are using; you just don't notice. It should be a similar situation here, and although I am testing far into the future, in reality I will probably be scheduling much closer to 'now'. I'm trying to plan for a future containing a music generation language I've been working on, alongside Orca. Although Orca is very fast, a music language may not be as fast. I know, for example, that Sonic Pi adds a delay of 250-500ms to generated notes. By scheduling a little bit into the future, there is, I hope, more time to generate and align notes correctly. My sense is that I need this flexibility, but we will see. Pattern languages also let you 'see' into the future, since you can run the pattern forward; so what is happening 'next' is something I want to at least be able to see.
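One way to realise that kind of slightly-ahead scheduling is to map each note's absolute sample time onto a (buffer index, sample offset) pair relative to the buffer currently being rendered. This is a sketch under assumed names, not the engine's actual scheduler:

```cpp
#include <cstdint>

// Where a future note lands relative to the current buffer.
// Names are hypothetical illustrations.
struct Slot {
    std::int64_t buffer_index;  // 0 = the buffer being rendered now
    std::int64_t sample_offset; // offset within that buffer
};

// now_sample:  absolute time of the first sample of the current buffer.
// note_sample: absolute sample time at which the note should sound.
Slot slotFor(std::int64_t now_sample, std::int64_t note_sample,
             std::int64_t buffer_size) {
    std::int64_t delta = note_sample - now_sample;
    if (delta < 0)
        delta = 0; // late note: play it at the start of this buffer
    return { delta / buffer_size, delta % buffer_size };
}
```

For example, a note scheduled 250 ms ahead at 44.1 kHz is about 11,025 samples away, so with 500-sample buffers it lands 22 buffers out, at a small offset within that buffer, giving a slow generator plenty of time to keep up.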
Yeah, that scheduling into the future seems like a necessary evil when it comes to music tools. Very interesting problem space, considering every system will have a different characteristic latency to be compensated for.