# two-minute-week
c
Here is my weekly update. Apart from some work on the internals to better represent time events, I spent some time writing a little visualiser to help me understand scheduling. I hope it will also end up being a nice additional tool for the end user to see what's going on. I really need some full days to work on some of the more interesting problems; an hour or so a day (my usual schedule) is not cutting it at the moment :(
👍 2
❤️ 1
😎 1
m
What is your end goal? Do you want to be able to develop VSTs with your engine that can be used in DAWs like Ableton, or something more standalone?
How do you handle the communication between the user interface and the graph engine? I assume the graph engine runs in a separate thread; do you send messages from there to the UI main thread?
Another question: how do you handle an LFO modulating a filter frequency in your graph engine? Do they have their own timers and update a "generation number" themselves?
c
To answer the first question, you should probably check out my first two-minute-week video: https://futureofcoding.slack.com/files/UUQ2EQW21/F011WFWEGMC/week2minute_1.mp4 The short story is that I've been working on a 'Live Coding' environment for some time; it is audio + visual, designed as a toolkit for performance, teaching, and research. So the audio is a smaller part of the whole.
When used in the audio code, the graph runs on the audio thread (the same one the sound card requests a new buffer on). The generated notes from Orca/Music Language, etc. do run on a separate thread, but the notes are turned into PCM audio by the graph on the sound-card thread. The UI is indeed on a separate thread. I like lock-free programming, so as much as possible the UI is detached from the audio. Some special nodes have 'real time' sections designed to run on the audio thread, and 'UI' sections for display purposes. It's up to the nodes that do this to manage shared state correctly, which localises the problem somewhat. An example of that might be the ADSR curve, which has blue dots running along it containing note events.
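As a rough illustration of that split (a minimal sketch in C++, assuming hypothetical node and member names, not the engine's actual code), a node's 'real time' section can publish a small amount of display state through atomics so the 'UI' section can read it without any locking:

```cpp
#include <algorithm>
#include <atomic>
#include <cmath>
#include <cstddef>

struct ADSRNode {
    // Written by the audio thread, read by the UI thread.
    std::atomic<float> displayLevel{0.0f};
    std::atomic<float> displayPhase{0.0f};   // 0..1 position along the curve

    // 'Real time' section: runs on the sound-card/audio thread.
    void computeAudio(float* buffer, std::size_t frames) {
        for (std::size_t i = 0; i < frames; ++i) {
            buffer[i] *= nextEnvelopeSample();
        }
        // Publish a snapshot for display; relaxed ordering is enough because
        // the UI only needs an approximately current value.
        displayLevel.store(level, std::memory_order_relaxed);
        displayPhase.store(phase, std::memory_order_relaxed);
    }

    // 'UI' section: runs on the UI thread, never touches the audio buffers.
    void drawUI() const {
        float l = displayLevel.load(std::memory_order_relaxed);
        float p = displayPhase.load(std::memory_order_relaxed);
        // ... draw the ADSR curve and a dot at position p with height l ...
        (void)l; (void)p;
    }

private:
    float level = 1.0f;
    float phase = 0.0f;
    float nextEnvelopeSample() {
        // Placeholder envelope (a simple linear decay) just to keep the sketch runnable.
        level = std::max(0.0f, level - 0.0005f);
        phase = std::fmod(phase + 0.0005f, 1.0f);
        return level;
    }
};
```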
None of this is perfect yet, it is still a work in progress, and I tend to jump around filling in gaps as I see fit!
For the question about the LFO, it is probably best to think about what a graph step is in my engine. When the sound card requests a buffer of, say, 500 stereo samples, the graph is run. Any notes that are due for scheduling are updated in the 'instrument' node, and then the output node 'pulls' 500 samples. Each node therefore processes 500 audio samples in its 'compute' step, which is an atomic operation. For example, the oscillator's compute step might build 500 sine samples into the buffer from the wavetable. Each node has to handle multiple channels, and doesn't know up front what it will receive. This is the data-flow part.

The LFO is no different. Suppose it is feeding an oscillator to modulate the frequency: the LFO's data-flow pin is connected to the oscillator's modulation data-flow pin. At run time, the LFO will generate its 500 samples when the oscillator evaluates the modulation input, then the oscillator will generate its 500 samples and combine them with the incoming data. If you look at the Frequency analyser in my last video, you will see the separate channels of audio; that node simply looks at what is connected to it and displays all the channels it finds.
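A minimal sketch of that pull model (C++, mono for brevity, with hypothetical class and pin names rather than the engine's actual API): the oscillator evaluates its modulation input first, which is when the LFO fills its block, and only then generates its own samples for the same block:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

constexpr float kTwoPi = 6.2831853f;

struct Node {
    // Fill `out` with `frames` samples for this block.
    virtual void compute(std::vector<float>& out, std::size_t frames) = 0;
    virtual ~Node() = default;
};

struct LFO : Node {
    float rateHz = 2.0f, sampleRate = 44100.0f, phase = 0.0f;
    void compute(std::vector<float>& out, std::size_t frames) override {
        out.assign(frames, 0.0f);
        for (std::size_t i = 0; i < frames; ++i) {
            out[i] = std::sin(phase * kTwoPi);
            phase = std::fmod(phase + rateHz / sampleRate, 1.0f);
        }
    }
};

struct Oscillator : Node {
    float baseHz = 440.0f, modDepthHz = 50.0f, sampleRate = 44100.0f, phase = 0.0f;
    Node* modulationInput = nullptr;   // the data-flow pin the LFO connects to

    void compute(std::vector<float>& out, std::size_t frames) override {
        // Evaluating the modulation input is what makes the LFO generate
        // its samples for this block.
        std::vector<float> mod(frames, 0.0f);
        if (modulationInput) modulationInput->compute(mod, frames);

        out.assign(frames, 0.0f);
        for (std::size_t i = 0; i < frames; ++i) {
            float hz = baseHz + modDepthHz * mod[i];
            out[i] = std::sin(phase * kTwoPi);
            phase = std::fmod(phase + hz / sampleRate, 1.0f);
        }
    }
};

int main() {
    LFO lfo;
    Oscillator osc;
    osc.modulationInput = &lfo;

    // The 'output' end of the graph pulling one block of 500 samples.
    std::vector<float> block;
    osc.compute(block, 500);
}
```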
m
I've now watched your first video and it's much clearer, awesome concept!
s
How come your Orca schedules events in the future? I thought it generally executes in the 'now' - are you running it "ahead of time" and delaying the output? Doesn't that mess with user interaction?
c
Hi @s-ol! Indeed, I am scheduling into the future, but not by much... Orca does schedule immediately, but those generated notes are effectively played after a small delay when they arrive at whatever synth you are using; you just don't notice. It should be a similar situation here; although I am testing far into the future, in reality I will probably be scheduling much closer to 'now'. I'm trying to plan for a future containing a music generation language I've been working on, alongside Orca. Although Orca is very fast, a music language may not be as fast. I know, for example, that Sonic Pi adds a delay of 250-500ms to generated notes. By scheduling a little bit into the future, there is, I hope, more time to generate and align notes correctly. My sense is that I need this flexibility, but we will see. Pattern languages also let you 'see' into the future, since you can run the pattern forward, so the concept of what is happening 'next' is something I want to at least be able to see.
👍 1
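A tiny sketch of what that lookahead could look like (C++, hypothetical names, not the actual scheduler): generated notes are stamped a fixed interval ahead of 'now', and the audio side collects whatever falls inside the current block:

```cpp
#include <chrono>
#include <queue>
#include <vector>

using Clock = std::chrono::steady_clock;

struct NoteEvent {
    Clock::time_point when;   // absolute time the note should sound
    int midiNote;
};

struct EarliestFirst {
    // Orders the priority queue so the earliest event sits on top.
    bool operator()(const NoteEvent& a, const NoteEvent& b) const {
        return a.when > b.when;
    }
};

struct Scheduler {
    // How far ahead of 'now' newly generated notes are placed,
    // e.g. a small fixed lookahead in the spirit of Sonic Pi's delay.
    std::chrono::milliseconds lookahead{100};
    std::priority_queue<NoteEvent, std::vector<NoteEvent>, EarliestFirst> pending;

    // Called by the note generator (Orca, a music language, ...).
    void scheduleNote(int midiNote) {
        pending.push({Clock::now() + lookahead, midiNote});
    }

    // Called per audio block: collect everything due before the block ends.
    std::vector<NoteEvent> dueBefore(Clock::time_point blockEnd) {
        std::vector<NoteEvent> due;
        while (!pending.empty() && pending.top().when <= blockEnd) {
            due.push_back(pending.top());
            pending.pop();
        }
        return due;
    }
};
```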
i
Yeah, that scheduling into the future seems like a necessary evil when it comes to music tools. Very interesting problem space, considering every system will have a different characteristic latency to be compensated for.