For the question on LFOs, it is probably best to start with what a graph step is in my engine. When the sound card requests a buffer of, say, 500 stereo samples, the graph is run. Any notes due for scheduling are updated in the 'instrument' node, and then the output node 'pulls' 500 samples. Each node therefore processes 500 audio samples in its 'compute' step, and that step is an atomic operation. For example, the oscillator's compute step might build 500 sine samples into the buffer from the wavetable. Each node has to handle multiple channels, and doesn't know up front how many it will receive. This is the data-flow part.

The LFO is no different. Suppose it is feeding an oscillator to modulate the oscillator's frequency. The LFO's data-flow pin is connected to the oscillator's modulation data-flow pin. At run time, the LFO generates its 500 samples when the oscillator evaluates its modulation input; the oscillator then generates its own 500 samples and combines them with the incoming data (there is a rough sketch of this at the end of this answer).

If you look at the Frequency analyser in my last video, you will see the separate channels of audio. That node simply looks at whatever is connected to it and displays every channel it finds.
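To make the pull step concrete, here is a minimal C++ sketch of the idea, not the engine's actual code: the names (Node, compute, the modulation pointer), the mono buffers, and the hard-coded rates are all my simplifications, and a real node would handle however many channels arrive as described above.

```cpp
// Minimal sketch of the pull model (hypothetical names, mono for clarity).
#include <cmath>
#include <cstddef>
#include <vector>

constexpr float kTwoPi = 6.2831853f;
using Buffer = std::vector<float>;  // one channel of samples

// Every node fills a buffer of the requested size in one atomic 'compute' step.
struct Node {
    virtual ~Node() = default;
    virtual void compute(Buffer& out, std::size_t frames) = 0;
};

// Low-frequency sine generator; the real engine might read a wavetable instead.
struct Lfo : Node {
    float rateHz = 5.0f, sampleRate = 48000.0f, phase = 0.0f;
    void compute(Buffer& out, std::size_t frames) override {
        out.resize(frames);
        for (std::size_t i = 0; i < frames; ++i) {
            out[i] = std::sin(phase);
            phase += kTwoPi * rateHz / sampleRate;
            if (phase >= kTwoPi) phase -= kTwoPi;  // keep phase bounded
        }
    }
};

// An oscillator with a modulation pin. When asked for a block it first pulls
// the same number of samples from whatever is connected to that pin, then
// generates its own samples and combines the two (here: frequency modulation).
struct Oscillator : Node {
    float baseHz = 440.0f, modDepthHz = 20.0f, sampleRate = 48000.0f, phase = 0.0f;
    Node* modulation = nullptr;  // the data-flow pin: another node, or nothing
    Buffer modBuf;
    void compute(Buffer& out, std::size_t frames) override {
        if (modulation) modulation->compute(modBuf, frames);  // LFO runs first
        out.resize(frames);
        for (std::size_t i = 0; i < frames; ++i) {
            float mod = modulation ? modBuf[i] : 0.0f;
            out[i] = std::sin(phase);
            phase += kTwoPi * (baseHz + modDepthHz * mod) / sampleRate;
            if (phase >= kTwoPi) phase -= kTwoPi;
        }
    }
};

int main() {
    Lfo lfo;
    Oscillator osc;
    osc.modulation = &lfo;    // connect LFO pin -> oscillator modulation pin
    Buffer block;
    osc.compute(block, 500);  // the sound card's pull of one 500-sample buffer
}
```

The point the sketch illustrates is that evaluation order falls out of the connections: the LFO never runs on its own schedule, it is computed exactly when the downstream oscillator evaluates its modulation pin, one whole buffer at a time.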