# two-minute-week
c
A quick demo of all the pieces of my live coding project integrated into the single, final application. It's nice to have everything working together for the first time. The only thing I haven't hooked up is the communication between the audio and the video parts so the sound can affect the visuals; but I'll probably get that working next week at some point.

https://youtu.be/pD0hibX5SlE

😎 5
🍰 1
πŸŽ‰ 6
Looks like the HD version is still processing, so it might be worth waiting for that...
m
WOW! This is really inspiring! Will an alpha/beta version be available around Christmas? I am looking forward to it 😎😀
c
Thanks 🙂 I have a loose goal of shipping something by Christmas. It will all be open source/free. My focus is cleaning things up and making it easy enough for others to use. I'm done adding major features now; for the foreseeable future it's really about making it user-friendly. Then I'll ship it.
m
I am looking forward to it! And I don't mind if it's rough around the edges; you've done a great job! We all know that polishing is the 20% that takes the real 80% of the time 😅 (thinking about it: a nice future of coding would be if it were the other way around)
c
looks great! how are you planning on exposing the audio to the graphics? fft analysis -> uniforms?
c
I already have code that converts the FFT and audio data into a 4-row texture for the pixel shader; I just need to hook it up again. Additionally, I bin the frequencies into a vec4 for vertex shader access. The nice part is that I have perfect audio information - technically I don't even need an FFT - since the synthesizer is integrated and I know all the notes I generate. So there are lots of possibilities here. I can also delay the audio to sync up with the video, or vice versa - something that is harder when sampling audio input for the FFT. I have an example of that, let me find it...

https://youtu.be/vyZky_tRq7k
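The frequency-binning idea above (FFT magnitudes collapsed into a vec4 the vertex shader can read) could be sketched roughly like this in numpy; the band edges are made-up values for illustration, not the project's actual split:

```python
import numpy as np

def fft_to_vec4(samples, rate=44100, edges=(200.0, 800.0, 3200.0)):
    """Bin FFT magnitudes into 4 frequency bands for a vec4 uniform.

    `edges` are hypothetical band boundaries in Hz; the real tool may
    choose them differently.  Returns a float32 array of 4 values.
    """
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate)
    bands, lo = [], 0.0
    for hi in list(edges) + [rate / 2.0]:
        mask = (freqs >= lo) & (freqs < hi)
        # average magnitude inside the band (0 if the band is empty)
        bands.append(spectrum[mask].mean() if mask.any() else 0.0)
        lo = hi
    return np.array(bands, dtype=np.float32)  # upload as a vec4 uniform
```

Feeding it a 440 Hz sine, for example, should make the second band (200-800 Hz) dominate.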

@Charlie Roberts This is an old version showing audio input->FFT->texture. As you can see, I managed to get them pretty well synced together, but it isn't easy; you need the audio frames to be small and the FPS of the graphics needs to be high.
I actually explain/show the FFT and Uniform stuff here:

https://youtu.be/mLplFS5WsLg?t=74
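The "small audio frames, high FPS" constraint can be made concrete with a back-of-envelope calculation (the numbers below are illustrative, not measurements from the project):

```python
def worst_case_lag_ms(block_size, sample_rate, fps):
    """Rough worst-case lag between a sound event and the frame that
    shows it: one full audio block (before analysis sees the event)
    plus one video frame (before the next draw).  Illustrative only."""
    audio_ms = block_size / sample_rate * 1000.0
    video_ms = 1000.0 / fps
    return audio_ms + video_ms

# e.g. 1024-sample blocks at 44.1 kHz, rendering at 60 fps:
# ~23.2 ms of audio + ~16.7 ms of video, so roughly 40 ms worst case.
# Halving the block size and doubling the FPS roughly halves the lag,
# which is why both knobs matter for tight A/V sync.
```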

o
Impressive work, bravo! πŸ‘ You end up with a pretty complete tool: text code, custom views/controls, debug, performance monitoring, live coding, etc. Great!
c
Turned out the FFT -> Texture code only needed a small tweak. The FFT under the sphere is being generated in a single pass by the pixel shader which is reading the uploaded FFT data in real time.
Thanks @ogadaki - this is the plan, of course: a single integrated tool. I will probably eventually support MIDI devices and external synthesizers such as SuperCollider, but I wanted something that just works out of the box and can be easily hacked/tweaked. One of my goals is that a developer can take this tool and do any kind of A/V research they want. Perhaps they want to try a new sound effect, a new way to visualize sound, or a new music language. This should give them a great starting point.
πŸ‘ 1