Stalking the author of <https://futureofcoding.sla...
# linking-together
k
The graphics seem to be used in the author's band's albums: https://sixm.bandcamp.com. Very nice integration of all aspects of life 🙂
d
Shadertoy is amazing. It took me months to figure out how the code works. The shaders are written in a low level language, without any decent abstraction mechanisms or support for libraries (other than: copy and paste), and the code is usually cryptic. My language, Curv, can be seen as a high level DSL for creating the same kind of art, with the same underlying GPU mechanics. @Sébastien is working on a visual programming language and IDE that sits on top of my language / compiler / GPU based virtual machine.
💡 2
g
Shadertoy is amazing, but just in case, be aware the techniques used there are just what it says: shader "TOY". They are not the techniques used to ship performant graphics. Rather, they are a fun puzzle of the form: "if the only input I had was the coordinate of the pixel being drawn, could I write a function to draw something interesting?" So of course when I see someone make a whole city or a whole forest in a single function, it's amazing. But it also runs at 1 frame a second, vs GTA5/Spiderman/Red Dead Redemption, which use traditional, performant techniques and run at 30-60fps. I only bring this up because, if you're new to graphics programming, Shadertoy has led lots of new devs astray.
👍 1
Also related, but not as popular: puzzles of the form "If you only had a vertex id, could you write a function to draw something interesting?" https://vertexshaderart.com
And: "If you only had time as input, could you write a function to generate music?" https://games.greggman.com/game/html5-bytebeat/
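To make the bytebeat constraint concrete, here's a minimal sketch in Python (the formula is an illustrative classic-style expression I made up, not one taken from the linked site): each 8-bit audio sample is a pure function of the integer time step `t`.

```python
# Bytebeat sketch: one 8-bit sample per time step, computed purely from t.
# The formula itself is a hypothetical example of the genre.

def bytebeat(t: int) -> int:
    """Return one unsigned 8-bit sample as a pure function of time."""
    return t * (t >> 10 | t >> 8) & 255

# Render one second of audio at an 8 kHz sample rate as raw bytes.
samples = bytes(bytebeat(t) for t in range(8000))
print(len(samples))  # 8000
```

Piping bytes like these to an audio device (or a WAV writer) is all the "synthesizer" these puzzles allow.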
Also loosely related is the demoscene, which doesn't have an official site, but this one will do: http://pouet.net. They often have competitions for demos that take 4096 bytes or less (4k). You're generally expected to download and run the code locally, and Windows is the most popular platform, but there are YouTube channels that have recorded the programs running, especially since many of them need a high-end machine: https://www.youtube.com/channel/UC96JVq-z0-0iHAkIkKp1_6w
d
@gman Thanks for the links. There's some interesting stuff there I haven't seen.
Signed distance fields (and sphere tracing) are a really powerful technique for making procedural art using mathematics. The mandelbulb fractal that Kartik linked to is not something you can reasonably achieve using triangle meshes.
I haven't tried this yet, but I've discovered that you can go beyond standard shadertoy programming and employ all sorts of optimization techniques to make signed distance fields performant. For example, check out the game "Dreams" by Media Molecule, and this talk: https://www.mediamolecule.com/blog/article/siggraph_2015
👍 1
One thing that makes SDFs fun is the mainstream attitude that they aren't practical. Which means there is more of an opportunity to push the envelope and discover new techniques, just because fewer people are working on this stuff. Curv isn't intended for things like GTA5, but is intended to provide a powerful set of primitives for creating 3D printable models and abstract, math-based art. The art you create won't look like the output of mesh-based modelling tools, which is actually a good thing. There are already lots of mesh based modelling tools, this will be different.
So @gman, I just looked at VertexShaderArt, but I haven't spent enough time to "crack the code". At first glance, all the demos look like things that could just as easily be done using shadertoy techniques. So what's the practical difference? What's easy with vertex shaders that is hard with fragment shaders?
g
No idea. Neither site is about being practical. Both sites are about puzzles. SDFs are not the right technique for perf. I get they make some pretty pictures but if you actually wanted to render movie quality images performantly you'd use vastly different techniques and tons of data. Tying your hands with a single function is a fun puzzle, that's it.
There was a question on S.O. where someone asked how to get their 1000 circles using SDFs to render faster. The SDF approach is basically:

```
for each circle
   is this ray in the circle
```
That's O(N) per pixel. It gets a little better with vertex shaders, since you just draw the circles, so O(1) per pixel, but the more complex the circle positions, the more you end up wasting GPU time re-computing on every vertex (or on every pixel, in fragment shaders) something you really should have computed only once and passed in. Basically these shader puzzles are almost always a big perf loss because of that. They are fun though.
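The O(N)-per-pixel cost can be sketched in a few lines of Python (CPU code standing in for a shader; the names are illustrative): a naive SDF union evaluates every child shape's distance at every sample point.

```python
import math

def circle_sdf(cx, cy, r):
    """Signed distance to a circle: negative inside, positive outside."""
    return lambda x, y: math.hypot(x - cx, y - cy) - r

def union_sdf(sdfs):
    """Naive union: min over all child distances -- O(N) per sample."""
    return lambda x, y: min(f(x, y) for f in sdfs)

# 1000 circles: every pixel sample pays for all 1000 distance evaluations.
circles = [circle_sdf(i * 3.0, 0.0, 1.0) for i in range(1000)]
scene = union_sdf(circles)
print(scene(0.0, 0.0))  # -1.0: one unit inside the first circle
```

A mesh renderer would instead touch only the pixels each circle actually covers, which is where the perf gap comes from.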
Dreams is amazing but it's NOT just generating a single shader with SDF functions for everything the user creates. In other words it's not making shadertoy shaders. Users create data, that data might be in the form of SDFs but those SDFs are used to generate more data that is passed to the shaders in an efficient format. The SDFs are not directly evaluated by the shaders like shadertoy SDFs.
d
A Signed Distance Field is a mathematical abstraction for representing geometric shapes. You represent a shape as an implicit equation, a function that maps an arbitrary point in space (x,y,z) onto the signed distance from that point to the shape's boundary (positive if (x,y,z) is outside the shape, 0 if on the boundary, negative if inside). The alternative is a boundary representation, like a triangle mesh or the bezier splines used by CAD programs.
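As a concrete instance of that definition (Python on the CPU, standing in for shader code), here is the implicit equation for a sphere centered at the origin:

```python
import math

def sphere_sdf(x, y, z, radius=1.0):
    """Signed distance from (x, y, z) to a sphere centered at the origin:
    positive outside, zero on the boundary, negative inside."""
    return math.sqrt(x*x + y*y + z*z) - radius

print(sphere_sdf(2.0, 0.0, 0.0))  # 1.0: one unit outside
print(sphere_sdf(1.0, 0.0, 0.0))  # 0.0: on the boundary
print(sphere_sdf(0.0, 0.0, 0.0))  # -1.0: at the center, one unit inside
```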
SDFs provide an exact mathematical representation for a larger set of shapes than what can be represented using boundary rep. You can represent infinite shapes, you can represent 3D fractals with infinite detail, and you can do deep zooms into fractals without generating and then storing quintillions of triangles in memory. SDFs support a rich set of operations, such as non-affine transformations (e.g., bend and twist), blending, and morphing. CSG operations like union and intersection are fairly simple for the SDF representation, but are quite expensive, and even computationally intractable, for the triangle mesh representation.
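Those CSG operations are tiny when written against SDF values. A Python sketch; the smooth-union blend is the common polynomial smooth-min trick (as popularized in Inigo Quilez's shader articles), not anything specific to Curv:

```python
# CSG on SDF distance values: union is min, intersection is max,
# difference is max(a, -b).

def sdf_union(a, b):     return min(a, b)
def sdf_intersect(a, b): return max(a, b)
def sdf_subtract(a, b):  return max(a, -b)

def sdf_smooth_union(a, b, k=0.5):
    """Polynomial smooth-min: blends the two distances over width k,
    producing a rounded seam instead of a hard crease."""
    h = max(k - abs(a - b), 0.0) / k
    return min(a, b) - h * h * k * 0.25

print(sdf_union(-0.2, 0.3))      # -0.2: inside the union
print(sdf_intersect(-0.2, 0.3))  # 0.3: outside the intersection
print(sdf_subtract(0.3, -0.2))   # 0.3: outside the difference
```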
Curv is a high level 2D and 3D modelling program that uses the SDF representation. The puzzle for me is, what's the best way of rendering a shape described by a Curv program? Since Curv is a custom DSL, I can write a custom optimizing compiler that transforms Curv programs into whatever representation is needed for optimal rendering.
So how do you union a thousand shapes using the union operator? One answer is: don't do that, use SDF repetition (or space folding) operators instead. A smart compiler could maybe do this transformation automatically, at least in some cases, but I haven't figured that out. Other answers involve using data structures. Accumulate the results of the union into a 3D texture, and then render the texture. Or, put all of the shapes into a bounding volume hierarchy (similar to a ray-tracing acceleration structure) and traverse that in the GPU. Or, convert the shape to a triangle mesh and render that. I didn't know about vertex shader hacking (thanks for the reference), so I need to think about how that could be used.
👍 1
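The repetition (space folding) idea can be sketched in Python (illustrative names; a shader would apply `mod` to the coordinate in exactly the same way): fold all of space into one cell, then evaluate a single shape, yielding infinitely many copies at O(1) cost per sample.

```python
import math

def circle_sdf(x, y, r=0.4):
    """Signed distance to a circle of radius r at the origin."""
    return math.hypot(x, y) - r

def repeated_circle_sdf(x, y, spacing=2.0):
    """Domain repetition: map the point into a single cell centered
    on the circle, so one evaluation covers an infinite grid."""
    rx = (x % spacing) - spacing * 0.5
    ry = (y % spacing) - spacing * 0.5
    return circle_sdf(rx, ry)

# The same circle appears at every cell center: (1,1), (3,1), (101,57), ...
print(repeated_circle_sdf(1.0, 1.0))     # -0.4: at a cell center
print(repeated_circle_sdf(101.0, 57.0))  # -0.4: same, 50 cells away
```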
Curv is used for parametric design of procedurally generated shapes. The benefit of regenerating the shape in every frame is that you can hook up your numeric parameters to sliders and vary them in real time. This is very useful for interactively exploring a parameter space. If you have to build a data structure before you can display the shape, and if the data structure can't be built quickly, then you lose this capability.
s
@Doug Moen This sounds a lot like composition over a monoidal structure is what you’re looking for. Like when you’re adding two integers you get another integer, which could be just calculated directly, or be stored lazily as a composition of the two original integers wrapped in a structure of the same type (this can be done as a function/closure or type or class/object, whatever your preferred implementation flavor is).

Of course, integers are a simplistic example, but many more complex DSLs evaluate to a structure like this. I’ve been recently looking at how SwiftUI works, and it does this by composing transformation operations on views, which are views themselves. I assume React probably works similarly. In some object graph libraries for 3D (and 2D as well, for that matter) this is used to apply transformations: for instance, a combination of translation, rotation, and skewing operations on an object can either be preserved as separate operations (e.g. if you need to visualize each operation in the UI) or be merged into a single matrix transformation (e.g. if you want faster performance when rendering the object graph). I believe many functional reactive programming libraries use that pattern to transform and combine streams. Parser combinators are another example, though a little more complicated, because they usually use monadic structures to wrap additional state or error handling into the same type.

I’m not sure if I’m using the correct terminology, and maybe this is something that’s well known under a different name. It’s just a pattern I see pop up everywhere, especially recently, and which I find extremely interesting, because it allows you to design a system with very few essential building blocks and very high composability: a recipe for building ultra-complex structures from just a few simple components that are easy to learn.
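That pattern can be made concrete with 2D transforms (a hypothetical Python toy, not SwiftUI or any real scene-graph API): each operation is a value of the same type as its composition, so a list of them can either be kept as-is for inspection or fused into one matrix for speed.

```python
# Hypothetical sketch: 2D affine transforms as 3x3 matrices. A list of
# ops is the "lazy" form (each step still visible); fuse() collapses it
# into the "fast" form -- a single matrix. Both forms are transforms.

def translate(tx, ty): return [[1, 0, tx], [0, 1, ty], [0, 0, 1]]
def scale(sx, sy):     return [[sx, 0, 0], [0, sy, 0], [0, 0, 1]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def fuse(ops):
    """Merge a list of transforms into one matrix, applied left-to-right."""
    m = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]  # identity
    for op in ops:
        m = matmul(op, m)
    return m

def apply(m, x, y):
    return (m[0][0]*x + m[0][1]*y + m[0][2],
            m[1][0]*x + m[1][1]*y + m[1][2])

ops = [scale(2, 2), translate(1, 0)]   # scale first, then translate
print(apply(fuse(ops), 3.0, 4.0))      # (7.0, 8.0)
```

The monoid structure is exactly what makes the fusion valid: matrix multiplication is associative with the identity matrix as its unit, so collapsing the list cannot change the result.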
I love the community around shader programming and the demoscene, which pioneered many of the concepts for high-performance graphics that are part of architectures and libraries today. What I love is that they are driven by constraints. In the past it was the limited capabilities of the devices; today it’s the limitations of the massively-parallel programming paradigm of GPU programming (or, alternatively, arbitrarily set limits on binary size or memory usage, etc.) that require you to think differently and find novel solutions.

Part of that is that the solutions are often not the most readable, and there is a drive towards overly complicated and impressive tricks that show off the programmer’s competence and cleverness by making something work that was probably deemed impossible. It’s about pushing the boundaries and going off the beaten path to find new ways of doing things. Occasionally that leads to breakthroughs that are useful in other domains.

In a way, I think GPU programming gets easily overlooked when talking about the future of programming, but it’s going to be a very important part of it. Now that we discover more and more use cases for GPUs as they become more and more general computing devices, and even the smallest devices we carry around with us have powerful GPUs in them, the future will also be a lot more about distributing computation across multiple different computation devices: a few CPU cores, many GPU cores, cores optimized for machine learning, and a few highly-specialized algorithms implemented directly in hardware, e.g. for image manipulation, (de)compression, and cryptography. I worked with many game developers for whom this is already a (sometimes painful) reality. It’s no longer that simple to say there’s graphics acceleration happening on the GPU and everything else on the CPU.
As GPUs are a lot more programmable now, they can take over a lot more work, and the challenge becomes balancing out work done on CPU and GPU to deliver the best experience by utilizing and optimizing for the resources a specific architecture provides.
👍 1
d
The Curv language tries to make the difference between the CPU and the GPU invisible. When you run a Curv program, some of the code will run on the CPU, some will run on the GPU, and maybe some code will run in both contexts. It's up to the compiler and runtime to decide the execution context. There is no explicit "GPU API" analogous to OpenGL.
@Stefan said "This sounds a lot like composition over a monoidal structure is what you’re looking for." Yes. And @Sébastien recently reminded me of his need for a scene graph, which is different terminology for the same thing.