# of-graphics
i
Spinning up a separate thread @Kartik Agaram said in https://futureofcoding.slack.com/archives/C5T9GPWFL/p1585680051053200?thread_ts=1585420887.013400&cid=C5T9GPWFL:
Please build your prototypes atop Mu!
I have no idea how to do graphics or sound. But a forcing function would be helpful.
All that I need for a basic implementation of Hest's renderer is something like: the ability to draw stroked straight lines, filled rectangles and circles, and the most basic text (bitmap font) at arbitrary sizes and positions... with a few thousand drawing operations of various sorts happening in less than 3ms.
❤️ 3
c
If you want to draw such shapes at good quality, I can recommend the nanovg library; you just need an OpenGL render target and can draw high-quality images.
It's the simplest approach I've seen.
ImGui will do this too, but it doesn't quite have the vector graphics sophistication of nanovg for things like gradient fills, etc.
In my intro thread, I showed a screenshot of a UI button I tried with nanovg:

https://files.slack.com/files-pri/T5TCAFTA9-FUZDKAZ7S/image.png

😍 1
The knob on the right is nanovg, the ones on the left are inside Bitwig (which I suspect might use something very similar, if not the same)
k
@Ivan Reese can you elaborate on what you mean by "stroked straight line"?
c
At least, I assume that's what he means - the library I referenced draws SVG primitives like this.
❤️ 1
i
Yeah, the "line" or "path" refers to the geometry, and the "stroke" is how it is rendered. When stroking a line, you generally want control over the stroke thickness and color at a minimum. Lots of features beyond that, but that's the basics.
👍 1
d
Nanovg requires OpenGL. Doesn't that conflict with the "no dependencies" requirement of the Mu project? If the goal is to be able to program all the way down to the bare metal, then I think you would want to standardize on a single GPU architecture, and program it directly. Just as you have standardized on x86. Or don't support a GPU, and instead interface directly to a frame buffer.
👍 2
k
Yeah, my current plan is to focus on just a frame buffer. Mu currently can boot up a disk image with the hobbyist OS Soso (https://github.com/ozkl/soso) in addition to Linux. Soso is much tinier so reduces my dependency on C. It's also graphical from the ground up, where I haven't really managed to compile a Linux kernel with graphics yet. On the other hand it doesn't have a network stack... I interpreted the nanovg suggestion as something to crib from, and it's extremely useful in that respect. Even though I minimize dependencies I don't want to rethink everything from scratch. (For example, Mu's support for bootable disk images comes from digging into and cribbing from the Minimal Linux project at http://minimal.linux-bg.org.) Regardless of dependencies, tiny projects that can teach how something works are gold.
Another way to put it: Mu burns everything to the ground in production. Mainstream software is good for prototypes (https://futureofcoding.slack.com/archives/C5T9GPWFL/p1586116575146400?thread_ts=1586108832.143700&cid=C5T9GPWFL) and prototypes are very useful in staging environments.
💯 1
I actually like a lot of the OpenGL interface, from what I've seen of it. I'd be happy to implement some subset of it. Unless someone convinces me it sucks..
d
The https://libre-soc.org/ project aims to produce an open source CPU + GPU. The GPU is integrated with the CPU: it's just an extension to the instruction set architecture, rather than being a coprocessor. This means that programming this GPU on the bare metal will be orders of magnitude simpler than a typical GPU. Hypothetically, when the project is finally finished and the hardware is available, it would be a good platform for a program-on-the-bare-metal type of software project.
💡 1
❤️ 2
The reason to implement OpenGL is for porting legacy software. I am not a fan: I think the API is overly complex and clumsy. OpenGL is stuck forever at version 4.6 and everybody has moved on. The new cross-platform GPU API is WebGPU (which is still under development, although working prototypes exist). Vulkan also purports to be cross-platform, but Vulkan will never be available in a web browser. Vulkan is not nice to use (1000 lines of code to draw a triangle), while WebGPU has pleasant-to-use, modern C, Rust and JavaScript interfaces. WebGPU will be a native API on all desktop and mobile platforms, it will be available in web browsers via JavaScript, and it will be the native GPU API for WebAssembly. So WebGPU is the future and OpenGL is the past.
👍 2
c
The advantage of GL is that it is pretty much available everywhere. As you say, Vulkan is difficult for beginners (and advanced users 🙂 ).
💯 1
If you just have a framebuffer, you could consider something like this software-based renderer: https://github.com/zauonlok/renderer
Or this simpler one: https://github.com/ssloy/tinyrenderer - it even has an 'our_gl.h' header file....
d
I am using OpenGL right now for my project. My problem is that I want to be cross-platform, and Apple has officially deprecated OpenGL, and they also do not support OpenGL 4.3, which has features I need. The MacOS and WebAssembly platforms are the reasons why I will migrate to WebGPU. OpenGL is still a great cross-platform solution if you don't need access to features introduced in the last 10 years, like compute shaders.
c
Yeah, for the original poster's question, he just needs basic GL. I didn't realise WebGPU would work natively on desktop - if there's a C API it's more interesting... my live coding tool has GL/DX 12 and Vulkan backends in various levels of completion!
k
@Chris Maughan both those repos are excellent, thank you. However they're not actually writing to the framebuffer, are they? Maybe I'm missing something. Just to give some context, the problem I wrestle with for graphics is how to make something that: a) can display on a real machine with minimal dependencies, and b) can also display on a stock *nix or Mac machine without needing root permissions and so on. To help you triangulate, my non-graphics programs in Mu can currently run on Linux and also on a much simpler OS using either Qemu or native hardware. This is easy because we don't typically need root to access stdin/stdout/tty the way we need it for framebuffer access. Once I can display a single pixel within these criteria, your repos become very relevant.
d
@Chris Maughan The WebGPU C implementations are wgpu (https://github.com/gfx-rs/wgpu) and dawn (https://dawn.googlesource.com/dawn/). Both implementations use the same C header file (https://github.com/webgpu-native/webgpu-headers/blob/master/webgpu.h). For now, these prototype implementations ingest SPIR-V as the shader language. Later they will change to ingest WebGPU Shader Language (WGSL), which is text based and isomorphic to SPIR-V, but that is still in an early stage of design.
@Kartik Agaram AFAIK you can't write directly to framebuffer hardware if you want to run under a modern OS. Under Linux, Windows or MacOS, Mu is effectively running in a virtual machine and is using OS APIs to do all of its input and output, including graphics. Probably Mu is running under a window manager, so that is another layer of abstraction between Mu and the hardware. Maybe you want to create a pixel array in Mu's address space, pretend that this pixel array is the frame buffer, and write OS-specific code to copy the frame buffer to the window once per frame. It's not difficult to code this using OpenGL. It will not be energy efficient on a laptop though, since the frame buffer copy happens 60 times per second even if the framebuffer hasn't changed.
k
It occurs to me that I just need a framebuffer emulator analogous to text-mode terminal emulators. And look, someone else had the same idea: https://sixpak.org/fbe @Doug Moen it's totally an option to have different code paths for running on native hardware vs a host OS. For example, here are the syscalls I use for two different OSs: https://github.com/akkartik/mu/blob/master/init.linux and https://github.com/akkartik/mu/blob/master/init.soso. In combination with fbe, I could maybe have separate init.linux and init.qemu or something like that, where the ioctls expand to nothing and the address of `mmap`d memory changes.
d
Yes, I was describing my implementation of a framebuffer emulator.
c
@Kartik Agaram sounds like you are on the right track.... I had thought you earlier mentioned you had some kind of framebuffer already - hence the software rasterizer. To me, a framebuffer is a GPU memory surface that gets copied to the screen - I work for NVIDIA, so I have hardware bias 😉. The approach of using a memory buffer then copying it to the screen using an OS specific path, as @Doug Moen suggests sounds good to me 🙂 OpenGL remains the best supported way to do that on any platform. Any version of it will be able to take a memory surface and copy it to the display, and it is still the first graphics API that most platforms support. There are other ways to get a framebuffer that would work without creating an OpenGL context though. My easy render repo has a really simple example of displaying memory pixels on the display in windows. There are probably similar ways to do such things on other OS. There may even be a cross platform header library that will accomplish the same thing: https://github.com/cmaughan/easyrender/blob/master/src/devices/windows/device.cpp
d
sixpak.org/fbe contains a loop that runs every millisecond and copies the framebuffer into an X pixmap using XSetPixel(). Although this code contains optimizations, I would still choose to write a framebuffer emulator in OpenGL, updating once per frame instead of once per millisecond. I think efficiency and power consumption could be an issue. (I still have work to do in Curv to prevent laptops from heating up and turning on their fan unnecessarily, which is why I'm sensitive to this issue.)
💡 1
I want to correct a statement I made about WebGPU native. There was a checkin today to support both SPIR-V and WGSL as shader languages (even though web browsers will not support SPIR-V). People using existing game-programming toolchains will want to use SPIR-V.
s
Btw there is at least one software-rasterized version of nanovg
So no need to get rid of nanovg, there are also backends for all mainstream graphics APIs
If I were you I'd probably support some GPU style API, probably bgfx, so Mu can still take advantage of GPUs
💡 1
Even if it's against the core philosophy of Mu, unfortunately GPU capabilities vary wildly across vendors and generations, so abstractions are necessary
And on many platforms OpenGL or OpenGL ES are the lowest level abstraction
k
The philosophy of Mu is to interrogate abstractions. Definitely willing to consider this!