
Multiple layers #395

Open
Cthutu opened this issue Aug 12, 2024 · 7 comments
Labels
question Usability question

Comments


Cthutu commented Aug 12, 2024

Hi, I'm rewriting my emulator in Rust and I've decided to use Pixels as my main rendering crate, but I've hit a little snag.

I would like to render two layers: the base layer is the emulated screen, and the top layer is the debugger UI, shown when it is needed. The debugger UI is twice the resolution of the emulated screen.

Now, I know it's possible to use a single Pixels struct and just render the emulated screen twice as big wherever the debugger UI pixel is transparent (i.e. alpha == 0), but I was wondering whether it is possible to support transparency in the top layer.

I've tried rendering the emulator screen first and the debugger UI afterwards, but the debugger UI just overwrites the first render even though its pixels' alpha values are 0.

Looking for options here before I write my own compositing code.
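For context, the CPU-side compositing being avoided here is a per-pixel source-over blend. A minimal sketch, assuming non-premultiplied RGBA8 pixels (the function name `blend_over` is illustrative, not part of the Pixels API):

```rust
/// Source-over blend of one RGBA8 pixel onto another (non-premultiplied).
/// `src` is the debugger-UI pixel, `dst` is the emulated-screen pixel.
fn blend_over(src: [u8; 4], dst: [u8; 4]) -> [u8; 4] {
    let sa = src[3] as u32; // source alpha, 0..=255
    let inv = 255 - sa;
    let mut out = [0u8; 4];
    for c in 0..3 {
        // out = src * a + dst * (1 - a), with rounding
        out[c] = ((src[c] as u32 * sa + dst[c] as u32 * inv + 127) / 255) as u8;
    }
    out[3] = (sa + (dst[3] as u32 * inv + 127) / 255) as u8;
    out
}

fn main() {
    // A fully transparent UI pixel leaves the emulated pixel untouched.
    assert_eq!(blend_over([255, 0, 0, 0], [10, 20, 30, 255]), [10, 20, 30, 255]);
    // A fully opaque UI pixel replaces it.
    assert_eq!(blend_over([255, 0, 0, 255], [10, 20, 30, 255]), [255, 0, 0, 255]);
    println!("ok");
}
```

Doing this per pixel on every frame is the work the question is trying to push onto the GPU.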

@parasyte parasyte added the question Usability question label Sep 22, 2024

parasyte commented Sep 22, 2024

Hi,

I'm sure you've seen the different GUI examples we have. They both operate in the way you describe, where the GUI sits on a "virtual layer" above the pixel buffer.

It should be noted there is no actual layering going on here, just the illusion of it. When the GUI is drawn, it doesn't replace anything already in the buffer. It's just composited over whatever is already there, using whatever compositing modes the GUI is interested in. That's how, e.g. the imgui-winit example uses a transparent window over the pixel buffer.


Given this already exists and works in practice, did you have a more specific question regarding how to draw with compositing? Or maybe some example code demonstrating the issue?


Cthutu commented Sep 30, 2024

This method of composition uses the CPU. Currently, I have to write code that checks whether each pixel's alpha is less than one and blends it with what's already in the RGBA array. Ideally, I would like the GPU to do that work.

Let me give you an example of my scenario. I have a window of 640x512 in which I render a 320x256 pixel buffer for my emulated device. Pixels handles this perfectly.

Now I want to render on top a 640x512 debugger UI.

This means I have to do an if check on every debugger pixel, and if its alpha is 0, write in the corresponding part of the scaled-up "fat" emulated pixel. It also means the emulated pixel buffer is now four times bigger at 640x512, on top of the separate 320x256 buffer I keep.

This is the reason I asked about layers. I would love to have pixel buffers of different sizes all being rendered to the same surface. I tried rendering them separately, but the alpha values in the 640x512 debugger pixel buffer were being ignored, and it overwrote the 320x256 emulated pixel buffer completely.
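The scheme described above (show the scaled-up emulated pixel wherever the debugger UI is transparent, otherwise keep the UI pixel) can be sketched on the CPU like this. The function name and the tightly packed RGBA8 buffer layout are assumptions for illustration:

```rust
/// Composite a WxH emulated frame (scaled 2x, nearest-neighbour) under a
/// 2Wx2H debugger frame, writing into the 2Wx2H buffer that would be handed
/// to Pixels. All buffers are tightly packed RGBA8.
fn composite(emu: &[u8], debug_ui: &[u8], out: &mut [u8], w: usize, h: usize) {
    for y in 0..h * 2 {
        for x in 0..w * 2 {
            let di = (y * w * 2 + x) * 4;         // index into debugger/out
            let si = ((y / 2) * w + (x / 2)) * 4; // index into emulated frame
            if debug_ui[di + 3] == 0 {
                // Transparent UI pixel: the fat emulated pixel shows through.
                out[di..di + 4].copy_from_slice(&emu[si..si + 4]);
            } else {
                out[di..di + 4].copy_from_slice(&debug_ui[di..di + 4]);
            }
        }
    }
}

fn main() {
    // A 1x1 emulated frame under a 2x2 debugger frame.
    let emu = [10u8, 20, 30, 255];
    let mut debug_ui = [255u8, 0, 0, 255].repeat(4); // opaque red UI
    debug_ui[3] = 0; // make the top-left UI pixel fully transparent
    let mut out = [0u8; 16];
    composite(&emu, &debug_ui, &mut out, 1, 1);
    assert_eq!(&out[0..4], &[10, 20, 30, 255]); // emulated pixel shows through
    assert_eq!(&out[4..8], &[255, 0, 0, 255]);  // UI pixel wins elsewhere
    println!("ok");
}
```

This is exactly the per-pixel branch the comment describes; it runs on every frame, which is the cost being objected to.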

@parasyte

Can you specify what you are referring to when you say "this method" in your opening statement?

Both imgui and egui do use the GPU for rendering the UI. Neither treats the render target as a flat texture "array of pixels". They produce meshes into a VBO like a 3D game engine does.

I'm not sure why you would want to use a separate texture to poke pixels into for drawing a GUI over top of the emulated game screen. That sounds extremely clunky. Especially when the immediate mode GUIs give you a much more efficient method that is resolution-independent and fully supports alpha compositing and all of the other essential rendering techniques.


Cthutu commented Oct 2, 2024

I am not sure where you're getting immediate mode GUIs from? I just want two pixel buffers that can be 2 different sizes, and render them on top of each other.

I hoped to do that by creating two pixel buffers from the same surface, grabbing a frame, setting the pixels for that particular layer, then calling render on both. Unfortunately for me, when you call render on the second pixel buffer, it doesn't take into account the alpha channel and so overwrites the first render call.

I hope that makes more sense to you.


Cthutu commented Oct 2, 2024

Perhaps an image will help you visualise what I am doing. This is taken from my C++ version of my emulator with the debugger UI showing above:
[screenshot: the C++ emulator, with the higher-resolution debugger UI composited over the lower-resolution emulated screen]

You can see the emulated screen below is at a lower resolution (a smaller pixel buffer) than the debugger UI above it. Generating a single combined pixel buffer is slow. I was hoping I could render one (the emulator) and then the other (the debugger).


parasyte commented Oct 2, 2024

Sorry, this conversation is very confused. I will try to clarify.

I am not sure where you're getting immediate mode GUIs from?

I'm suggesting you use wgpu to render your GUI, rather than drawing a GUI with pixel manipulation. Whether the GUI API is immediate mode or retained mode is an orthogonal concern.

In other words, use hardware rasterization instead of software rasterization.

The two GUI examples we have both use hardware rasterization and they both support transparency. That's the point I'm raising. The fact that they are immediate mode GUI APIs is irrelevant.

I just want two pixel buffers that can be 2 different sizes, and render them on top of each other.

Use wgpu to draw a second texture over the one that pixels already draws. You have access to all of the wgpu state in the Pixels API for these kinds of application-specific requirements.

It will need its own shader, texture, vertex buffer, pipeline, etc. You can start with the custom shader example to give you some hints, as well as the pixels source code itself.

I still recommend not doing it this way. A GPU-native GUI rasterizer can get the same aesthetic without the CPU ever plotting a single pixel. But you are free to manipulate pixels on the CPU in a separate texture, if you like.
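The shape of this suggestion, using the `render_with` hook that Pixels exposes, looks roughly like the following. This is a non-runnable sketch: `debugger_layer` and all of its setup (texture, pipeline, bind group) are hypothetical and elided; only `render_with` and `context.scaling_renderer` come from the Pixels API, and the full pattern is in the custom-shader example.

```rust
// Sketch only. `debugger_layer` is a hypothetical helper owning its own wgpu
// texture and pipeline, with wgpu::BlendState::ALPHA_BLENDING on the colour
// target so the GPU performs the compositing.
pixels.render_with(|encoder, render_target, context| {
    // Pass 1: pixels' built-in scaling renderer draws the emulated screen.
    context.scaling_renderer.render(encoder, render_target);

    // Pass 2: draw the 640x512 debugger texture over it. The render pass
    // must use wgpu::LoadOp::Load so pass 1's output is preserved.
    debugger_layer.render(encoder, render_target);

    Ok(())
})?;
```

With this split, the emulated buffer can stay at 320x256 and the debugger texture at 640x512; no CPU-side upscaling or blending is needed.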

@rocket-matt

Thanks for the input. I am doing it this way because that's how I did it in the previous C++ incarnation I wrote (using SFML); their equivalent of pixel buffers was "sprites". There is no way of avoiding the CPU plotting pixels, because it would be MUCH more complicated to do it on the GPU. I still have to fill two pixel buffers with the CPU (one for the emulated screen, one for the debugger UI); all I need beyond that is to render two rectangles on the screen.

Since Pixels cannot do this easily, I've decided to go my own route and start developing a crate called Pixu that can handle multiple layers of pixel buffers. If I have to go the wgpu route and write my own shaders with Pixels, I see no point in using Pixels. I may as well write my own wgpu framework, which I've done before, but was hoping to avoid with Pixels.

This is OK; Pixels is just not a fit for my particular purposes. I still plan to use Pixels for my ray-casting adventures later though!
