Multiple layers #395

Hi, I'm rewriting my emulator in Rust and I've decided to use Pixels as my main rendering crate. But I've hit a little snag.

I would like to render two layers: the base layer is the emulated screen, and the top layer is the debugger UI, shown only when it is needed. The debugger UI is twice the resolution of the emulated screen.

Now, I know it's possible to use a single pixels struct and just render the emulated screen twice as big wherever the debugger UI pixel is transparent (i.e. alpha == 0), but I was wondering if it is possible to support transparency in the top layer directly.

I've tried rendering the emulator screen first and then the debugger UI afterwards, but the debugger UI just overwrites the first render, even though the pixels' alpha values are 0.

Looking for options here before I write my own compositing code.

Comments
Hi, I'm sure you've seen the different GUI examples we have. They both operate in the way you describe, where the GUI sits on a "virtual layer" above the pixel buffer. It should be noted that there is no actual layering going on here, just the illusion of it. When the GUI is drawn, it doesn't replace anything already in the buffer; it's just composited over whatever is already there, using whatever compositing modes the GUI is interested in. That's how, e.g., the […] example works.

Given this already exists and works in practice, did you have a more specific question regarding how to draw with compositing? Or maybe some example code demonstrating the issue?
This method of composition uses the CPU. Currently, I have to write code that checks whether alpha is non-one and blends with what's already in the RGBA array. Ideally, I would like the GPU to do that work.

Let me give you an example of my scenario. I have a 640x512 window in which I render a 320x256 pixel buffer for my emulated device. Pixels handles this perfectly. Now I want to render a 640x512 debugger UI on top. This means I have to do an […].

This is the reason I asked for layers. I would love to have pixel buffers of different sizes all being rendered to the same surface. I tried rendering them separately, but the alpha values in the 640x512 debugger pixel buffer were being ignored, and it overwrote the 320x256 emulated pixel buffer completely.
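As an aside, the CPU blend being described here is plain source-over compositing (out = src·a + dst·(1 − a)). A minimal sketch for the exact sizes mentioned above, assuming RGBA8 buffers with straight (non-premultiplied) alpha; the function and buffer names are illustrative and not part of the Pixels API:

```rust
/// Composite a 640x512 RGBA8 `ui` buffer over a 320x256 RGBA8 `emu` buffer
/// (upscaled 2x with nearest-neighbour), writing into the 640x512 `frame`.
/// Source-over blend with straight alpha: out = src*a + dst*(1 - a).
fn composite(frame: &mut [u8], emu: &[u8], ui: &[u8]) {
    const W: usize = 640;
    const H: usize = 512;
    for y in 0..H {
        for x in 0..W {
            let dst = (y * W + x) * 4;
            // Nearest-neighbour 2x upscale of the emulated screen.
            let src = ((y / 2) * (W / 2) + (x / 2)) * 4;
            let a = ui[dst + 3] as u32;
            for c in 0..3 {
                let s = ui[dst + c] as u32;
                let d = emu[src + c] as u32;
                // Integer source-over blend; +127 rounds the division by 255.
                frame[dst + c] = ((s * a + d * (255 - a) + 127) / 255) as u8;
            }
            frame[dst + 3] = 0xff; // the final frame is opaque
        }
    }
}
```

This loop runs over all 640×512 pixels every frame, which is exactly the per-pixel CPU work being objected to above.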
Can you specify what you are referring to when you say "this method" in your opening statement? Both imgui and egui do use the GPU for rendering the UI. Neither treats the render target as a flat "array of pixels" texture; they produce meshes into a VBO, like a 3D game engine does.

I'm not sure why you would want to use a separate texture to poke pixels into for drawing a GUI over the top of the emulated game screen. That sounds extremely clunky, especially when the immediate-mode GUIs give you a much more efficient method that is resolution-independent and fully supports alpha compositing and all of the other essential rendering techniques.
I am not sure where you're getting immediate-mode GUIs from? I just want two pixel buffers that can be two different sizes, and to render them on top of each other. I hoped to do that by creating two pixel buffers from the same surface, grabbing a frame, setting the pixels for that particular layer, then calling `render()`.

I hope that makes more sense to you.
Sorry, this conversation is very confused. I will try to clarify.
I'm suggesting you use […]. In other words, use hardware rasterization instead of software rasterization. The two GUI examples we have both use hardware rasterization, and they both support transparency. That's the point I'm raising. The fact that they are immediate-mode GUI APIs is irrelevant.
Use […]. It will need its own shader, texture, vertex buffer, pipeline, etc. You can start with the custom shader example to give you some hints, as well as the […].

I still recommend not doing it this way. A GPU-native GUI rasterizer can get the same aesthetic without the CPU ever plotting a single pixel. But you are free to manipulate pixels on the CPU in a separate texture, if you like.
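For reference, the "overwrite" behaviour described earlier usually comes down to two GPU-side settings rather than anything about the pixel data itself: the colour target's blend state, and the render pass's load op. A rough sketch of both, written against the wgpu 0.18-era API (field names shift between wgpu releases); `device`, `shader`, `pipeline_layout`, `texture_format`, `encoder`, and `target_view` are assumed to already exist:

```rust
// Pipeline for the overlay layer. The crucial part is the blend state:
// without it, the default (no blending) replaces whatever is already in
// the target, which is exactly the "overwrite" effect described above.
let pipeline = device.create_render_pipeline(&wgpu::RenderPipelineDescriptor {
    label: Some("overlay"),
    layout: Some(&pipeline_layout),
    vertex: wgpu::VertexState {
        module: &shader,
        entry_point: "vs_main",
        buffers: &[], // fullscreen triangle generated from the vertex index
    },
    fragment: Some(wgpu::FragmentState {
        module: &shader,
        entry_point: "fs_main",
        targets: &[Some(wgpu::ColorTargetState {
            format: texture_format,
            blend: Some(wgpu::BlendState::ALPHA_BLENDING),
            write_mask: wgpu::ColorWrites::ALL,
        })],
    }),
    primitive: wgpu::PrimitiveState::default(),
    depth_stencil: None,
    multisample: wgpu::MultisampleState::default(),
    multiview: None,
});

// When drawing the overlay, load the existing contents instead of clearing.
let mut pass = encoder.begin_render_pass(&wgpu::RenderPassDescriptor {
    label: Some("overlay pass"),
    color_attachments: &[Some(wgpu::RenderPassColorAttachment {
        view: &target_view,
        resolve_target: None,
        ops: wgpu::Operations {
            load: wgpu::LoadOp::Load, // keep the emulated screen underneath
            store: wgpu::StoreOp::Store,
        },
    })],
    depth_stencil_attachment: None,
    timestamp_writes: None,
    occlusion_query_set: None,
});
pass.set_pipeline(&pipeline);
pass.draw(0..3, 0..1); // one fullscreen triangle
```

With `ALPHA_BLENDING` on the overlay pipeline and `LoadOp::Load` on its pass, the second draw composites over the first instead of replacing it.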
Thanks for the input. I am doing it this way because that's how I did it with the previous C++ incarnation I wrote (using SFML); their equivalent of pixel buffers were "sprites". There is no way of avoiding the CPU plotting pixels, because it would be MUCH more complicated to do it with the GPU. I still require filling out two pixel buffers with the CPU (one for the emulated screen, one for the debugger UI). All I have to do is render two rectangles on the screen.

Since Pixels cannot do this easily, I've decided to go my own route and start developing a crate called Pixu that can handle multiple layers of pixel buffers. If I have to go the […] route, this is OK; Pixels is not a fit for my particular purposes. I still plan to use Pixels for my ray-casting adventures later, though!
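For what it's worth, the layered design being described might look roughly like the sketch below. This is purely hypothetical: every name is invented for illustration, and it is not Pixels' API (nor necessarily Pixu's):

```rust
// Hypothetical layered-compositor API, invented for illustration only.
struct Layer {
    width: u32,
    height: u32,
    rgba: Vec<u8>, // CPU-side RGBA8 pixel buffer, straight alpha
}

impl Layer {
    fn new(width: u32, height: u32) -> Self {
        Self { width, height, rgba: vec![0u8; (width * height * 4) as usize] }
    }
    fn frame_mut(&mut self) -> &mut [u8] {
        &mut self.rgba
    }
}

fn main() {
    // Base layer: the emulated screen; top layer: the debugger UI.
    let mut emu = Layer::new(320, 256);
    let mut debugger = Layer::new(640, 512);

    // Each frame: fill both buffers on the CPU...
    emu.frame_mut().fill(0x10);
    debugger.frame_mut().fill(0x00); // mostly transparent

    // ...then hand the layers, in back-to-front order, to a renderer that
    // uploads each to its own texture and draws two alpha-blended quads:
    // composite_and_present(&[&emu, &debugger]);
}
```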