Possibility of gpu-side pixel buffer #170
Comments
This is something I need to look into more closely. I have another side project that I was not planning on using […]. The biggest issue, AFAIK, is that the texture usage is currently hardcoded (line 304 in 23da739). E.g. one could use the […]. The other question is about skipping or ignoring the […].
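As a rough illustration only (not the crate's actual code), widening the hardcoded usage so that compute shaders could write the backing texture might look like this, assuming a recent wgpu where the flags are spelled `TextureUsages::*` and `TextureDescriptor` has a `view_formats` field (older versions name these differently):

```rust
// Sketch only: a backing texture whose usage allows GPU-side writes in
// addition to sampling and CPU uploads. Not taken from the pixels crate.
fn create_gpu_writable_texture(
    device: &wgpu::Device,
    width: u32,
    height: u32,
) -> wgpu::Texture {
    device.create_texture(&wgpu::TextureDescriptor {
        label: Some("pixel buffer texture (gpu-writable)"),
        size: wgpu::Extent3d {
            width,
            height,
            depth_or_array_layers: 1,
        },
        mip_level_count: 1,
        sample_count: 1,
        dimension: wgpu::TextureDimension::D2,
        // Note: sRGB formats cannot be bound as storage textures, so a
        // GPU-writable pixel buffer would likely use a non-sRGB format.
        format: wgpu::TextureFormat::Rgba8Unorm,
        usage: wgpu::TextureUsages::TEXTURE_BINDING   // sampled during render
            | wgpu::TextureUsages::COPY_DST           // CPU uploads / buffer copies
            | wgpu::TextureUsages::STORAGE_BINDING,   // written by compute shaders
        view_formats: &[],
    })
}
```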
I also think having a separate type would probably be the most straightforward way to expose the functionality to the user. The `Pixels` type is advertised as a simple pixel buffer, but changing from internally using a `Vec` to a GPU-side `Texture` would fundamentally change how it works. I guess much of the functionality could be shared internally anyway.
Could this maybe be solved by making `Pixels` generic over the storage it uses (maybe internally using an enum)? Then the implementation could be divided for the methods where it matters.
My current thought is to use a trait to define the common functionality, and concrete structs for the implementation-specific parts. Pretty standard stuff, no frills. Did you have a specific reason to use […]?
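As a rough sketch of that shape (all of these names are hypothetical and do not exist in the pixels crate), a shared trait with two concrete structs could look like:

```rust
// Hypothetical sketch of the "trait + concrete structs" idea; none of these
// types exist in the pixels crate.

/// Functionality shared by both buffer kinds.
trait PixelSurface {
    fn size(&self) -> (u32, u32);
    fn render(&mut self) -> Result<(), String>;
}

/// CPU-side buffer: a `Vec<u8>` uploaded to the texture on every render.
struct CpuPixels {
    width: u32,
    height: u32,
    frame: Vec<u8>,
}

impl PixelSurface for CpuPixels {
    fn size(&self) -> (u32, u32) {
        (self.width, self.height)
    }
    fn render(&mut self) -> Result<(), String> {
        // Real code would upload `self.frame` with queue.write_texture(..)
        // before recording the scaling render pass.
        Ok(())
    }
}

/// GPU-side buffer: compute passes write the texture directly, so render()
/// only records the scaling render pass.
struct GpuPixels {
    width: u32,
    height: u32,
}

impl PixelSurface for GpuPixels {
    fn size(&self) -> (u32, u32) {
        (self.width, self.height)
    }
    fn render(&mut self) -> Result<(), String> {
        Ok(())
    }
}

fn main() {
    // Callers can treat either kind uniformly through the trait.
    let mut surfaces: Vec<Box<dyn PixelSurface>> = vec![
        Box::new(CpuPixels { width: 320, height: 240, frame: vec![0; 320 * 240 * 4] }),
        Box::new(GpuPixels { width: 320, height: 240 }),
    ];
    for s in surfaces.iter_mut() {
        println!("{:?} -> {:?}", s.size(), s.render());
    }
}
```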
I certainly have a tendency to overcomplicate things 😅. I am hyped that this seems to be moving forward!
Hey! I had a similar use case where I just used the glium crate and compute shaders to do things. The only problem was that I found GPU shader development very tedious due to the lack of proper profiling and debugging (I don't have an NVIDIA GPU, so that complicates things). I was wondering whether I should switch back to using pixels again, and while checking out the repo for new things I found this. So, has there been any update on this?
Nothing new to report here. Even if this were addressed, it would not make the debugging experience with shaders any better. It would just bring that experience to pixels.
Yep. Very true. Thanks for the response! Love your project ^^
I am currently working on implementing cellular automata and similar things while learning Rust and wgpu.
Simulation on the CPU is already working for me, and pixels was a big help since I didn't need to put any thought whatsoever into the graphics side. So big thanks for this awesome crate!
But the next step is doing the computation on the GPU to enable large simulations like these.
At the moment it's necessary to keep separate buffers on the GPU for the computation, map those buffers so the CPU can read them, and use the result to update the pixel buffer of pixels, which then copies that pixel buffer back to a GPU-side texture when rendering.
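Roughly, the round trip looks like this (a simplified sketch, assuming pixels 0.11+ where the frame is exposed via `frame_mut()`, a wgpu version where `BufferSlice::map_async` takes a callback and `Device::poll(Maintain::Wait)` blocks, and a staging buffer that is tightly packed to the same size as the frame):

```rust
// Simplified sketch of the GPU -> CPU -> GPU round trip. Assumes `staging`
// was filled via copy_buffer_to_buffer after the compute pass and has exactly
// the same length (and row layout) as the pixels frame.
fn copy_compute_result_into_frame(
    device: &wgpu::Device,
    staging: &wgpu::Buffer,
    pixels: &mut pixels::Pixels,
) {
    let slice = staging.slice(..);
    slice.map_async(wgpu::MapMode::Read, |result| result.unwrap());
    // Block until the mapping completes (fine for a sketch, not for real code).
    device.poll(wgpu::Maintain::Wait);

    // The copy back onto the CPU, only so that pixels can upload the same
    // bytes to its texture again on the next render() call.
    pixels.frame_mut().copy_from_slice(&slice.get_mapped_range());
    staging.unmap();
}
```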
Directly updating the texture is not possible either, since the `render_with` method (and by extension the `render` method) overwrites the texture with the CPU-side pixel buffer. It would be awesome if it were possible to specify a pixel buffer on the GPU which then gets used by pixels, or to request that pixels itself use a GPU-side pixel buffer and expose that to the user.
This would allow users to use pixels for what it does best, rendering pixel-perfect graphics, while using the most appropriate buffer location for their specific application.
As an alternative, pixels could also expose a method which renders the texture without first copying the pixel buffer to it. In that case it would be convenient to also be able to query the backing texture, to be able to use the `copy_buffer_to_texture` method of wgpu's `CommandEncoder`. Thank you for your work!
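For concreteness, this is roughly the kind of copy I would like to be able to record (a hypothetical sketch: the `texture` handle would have to come from pixels somehow, and the copy types assume a wgpu version that still names them `ImageCopyBuffer` / `ImageCopyTexture` with plain `Option<u32>` row pitches):

```rust
// Hypothetical: copy the compute pass's output buffer straight into the
// texture that pixels samples during rendering, with no CPU round trip.
fn blit_buffer_into_pixels_texture(
    encoder: &mut wgpu::CommandEncoder,
    compute_output: &wgpu::Buffer, // RGBA8 rows, padded to 256-byte alignment
    texture: &wgpu::Texture,       // would need to be exposed by pixels
    width: u32,
    height: u32,
) {
    // wgpu requires bytes_per_row of a buffer->texture copy to be a multiple
    // of 256, so the compute shader must already write rows with this padding.
    let bytes_per_row = (4 * width + 255) / 256 * 256;

    encoder.copy_buffer_to_texture(
        wgpu::ImageCopyBuffer {
            buffer: compute_output,
            layout: wgpu::ImageDataLayout {
                offset: 0,
                bytes_per_row: Some(bytes_per_row),
                rows_per_image: Some(height),
            },
        },
        wgpu::ImageCopyTexture {
            texture,
            mip_level: 0,
            origin: wgpu::Origin3d::ZERO,
            aspect: wgpu::TextureAspect::All,
        },
        wgpu::Extent3d {
            width,
            height,
            depth_or_array_layers: 1,
        },
    );
}
```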