Assistance In Implementing A New Frontend #823
I haven't analyzed this in full detail, but one thing that jumps out at me from a quick read is this:
That approach will cause you to render multiple levels-of-detail simultaneously, which will be a mess. Plus extra tiles that are loaded for caching purposes but don't actually need to be rendered. The solution is to use the
Step 1 is unnecessary if you're only doing one call to

If that's not the problem, it'd be helpful to see screenshots of what your rendering looks like, because it might provide a clue. In fact, we'd love to see screenshots if you get it working, too! :)
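To make the selection-driven approach concrete, here is a minimal sketch of a per-frame loop using stand-in types (this is not the real cesium-native API): each frame, exactly the tiles in the current `tilesToRenderThisFrame` list are shown, and any previously visible tile that dropped out of the list is hidden, even though its resources may remain cached.

```cpp
#include <cstdint>
#include <unordered_set>
#include <vector>

// Stand-in for a tile's render handle; a real frontend would use the
// void* produced by its IPrepareRendererResources implementation.
using TileHandle = std::uint64_t;

class SceneSync {
public:
  // Show exactly the tiles selected this frame and hide everything
  // else that was visible last frame. Cached-but-unselected tiles are
  // simply not drawn; their resources are not freed here.
  const std::unordered_set<TileHandle>&
  apply(const std::vector<TileHandle>& tilesToRenderThisFrame) {
    visible_ = std::unordered_set<TileHandle>(
        tilesToRenderThisFrame.begin(), tilesToRenderThisFrame.end());
    return visible_;
  }

private:
  std::unordered_set<TileHandle> visible_;
};
```

If a parent tile from a previous frame stays visible alongside its refined children, you get exactly the multiple-levels-of-detail mess described above.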
Awesome, thank you so much for the response. :) Do I interpret you correctly in that

Thank you for the note on ordering with

As far as screenshots, I am embarrassed to be showing this to professionals, but here is a screenshot of the scene before rendering the tile geometries, with a teapot at

and here is what it looks like after the geometry renders in the scene:

render2.mp4

What I've done here is place an

The actual colored lines you see in that movie are artifacts of a debugging facility that is part of
No, sorry for the confusion,
The coordinate system of the tiles - in almost all cases (certainly for Google Photorealistic 3D Tiles) - is ECEF. The origin is the center of the Earth. Actually, each tile has geometry relative to its own local coordinate system, and a transformation that takes that to ECEF. Perhaps you're not including the tile's transformation (
If you want to render the world with a different coordinate system, not at the center of the Earth, `LocalHorizontalCoordinateSystem` may help. Basically you construct an instance of that the way you like, call
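The transformation that `LocalHorizontalCoordinateSystem` encapsulates is standard east-north-up (ENU) math. As a self-contained illustration (plain C++ with hand-rolled WGS84 constants, not cesium-native's actual implementation), the same rotation-plus-translation is what a frontend would bake into every tile's model matrix:

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

// WGS84 ellipsoid constants.
constexpr double kA = 6378137.0;            // semi-major axis (meters)
constexpr double kF = 1.0 / 298.257223563;  // flattening
constexpr double kE2 = kF * (2.0 - kF);     // first eccentricity squared

// Geodetic coordinates (radians, meters above ellipsoid) to ECEF.
Vec3 geodeticToEcef(double lat, double lon, double h) {
  const double n = kA / std::sqrt(1.0 - kE2 * std::sin(lat) * std::sin(lat));
  return {(n + h) * std::cos(lat) * std::cos(lon),
          (n + h) * std::cos(lat) * std::sin(lon),
          (n * (1.0 - kE2) + h) * std::sin(lat)};
}

// Rotate an ECEF offset from a local origin into east-north-up axes.
Vec3 ecefToEnu(const Vec3& p, double lat0, double lon0, double h0) {
  const Vec3 o = geodeticToEcef(lat0, lon0, h0);
  const double dx = p.x - o.x, dy = p.y - o.y, dz = p.z - o.z;
  const double sl = std::sin(lat0), cl = std::cos(lat0);
  const double so = std::sin(lon0), co = std::cos(lon0);
  return {-so * dx + co * dy,                      // east
          -sl * co * dx - sl * so * dy + cl * dz,  // north
          cl * co * dx + cl * so * dy + sl * dz};  // up
}
```

A point 100 meters directly above the origin comes out as local (0, 0, 100), which is the behavior you want from a local horizontal frame.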
As someone who was in your shoes a little over a year ago, first let me congratulate you for taking this on. It's rather intimidating to be staring at `TilesetExternals`, figuring out where to start.

With what you've done so far, it's hard to know if your translation from cesium-native's glTF to the RealityKit representation is working correctly. Can you change your scene so that you're looking at the Earth from 12k+ kilometers away? From that viewpoint it's quickly obvious whether you have something that looks Earth-like or not. Or, as happened when I first tried Google Photorealistic Tiles, you might see octants of the Earth weirdly rotated but placed correctly. It's also progress to see a white or black globe without textures; at least you know that you're getting somewhere.

There's also a fairly new entry point

I'll also suggest without modesty that looking at https://github.com/timoore/vsgCs might be helpful because the target system, Vulkan Scene Graph, is much simpler than Unreal.
@timoore @kring I've made some good progress on using
So I wanted to check back in here and say that largely I'm seeing what I'd "like" in general, very much in part thanks to both of your advice. Currently, I am now rendering regions centered at a given lat, lng, and height. I construct a view request at a given

What I have found is that, when providing elevation values (acquired via the Google Elevation API) for both the tile request and the local horizontal coordinate system, I frequently see the tiles "hovering" or "below" the origin, but never "at" the origin as I would expect the

Thanks for any assistance. I expect you must indirectly have to correct a lot of ignorance on the part of users understanding coordinate systems, and it's truly appreciated.
As a side note, seeing the Google Photorealistic tiles rendered via the VisionOS renderer is quite cool. Screenshots don't really do it justice; the rendering makes the world feel like a toy in front of you.
Your explanation is quite likely correct. Mapping elevations are in MSL, i.e., the height above the geoid or something close to it. `LocalHorizontalCoordinateSystem` refers to the WGS84 ellipsoid, which is not the same thing. There is currently a pull request, #835, that provides a function to get the height above the ellipsoid of the EGM96 geoid, a standard, somewhat old and low-res model of global sea level. You could merge that branch in and try it out.
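For reference, the correction in question is a single addition: the height above the ellipsoid is the MSL (orthometric) height plus the geoid undulation N at that location. A trivial sketch (in real code N would come from an EGM96 lookup such as the one in that pull request, not a hard-coded value):

```cpp
// h_ellipsoid = h_msl + N, where N is the geoid undulation: the height
// of the geoid (roughly mean sea level) above the WGS84 ellipsoid at
// this location. N is negative where the geoid dips below the
// ellipsoid, e.g. roughly -30 m over much of North America.
double mslToEllipsoidalHeight(double mslHeight, double geoidUndulationN) {
  return mslHeight + geoidUndulationN;
}
```

Feeding MSL elevations straight into an ellipsoid-referenced origin skips the `+ N` term, which is consistent with tiles floating above or sinking below the expected origin by tens of meters.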
Nevertheless, we'd love to see screenshots and/or vids. It's quite an accomplishment to go from nothing to garbage to real 3D terrain. Congratulations! |
This was exactly it. :) The Google Elevation API in concert w/ the ellipsoidal correction in EGM96 has put everything within the same rough understanding of origin. Thanks so much. |
Just adding a note here about
While I see why these two methods are named what they are, I think it could be helpful to consider either documenting the semantic differences between what these two methods expect to happen, or renaming them to make the intent a bit more clear. I supply these as examples of what I mean, but fully admit I do not think they are great examples:
I realized I never actually posted a video of what I got working here :-P For anyone else that comes along, here's Cesium Native working on VisionOS:

CesiumNative.mp4
Very cool! |
I'm trying to create a new frontend for `cesium-native`, piecing together how an implementer might use `cesium-native` on a new platform. In doing so, I've made it to the point where I am definitely able to submit geometries to the system, but the results are visually chaotic enough to suggest that I may have messed up part of the pipeline. I wanted to describe what I did in creating a new consumer of `cesium-native` in the hopes that someone could double-check that I've implemented the minimum required to render map tiles on the screen, and potentially give me any advice when it comes to trying to validate a new consumer of tiles.

My goal in this particular exercise is to fetch a given `Tileset` for a known area from the Google Photorealistic Tileset, and render it on VisionOS in RealityKit. I am attempting to take the glTF meshes provided by `cesium-native` and format them appropriately as `MeshDescriptor` objects attached to `ModelEntity` objects in a `Scene`.

What I have done so far:
1. Implemented `IAssetAccessor`, `IPrepareRendererResources`, and `AsyncSystem`. I have interpreted the function of each of these systems as roughly: `IAssetAccessor` methods must create futures that resolve URL requests; `IPrepareRendererResources` must be responsible for mesh and texture allocation and return handles that will be packed into `void *` members to later be used by the rendering system; and `AsyncSystem` must just be able to push work into task queues somewhere.
2. `IPrepareRendererResources` must move all of the Cesium spatial coordinates into my target coordinate system. As RealityKit is a right-handed Z-forward system, each Cesium coordinate shipped to RealityKit must be transformed by swapping `y` and `z` when I'm lazy. The distance unit measurements are presumed to be in meters for both, so I don't believe any coordinate system transformation is necessary there.
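As a quick illustration of a caveat in the axis swap just described (stand-in math, not the code actually in use here): a bare y/z swap maps a z-up frame into a y-up frame but mirrors the geometry, because it flips handedness; a rotation about x preserves it.

```cpp
struct Vec3 { double x, y, z; };

// Lazy z-up -> y-up conversion by swapping y and z. Note that a bare
// swap has determinant -1: it mirrors the geometry, so triangle
// winding order flips and meshes can render inside-out.
Vec3 swapYZ(const Vec3& v) { return {v.x, v.z, v.y}; }

// Handedness-preserving alternative: rotate -90 degrees about the x
// axis, i.e. (x, y, z) -> (x, z, -y). Determinant +1, no mirroring.
Vec3 zUpToYUp(const Vec3& v) { return {v.x, v.z, -v.y}; }
```

Both send the z-up "up" vector (0, 0, 1) to y-up (0, 1, 0); they differ only in the sign given to the remaining axis.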
3. Created a `ViewState` that represents a viewer's position on WGS84, 200 meters off the surface and focused on 0 meters off the surface.
4. Created a `Tileset` with my Cesium ION API key, fetching the photorealistic tile asset ID.
5. Called `updateViewOffline` with my `ViewState` as discussed previously.
6. For each of the `TileLoadState::Done` tiles in the `tilesToRenderThisFrame` collection of the update result, I fetch the `getRenderContent()` data. This corresponds with the handles created by my `IPrepareRendererResources` implementor. Using these handles, I can instruct my RealityKit rendering side to fetch all corresponding `ModelEntity` objects and place them in my scene.

What I'd like to know is:
- ( `Cesium3DTilesContent::registerAllTileContentTypes()` )
- The `Cesium3DTilesSelection::ViewUpdateResult` semantics will become extremely important in the sense of rendering a scene. Is there a simple explanation somewhere of what per-frame or per-user-action updates should be expected when interacting with a `ViewUpdateResult`? Something on the order of "handle the different tile actions in this order in order to arrive at an interpretable frame"?
- Similarly, understanding what the desirable semantics are for the `IPrepareRendererResources` allocate vs. free methods would be useful. I kind of assume that because it says free, it means free with respect to "resources will definitely not be necessary anytime soon unless the `ViewState` brings them back into view". I'm twitchy on this because in my simplistic "load one location" test case, I loaded something on the order of 1,000 tiles and immediately freed half of them, which took me by surprise given I was loading at a static location.

Thanks for any guidance on this. I realize this is hardly an "issue" and more of a request for advice, but I thought it would be useful to surface this publicly for anyone else that might be integrating.
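One workable reading of the allocate/free contract asked about above, sketched with stand-in types (this is not the real `IPrepareRendererResources` interface): allocation returns an opaque handle that the engine stores with the tile, and a free may arrive even for tiles loaded only moments ago for caching or refinement purposes, so the registry should treat an early free as routine rather than an error.

```cpp
#include <cstdint>
#include <memory>
#include <unordered_map>

// Stand-in renderer resource: in a RealityKit frontend this would wrap
// a ModelEntity; here it is just an id so the sketch is self-contained.
struct MeshResource {
  std::uint64_t id;
};

class ResourceRegistry {
public:
  // Create a resource and hand back an opaque handle (the void* the
  // tile engine would keep on the tile's render content).
  void* allocate(std::uint64_t meshId) {
    auto res = std::make_unique<MeshResource>(MeshResource{meshId});
    void* handle = res.get();
    live_[handle] = std::move(res);
    return handle;
  }

  // Destroy the resource behind a handle. Freeing a tile that was only
  // ever cached (never shown) is a normal, expected call.
  void free(void* handle) { live_.erase(handle); }

  std::size_t liveCount() const { return live_.size(); }

private:
  std::unordered_map<void*, std::unique_ptr<MeshResource>> live_;
};
```

Under this reading, loading ~1,000 tiles and immediately freeing half of them at a static viewpoint is unsurprising: the selection algorithm loads more tiles than it ultimately keeps.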