GPU Metadata Property Table Packing for 3D Tiles Next #9572
Yesterday I discussed some details about textures with @lilleyse; here are some notes from that discussion:
Proposed Metadata Packing Algorithm

At a high level, the algorithm will have the following phases:
Partitioning Properties

For this first iteration, let's keep this simple using the rules I mentioned in the description. To recap:
In theory, we might want to fall back between these categories.

Type Representability

This step is very simple: it rejects the following types as "not representable". Any other issues, like lack of floating-point texture support, will be caught in the next step.
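As a rough sketch (not the actual implementation), the rejection step might look something like the following, based on the unsupported types listed under Datatype Compatibility below; the property shape used here is an assumption:

```js
// Hypothetical sketch: reject types that cannot be represented on the GPU at all.
// STRING and variable-length ARRAY are the clear cases; 64-bit types are not
// rejected here because a later step can fall back to 32-bit representations.
function isRepresentableOnGpu(property) {
  if (property.type === "STRING") {
    return false;
  }
  // A variable-length array has no fixed component count per feature.
  if (property.type === "ARRAY" && property.componentCount === undefined) {
    return false;
  }
  return true;
}
```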
Computing Packed Types

This is the most involved phase of the algorithm. Essentially we want to go from a list of property types to a list of packed types.

Packing functions are any steps needed to prepare the values for packing. They will be applied in order when packing, and the inverse will be performed in the shader to unpack the values. Some packing types require a lossy conversion; we might want to log or throw an error when this happens.

Several types have similar packing rules, so here are some rules for converting them into a smaller set of types. These operations are added as packing rules. The following tables summarize these rules. Notes:
Constant/Uniform Type Conversions:
At the end, only these families of types will remain:
Attribute Type Conversions:
At the end, only the … family of types will remain.

Texture Type Conversions
At the end, only these families of types will remain:
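To make the idea of packing functions more concrete, here is a minimal sketch of what a packed type could carry; all field names and the specific steps are assumptions for illustration, not the actual design:

```js
// Hypothetical packed-type descriptor: the GLSL type used in the shader plus an
// ordered list of packing steps. Each step has a CPU-side pack function; the
// shader would apply the inverse (unpackGlsl) in reverse order when unpacking.
const minValue = -40.0;
const maxValue = 60.0;

const packedTemperature = {
  glslType: "float",
  channels: 1,
  packingSteps: [
    {
      // Lossy FLOAT64 -> FLOAT32 fallback; nothing to undo in the shader.
      pack: (value) => Math.fround(value),
      unpackGlsl: "value",
    },
    {
      // Normalize a known range into [0, 1] so it fits an 8-bit channel.
      pack: (value) => (value - minValue) / (maxValue - minValue),
      unpackGlsl: "value * (u_maxValue - u_minValue) + u_minValue",
    },
  ],
};

// Packing applies the steps in order.
function packValue(packedType, value) {
  return packedType.packingSteps.reduce((v, step) => step.pack(v), value);
}

console.log(packValue(packedTemperature, 25.0)); // 0.65
```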
Grouping Properties by Size

Note: in what follows, when I say "group properties" I am not referring to group metadata from the 3D Tiles metadata spec.

The next step is to group properties together into a single texel/vector to conserve space. Note: this step is optional; it should be controlled by a boolean flag. It's nice for memory efficiency, but will not be useful when interpolation is needed.

There are only 5 partitions of 4:

- 4
- 3 + 1
- 2 + 2
- 2 + 1 + 1
- 1 + 1 + 1 + 1
We can use this fact to pair up components to pack memory more densely:
For example, if I had …
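As a rough illustration of the grouping step (the first-fit strategy and property names here are assumptions, not from the original notes):

```js
// Hypothetical first-fit grouping: pack properties whose component counts sum
// to at most 4 (one texel / one vec4) into the same group.
function groupBySize(properties) {
  const groups = [];
  // Sort largest-first so vec3/vec4 properties claim groups early.
  const sorted = [...properties].sort(
    (a, b) => b.componentCount - a.componentCount
  );
  for (const property of sorted) {
    const group = groups.find(
      (g) => g.usedComponents + property.componentCount <= 4
    );
    if (group !== undefined) {
      group.properties.push(property);
      group.usedComponents += property.componentCount;
    } else {
      groups.push({
        properties: [property],
        usedComponents: property.componentCount,
      });
    }
  }
  return groups;
}

// A vec3 and a scalar share one texel; two vec2s share another.
console.log(
  groupBySize([
    { name: "color", componentCount: 3 },
    { name: "intensity", componentCount: 1 },
    { name: "uvOffset", componentCount: 2 },
    { name: "uvScale", componentCount: 2 },
  ])
);
```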
Compute Layouts

For uniforms, each group of properties becomes a single uniform. For attributes, each group of properties becomes a single attribute. For textures, it's a little more involved: each group of properties becomes a single texel, but there are a couple of different ways these texels can be arranged:
where `propertyOffset` is computed for each property. I think Option 2 is nicer for its simplicity and better memory efficiency for multiple feature tables.

NOTE: In the above, assume textures are the maximum size and have 4 channels. The next step handles shrinking this layout to fit the content tightly; this is done at the end.

"Vacuum Packing"

To finish the layout, we want to avoid wasting memory, so reduce the dimensions of the data to fit it as tightly as possible. This involves:
For example, say …
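A minimal sketch of the dimension-shrinking part, assuming the layout stores a fixed number of texels per feature in row-major order (the names and exact strategy are assumptions):

```js
// Hypothetical: derive the smallest width/height that hold all metadata texels
// within the device's maximum texture size.
function computeTextureDimensions(featureCount, texelsPerFeature, maximumTextureSize) {
  const texelsNeeded = featureCount * texelsPerFeature;
  const width = Math.min(texelsNeeded, maximumTextureSize);
  const height = Math.ceil(texelsNeeded / width);
  if (height > maximumTextureSize) {
    throw new Error("Metadata does not fit in a single texture");
  }
  return { width: width, height: height };
}

// 10,000 features at 2 texels each on a 4096-limit device -> a 4096 x 5 texture.
console.log(computeTextureDimensions(10000, 2, 4096));
```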
Oh, one clarification: when it comes to grouping properties by size, this needs to be done per-type. So for example, when it comes to textures, the …
Requested in #11450.
Thank you for your reply. Now I am trying to shake every building differently. I once thought the metadata seemed like a good choice for distinguishing them; can you give me some advice on how I can distinguish each building? https://sandcastle.cesium.com/?src=Custom%20Shaders%203D%20Tiles.html&label=3D%20Tiles%20Next In this example, a batch of buildings shares the same featureId.
One of the upcoming parts of our 3D Tiles Next effort is to pack metadata (specifically, feature tables) for use on the GPU. This will be necessary for both custom shaders (see #9518) and GPU feature styling.
Packing Overview
The goal for this subsystem is to take the metadata from the CPU, pack it into GPU memory (textures, attributes and uniforms), and then unpack it in the shader.
- Only properties used in the shader code will be uploaded to the GPU. @lilleyse's `model-loading` branch will have a way to determine this.
- Once uploaded and no longer needed on the CPU, try to free the CPU resources. We should include some options for controlling this.
- We also want to make any texture management general-purpose, as the refactored `Model.js` will use other types of textures (feature textures, feature ID textures).
Datatype Compatibility
Not every data type is GPU-compatible. For example, STRING and variable length ARRAY are not easily representable on the GPU. Also, 64-bit types are not directly representable, but a fallback would be to convert them to 32-bit types.
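A small sketch of the 64-bit to 32-bit fallback; whether to warn or throw on a lossy conversion is left open, and the function name here is hypothetical:

```js
// Hypothetical fallback: convert 64-bit floating-point values to 32-bit,
// flagging any precision loss.
function toFloat32(values) {
  const result = new Float32Array(values.length);
  let lossy = false;
  for (let i = 0; i < values.length; i++) {
    result[i] = values[i];
    if (result[i] !== values[i]) {
      lossy = true;
    }
  }
  if (lossy) {
    console.warn("64-bit values were converted to 32-bit with precision loss");
  }
  return result;
}

// 16777217 is not exactly representable as a 32-bit float.
console.log(toFloat32(new Float64Array([1.5, 16777217.0])));
```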
Furthermore, WebGL 1 only supports 8-bit integer or 32-bit float (with `OES_texture_float`) textures. For larger integer types, multiple image channels or multiple pixels will have to be used.

Supported Types
- … (`LUMINANCE`, `ALPHA`)
- … (if `OES_texture_float` is available)
- … (`LUMINANCE`, `LUMINANCE_ALPHA`, `RGB` or `RGBA` depending on size)

Supported with Fallbacks
- … (`LUMINANCE_ALPHA`)
- … (`RGBA`), like `PointCloud.js` does.
- … (`RGBA`) when `OES_texture_float` is not available (see https://github.com/CesiumGS/cesium/blob/master/Source/Scene/createElevationBandMaterial.js#L483-L501)

Not supported
Other Notes:
- … `vec3`? Or would this add too much complexity?

Encoding Considerations
There are some special cases where values need additional encoding:
Choosing a GPU layout
The main unknown right now is how to choose an optimal GPU layout. The calling code will provide a list of properties and information about what GPU resources are available. The layout algorithm needs to take this information and determine what textures/vertex attributes/uniforms to use to store the metadata.
One possibility is to divide the properties into three categories:
- … (`constant: 0, divisor: 1`) are good candidates for storing in attributes
- … `defaultValue` can be inlined into the shader code to avoid using GPU resources.

However, determining the exact layout is more involved. Here are some complicating factors:
Inputs:
Output:
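As a purely illustrative sketch, the computed layout might be an object along these lines; every field name here is hypothetical and not from the actual design:

```js
// Hypothetical layout result: enough information for the caller to upload data
// to the GPU and generate the matching unpacking code in the shader.
const exampleLayout = {
  uniforms: [
    { propertyId: "tileOrigin", uniformName: "u_metadata_tileOrigin", glslType: "vec3" },
  ],
  attributes: [
    { propertyId: "intensity", attributeName: "a_metadata_0", component: "x" },
  ],
  textures: [
    {
      textureIndex: 0,
      width: 4096,
      height: 5,
      pixelFormat: "RGBA",
      properties: [
        // Two properties sharing the same texel, in different channels.
        { propertyId: "temperature", channels: "r", propertyOffset: 0 },
        { propertyId: "classification", channels: "g", propertyOffset: 0 },
      ],
    },
  ],
};
```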
This layout can be used by the caller to set the `Property` struct in the shader, as well as determine where/how to upload data to the GPU.

Stretch Goal: Filtering
One detail that would be nice to have is a method to let the user filter properties. This has a number of benefits:
Potential downsides:
In some cases, filters are only partially effective. For example, if you have already downloaded a large binary buffer, releasing one bufferView requires moving all the other bufferViews.
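To illustrate that point, here is a hedged sketch of what releasing one bufferView from an already-downloaded buffer involves (the shapes are simplified and hypothetical):

```js
// Hypothetical compaction: copy the remaining bufferViews into a new, smaller
// buffer and recompute their byte offsets.
function compactBuffer(buffer, bufferViews, indexToRelease) {
  const kept = bufferViews.filter((view, index) => index !== indexToRelease);
  const totalLength = kept.reduce((sum, view) => sum + view.byteLength, 0);
  const result = new Uint8Array(totalLength);
  let offset = 0;
  const newViews = kept.map((view) => {
    result.set(new Uint8Array(buffer, view.byteOffset, view.byteLength), offset);
    const newView = { byteOffset: offset, byteLength: view.byteLength };
    offset += view.byteLength;
    return newView;
  });
  return { buffer: result.buffer, bufferViews: newViews };
}

const original = new Uint8Array([1, 2, 3, 4, 5, 6]).buffer;
const views = [
  { byteOffset: 0, byteLength: 2 },
  { byteOffset: 2, byteLength: 2 },
  { byteOffset: 4, byteLength: 2 },
];
// Releasing the middle view yields a 4-byte buffer [1, 2, 5, 6] with new offsets.
console.log(compactBuffer(original, views, 1));
```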
To Do:
- `czm_` builtin functions (or snippets appended to a shader) for unpacking metadata in GLSL
- `Model.js` refactor