feat(gltf): Add GLBArrowLoader #3160
base: master
Conversation
// Unclear how to represent indexed, instanced, or normalized meshes as
// ArrowTable. Convert to simpler representations for now.
await document.transform(unweld(), uninstance(), dequantize());
Pre-processing to ensure the vertex buffers can be represented as Arrow tables. Quantized meshes (int8 or int16 vertex attributes) could possibly be represented directly, but we'd need to put the 'normalized: boolean' option somewhere.
Similarly, there could be ways to represent instanced draws with a second Arrow table for the instance transforms, if we want to go that direction.
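As a rough sketch of what that second table might look like, reusing the same apache-arrow construction pattern as the draft code in this PR (the TRANSFORM column name, the flattened column-major 4x4 layout, and the instance count are illustrative assumptions, not part of the PR):

import * as arrow from 'apache-arrow';

// Hypothetical companion table: one row per instance, with the 4x4 instance
// transform flattened (column-major) into a FixedSizeList<16, Float32> column.
const instanceCount = 2;
const instanceMatrices = new Float32Array(instanceCount * 16); // filled from EXT_mesh_gpu_instancing data

const transformField = new arrow.Field(
  'TRANSFORM',
  new arrow.FixedSizeList(16, new arrow.Field('value', new arrow.Float32(), false)),
  false
);
const transformData = arrow.makeData({
  type: transformField.type,
  length: instanceCount,
  nullCount: 0,
  child: arrow.makeData({type: new arrow.Float32(), data: instanceMatrices})
});

// Same Schema -> Struct -> Data -> RecordBatch -> Table chain used for the vertex table.
const schema = new arrow.Schema([transformField]);
const structData = arrow.makeData({
  type: new arrow.Struct([transformField]),
  length: instanceCount,
  nullCount: 0,
  children: [transformData]
});
const instanceTable = new arrow.Table([new arrow.RecordBatch(schema, structData)]);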
My thought about indexed tables was to add an index column as a List<Uint32> and put all the indexes in the first row. We'd need to store an offset to the end of the index list in every other row (in case we had such rows).
const indexes = new Uint32Array([0, 1, 2, 3, 4, 5]);
const nextIndex = indexes.length;
const indexOffsets = new Uint32Array(5).fill(nextIndex);
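For illustration, here is one way that could be realized with apache-arrow, putting all the indices in row 0 and pointing every later row's offset at the end of the value array (the vertexCount of 6 and the column shape are assumptions for this sketch):

import * as arrow from 'apache-arrow';

const vertexCount = 6;
const indexes = new Uint32Array([0, 1, 2, 3, 4, 5]);

// An Arrow List column stores vertexCount + 1 offsets: row 0 spans
// [0, indexes.length) and every following row is an empty slice at the end.
const valueOffsets = new Int32Array(vertexCount + 1);
valueOffsets.fill(indexes.length, 1); // [0, 6, 6, 6, 6, 6, 6]

const indexType = new arrow.List(new arrow.Field('value', new arrow.Uint32(), false));
const indexData = arrow.makeData({
  type: indexType,
  length: vertexCount,
  nullCount: 0,
  valueOffsets,
  child: arrow.makeData({type: new arrow.Uint32(), data: indexes})
});
const indexColumn = new arrow.Vector([indexData]);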
We can add metadata on every column: each field has a metadata: Map<string, string>, and we can even add a mesharrow: JSON.stringify(...) key there.
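For example (a sketch only; the mesharrow key content here is a placeholder, not a defined format):

import * as arrow from 'apache-arrow';

// Field-level metadata is a Map<string, string>, so per-attribute glTF details
// (normalized flag, component type, min/max, ...) could be serialized as JSON.
const positionField = new arrow.Field(
  'POSITION',
  new arrow.FixedSizeList(3, new arrow.Field('value', new arrow.Float32(), false)),
  false,
  new Map([['mesharrow', JSON.stringify({normalized: false, componentType: 5126})]])
);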
  io?: PlatformIO;
};

export type ArrowTableTransformList = [ArrowTable, Matrix4][];
TBD: what should the output ideally be here?
export type GLBArrowLoaderOptions = LoaderOptions & {
  io?: PlatformIO;
};
If the application is running in a non-browser environment (Node, Deno) or requires other dependencies (Draco, Meshopt), then a custom I/O class should be provided. Example:
import { NodeIO } from '@gltf-transform/core';
import { KHRONOS_EXTENSIONS } from '@gltf-transform/extensions';
import draco3d from 'draco3dgltf';

const io = new NodeIO()
  .registerExtensions(KHRONOS_EXTENSIONS)
  .registerDependencies({
    'draco3d.decoder': await draco3d.createDecoderModule()
  });
loaders.gl has a number of abstractions, like ReadableFile, with implementations that work under Node, the browser, HTTPS, etc. This seems to duplicate some of the IO class responsibilities.
I wonder if we could create a LoadersIO custom IO class for gltf-transform on top of the loaders.gl abstractions, gluing the two libraries together so that gltf-transform would work with the abstractions we typically use in loaders.gl.
That way this gltf-transform-based loader wouldn't become a one-off loader that doesn't fully work like the other loaders do.
const arrowSchema = new arrow.Schema(arrowFields);
const arrowStruct = new arrow.Struct(arrowFields);
const arrowData = new arrow.Data(arrowStruct, 0, vertexCount, 0, undefined, arrowAttributes);
const arrowRecordBatch = new arrow.RecordBatch(arrowSchema, arrowData);
const arrowTable = new arrow.Table([arrowRecordBatch]);
Very nice. I will take a deeper look, but here are some quick comments.
First, FWIW: in the current model, the existing non-Arrow GLB loader just cracks the GLB container structure but doesn't know anything about glTF. In that model, this might more properly be called GLTFArrowLoader, even though it loads a .glb file.
The GLTFLoader understands the glTF structure inside a GLB and uses the GLB loader to get at that data (JSON chunk and binary chunk).
We used the GLB loader for some other non-glTF customer binary data use cases in the past, but that is a very minor consideration at this time.
We can discuss moving to your glTF library. The API seems a little bit fancy for my tastes, and appears to duplicate some of the loaders.gl infrastructure for loading from different sources, but I am sure I can get behind it if you are keen.
For various reasons this might not be the implementation we ultimately prefer for GLBArrowLoader, but it's interesting as a draft for discussion. I've implemented GLBArrowLoader (GLTFArrowLoader would be a trivial addition) by parsing with glTF Transform and encoding the output directly to a list of [ArrowTable, Matrix4] tuples. If the input is compatible (non-interleaved, non-instanced, non-indexed, non-quantized, ...), then no pre-processing is required for vertex buffers, and buffers are handed to Arrow as-is. If these constraints are not satisfied, the conversion is handled automatically. Ideally, any pre-processing would be done offline; there's little reason to do it at runtime in a production application.
For at least some of these cases (indexed meshes, in particular) it may be preferable to find some way to represent the source data directly with Arrow – perhaps one table for the vertex attributes and another for the indices. But this gets into larger questions about what we'd prefer an Arrow-centric representation of a Mesh to be.
If we prefer to use the existing parsing code instead of parsing with glTF Transform, I have no objection; this just seemed like a simple (<150 LOC) proof-of-concept that understands any ratified glTF extensions.
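As a hypothetical usage sketch (the import path for GLBArrowLoader is assumed here and may differ from where the PR actually exports it):

import {load} from '@loaders.gl/core';
import {GLBArrowLoader} from '@loaders.gl/gltf'; // assumed export location

// Per this draft, the result is a list of [ArrowTable, Matrix4] tuples, one per
// mesh primitive, pairing its vertex attributes with the node's world transform.
const tableTransforms = await load('model.glb', GLBArrowLoader);
for (const [table, matrix] of tableTransforms) {
  console.log(table, matrix);
}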