Spark v2 appears to fail when loading a plain x/y/z/r/g/b CloudCompare-exported binary PLY point cloud.
The same type of file loaded successfully with the previous Spark version I was using, @sparkjsdev/spark@0.1.10. After upgrading to @sparkjsdev/spark@2.0.0, Spark now appears to treat this plain point-cloud PLY as a Gaussian Splat PLY and expects Gaussian-specific properties such as scale_0.
This seems inconsistent with the Spark v2 documentation, which says .ply loading supports original gsplat PLY, compressed SuperSplat/gsplat PLY variants, and plain x/y/z/r/g/b point clouds, with .ply / .spz auto-detected from file contents.
Docs reference: https://sparkjs.dev/docs/loading-splats
Environment
@sparkjsdev/spark: 2.0.0
three: 0.184.0 in my app (0.180.0 in the minimal reproduction below)
App: Vite + TypeScript + Three.js
PLY file details
The file is a CloudCompare-exported point-cloud PLY.
Original header:
ply
format binary_little_endian 1.0
comment Created by CloudCompare v2.13.0 (Kharkiv - Feb 14 2024)
comment Created 2024-03-29T00:12:28
obj_info Generated by CloudCompare!
element vertex 279910
property float x
property float y
property float z
property uchar red
property uchar green
property uchar blue
end_header
There are no Gaussian Splat properties such as:
scale_0
scale_1
scale_2
rot_0
rot_1
rot_2
rot_3
opacity
This is expected because it is a plain point-cloud PLY with positions and RGB colors.
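To double-check this, I inspected the header programmatically rather than trusting the text dump above. The following standalone TypeScript sketch (no Spark APIs involved) lists the declared vertex properties from raw PLY bytes; it confirms the file has only position and color fields:

```typescript
// Sketch: read a PLY header from raw bytes and list the declared vertex
// properties, to verify which fields the file actually contains.

function plyVertexProperties(bytes: Uint8Array): string[] {
  const text = new TextDecoder().decode(bytes);
  const end = text.indexOf("end_header");
  if (end < 0) throw new Error("end_header not found");
  return text
    .slice(0, end)
    .split("\n")
    .filter((line) => line.startsWith("property "))
    .map((line) => line.trim().split(/\s+/).pop()!);
}

// The CloudCompare header above, as bytes:
const header = [
  "ply",
  "format binary_little_endian 1.0",
  "comment Created by CloudCompare v2.13.0 (Kharkiv - Feb 14 2024)",
  "obj_info Generated by CloudCompare!",
  "element vertex 279910",
  "property float x",
  "property float y",
  "property float z",
  "property uchar red",
  "property uchar green",
  "property uchar blue",
  "end_header",
].join("\n");

const props = plyVertexProperties(new TextEncoder().encode(header));
console.log(props);                     // ["x", "y", "z", "red", "green", "blue"]
console.log(props.includes("scale_0")); // false
```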
What I tried
1. Direct SplatMesh URL load, no fileType
const tree = new SplatMesh({
url: './images/tree.ply',
});
scene.add(tree);
Result:
Invalid PLY file
2. Removed CloudCompare obj_info line
Because Spark first complained about:
Unsupported PLY header line: obj_info Generated by CloudCompare!
I created a copy of the binary PLY with only this header line removed:
obj_info Generated by CloudCompare!
The binary vertex body was left untouched.
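For reference, the copy was produced with a small script along these lines (a sketch of the approach, shown operating in memory; in practice the buffers come from and go to node:fs). Only the ASCII header is rewritten; every byte after end_header is copied unchanged:

```typescript
// Sketch: remove obj_info lines from a binary PLY header while
// preserving the binary vertex body byte-for-byte.

function stripObjInfoLines(file: Uint8Array): Uint8Array {
  // latin1 decodes exactly one byte per char, so char offsets equal
  // byte offsets even though the binary body follows the header.
  const head = new TextDecoder("latin1").decode(file.slice(0, 4096));
  const marker = "end_header\n";
  const idx = head.indexOf(marker);
  if (idx < 0) throw new Error("end_header not found");
  const headerEnd = idx + marker.length;
  const newHeader = head
    .slice(0, headerEnd)
    .split("\n")
    .filter((line) => !line.startsWith("obj_info"))
    .join("\n");
  const headerBytes = new TextEncoder().encode(newHeader);
  const body = file.slice(headerEnd); // binary vertex data, untouched
  const out = new Uint8Array(headerBytes.length + body.length);
  out.set(headerBytes, 0);
  out.set(body, headerBytes.length);
  return out;
}

// Example: a small header with obj_info, followed by a fake binary body.
const headerText =
  "ply\nformat binary_little_endian 1.0\n" +
  "obj_info Generated by CloudCompare!\n" +
  "element vertex 1\nproperty float x\nend_header\n";
const body = new Uint8Array([0x00, 0x00, 0x80, 0x3f]); // one float32: 1.0
const file = new Uint8Array([...new TextEncoder().encode(headerText), ...body]);

const cleaned = stripObjInfoLines(file);
console.log(new TextDecoder("latin1").decode(cleaned).includes("obj_info")); // false
```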
Then I tested:
const tree = new SplatMesh({
url: './images/Tree-no-obj-info.ply',
});
scene.add(tree);
Result:
Missing scale_0 property
3. Direct SplatLoader, no explicit file type
const loader = new SplatLoader();
const loadedSplats = await loader.loadAsync('./images/Tree-no-obj-info.ply');
const tree = new SplatMesh({
packedSplats: loadedSplats as PackedSplats,
});
scene.add(tree);
Result:
Missing scale_0 property
4. Ensured no explicit SplatFileType.PLY
I also changed my own loader mapping so that .ply is not passed as an explicit fileType, allowing Spark to auto-detect:
function getExplicitSparkFileType(format?: SplatFormat): SplatFileType | undefined {
switch (format) {
case 'ply':
// Let Spark auto-detect PLY variants from file contents.
return undefined;
case 'spz':
return SplatFileType.SPZ;
case 'ksplat':
return SplatFileType.KSPLAT;
case 'splat':
return SplatFileType.SPLAT;
default:
return undefined;
}
}
So for .ply, Spark receives:
new SplatMesh({ url });
or:
new PackedSplats({ url });
not:
new SplatMesh({
url,
fileType: SplatFileType.PLY,
});
The issue still occurs.
Minimal reproduction
This minimal Spark setup works with .spz files:
<style>
body { margin: 0; }
</style>
<script type="importmap">
{
"imports": {
"three": "https://cdnjs.cloudflare.com/ajax/libs/three.js/0.180.0/three.module.js",
"@sparkjsdev/spark": "https://sparkjs.dev/releases/spark/2.0.0/spark.module.js"
}
}
</script>
<script type="module">
import * as THREE from "three";
import { SparkRenderer, SplatMesh } from "@sparkjsdev/spark";
const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(
60,
window.innerWidth / window.innerHeight,
0.01,
1000
);
const renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);
const spark = new SparkRenderer({ renderer });
scene.add(spark);
// This works:
const spz = new SplatMesh({
url: "https://sparkjs.dev/assets/splats/butterfly.spz",
});
spz.position.set(0, 0, -3);
scene.add(spz);
renderer.setAnimationLoop(() => {
renderer.render(scene, camera);
spz.rotation.y += 0.01;
});
</script>
But replacing the SPZ load with the plain CloudCompare PLY fails:
const tree = new SplatMesh({
url: './Tree-no-obj-info.ply',
});
tree.position.set(0, 0, -3);
scene.add(tree);
Error:
Missing scale_0 property
Expected behavior
A plain point-cloud PLY with:
property float x
property float y
property float z
property uchar red
property uchar green
property uchar blue
should load as a point-cloud splat source, based on the Spark v2 documentation.
Spark should not require Gaussian Splat fields such as scale_0 for this plain point-cloud PLY path.
Actual behavior
Spark v2 appears to treat the file as a Gaussian Splat PLY and throws:
Missing scale_0 property
Before removing the CloudCompare metadata line, it also throws:
Unsupported PLY header line: obj_info Generated by CloudCompare!
Questions
Is plain binary little-endian x/y/z/r/g/b PLY currently supported in Spark v2?
Should obj_info header lines be ignored rather than rejected, given that obj_info is standard PLY metadata?
Is this a regression from Spark 0.1.x?
Is there a different loading option needed for plain point-cloud PLY files in Spark v2?
If plain point-cloud PLY support is intended, could the loader auto-detect this file as a point-cloud PLY instead of a Gaussian Splat PLY?
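On the last question: the auto-detection could presumably key off the header's declared properties rather than assuming a gsplat layout. A minimal sketch of that heuristic (my illustration only, not Spark's actual code):

```typescript
// Sketch: classify a PLY by its declared vertex properties instead of
// requiring Gaussian-splat fields up front.

type PlyKind = "gaussian-splat" | "point-cloud" | "unknown";

function classifyPlyHeader(headerLines: string[]): PlyKind {
  const props = headerLines
    .filter((l) => l.startsWith("property "))
    .map((l) => l.trim().split(/\s+/).pop()!);
  const hasXyz = ["x", "y", "z"].every((p) => props.includes(p));
  if (!hasXyz) return "unknown";
  // Gaussian Splat PLYs declare scale/rotation/opacity fields.
  if (["scale_0", "rot_0", "opacity"].every((p) => props.includes(p))) {
    return "gaussian-splat";
  }
  return "point-cloud";
}

const cloudCompareHeader = [
  "property float x", "property float y", "property float z",
  "property uchar red", "property uchar green", "property uchar blue",
];
console.log(classifyPlyHeader(cloudCompareHeader)); // "point-cloud"
```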
Why this matters
I am building a Three.js/Spark-based viewer that needs to support multiple splat/point-cloud formats:
.spz
.splat
.ksplat
Gaussian Splat .ply
plain point-cloud .ply from tools like CloudCompare
SPZ works correctly in Spark v2, but this plain CloudCompare PLY no longer loads after upgrading to Spark v2.