Spatial Computing · 2021 - Present

Photogrammetry & 3D Capture

Three approaches to capturing the real world in 3D — neural radiance fields, photogrammetry, and point clouds.

3 Capture Methods

9+ NeRF / Splat Scenes

Real-Time 3DGS Playback

iPhone Primary Capture

Role: Creator / Capture Artist
Methods: NeRF, 3DGS, Photogrammetry, Point Cloud
Capture: iPhone, LiDAR, Drone
Tools: Luma AI, Polycam, Blender, Nerfstudio
Output: Volumetric Scenes, Meshes, Point Clouds
Subjects: Architecture, People, Objects, Nature

Dino Skeleton Capture


Xaya Environment


Gal Band


Living Room


Stone Scene


Environment Capture

Overview

Three approaches to capturing the real world in 3D. Neural radiance fields and Gaussian splats for photorealistic volumetric scenes. Photogrammetry for production-ready textured meshes. Point clouds for raw spatial data at scale. Each method has a different output, a different strength, and a different place in the pipeline.

These aren't competing technologies; they're complementary tools for different problems. Photogrammetry gives you a mesh you can manipulate, rig, animate, or send to a 3D printer. NeRFs and Gaussian splats give you a scene you can walk through, with photorealistic lighting that a mesh can't replicate. Point clouds give you raw spatial truth: the measured coordinates of a space before any interpretation is applied.

In practice, they often feed into each other. The same walk-around video footage can produce all three outputs depending on how it's processed.

Capabilities

  • 3D Gaussian Splatting
  • NeRF Reconstruction
  • Photogrammetry Meshes
  • Point Cloud Capture
  • Web Embed Integration
  • VR Ready
How We Work

Process

01

Capture

Walk-around video or multi-angle photo capture using iPhone, LiDAR, or drone. Camera movement speed and overlap are calibrated per subject for optimal reconstruction quality.
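That overlap calibration comes down to simple camera geometry. A minimal sketch of the math, where the 69° field of view and 80% overlap are illustrative assumptions rather than project settings:

```python
import math

def capture_spacing(distance_m: float, fov_deg: float, overlap: float) -> float:
    """Distance to move the camera between frames for a target forward overlap.

    distance_m: camera-to-subject distance in meters
    fov_deg:    horizontal field of view of the lens
    overlap:    desired overlap between consecutive frames (0..1)
    """
    # width of ground/subject covered by a single frame at this distance
    footprint = 2 * distance_m * math.tan(math.radians(fov_deg) / 2)
    # move only the non-overlapping fraction of the footprint between shots
    return footprint * (1 - overlap)

# Illustrative: a ~69-degree lens at 3 m with 80% overlap
step = capture_spacing(3.0, 69.0, 0.8)
```

Closer subjects or higher overlap targets shrink the step, which is why capture speed has to be re-tuned per subject.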

02

Process & Reconstruct

Raw footage is processed through Luma AI, Nerfstudio, or RealityCapture depending on output target. Neural rendering produces volumetric scenes; photogrammetry generates meshes; LiDAR yields point clouds.

03

Refine & Optimize

Scene cleanup, mesh decimation, texture optimization, and format conversion. Outputs are optimized for their target platform — web delivery, VR environments, or 3D printing pipelines.

04

Deliver & Embed

Final outputs are embedded as interactive 3D scenes on the web (WebGL), exported as production-ready meshes (OBJ, FBX, GLB, USDZ), or archived as spatial data (PLY, LAS).
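Of the archival formats listed, PLY is simple enough to read and write by hand, which is one reason it's the default interchange format for point clouds and Gaussian splats. A minimal sketch for XYZ-only vertex data (real captures would also carry color, normals, or splat attributes):

```python
def write_ascii_ply(path, points):
    """Write (x, y, z) tuples as a minimal ASCII PLY point cloud."""
    with open(path, "w") as f:
        f.write("ply\nformat ascii 1.0\n")
        f.write(f"element vertex {len(points)}\n")
        f.write("property float x\nproperty float y\nproperty float z\n")
        f.write("end_header\n")
        for x, y, z in points:
            f.write(f"{x} {y} {z}\n")

def read_ascii_ply(path):
    """Read the vertex list back from a file written by write_ascii_ply."""
    with open(path) as f:
        lines = f.read().splitlines()
    # vertex count is declared in the header: "element vertex N"
    count = int(next(l.split()[2] for l in lines if l.startswith("element vertex")))
    body = lines[lines.index("end_header") + 1:]
    return [tuple(float(v) for v in line.split()) for line in body[:count]]

pts = [(0.0, 0.0, 0.0), (1.5, 2.0, -3.25)]
write_ascii_ply("scene.ply", pts)
```

Production tools emit the binary PLY variant for size, but the header structure is the same.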

Why This Matters

Use Cases & Applications

Virtual Tours

Create immersive walkthroughs of real estate, venues, and retail spaces that visitors can explore from any angle.

Product Visualization

Capture physical products in photorealistic 3D for e-commerce, allowing customers to inspect items before purchase.

Heritage Preservation

Digitally preserve historical sites, artifacts, and cultural landmarks with high geometric accuracy.

Training & Simulation

Build realistic environments for employee training, safety simulations, and educational experiences.

Tools & Stack

Always evolving — adopting the best available tool for each project.

Luma AI · Nerfstudio · 3D Gaussian Splatting · Polycam · RealityCapture · iPhone LiDAR · DJI Drone · Blender · COLMAP · WebGL · Structure from Motion

Interested in a similar project?

START A CONVERSATION
