Using custom data#
Training models on existing datasets is only so fun. If you would like to train on self-captured data, you will need to process the data into the nerfstudio format. Specifically, we need to know the camera pose for each image.
To process your own data run:
ns-process-data {video,images,polycam,insta360,record3d} --data {DATA_PATH} --output-dir {PROCESSED_DATA_DIR}
A full set of arguments can be found here.
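For example, assuming a phone video saved at ~/captures/scene.mp4 (a hypothetical path), a minimal invocation that extracts frames, estimates camera poses, and writes a nerfstudio-format dataset would look like:

# Hypothetical input and output paths; substitute your own
ns-process-data video --data ~/captures/scene.mp4 --output-dir ~/processed/scene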
We currently support the following custom data types:
| Data | Capture Device | Requirements | Speed |
|---|---|---|---|
| 📷 Images | Any | COLMAP | 🐢 |
| 📹 Video | Any | COLMAP | 🐢 |
| 📱 Polycam | iOS with LiDAR | Polycam App | 🐇 |
| 📱 KIRI Engine | iOS or Android | KIRI Engine App | 🐇 |
| 📱 Record3D | iOS with LiDAR | Record3D App | 🐇 |
| Metashape | Any | Metashape | 🐇 |
Images or Video#
To assist running on custom data, we have a script that will process a video or folder of images into a format compatible with nerfstudio. Our data processing script uses COLMAP and FFmpeg, so please make sure both are installed. We have provided a quickstart for installing COLMAP below; FFmpeg can be downloaded from here.
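Before processing, you can sanity-check that both tools are installed and on your PATH; these commands only print help and version information:

# Each command should print usage/version text if the install succeeded
colmap -h
ffmpeg -version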
Tip
COLMAP can be finicky. Try your best to capture overlapping, non-blurry images.
Processing Data#
ns-process-data {images, video} --data {DATA_PATH} --output-dir {PROCESSED_DATA_DIR}
Training on your data#
ns-train nerfacto --data {PROCESSED_DATA_DIR}
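Putting the two steps together, a minimal end-to-end sketch on a hypothetical folder of images might look like this:

# 1. Run COLMAP via the processing script and convert to the nerfstudio format
ns-process-data images --data ~/captures/scene_photos --output-dir ~/processed/scene
# 2. Train a nerfacto model on the processed dataset
ns-train nerfacto --data ~/processed/scene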
Installing COLMAP#
There are many ways to install COLMAP; unfortunately, it can sometimes be a bit finicky. If the following commands do not work, please refer to the COLMAP installation guide for additional installation methods. COLMAP install issues are common! Feel free to ask for help on our Discord.
We recommend trying apt:
sudo apt install colmap
If that doesn’t work, you can try vcpkg:

Linux (with CUDA):

git clone https://github.com/microsoft/vcpkg
cd vcpkg
./bootstrap-vcpkg.sh
./vcpkg install colmap[cuda]:x64-linux

Linux (without CUDA):

git clone https://github.com/microsoft/vcpkg
cd vcpkg
./bootstrap-vcpkg.sh
./vcpkg install colmap:x64-linux

macOS:

git clone https://github.com/microsoft/vcpkg
cd vcpkg
./bootstrap-vcpkg.sh
./vcpkg install colmap

Windows (with CUDA):

git clone https://github.com/microsoft/vcpkg
cd vcpkg
./bootstrap-vcpkg.bat
./vcpkg install colmap[cuda]:x64-windows

Windows (without CUDA):

git clone https://github.com/microsoft/vcpkg
cd vcpkg
./bootstrap-vcpkg.bat
./vcpkg install colmap:x64-windows

If none of these work, you will need to build from source. Refer to the COLMAP installation guide for details.
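Note that vcpkg installs the colmap binary inside its own tree rather than system-wide, so you may need to add it to your PATH. Assuming the default vcpkg layout and a Linux triplet (the exact path is an assumption; adjust it to your checkout and triplet):

# Hypothetical vcpkg checkout location; tools land under installed/<triplet>/tools/<port>
export PATH="$HOME/vcpkg/installed/x64-linux/tools/colmap:$PATH"
colmap -h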
Polycam Capture#
Nerfstudio can also be trained directly from captures made with the Polycam app. This avoids the need to use COLMAP. Polycam’s poses are globally optimized, which makes them more robust to drift (an issue with ARKit or SLAM methods).
To get the best results, try to reduce motion blur as much as possible and try to view the target from as many viewpoints as possible. Polycam recommends having good lighting and moving the camera slowly if using auto mode. Or, even better, use the manual shutter mode to capture less blurry images.
Note
A LiDAR enabled iPhone or iPad is necessary.
Setting up Polycam#

Developer settings must be enabled in Polycam. To do this, navigate to the settings screen and select Developer mode. Note that this will only apply to future captures; you will not be able to process existing captures with nerfstudio.
Process data#

1. Capture data in LiDAR or Room mode.
2. Tap Process to process the data in the Polycam app.
3. Navigate to the export app pane.
4. Select raw data to export a .zip file.
5. Convert the Polycam data into the nerfstudio format using the following command:
ns-process-data polycam --data {OUTPUT_FILE.zip} --output-dir {output directory}
Train with nerfstudio!
ns-train nerfacto --data {output directory}
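For instance, if Polycam exported a file named capture.zip (a hypothetical name), the two commands would be:

# Hypothetical file and directory names; substitute your own
ns-process-data polycam --data capture.zip --output-dir ~/processed/polycam_scene
ns-train nerfacto --data ~/processed/polycam_scene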
KIRI Engine Capture#
Nerfstudio can be trained from data processed by the KIRI Engine app. This works for both Android and iPhone and does not require a LiDAR-equipped device.
Note
ns-process-data does not need to be run when using KIRI Engine.
Setting up KIRI Engine#

After downloading the app, Developer Mode needs to be enabled. A toggle can be found in the settings menu.
Process data#

1. Navigate to the captures window.
2. Select the Dev. tab.
3. Tap the + button to create a new capture.
4. Choose Camera pose as the capture option.
5. Capture the scene and provide a name.

After processing is complete, export the scene; it will be sent to your email. Unzip the file and run the training script (ns-process-data is not necessary).
ns-train nerfacto --data {kiri output directory}
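For example, assuming the emailed export was saved as kiri_export.zip (a hypothetical name):

# Unzip the KIRI Engine export and train on it directly; no ns-process-data step is needed
unzip kiri_export.zip -d ~/processed/kiri_scene
ns-train nerfacto --data ~/processed/kiri_scene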
Record3D Capture#
Nerfstudio can be trained directly from captures taken with the Record3D app on an iPhone 12 Pro or newer. This uses the iPhone’s LiDAR sensors to calculate camera poses, so COLMAP is not needed.
Click on the image below 👇 for a 1-minute tutorial on how to run nerfstudio with Record3D from start to finish.
At a high level, you can follow these 3 steps:
1. Record a video and export it with the EXR + JPG sequence format.
2. Move the exported capture folder from your iPhone to your computer.
3. Convert the data to the nerfstudio format:
ns-process-data record3d --data {data directory} --output-dir {output directory}
Train with nerfstudio!
ns-train nerfacto --data {output directory}
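As a concrete sketch, with the exported capture folder copied to ~/captures/record3d_scene (a hypothetical path):

# Hypothetical paths; substitute your own
ns-process-data record3d --data ~/captures/record3d_scene --output-dir ~/processed/record3d_scene
ns-train nerfacto --data ~/processed/record3d_scene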
Metashape#
1. Align your images using Metashape: File -> Workflow -> Align Photos...
2. Export the camera alignment as an xml file: File -> Export -> Export Cameras...
3. Convert the data to the nerfstudio format:
ns-process-data metashape --data {data directory} --xml {xml file} --output-dir {output directory}
Train with nerfstudio!
ns-train nerfacto --data {output directory}
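For example, with source images in ~/captures/metashape_photos and the alignment exported to cameras.xml (both hypothetical paths):

# Hypothetical paths; substitute your own
ns-process-data metashape --data ~/captures/metashape_photos --xml cameras.xml --output-dir ~/processed/metashape_scene
ns-train nerfacto --data ~/processed/metashape_scene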