Nerfstudio requires python >= 3.8. We recommend using conda to manage dependencies. Make sure to install Conda before proceeding.

Install Git.

Install Visual Studio 2022. This must be done before installing CUDA. The necessary components are included in the Desktop Development with C++ workload (also called C++ Build Tools in the BuildTools edition).


Create environment#

conda create --name nerfstudio -y python=3.8
conda activate nerfstudio
python -m pip install --upgrade pip
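
Before installing anything into the new environment, it can be worth confirming that its interpreter is the one in use (a quick optional check, not part of the original steps):

python --version    # should report Python 3.8.x
which python        # should resolve inside the nerfstudio conda environment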



Note that if a PyTorch version prior to 2.0.1 is installed, the previous versions of PyTorch, functorch, and tiny-cuda-nn should be uninstalled first:

pip uninstall torch torchvision functorch tinycudann

Install PyTorch 2.1.2 with CUDA 11.8:

pip install torch==2.1.2+cu118 torchvision==0.16.2+cu118 --extra-index-url https://download.pytorch.org/whl/cu118
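
You can optionally verify that the CUDA-enabled build was installed and that PyTorch can see your GPU (assuming a working NVIDIA driver):

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"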

To build the necessary CUDA extensions, cuda-toolkit is also required. We recommend installing with conda:

conda install -c "nvidia/label/cuda-11.8.0" cuda-toolkit
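
As an optional sanity check, confirm that the conda-installed CUDA compiler is on your PATH:

nvcc --version    # should report release 11.8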

Alternatively, install PyTorch 2.0.1 with CUDA 11.7:

pip install torch==2.0.1+cu117 torchvision==0.15.2+cu117 --extra-index-url https://download.pytorch.org/whl/cu117

To build the necessary CUDA extensions, cuda-toolkit is also required. We recommend installing with conda:

conda install -c "nvidia/label/cuda-11.7.1" cuda-toolkit


After installing PyTorch and ninja, install the torch bindings for tiny-cuda-nn:

pip install ninja git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch
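
If the build succeeds, the bindings should import cleanly; a quick optional check (the module name tinycudann matches the uninstall command above):

python -c "import tinycudann; print('tiny-cuda-nn OK')"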

Installing nerfstudio#

From pip

pip install nerfstudio

From source (optional): use this command if you want the latest development version.

git clone https://github.com/nerfstudio-project/nerfstudio.git
cd nerfstudio
pip install --upgrade pip setuptools
pip install -e .
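
Either install method can be sanity-checked by invoking one of the nerfstudio CLI entry points, e.g.:

ns-train --help    # should print usage information if the install succeeded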


The installations below are optional, but they make developing with nerfstudio much more convenient.

Tab completion (bash & zsh)

Install CLI tab completion by running the command below. This needs to be rerun when the CLI changes, for example if nerfstudio is updated.

ns-install-cli


Development packages

pip install -e .[dev]
pip install -e .[docs]

Use docker image#

Instead of installing and compiling the prerequisites, setting up the environment, and installing dependencies yourself, a ready-to-use Docker image is provided.


Docker (get docker) and NVIDIA GPU drivers (get nvidia drivers) capable of working with CUDA 11.8 must be installed. The Docker image can then either be pulled from here (replace <version> with the actual version, e.g. 0.1.18):

docker pull dromni/nerfstudio:<version>
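
For example, to pull the version mentioned above:

docker pull dromni/nerfstudio:0.1.18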

or be built from the repository using

docker build --tag nerfstudio -f Dockerfile .

To restrict to only CUDA architectures that you have available locally, use the CUDA_ARCHITECTURES build arg and look up the compute capability for your GPU. For example, here’s how to build with support for GeForce 30xx series GPUs:

docker build \
    --build-arg CUDA_VERSION=11.8.0 \
    --build-arg CUDA_ARCHITECTURES=86 \
    --build-arg OS_VERSION=22.04 \
    --tag nerfstudio-86 \
    --file Dockerfile .
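
If you are unsure of your GPU's compute capability, recent NVIDIA drivers can report it directly (the compute_cap query field is only available on newer drivers; otherwise look it up on NVIDIA's website):

nvidia-smi --query-gpu=compute_cap --format=csv,noheader    # e.g. prints 8.6 for a GeForce 30xx -> CUDA_ARCHITECTURES=86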

The user inside the container is called ‘user’ and is mapped to the local user with ID 1000 (usually the first non-root user on Linux systems).
If you suspect that your user might have a different ID, override USER_ID during the build as follows:

docker build \
    --build-arg USER_ID=$(id -u) \
    --file Dockerfile .
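
The value passed via USER_ID is simply your numeric user ID, which you can also check beforehand:

id -u    # e.g. 1000 on many single-user Linux systems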

Using an interactive container#

The Docker container can be launched with an interactive terminal where nerfstudio commands can be entered as usual. Some parameters are required and others are strongly recommended, as follows:

docker run --gpus all \                                         # Give the container access to NVIDIA GPUs (required).
            -u $(id -u) \                                       # Run as your local (non-root) user instead of root (recommended).
            -v /folder/of/your/data:/workspace/ \               # Mount a local data folder into the container so it can be processed (required).
            -v /home/<YOUR_USER>/.cache/:/home/user/.cache/ \   # Mount the cache folder to avoid re-downloading models every time (recommended).
            -p 7007:7007 \                                      # Map a port from the local machine to the container (required to access the web interface/UI).
            --rm \                                              # Remove the container after it is closed (recommended).
            -it \                                               # Start the container in interactive mode.
            --shm-size=12gb \                                   # Increase the memory assigned to the container; the default is 64 MB (recommended).
            dromni/nerfstudio:<tag>                             # Docker image name if you pulled from Docker Hub.
            <--- OR --->
            nerfstudio                                          # Docker image tag if you built the image yourself from the Dockerfile using the command above.
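
Once inside the container, a quick way to confirm GPU access before running any nerfstudio command:

nvidia-smi    # should list your GPU(s) if --gpus all took effect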

Call nerfstudio commands directly#

Alternatively, the container can be used directly by appending the nerfstudio command to the end.

docker run --gpus all -u $(id -u) -v /folder/of/your/data:/workspace/ -v /home/<YOUR_USER>/.cache/:/home/user/.cache/ -p 7007:7007 --rm -it --shm-size=12gb \  # Parameters.
            dromni/nerfstudio:<tag> \                           # Docker image name.
            ns-process-data video --data /workspace/video.mp4   # Sample nerfstudio command.


  • The container works on Linux and Windows; depending on your OS, some additional setup steps might be required to provide access to your GPU inside containers.

  • Paths on Windows use a backslash '\' while Unix-based systems use a forward slash '/', and backslashes may require an escape character depending on where they are used (e.g. C:\folder1\folder2…). Alternatively, mounts can be quoted (e.g. -v 'C:\local_folder:/docker_folder'). Make sure to use the correct paths when mounting folders or providing paths as parameters.

  • Always use full paths; relative paths are known to create issues when used in mounts into Docker.

  • Everything inside the container that is not in a mounted folder (/workspace in the above example) will be permanently removed when the container is destroyed. Always keep your work and output folders inside the mounted workspace!

  • The container is currently based on nvidia/cuda:11.8.0-devel-ubuntu22.04, so it comes with CUDA 11.8, which must be supported by the NVIDIA driver. No local CUDA installation is required or will be affected by using the Docker image.

  • The Docker image (respectively Ubuntu 22.04) comes with Python 3.10; no older version of Python is installed.

  • If you call the container with commands directly, you might still want to add the interactive terminal ('-it') flag to get live log output of the nerfstudio scripts. If the container is used in an automated environment, the flag should be omitted.

  • The current Docker image is built for multi-architecture (CUDA architectures) use. The target architecture(s) must be defined at build time for Colmap and tiny-cuda-nn to compile properly. If your GPU architecture is not covered by the table below, replace the number in the line ARG CUDA_ARCHITECTURES=90;89;86;80;75;70;61;52;37 with your specific architecture. It is also a good idea to remove all architectures but yours (e.g. ARG CUDA_ARCHITECTURES=86) to speed up the Docker build considerably.

  • To avoid memory issues or limitations during processing, it is recommended to use either --shm-size=12gb or --ipc=host to increase the memory available to the container. The 12gb in the example is only a suggestion and may be replaced by other values depending on your hardware and requirements.

Currently supported CUDA architectures in the Docker image:

GPU                 CUDA arch
H100                90
40X0                89
30X0                86
A100                80
20X0                75
TITAN V / V100      70
10X0 / TITAN Xp     61
9X0                 52
K80                 37

Installation FAQ#

ImportError: DLL load failed while importing _89_C

This occurs with certain GPUs that have CUDA architecture versions (89 in the example above) for which tiny-cuda-nn does not automatically compile support.


Reinstall tiny-cuda-nn with the following command:

TCNN_CUDA_ARCHITECTURES=XX pip install git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch

where XX is the architecture version listed here.
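
For example, for a 4090 GPU (architecture 89) the full command would be:

TCNN_CUDA_ARCHITECTURES=89 pip install git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch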

tiny-cuda-nn installation errors out with cuda mismatch

While installing tiny-cuda-nn, you run into: "The detected CUDA version mismatches the version that was used to compile PyTorch (10.2). Please make sure to use the same CUDA versions."


Reinstall PyTorch with the correct CUDA version. See PyTorch under Dependencies, above.

(Windows) tiny-cuda-nn installation errors out with no CUDA toolset found

While installing tiny-cuda-nn on Windows, you run into: "No CUDA toolset found."


Confirm that you have Visual Studio installed.

Make sure CUDA Visual Studio integration is enabled. This should be done automatically by the CUDA installer if it is run after Visual Studio is installed. You can also manually enable integration.

To manually enable integration for Visual Studio 2019, copy all 4 files from

C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\extras\visual_studio_integration\MSBuildExtensions

to

C:\Program Files (x86)\Microsoft Visual Studio\2019\BuildTools\MSBuild\Microsoft\VC\v160\BuildCustomizations

To manually enable integration for Visual Studio 2022, copy all 4 files from

C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\extras\visual_studio_integration\MSBuildExtensions

to

C:\Program Files\Microsoft Visual Studio\2022\[Community, Professional, Enterprise, or BuildTools]\MSBuild\Microsoft\VC\v170\BuildCustomizations

Installation errors, File "setup.py" not found

When installing dependencies and nerfstudio with pip install -e ., you run into: ERROR: File "setup.py" not found. Directory cannot be installed in editable mode

Solution: This can be fixed by upgrading pip to the latest version:

python -m pip install --upgrade pip

Runtime errors: "len(sources) > 0", "ctype = _C.ContractionType(type.value); TypeError: 'NoneType' object is not callable"

At runtime, an error occurs while CUDA files are being built in the backend code.

Solution: This is a problem with not being able to detect the correct CUDA version, and can be fixed by updating the CUDA path environment variables:

export CUDA_HOME=/usr/local/cuda
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64
export PATH=$PATH:$CUDA_HOME/bin
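
After exporting these variables (and re-running the failed install), you can confirm that the toolkit is picked up from the expected location:

which nvcc          # should point into /usr/local/cuda
echo $CUDA_HOME     # should print /usr/local/cuda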