Engine#

Optimizers#

Optimizers class.

class nerfstudio.engine.optimizers.AdamOptimizerConfig(_target: ~typing.Type = <class 'torch.optim.adam.Adam'>, lr: float = 0.0005, eps: float = 1e-08, max_norm: ~typing.Optional[float] = None, weight_decay: float = 0)[source]#

Bases: OptimizerConfig

Basic optimizer config with Adam

weight_decay: float = 0#

The weight decay to use.

class nerfstudio.engine.optimizers.OptimizerConfig(_target: ~typing.Type = <class 'torch.optim.adam.Adam'>, lr: float = 0.0005, eps: float = 1e-08, max_norm: ~typing.Optional[float] = None)[source]#

Bases: PrintableConfig

Basic optimizer config with Adam as the default target

eps: float = 1e-08#

The epsilon value to use.

lr: float = 0.0005#

The learning rate to use.

max_norm: Optional[float] = None#

The max norm to use for gradient clipping.

setup(params) Optimizer[source]#

Returns the instantiated object using the config.
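
Example (a minimal sketch; the Linear module is only a stand-in for a real model):

    import torch
    from nerfstudio.engine.optimizers import AdamOptimizerConfig

    model = torch.nn.Linear(3, 3)  # stand-in module for illustration
    optim_config = AdamOptimizerConfig(lr=1e-3, weight_decay=1e-6)
    optimizer = optim_config.setup(params=list(model.parameters()))  # returns a torch.optim.Adam instance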

class nerfstudio.engine.optimizers.Optimizers(config: Dict[str, Any], param_groups: Dict[str, List[Parameter]])[source]#

Bases: object

A set of optimizers.

Parameters
  • config – The optimizer configuration object.

  • param_groups – A dictionary of parameter groups to optimize.
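
Example (a minimal sketch; the group name "fields" and the Linear module are placeholders, and the {"optimizer": ..., "scheduler": ...} layout follows the pattern used in nerfstudio method configs):

    import torch
    from nerfstudio.engine.optimizers import AdamOptimizerConfig, Optimizers
    from nerfstudio.engine.schedulers import ExponentialDecaySchedulerConfig

    model = torch.nn.Linear(3, 3)  # placeholder module
    optim_config = {
        "fields": {
            "optimizer": AdamOptimizerConfig(lr=1e-2, eps=1e-15),
            "scheduler": ExponentialDecaySchedulerConfig(lr_final=1e-4, max_steps=30000),
        },
    }
    param_groups = {"fields": list(model.parameters())}
    optimizers = Optimizers(optim_config, param_groups)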

load_optimizers(loaded_state: Dict[str, Any]) None[source]#

Helper to load the optimizer state from a previous checkpoint

Parameters

loaded_state – the state from the previous checkpoint

load_schedulers(loaded_state: Dict[str, Any]) None[source]#

Helper to load the scheduler state from a previous checkpoint

Parameters

loaded_state – the state from the previous checkpoint

optimizer_scaler_step_all(grad_scaler: GradScaler) None[source]#

Take an optimizer step using a grad scaler.

Parameters

grad_scaler – GradScaler to use

optimizer_scaler_step_some(grad_scaler: GradScaler, param_groups: List[str]) None[source]#

Take an optimizer step using a grad scaler ONLY on the specified param groups.

Parameters
  • grad_scaler – GradScaler to use

  • param_groups – names of the parameter groups to step

optimizer_step(param_group_name: str) None[source]#

Fetch and step the corresponding optimizer.

Parameters

param_group_name – name of the optimizer to step

optimizer_step_all() None[source]#

Run step for all optimizers.

scheduler_step(param_group_name: str) None[source]#

Fetch and step the corresponding scheduler.

Parameters

param_group_name – name of the scheduler to step

scheduler_step_all(step: int) None[source]#

Run step for all schedulers.

Parameters

step – the current step

zero_grad_all() None[source]#

Zero the gradients for all optimizer parameters.

zero_grad_some(param_groups: List[str]) None[source]#

Zero the gradients for the given parameter groups.
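
Example (a hedged sketch of one mixed-precision training step driven by these methods; `optimizers` and `model` are assumed to come from the sketch above, `compute_loss` is a hypothetical loss function, and this is not the Trainer's exact implementation):

    from torch.cuda.amp import GradScaler

    grad_scaler = GradScaler()
    for step in range(1000):
        optimizers.zero_grad_all()
        loss = compute_loss(model)                         # hypothetical loss computation
        grad_scaler.scale(loss).backward()
        optimizers.optimizer_scaler_step_all(grad_scaler)  # steps every optimizer through the scaler
        grad_scaler.update()
        optimizers.scheduler_step_all(step)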

class nerfstudio.engine.optimizers.RAdamOptimizerConfig(_target: ~typing.Type = <class 'torch.optim.radam.RAdam'>, lr: float = 0.0005, eps: float = 1e-08, max_norm: ~typing.Optional[float] = None, weight_decay: float = 0)[source]#

Bases: OptimizerConfig

Basic optimizer config with RAdam

weight_decay: float = 0#

The weight decay to use.

Schedulers#

Scheduler Classes

class nerfstudio.engine.schedulers.CosineDecayScheduler(config: SchedulerConfig)[source]#

Bases: Scheduler

Cosine decay scheduler with linear warmup

get_scheduler(optimizer: Optimizer, lr_init: float) LRScheduler[source]#

Returns a scheduler object for the given optimizer and initial learning rate.

Parameters
  • optimizer – The optimizer to use.

  • lr_init – The initial learning rate.

Returns

The scheduler object.

class nerfstudio.engine.schedulers.CosineDecaySchedulerConfig(_target: ~typing.Type = <factory>, warm_up_end: int = 5000, learning_rate_alpha: float = 0.05, max_steps: int = 300000)[source]#

Bases: SchedulerConfig

Config for cosine decay schedule

learning_rate_alpha: float = 0.05#

Learning rate alpha value; the final learning rate as a fraction of the initial rate.

max_steps: int = 300000#

The maximum number of steps.

warm_up_end: int = 5000#

Iteration number where warmup ends
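
Example (illustrative values only: a 500-step warmup followed by cosine decay over 100k steps):

    from nerfstudio.engine.schedulers import CosineDecaySchedulerConfig

    sched_config = CosineDecaySchedulerConfig(warm_up_end=500, learning_rate_alpha=0.05, max_steps=100000)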

class nerfstudio.engine.schedulers.ExponentialDecayScheduler(config: SchedulerConfig)[source]#

Bases: Scheduler

Exponential decay scheduler with linear warmup. The scheduler first ramps up to lr_init over warmup_steps steps, then decays exponentially to lr_final by max_steps steps.

get_scheduler(optimizer: Optimizer, lr_init: float) LRScheduler[source]#

Returns a scheduler object for the given optimizer and initial learning rate.

Parameters
  • optimizer – The optimizer to use.

  • lr_init – The initial learning rate.

Returns

The scheduler object.

class nerfstudio.engine.schedulers.ExponentialDecaySchedulerConfig(_target: ~typing.Type = <factory>, lr_pre_warmup: float = 1e-08, lr_final: ~typing.Optional[float] = None, warmup_steps: int = 0, max_steps: int = 100000, ramp: ~typing.Literal['linear', 'cosine'] = 'cosine')[source]#

Bases: SchedulerConfig

Config for exponential decay scheduler with warmup

lr_final: Optional[float] = None#

Final learning rate. If not provided, it will be set to the optimizer's learning rate.

lr_pre_warmup: float = 1e-08#

Learning rate before warmup.

max_steps: int = 100000#

The maximum number of steps.

ramp: Literal['linear', 'cosine'] = 'cosine'#

The ramp function to use during the warmup.

warmup_steps: int = 0#

Number of warmup steps.
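
Example (illustrative values: warm up over 500 steps with a cosine ramp, then decay exponentially to lr_final by step 30000):

    from nerfstudio.engine.schedulers import ExponentialDecaySchedulerConfig

    sched_config = ExponentialDecaySchedulerConfig(
        lr_pre_warmup=1e-8, lr_final=1e-4, warmup_steps=500, max_steps=30000, ramp="cosine"
    )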

class nerfstudio.engine.schedulers.MultiStepScheduler(config: SchedulerConfig)[source]#

Bases: Scheduler

Multi-step scheduler where the learning rate decays by gamma at every milestone

get_scheduler(optimizer: Optimizer, lr_init: float) LRScheduler[source]#

Returns a scheduler object for the given optimizer and initial learning rate.

Parameters
  • optimizer – The optimizer to use.

  • lr_init – The initial learning rate.

Returns

The scheduler object.

class nerfstudio.engine.schedulers.MultiStepSchedulerConfig(_target: ~typing.Type = <factory>, max_steps: int = 1000000, gamma: float = 0.33, milestones: ~typing.Tuple[int, ...] = (500000, 750000, 900000))[source]#

Bases: SchedulerConfig

Config for the multi-step scheduler where the learning rate decays by gamma at every milestone

gamma: float = 0.33#

The learning rate decay factor.

max_steps: int = 1000000#

The maximum number of steps.

milestones: Tuple[int, ...] = (500000, 750000, 900000)#

The milestone steps at which to decay the learning rate.
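
Example (a sketch of turning the config into a torch learning-rate scheduler; `optimizer` is assumed to be a torch.optim.Optimizer created elsewhere, and the milestone values are illustrative):

    from nerfstudio.engine.schedulers import MultiStepSchedulerConfig

    sched_config = MultiStepSchedulerConfig(max_steps=300000, gamma=0.5, milestones=(100000, 200000))
    scheduler = sched_config.setup().get_scheduler(optimizer, lr_init=1e-2)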

class nerfstudio.engine.schedulers.Scheduler(config: SchedulerConfig)[source]#

Bases: object

Base scheduler

abstract get_scheduler(optimizer: Optimizer, lr_init: float) LRScheduler[source]#

Abstract method that returns a scheduler object.

Parameters
  • optimizer – The optimizer to use.

  • lr_init – The initial learning rate.

Returns

The scheduler object.

class nerfstudio.engine.schedulers.SchedulerConfig(_target: ~typing.Type = <factory>)[source]#

Bases: InstantiateConfig

Basic scheduler config
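
Example (a hedged sketch of a custom scheduler following the same config/class pairing; ConstantScheduler and ConstantSchedulerConfig are hypothetical names, and the LambdaLR factor of 1.0 simply keeps the learning rate fixed):

    from dataclasses import dataclass, field
    from typing import Type

    from torch.optim import Optimizer, lr_scheduler

    from nerfstudio.engine.schedulers import Scheduler, SchedulerConfig

    @dataclass
    class ConstantSchedulerConfig(SchedulerConfig):
        """Config for a scheduler that keeps the learning rate constant."""

        _target: Type = field(default_factory=lambda: ConstantScheduler)

    class ConstantScheduler(Scheduler):
        """Scheduler that keeps the learning rate constant."""

        config: ConstantSchedulerConfig

        def get_scheduler(self, optimizer: Optimizer, lr_init: float):
            # Multiplicative factor of 1.0 at every step -> constant learning rate.
            return lr_scheduler.LambdaLR(optimizer, lr_lambda=lambda step: 1.0)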

Trainer#

Code to train the model.

class nerfstudio.engine.trainer.Trainer(config: TrainerConfig, local_rank: int = 0, world_size: int = 1)[source]#

Bases: object

Trainer class

Parameters
  • config – The configuration object.

  • local_rank – Local rank of the process.

  • world_size – World size of the process.

config#

The configuration object.

local_rank#

Local rank of the process.

world_size#

World size of the process.

device#

The device to run the training on.

pipeline#

The pipeline object.

Type

nerfstudio.pipelines.base_pipeline.VanillaPipeline

optimizers#

The optimizers object.

Type

nerfstudio.engine.optimizers.Optimizers

callbacks#

The callbacks object.

Type

List[nerfstudio.engine.callbacks.TrainingCallback]

training_state#

Current model training state.

setup(test_mode: Literal['test', 'val', 'inference'] = 'val') None[source]#

Setup the Trainer by calling other setup functions.

Parameters

test_mode – ‘val’: loads train/val datasets into memory; ‘test’: loads train/test datasets into memory; ‘inference’: does not load any dataset into memory

setup_optimizers() Optimizers[source]#

Helper to set up the optimizers

Returns

The optimizers object given the trainer config.

shutdown() None[source]#

Stop the trainer and stop all associated threads/processes (such as the viewer).

train() None[source]#

Train the model.

train_iteration(step: int) Tuple[Tensor, Dict[str, Tensor], Dict[str, Tensor]][source]#

Run one training iteration with a batch of inputs. Returns the total loss, a dictionary of individual losses, and a dictionary of metrics.

Parameters

step – Current training step.
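
Example (a minimal sketch of the typical driver flow, assuming `config` is a populated TrainerConfig such as one of the built-in method configs; this mirrors, but is not copied from, nerfstudio's train script):

    trainer = config.setup(local_rank=0, world_size=1)  # instantiate the Trainer from its config
    trainer.setup()     # build the pipeline, optimizers, and callbacks
    trainer.train()     # run the training loop
    trainer.shutdown()  # stop the viewer and any associated threads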

class nerfstudio.engine.trainer.TrainerConfig(_target: Type = <factory>, output_dir: Path = PosixPath('outputs'), method_name: Optional[str] = None, experiment_name: Optional[str] = None, project_name: Optional[str] = 'nerfstudio-project', timestamp: str = '{timestamp}', machine: MachineConfig = <factory>, logging: LoggingConfig = <factory>, viewer: ViewerConfig = <factory>, pipeline: VanillaPipelineConfig = <factory>, optimizers: Dict[str, Any] = <factory>, vis: Literal['viewer', 'wandb', 'tensorboard', 'comet', 'viewer+wandb', 'viewer+tensorboard', 'viewer+comet', 'viewer_legacy'] = 'wandb', data: Optional[Path] = None, prompt: Optional[str] = None, relative_model_dir: Path = PosixPath('nerfstudio_models'), load_scheduler: bool = True, steps_per_save: int = 1000, steps_per_eval_batch: int = 500, steps_per_eval_image: int = 500, steps_per_eval_all_images: int = 25000, max_num_iterations: int = 1000000, mixed_precision: bool = False, use_grad_scaler: bool = False, save_only_latest_checkpoint: bool = True, load_dir: Optional[Path] = None, load_step: Optional[int] = None, load_config: Optional[Path] = None, load_checkpoint: Optional[Path] = None, log_gradients: bool = False, gradient_accumulation_steps: Dict[str, int] = <factory>)[source]#

Bases: ExperimentConfig

Configuration for training regimen

gradient_accumulation_steps: Dict[str, int]#

Number of steps to accumulate gradients over. Contains a mapping of {param_group: num}.

load_checkpoint: Optional[Path] = None#

Path to checkpoint file.

load_config: Optional[Path] = None#

Path to config YAML file.

load_dir: Optional[Path] = None#

Optionally specify a pre-trained model directory to load from.

load_step: Optional[int] = None#

Optionally specify the model step to load from; if None, will find the most recent model in load_dir.

log_gradients: bool = False#

Optionally log gradients during training

max_num_iterations: int = 1000000#

Maximum number of iterations to run.

mixed_precision: bool = False#

Whether or not to use mixed precision for training.

save_only_latest_checkpoint: bool = True#

Whether to only save the latest checkpoint or all checkpoints.

steps_per_eval_all_images: int = 25000#

Number of steps between evaluations over all eval images.

steps_per_eval_batch: int = 500#

Number of steps between evaluations on a randomly sampled batch of rays.

steps_per_eval_image: int = 500#

Number of steps between evaluations of a single eval image.

steps_per_save: int = 1000#

Number of steps between saves.

use_grad_scaler: bool = False#

Use the gradient scaler even if automatic mixed precision is disabled.
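
Example (a hedged sketch of a TrainerConfig with a per-param-group optimizers mapping; the method name and the group name "fields" are placeholders, and unlisted fields fall back to their defaults):

    from nerfstudio.engine.optimizers import AdamOptimizerConfig
    from nerfstudio.engine.schedulers import ExponentialDecaySchedulerConfig
    from nerfstudio.engine.trainer import TrainerConfig

    config = TrainerConfig(
        method_name="my-method",  # placeholder method name
        max_num_iterations=30000,
        mixed_precision=True,
        optimizers={
            "fields": {
                "optimizer": AdamOptimizerConfig(lr=1e-2, eps=1e-15),
                "scheduler": ExponentialDecaySchedulerConfig(lr_final=1e-4, max_steps=30000),
            },
        },
    )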

Callbacks#

Callback code used for training iterations

class nerfstudio.engine.callbacks.TrainingCallback(where_to_run: List[TrainingCallbackLocation], func: Callable, update_every_num_iters: Optional[int] = None, iters: Optional[Tuple[int, ...]] = None, args: Optional[List] = None, kwargs: Optional[Dict] = None)[source]#

Bases: object

Callback class used during training. The function ‘func’ with ‘args’ and ‘kwargs’ will be called every ‘update_every_num_iters’ training iterations, including at iteration 0. Whether it runs before or after the training iteration is determined by ‘where_to_run’.

Parameters
  • where_to_run – List of locations for when to run callback (before/after iteration)

  • func – The function that will be called.

  • update_every_num_iters – How often to call the function func.

  • iters – Tuple of iteration steps to perform callback

  • args – args for the function ‘func’.

  • kwargs – kwargs for the function ‘func’.
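
Example (a minimal sketch of registering and running a callback; `update_occupancy_grid` is a hypothetical function, and the callback fires every 100 iterations at the before-iteration location):

    from nerfstudio.engine.callbacks import TrainingCallback, TrainingCallbackLocation

    def update_occupancy_grid(step: int) -> None:
        ...  # hypothetical per-step update

    callback = TrainingCallback(
        where_to_run=[TrainingCallbackLocation.BEFORE_TRAIN_ITERATION],
        update_every_num_iters=100,
        func=update_occupancy_grid,
        args=[],
        kwargs={},
    )
    callback.run_callback_at_location(step=0, location=TrainingCallbackLocation.BEFORE_TRAIN_ITERATION)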

run_callback(step: int) None[source]#

Run the callback at the given training step.

Parameters

step – current iteration step

run_callback_at_location(step: int, location: TrainingCallbackLocation) None[source]#

Runs the callback if it’s supposed to be run at the given location.

Parameters
  • step – current iteration step

  • location – when to run callback (before/after iteration)

class nerfstudio.engine.callbacks.TrainingCallbackAttributes(optimizers: Optional[Optimizers], grad_scaler: Optional[GradScaler], pipeline: Optional['Pipeline'], trainer: Optional['Trainer'])[source]#

Bases: object

Attributes that can be used to configure training callbacks. The callbacks can be specified in the Dataloader or Model implementations. Instead of providing access to the entire Trainer object, only these attributes are exposed; this keeps the interface less error-prone and fairly clean from a user perspective.

grad_scaler: Optional[GradScaler]#

Gradient scaler used during training.

optimizers: Optional[Optimizers]#

Optimizers used during training.

pipeline: Optional['Pipeline']#

Reference to the training pipeline.

trainer: Optional['Trainer']#

Reference to the trainer.

class nerfstudio.engine.callbacks.TrainingCallbackLocation(value)[source]#

Bases: Enum

Enum for specifying where the training callback should be run.