Engine#

Optimizers#

Optimizers class.

class nerfstudio.engine.optimizers.AdamOptimizerConfig(_target: ~typing.Type = <class 'torch.optim.adam.Adam'>, lr: float = 0.0005, eps: float = 1e-08, weight_decay: float = 0)#

Bases: OptimizerConfig

Basic optimizer config with Adam

class nerfstudio.engine.optimizers.OptimizerConfig(_target: ~typing.Type = <class 'torch.optim.adam.Adam'>, lr: float = 0.0005, eps: float = 1e-08)#

Bases: PrintableConfig

Basic optimizer config; instantiates torch.optim.Adam by default

setup(params) → Any#

Returns the instantiated object using the config.
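
A minimal sketch of instantiating an optimizer from a config; the single parameter group below is a stand-in for a model's real parameters.

    import torch
    from nerfstudio.engine.optimizers import AdamOptimizerConfig

    # Stand-in parameters; in practice these come from a model or pipeline.
    params = [torch.nn.Parameter(torch.zeros(10))]

    # setup() instantiates the configured torch optimizer (torch.optim.Adam here)
    # with the config's lr, eps, and weight_decay.
    optimizer_config = AdamOptimizerConfig(lr=1e-3, weight_decay=1e-6)
    optimizer = optimizer_config.setup(params)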

class nerfstudio.engine.optimizers.Optimizers(config: Dict[str, Any], param_groups: Dict[str, List[Parameter]])#

Bases: object

A set of optimizers.

Parameters:
  • config – The optimizer configuration object.

  • param_groups – A dictionary of parameter groups to optimize.
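
A sketch of constructing an Optimizers object directly. The layout of the config dict is an assumption here: each parameter-group name is taken to map to an "optimizer" config and an optional "scheduler" config, and the group names are illustrative only.

    import torch
    from nerfstudio.engine.optimizers import AdamOptimizerConfig, Optimizers
    from nerfstudio.engine.schedulers import SchedulerConfig

    # Hypothetical parameter groups; in practice these come from the model/pipeline.
    param_groups = {
        "fields": [torch.nn.Parameter(torch.zeros(8))],
        "proposal_networks": [torch.nn.Parameter(torch.zeros(8))],
    }

    # Assumed config layout: one entry per parameter group, each holding an
    # "optimizer" config and an optional "scheduler" config.
    config = {
        "fields": {
            "optimizer": AdamOptimizerConfig(lr=1e-2),
            "scheduler": SchedulerConfig(lr_final=1e-4, max_steps=100000),
        },
        "proposal_networks": {
            "optimizer": AdamOptimizerConfig(lr=1e-2),
            "scheduler": None,
        },
    }

    optimizers = Optimizers(config, param_groups)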

load_optimizers(loaded_state: Dict[str, Any]) → None#

Helper to load the optimizer state from a previous checkpoint

Parameters:

loaded_state – the state from the previous checkpoint
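
Continuing the construction sketch above, a hedged example of restoring optimizer state; the checkpoint path and the "optimizers" key are hypothetical and depend on how the trainer writes checkpoints.

    import torch

    # Assumed layout: {param_group_name: optimizer state_dict}.
    loaded_state = torch.load("checkpoint.ckpt")["optimizers"]
    optimizers.load_optimizers(loaded_state)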

optimizer_scaler_step_all(grad_scaler: GradScaler) → None#

Take an optimizer step using a grad scaler.

Parameters:

grad_scaler – GradScaler to use
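
A minimal mixed-precision step sketch; model, loss_fn, and batch are hypothetical, and the trailing grad_scaler.update() is assumed to be the caller's responsibility.

    from torch.cuda.amp import GradScaler, autocast

    grad_scaler = GradScaler()

    optimizers.zero_grad_all()
    with autocast():
        loss = loss_fn(model(batch))  # hypothetical forward pass and loss
    grad_scaler.scale(loss).backward()
    optimizers.optimizer_scaler_step_all(grad_scaler)  # scaled step on every optimizer
    grad_scaler.update()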

optimizer_step(param_group_name: str) → None#

Fetch and step the corresponding optimizer.

Parameters:

param_group_name – name of optimizer to step forward

optimizer_step_all()#

Run step for all optimizers.

scheduler_step(param_group_name: str) → None#

Fetch and step the corresponding scheduler.

Parameters:

param_group_name – name of scheduler to step forward

scheduler_step_all(step: int) → None#

Run step for all schedulers.

Parameters:

step – the current step

zero_grad_all() → None#

Zero the gradients for all optimizer parameters.
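
Putting the stepping helpers together, a full-precision training-step sketch (model, loss_fn, batch, and num_steps are hypothetical):

    for step in range(num_steps):
        optimizers.zero_grad_all()
        loss = loss_fn(model(batch))  # hypothetical forward pass and loss
        loss.backward()
        optimizers.optimizer_step_all()      # step every optimizer
        optimizers.scheduler_step_all(step)  # step every scheduler with the current step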

class nerfstudio.engine.optimizers.RAdamOptimizerConfig(_target: ~typing.Type = <class 'torch.optim.radam.RAdam'>, lr: float = 0.0005, eps: float = 1e-08)#

Bases: OptimizerConfig

Basic optimizer config with RAdam

nerfstudio.engine.optimizers.setup_optimizers(config: Config, param_groups: Dict[str, List[Parameter]]) → Optimizers#

Helper to set up the optimizers

Parameters:
  • config – The trainer configuration object.

  • param_groups – A dictionary of parameter groups to optimize.

Returns:

The optimizers object.
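
A hedged usage sketch: config is the trainer Config, and the parameter groups are assumed to come from the pipeline's get_param_groups().

    from nerfstudio.engine.optimizers import setup_optimizers

    param_groups = pipeline.get_param_groups()  # assumed {group_name: [Parameter, ...]}
    optimizers = setup_optimizers(config, param_groups)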

Schedulers#

Scheduler Classes

class nerfstudio.engine.schedulers.DelayedExponentialScheduler(optimizer: Optimizer, lr_init, lr_final, max_steps, delay_epochs: int = 200)#

Bases: DelayerScheduler

DelayerScheduler that applies an exponential decay schedule after the delay.

class nerfstudio.engine.schedulers.DelayerScheduler(optimizer: Optimizer, lr_init, lr_final, max_steps, delay_epochs: int = 500, after_scheduler: Optional[LambdaLR] = None)#

Bases: LambdaLR

Starts with a flat learning-rate schedule until delay_epochs is reached, then applies the given after_scheduler.
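
A sketch of wiring up the delayed exponential schedule; the optimizer comes from an optimizer config as above, and the hyperparameter values are illustrative only.

    from nerfstudio.engine.schedulers import DelayedExponentialScheduler

    # Delay the schedule for delay_epochs scheduler steps, then decay the
    # learning rate from lr_init to lr_final over max_steps.
    scheduler = DelayedExponentialScheduler(
        optimizer,
        lr_init=1e-2,
        lr_final=1e-4,
        max_steps=100000,
        delay_epochs=200,
    )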

class nerfstudio.engine.schedulers.ExponentialDecaySchedule(optimizer, lr_init, lr_final, max_steps, lr_delay_steps=0, lr_delay_mult=1.0)#

Bases: LambdaLR

Exponential learning rate decay function. See https://github.com/google-research/google-research/blob/fd2cea8cdd86b3ed2c640cbe5561707639e682f3/jaxnerf/nerf/utils.py#L360 for details.

Parameters:
  • optimizer – The optimizer to update.

  • lr_init – The initial learning rate.

  • lr_final – The final learning rate.

  • max_steps – The maximum number of steps.

  • lr_delay_steps – The number of steps to delay the learning rate.

  • lr_delay_mult – The multiplier for the learning rate after the delay.
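
For intuition, the referenced jaxnerf helper is a log-linear interpolation from lr_init to lr_final, optionally scaled by a sine ramp over the delay steps; a standalone sketch of that decay law (not the class itself):

    import numpy as np

    def exponential_decay_lr(step, lr_init, lr_final, max_steps, lr_delay_steps=0, lr_delay_mult=1.0):
        # Optional warm-up: scale the lr by a sine ramp from lr_delay_mult to 1
        # over the first lr_delay_steps steps.
        if lr_delay_steps > 0:
            delay_rate = lr_delay_mult + (1 - lr_delay_mult) * np.sin(
                0.5 * np.pi * np.clip(step / lr_delay_steps, 0, 1)
            )
        else:
            delay_rate = 1.0
        # Log-linear interpolation between lr_init and lr_final.
        t = np.clip(step / max_steps, 0, 1)
        log_lerp = np.exp(np.log(lr_init) * (1 - t) + np.log(lr_final) * t)
        return delay_rate * log_lerp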

class nerfstudio.engine.schedulers.SchedulerConfig(_target: ~typing.Type = <factory>, lr_final: float = 5e-06, max_steps: int = 1000000)#

Bases: InstantiateConfig

Basic scheduler config with a self-defined exponential decay schedule

setup(optimizer=None, lr_init=None, **kwargs) → Any#

Returns the instantiated object using the config.
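
A short sketch of building a scheduler from the config; setup() is assumed to pass the optimizer and initial learning rate on to the configured schedule class, so the decay runs from lr_init down to lr_final over max_steps.

    from nerfstudio.engine.schedulers import SchedulerConfig

    scheduler_config = SchedulerConfig(lr_final=5e-6, max_steps=1000000)
    scheduler = scheduler_config.setup(optimizer=optimizer, lr_init=1e-2)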