Losses#

Collection of Losses.

class nerfstudio.model_components.losses.DepthLossType(value)#

Bases: Enum

Types of depth losses for depth supervision.

nerfstudio.model_components.losses.depth_loss(weights: Tensor, ray_samples: RaySamples, termination_depth: Tensor, predicted_depth: Tensor, sigma: Tensor, directions_norm: Tensor, is_euclidean: bool, depth_loss_type: DepthLossType) Tensor#

Implementation of depth losses.

Parameters:
  • weights – Weights predicted for each sample.

  • ray_samples – Samples along rays corresponding to weights.

  • termination_depth – Ground truth depth of rays.

  • predicted_depth – Depth prediction from the network.

  • sigma – Uncertainty around depth value.

  • directions_norm – Norms of ray direction vectors in the camera frame.

  • is_euclidean – Whether ground truth depths correspond to normalized direction vectors.

  • depth_loss_type – Type of depth loss to apply.

Returns:

Depth loss scalar.
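
A minimal usage sketch (not part of the upstream docstring), with hand-built tensors standing in for sampler and renderer outputs; the (num_rays, num_samples, 1) shape convention is an assumption based on how nerfacto-style models call this loss:

```python
import torch

from nerfstudio.cameras.rays import Frustums, RaySamples
from nerfstudio.model_components.losses import DepthLossType, depth_loss

num_rays, num_samples = 4, 32

# Hand-built samples; in practice these come from the model's sampler.
starts = torch.linspace(0.05, 4.0, num_samples).view(1, num_samples, 1).repeat(num_rays, 1, 1)
ends = starts + 0.125
ray_samples = RaySamples(
    frustums=Frustums(
        origins=torch.zeros(num_rays, num_samples, 3),
        directions=torch.nn.functional.normalize(torch.randn(num_rays, num_samples, 3), dim=-1),
        starts=starts,
        ends=ends,
        pixel_area=torch.ones(num_rays, num_samples, 1),
    )
)

weights = torch.rand(num_rays, num_samples, 1)                 # per-sample weights
termination_depth = torch.rand(num_rays, 1) * 4.0              # ground-truth depth per ray
predicted_depth = (weights * (starts + ends) / 2).sum(dim=-2)  # rendered depth per ray
directions_norm = torch.ones(num_rays, 1)                      # norm of each ray direction
sigma = torch.tensor(0.01)                                     # depth uncertainty

loss = depth_loss(
    weights=weights,
    ray_samples=ray_samples,
    termination_depth=termination_depth,
    predicted_depth=predicted_depth,
    sigma=sigma,
    directions_norm=directions_norm,
    is_euclidean=False,
    depth_loss_type=DepthLossType.DS_NERF,
)  # scalar tensor
```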

nerfstudio.model_components.losses.distortion_loss(weights_list, ray_samples_list)#

Distortion loss from MipNeRF-360.
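
A hedged sketch of calling it directly; the _uniform_ray_samples helper is hypothetical and only stands in for the proposal sampler's per-level outputs:

```python
import torch

from nerfstudio.cameras.rays import Frustums, RaySamples
from nerfstudio.model_components.losses import distortion_loss


def _uniform_ray_samples(num_rays: int, num_samples: int) -> RaySamples:
    """Hypothetical helper: RaySamples with uniform bins and normalized s-space spacing."""
    edges = torch.linspace(0.0, 1.0, num_samples + 1).repeat(num_rays, 1)
    starts, ends = edges[..., :-1, None], edges[..., 1:, None]  # (num_rays, num_samples, 1)
    frustums = Frustums(
        origins=torch.zeros(num_rays, num_samples, 3),
        directions=torch.ones(num_rays, num_samples, 3),  # values unused by this loss
        starts=starts,
        ends=ends,
        pixel_area=torch.ones(num_rays, num_samples, 1),
    )
    return RaySamples(frustums=frustums, spacing_starts=starts, spacing_ends=ends)


# In nerfacto-style models these lists are the per-level outputs of the proposal sampler;
# only the last (final) level is used by this loss.
ray_samples_list = [_uniform_ray_samples(num_rays=4, num_samples=48)]
weights_list = [torch.rand(4, 48, 1)]

loss = distortion_loss(weights_list, ray_samples_list)  # scalar tensor
```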

nerfstudio.model_components.losses.ds_nerf_depth_loss(weights: Tensor, termination_depth: Tensor, steps: Tensor, lengths: Tensor, sigma: Tensor) Tensor#

Depth loss from Depth-supervised NeRF (Deng et al., 2022).

Parameters:
  • weights – Weights predicted for each sample.

  • termination_depth – Ground truth depth of rays.

  • steps – Sampling distances along rays.

  • lengths – Distances between steps.

  • sigma – Uncertainty around depth values.

Returns:

Depth loss scalar.
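
A sketch with plain tensors, assuming per-sample quantities of shape (num_rays, num_samples, 1) and per-ray depths of shape (num_rays, 1), which is how depth_loss calls this function:

```python
import torch

from nerfstudio.model_components.losses import ds_nerf_depth_loss

num_rays, num_samples = 4, 32
steps = torch.linspace(0.05, 4.0, num_samples).view(1, num_samples, 1).repeat(num_rays, 1, 1)
lengths = torch.full((num_rays, num_samples, 1), 0.125)  # bin sizes along each ray
weights = torch.rand(num_rays, num_samples, 1)
termination_depth = torch.rand(num_rays, 1) * 4.0        # ground-truth depth; 0 marks unsupervised rays
sigma = torch.tensor(0.01)                               # depth uncertainty

loss = ds_nerf_depth_loss(weights, termination_depth, steps, lengths, sigma)  # scalar tensor
```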

nerfstudio.model_components.losses.interlevel_loss(weights_list, ray_samples_list)#

Calculates the proposal loss in the MipNeRF-360 paper.

https://github.com/kakaobrain/NeRF-Factory/blob/f61bb8744a5cb4820a4d968fb3bfbed777550f4a/src/model/mipnerf360/model.py#L515
https://github.com/google-research/multinerf/blob/b02228160d3179300c7d499dca28cb9ca3677f32/internal/train_utils.py#L133
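
A hedged sketch with two hand-built levels (one proposal level plus the final level); the _uniform_ray_samples helper is hypothetical and stands in for the proposal sampler's outputs:

```python
import torch

from nerfstudio.cameras.rays import Frustums, RaySamples
from nerfstudio.model_components.losses import interlevel_loss


def _uniform_ray_samples(num_rays: int, num_samples: int) -> RaySamples:
    """Hypothetical helper: RaySamples with uniform bins and normalized s-space spacing."""
    edges = torch.linspace(0.0, 1.0, num_samples + 1).repeat(num_rays, 1)
    starts, ends = edges[..., :-1, None], edges[..., 1:, None]
    frustums = Frustums(
        origins=torch.zeros(num_rays, num_samples, 3),
        directions=torch.ones(num_rays, num_samples, 3),  # values unused by this loss
        starts=starts,
        ends=ends,
        pixel_area=torch.ones(num_rays, num_samples, 1),
    )
    return RaySamples(frustums=frustums, spacing_starts=starts, spacing_ends=ends)


# One proposal level (64 samples) followed by the final level (32 samples).
ray_samples_list = [_uniform_ray_samples(4, 64), _uniform_ray_samples(4, 32)]
weights_list = [torch.rand(4, 64, 1), torch.rand(4, 32, 1)]

loss = interlevel_loss(weights_list, ray_samples_list)  # scalar tensor
```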

nerfstudio.model_components.losses.lossfun_distortion(t, w)#

https://github.com/kakaobrain/NeRF-Factory/blob/f61bb8744a5cb4820a4d968fb3bfbed777550f4a/src/model/mipnerf360/helper.py#L142
https://github.com/google-research/multinerf/blob/b02228160d3179300c7d499dca28cb9ca3677f32/internal/stepfun.py#L266
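
A sketch of the expected shapes, assuming t holds num_samples + 1 normalized interval edges per ray and w holds the matching per-interval weights:

```python
import torch

from nerfstudio.model_components.losses import lossfun_distortion

num_rays, num_samples = 4, 48
t = torch.linspace(0.0, 1.0, num_samples + 1).repeat(num_rays, 1)  # interval edges in s-space
w = torch.rand(num_rays, num_samples)                              # per-interval weights

per_ray = lossfun_distortion(t, w)  # shape (num_rays,)
loss = per_ray.mean()
```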

nerfstudio.model_components.losses.lossfun_outer(t: Tensor, w: Tensor, t_env: Tensor, w_env: Tensor)#

https://github.com/kakaobrain/NeRF-Factory/blob/f61bb8744a5cb4820a4d968fb3bfbed777550f4a/src/model/mipnerf360/helper.py#L136 https://github.com/google-research/multinerf/blob/b02228160d3179300c7d499dca28cb9ca3677f32/internal/stepfun.py#L80

Parameters:
  • t – Interval edges.

  • w – Weights.

  • t_env – Interval edges of the upper bound enveloping histogram.

  • w_env – Weights that should upper bound the inner (t, w) histogram.
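
A sketch under the shape assumption that t and t_env carry one more entry than w and w_env along the last dimension:

```python
import torch

from nerfstudio.model_components.losses import lossfun_outer

num_rays = 4
t = torch.linspace(0.0, 1.0, 33).repeat(num_rays, 1)      # inner histogram edges (32 bins)
w = torch.rand(num_rays, 32)                              # inner histogram weights
t_env = torch.linspace(0.0, 1.0, 65).repeat(num_rays, 1)  # enveloping histogram edges (64 bins)
w_env = torch.rand(num_rays, 64)                          # enveloping histogram weights

penalty = lossfun_outer(t, w, t_env, w_env)  # (num_rays, 32): per-bin violation of the upper bound
loss = penalty.mean()
```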

nerfstudio.model_components.losses.nerfstudio_distortion_loss(ray_samples: RaySamples, densities: Optional[Tensor] = None, weights: Optional[Tensor] = None) Tensor#

Ray-based distortion loss proposed in MipNeRF-360. Returns the distortion loss.

\[\mathcal{L}(\mathbf{s}, \mathbf{w}) = \iint\limits_{-\infty}^{\infty} \mathbf{w}_\mathbf{s}(u)\,\mathbf{w}_\mathbf{s}(v)\,|u - v|\,du\,dv\]

where \(\mathbf{w}_\mathbf{s}(u)=\sum_i w_i \mathbb{1}_{[\mathbf{s}_i, \mathbf{s}_{i+1})}(u)\) is the weight at location \(u\) between bin locations \(s_i\) and \(s_{i+1}\).

Parameters:
  • ray_samples – Ray samples to compute loss over.

  • densities – Predicted sample densities.

  • weights – Predicted weights from densities and sample locations.
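
A hedged sketch passing weights directly; spacing_starts and spacing_ends must be populated (normally done by the sampler), here filled with uniform normalized bins as an assumption:

```python
import torch

from nerfstudio.cameras.rays import Frustums, RaySamples
from nerfstudio.model_components.losses import nerfstudio_distortion_loss

num_rays, num_samples = 4, 32
edges = torch.linspace(0.0, 1.0, num_samples + 1).repeat(num_rays, 1)
starts, ends = edges[..., :-1, None], edges[..., 1:, None]
ray_samples = RaySamples(
    frustums=Frustums(
        origins=torch.zeros(num_rays, num_samples, 3),
        directions=torch.ones(num_rays, num_samples, 3),  # values unused by this loss
        starts=starts,
        ends=ends,
        pixel_area=torch.ones(num_rays, num_samples, 1),
    ),
    spacing_starts=starts,  # normalized bin edges are required by this loss
    spacing_ends=ends,
)
weights = torch.rand(num_rays, num_samples, 1)

per_ray = nerfstudio_distortion_loss(ray_samples, weights=weights)  # (num_rays, 1)
loss = per_ray.mean()
```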

nerfstudio.model_components.losses.orientation_loss(weights: Tensor, normals: Tensor, viewdirs: Tensor)#

Orientation loss proposed in Ref-NeRF. Encourages all visible normals to face toward the camera.
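
A sketch with assumed shapes: weights (num_rays, num_samples, 1), normals (num_rays, num_samples, 3), viewdirs (num_rays, 3); the loss is returned per ray:

```python
import torch

from nerfstudio.model_components.losses import orientation_loss

num_rays, num_samples = 4, 32
weights = torch.rand(num_rays, num_samples, 1)
normals = torch.nn.functional.normalize(torch.randn(num_rays, num_samples, 3), dim=-1)
viewdirs = torch.nn.functional.normalize(torch.randn(num_rays, 3), dim=-1)

per_ray = orientation_loss(weights, normals, viewdirs)  # (num_rays,)
loss = per_ray.mean()
```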

nerfstudio.model_components.losses.outer(t0_starts: Tensor, t0_ends: Tensor, t1_starts: Tensor, t1_ends: Tensor, y1: Tensor) Tensor#

Faster version of

https://github.com/kakaobrain/NeRF-Factory/blob/f61bb8744a5cb4820a4d968fb3bfbed777550f4a/src/model/mipnerf360/helper.py#L117
https://github.com/google-research/multinerf/blob/b02228160d3179300c7d499dca28cb9ca3677f32/internal/stepfun.py#L64

Parameters:
  • t0_starts – Start of the interval edges for the query histogram.

  • t0_ends – End of the interval edges for the query histogram.

  • t1_starts – Start of the interval edges for the enveloping histogram.

  • t1_ends – End of the interval edges for the enveloping histogram.

  • y1 – Weights of the enveloping histogram.
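
A sketch that accumulates the enveloping histogram's mass over the span covering each query interval; the shapes are assumptions:

```python
import torch

from nerfstudio.model_components.losses import outer

num_rays = 4
t0 = torch.linspace(0.0, 1.0, 17).repeat(num_rays, 1)  # 16 query intervals
t1 = torch.linspace(0.0, 1.0, 65).repeat(num_rays, 1)  # 64 envelope intervals
y1 = torch.rand(num_rays, 64)                          # envelope weights

# Envelope mass covering each query interval, shape (num_rays, 16).
y0_outer = outer(t0[..., :-1], t0[..., 1:], t1[..., :-1], t1[..., 1:], y1)
```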

nerfstudio.model_components.losses.pred_normal_loss(weights: Tensor, normals: Tensor, pred_normals: Tensor)#

Loss between normals calculated from density and normals from the prediction network.
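
A sketch with the same assumed shapes as orientation_loss; the result is a per-ray loss:

```python
import torch

from nerfstudio.model_components.losses import pred_normal_loss

num_rays, num_samples = 4, 32
weights = torch.rand(num_rays, num_samples, 1)
normals = torch.nn.functional.normalize(torch.randn(num_rays, num_samples, 3), dim=-1)       # from density gradients
pred_normals = torch.nn.functional.normalize(torch.randn(num_rays, num_samples, 3), dim=-1)  # from the prediction head

per_ray = pred_normal_loss(weights, normals, pred_normals)  # (num_rays,)
loss = per_ray.mean()
```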

nerfstudio.model_components.losses.ray_samples_to_sdist(ray_samples)#

Convert ray samples to s-space (normalized bin edges along each ray).

nerfstudio.model_components.losses.urban_radiance_field_depth_loss(weights: Tensor, termination_depth: Tensor, predicted_depth: Tensor, steps: Tensor, sigma: Tensor) Tensor#

Lidar losses from Urban Radiance Fields (Rematas et al., 2022).

Parameters:
  • weights – Weights predicted for each sample.

  • termination_depth – Ground truth depth of rays.

  • predicted_depth – Depth prediction from the network.

  • steps – Sampling distances along rays.

  • sigma – Uncertainty around depth values.

Returns:

Depth loss scalar.
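
A sketch with plain tensors, assuming the same shape conventions as ds_nerf_depth_loss above:

```python
import torch

from nerfstudio.model_components.losses import urban_radiance_field_depth_loss

num_rays, num_samples = 4, 32
steps = torch.linspace(0.05, 4.0, num_samples).view(1, num_samples, 1).repeat(num_rays, 1, 1)
weights = torch.rand(num_rays, num_samples, 1)
termination_depth = torch.rand(num_rays, 1) * 4.0  # e.g. lidar depth; 0 marks unsupervised rays
predicted_depth = (weights * steps).sum(dim=-2)    # rendered depth per ray
sigma = torch.tensor(0.05)                         # depth uncertainty

loss = urban_radiance_field_depth_loss(
    weights, termination_depth, predicted_depth, steps, sigma
)  # scalar tensor
```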