MLP#
Multi-layer perceptron
- class nerfstudio.field_components.mlp.MLP(in_dim: int, num_layers: int, layer_width: int, out_dim: Optional[int] = None, skip_connections: Optional[Tuple[int]] = None, activation: Optional[Module] = ReLU(), out_activation: Optional[Module] = None, implementation: Literal['tcnn', 'torch'] = 'torch')[source]#
Bases:
FieldComponent
Multilayer perceptron
- Parameters:
in_dim – Input layer dimension.
num_layers – Number of network layers.
layer_width – Width of each MLP layer.
out_dim – Output layer dimension. Uses layer_width if None.
skip_connections – Indices of layers that receive a skip connection from the input.
activation – Intermediate layer activation function.
out_activation – Output activation function.
implementation – Implementation of the MLP ('tcnn' or 'torch'). Falls back to torch if tcnn is not available.
- forward(in_tensor: Float[Tensor, '*bs in_dim']) Float[Tensor, '*bs out_dim'] [source]#
Returns processed tensor
- Parameters:
in_tensor – Input tensor to process
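The forward pass above applies `num_layers` linear layers of width `layer_width`, with `activation` between layers and the input concatenated back in at any layer listed in `skip_connections`. A minimal NumPy sketch of those semantics (this is an illustration, not nerfstudio's implementation; the weight/bias lists and the concatenation scheme are assumptions):

```python
import numpy as np

def mlp_forward(x, weights, biases, skip_connections=()):
    """Sketch of an MLP forward pass: at each layer index listed in
    skip_connections, the original input is concatenated onto that
    layer's input before the linear map is applied."""
    h = x
    num_layers = len(weights)
    for i, (W, b) in enumerate(zip(weights, biases)):
        if i in skip_connections:
            # Skip connection: concatenate the network input to this layer's input.
            h = np.concatenate([x, h], axis=-1)
        h = h @ W + b
        if i < num_layers - 1:
            # ReLU on intermediate layers only; out_activation would go after the loop.
            h = np.maximum(h, 0.0)
    return h
```

Note that a layer receiving a skip connection must accept `in_dim + layer_width` inputs, which is why `skip_connections` changes the shapes of the affected weight matrices.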
- class nerfstudio.field_components.mlp.MLPWithHashEncoding(num_levels: int = 16, min_res: int = 16, max_res: int = 1024, log2_hashmap_size: int = 19, features_per_level: int = 2, hash_init_scale: float = 0.001, interpolation: Optional[Literal['Nearest', 'Linear', 'Smoothstep']] = None, num_layers: int = 2, layer_width: int = 64, out_dim: Optional[int] = None, skip_connections: Optional[Tuple[int]] = None, activation: Optional[Module] = ReLU(), out_activation: Optional[Module] = None, implementation: Literal['tcnn', 'torch'] = 'torch')[source]#
Bases:
FieldComponent
Multilayer perceptron with hash encoding
- Parameters:
num_levels – Number of feature grids.
min_res – Resolution of the smallest feature grid.
max_res – Resolution of the largest feature grid.
log2_hashmap_size – Size of the hash map is 2^log2_hashmap_size.
features_per_level – Number of features per level.
hash_init_scale – Initialization scale for the hash grid features.
interpolation – Interpolation override for the tcnn hashgrid. The torch implementation only supports linear interpolation.
num_layers – Number of network layers.
layer_width – Width of each MLP layer.
out_dim – Output layer dimension. Uses layer_width if None.
skip_connections – Indices of layers that receive a skip connection from the input.
activation – Intermediate layer activation function.
out_activation – Output activation function.
implementation – Implementation of the hash encoding and MLP ('tcnn' or 'torch'). Falls back to torch if tcnn is not available.
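In a multi-resolution hash encoding of this kind (as in Instant-NGP), the `num_levels` grid resolutions grow geometrically from `min_res` to `max_res`, and each level looks features up in a table of `2^log2_hashmap_size` entries via a spatial hash. A small stdlib-only sketch of both pieces, with parameter names taken from the docs above (the hash primes follow the Instant-NGP scheme; the exact functions are assumptions for illustration):

```python
import math

def hash_grid_levels(num_levels=16, min_res=16, max_res=1024):
    """Per-level grid resolutions: a geometric progression from
    min_res (level 0) to max_res (level num_levels - 1)."""
    if num_levels == 1:
        return [min_res]
    growth = math.exp((math.log(max_res) - math.log(min_res)) / (num_levels - 1))
    return [round(min_res * growth ** level) for level in range(num_levels)]

def hash_index(coords, log2_hashmap_size=19):
    """Spatial hash of integer grid coordinates into a table of
    2**log2_hashmap_size entries (XOR of coordinate * prime)."""
    primes = (1, 2654435761, 805459861)
    h = 0
    for c, p in zip(coords, primes):
        h ^= (c * p) & 0xFFFFFFFFFFFFFFFF  # emulate uint64 wraparound
    return h % (1 << log2_hashmap_size)
```

With the defaults (16 levels, 16 to 1024), each coordinate thus contributes `num_levels * features_per_level = 32` encoded features to the MLP input.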
- nerfstudio.field_components.mlp.activation_to_tcnn_string(activation: Optional[Module]) str [source]#
Converts a torch.nn activation function to a string that can be used to initialize a TCNN activation function.
- Parameters:
activation – torch.nn activation function
- Returns:
TCNN activation function string
- Return type:
str
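TCNN selects activations by name, so the conversion amounts to a lookup keyed on the module's type, with `None` mapping to the string `"None"`. A hypothetical sketch of that pattern (the function name, the supported subset, and the duck-typed class-name lookup are assumptions; the real function inspects torch.nn module types directly):

```python
def activation_to_tcnn_string_sketch(activation):
    """Hypothetical sketch: map an activation module to the name string
    TCNN expects, raising for activations TCNN does not support."""
    if activation is None:
        return "None"
    # Assumed subset of TCNN-supported activations, keyed by class name.
    mapping = {
        "ReLU": "ReLU",
        "Sigmoid": "Sigmoid",
        "Softplus": "Softplus",
        "Tanh": "Tanh",
    }
    name = type(activation).__name__
    try:
        return mapping[name]
    except KeyError:
        raise ValueError(f"TCNN does not support activation {name}")
```

Unsupported activations raise rather than silently degrading, which surfaces configuration errors at network construction time instead of at inference.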