Data Parsers#
Base Data Parser#
Base classes for standard dataset parsers.
- class nerfstudio.data.dataparsers.base_dataparser.DataParser(config: DataParserConfig)[source]#
A dataset parser.
- Parameters:
config – dataparser config containing all information needed to instantiate the dataset
- config#
dataparser config containing all information needed to instantiate the dataset
- includes_time#
Whether the dataset includes time information in the camera poses.
- Type:
bool
- get_dataparser_outputs(split: str = 'train', **kwargs: Optional[Dict]) DataparserOutputs [source]#
Returns the dataparser outputs for the given split.
- Parameters:
split – Which dataset split to generate (train/test).
kwargs – kwargs for generating dataparser outputs.
- Returns:
DataparserOutputs containing data for the specified dataset and split
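In practice a parser is driven through its config: the config's setup() factory instantiates the parser, and get_dataparser_outputs() produces the cameras and filenames for a split. A minimal sketch, assuming the Blender synthetic lego scene (documented later in this section) has been downloaded to data/blender/lego:

```python
from pathlib import Path

from nerfstudio.data.dataparsers.blender_dataparser import BlenderDataParserConfig

# Build the parser from its config; setup() instantiates the configured _target.
config = BlenderDataParserConfig(data=Path("data/blender/lego"))
dataparser = config.setup()

# Parse the training split into a DataparserOutputs object.
outputs = dataparser.get_dataparser_outputs(split="train")
print(len(outputs.image_filenames))            # number of training images
print(outputs.cameras.camera_to_worlds.shape)  # camera-to-world poses, [N, 3, 4]
```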
- class nerfstudio.data.dataparsers.base_dataparser.DataParserConfig(_target: ~typing.Type = <factory>, data: ~pathlib.Path = PosixPath('.'))[source]#
Basic dataset config
- data: Path = PosixPath('.')#
Directory specifying location of data.
- class nerfstudio.data.dataparsers.base_dataparser.DataparserOutputs(image_filenames: ~typing.List[~pathlib.Path], cameras: ~nerfstudio.cameras.cameras.Cameras, alpha_color: ~typing.Optional[~jaxtyping.Float[Tensor, '3']] = None, scene_box: ~nerfstudio.data.scene_box.SceneBox = <factory>, mask_filenames: ~typing.Optional[~typing.List[~pathlib.Path]] = None, metadata: ~typing.Dict[str, ~typing.Any] = <factory>, dataparser_transform: ~jaxtyping.Float[Tensor, '3 4'] = tensor([[1., 0., 0., 0.], [0., 1., 0., 0.], [0., 0., 1., 0.]]), dataparser_scale: float = 1.0)[source]#
Dataparser outputs that will be used by the DataManager for creating RayBundle and RayGT objects.
- alpha_color: Optional[Float[Tensor, '3']] = None#
Color of dataset background.
- dataparser_scale: float = 1.0#
Scale applied by the dataparser.
- dataparser_transform: Float[Tensor, '3 4'] = tensor([[1., 0., 0., 0.], [0., 1., 0., 0.], [0., 0., 1., 0.]])#
Transform applied by the dataparser.
- image_filenames: List[Path]#
Filenames for the images.
- mask_filenames: Optional[List[Path]] = None#
Filenames for any masks that are required.
- metadata: Dict[str, Any]#
Dictionary of any metadata that may be required for the given experiment. Will be processed by the InputDataset to create any additional tensors that may be required.
- save_dataparser_transform(path: Path)[source]#
Save dataparser transform to json file. Some dataparsers will apply a transform to the poses, this method allows the transform to be saved so that it can be used in other applications.
- Parameters:
path – path to save transform to
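A brief sketch of persisting the transform (the output path here is just an example) so that downstream tools can map results back into the original capture coordinates:

```python
from pathlib import Path

# `outputs` is a DataparserOutputs instance, e.g. from get_dataparser_outputs() above.
outputs.save_dataparser_transform(Path("exports/dataparser_transforms.json"))
```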
- scene_box: SceneBox#
Scene box of dataset. Used to bound the scene or provide the scene scale depending on model.
- transform_poses_to_original_space(poses: Float[Tensor, 'num_poses 3 4'], camera_convention: Literal['opengl', 'opencv'] = 'opencv') Float[Tensor, 'num_poses 3 4'] [source]#
Transforms the poses in the transformed space back to the original world coordinate system.
- Parameters:
poses – Poses in the transformed space
camera_convention – Camera system convention used for the transformed poses
- Returns:
Original poses
- class nerfstudio.data.dataparsers.base_dataparser.Semantics(filenames: ~typing.List[~pathlib.Path], classes: ~typing.List[str], colors: ~torch.Tensor, mask_classes: ~typing.List[str] = <factory>)[source]#
Dataclass for semantic labels.
- classes: List[str]#
Class labels for the data.
- colors: Tensor#
Color mapping for the classes.
- filenames: List[Path]#
Filenames to load semantic data from.
- mask_classes: List[str]#
Classes to mask out from training for all modalities.
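A hypothetical example of filling in this dataclass (the filenames, class names, and colors below are made up) so that the InputDataset can load label maps alongside the RGB frames:

```python
from pathlib import Path

import torch

from nerfstudio.data.dataparsers.base_dataparser import Semantics

semantics = Semantics(
    filenames=[Path("semantics/frame_0000.png"), Path("semantics/frame_0001.png")],
    classes=["wall", "floor", "person"],
    colors=torch.tensor([[128, 128, 128], [64, 64, 64], [255, 0, 0]]),
    mask_classes=["person"],  # dynamic class masked out of training for all modalities
)
```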
- nerfstudio.data.dataparsers.base_dataparser.transform_poses_to_original_space(poses: Float[Tensor, 'num_poses 3 4'], applied_transform: Float[Tensor, '3 4'], applied_scale: float, camera_convention: Literal['opengl', 'opencv'] = 'opencv') Float[Tensor, 'num_poses 3 4'] [source]#
Transforms the poses in the transformed space back to the original world coordinate system.
- Parameters:
poses – Poses in the transformed space
applied_transform – Transform matrix applied in the data processing step
applied_scale – Scale used in the data processing step
camera_convention – Camera system convention used for the transformed poses
- Returns:
Original poses
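A short sketch of undoing the dataparser normalization for a batch of poses, reusing the transform and scale stored on a DataparserOutputs instance (variable names are illustrative):

```python
from nerfstudio.data.dataparsers.base_dataparser import transform_poses_to_original_space

# `outputs` is a DataparserOutputs; its camera poses have shape [num_poses, 3, 4].
poses = outputs.cameras.camera_to_worlds

original_poses = transform_poses_to_original_space(
    poses,
    applied_transform=outputs.dataparser_transform,
    applied_scale=outputs.dataparser_scale,
    camera_convention="opencv",
)
```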
ARKitScenes#
Data parser for ARKitScenes dataset
- class nerfstudio.data.dataparsers.arkitscenes_dataparser.ARKitScenes(config: ARKitScenesDataParserConfig, includes_time: bool = False)[source]#
Bases:
DataParser
ARKitScenes DatasetParser
- class nerfstudio.data.dataparsers.arkitscenes_dataparser.ARKitScenesDataParserConfig(_target: ~typing.Type = <factory>, data: ~pathlib.Path = PosixPath('data/ARKitScenes/3dod/Validation/41069021'), scale_factor: float = 1.0, scene_scale: float = 1.0, center_method: ~typing.Literal['poses', 'focus', 'none'] = 'poses', auto_scale_poses: bool = True, train_split_fraction: float = 0.9, depth_unit_scale_factor: float = 0.001)[source]#
Bases:
DataParserConfig
ARKitScenes dataset config. ARKitScenes dataset (http://github.com/apple/ARKitScenes) is a large-scale 3D dataset of indoor scenes. This dataparser uses the 3D detection subset of the ARKitScenes dataset.
- auto_scale_poses: bool = True#
Whether to automatically scale the poses to fit in +/- 1 bounding box.
- center_method: Literal['poses', 'focus', 'none'] = 'poses'#
The method to use to center the poses.
- data: Path = PosixPath('data/ARKitScenes/3dod/Validation/41069021')#
Path to ARKitScenes folder with densely extracted scenes.
- depth_unit_scale_factor: float = 0.001#
Scales the depth values to meters. Default value is 0.001 for a millimeter to meter conversion.
- scale_factor: float = 1.0#
How much to scale the camera origins by.
- scene_scale: float = 1.0#
How much to scale the region of interest by.
- train_split_fraction: float = 0.9#
The fraction of images to use for training. The remaining images are for eval.
- nerfstudio.data.dataparsers.arkitscenes_dataparser.traj_string_to_matrix(traj_string: str)[source]#
Convert traj_string into translation and rotation matrices.
- Parameters:
traj_string – A space-delimited file where each line represents a camera position at a particular timestamp. The file has seven columns: Column 1: timestamp; Columns 2-4: rotation (axis-angle representation in radians); Columns 5-7: translation (usually in meters).
- Returns:
ts – translation matrix
Rt – rotation matrix
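A sketch of converting a single trajectory line; the example line and the two-value unpacking are assumptions based on the docstring above:

```python
from nerfstudio.data.dataparsers.arkitscenes_dataparser import traj_string_to_matrix

# timestamp, axis-angle rotation (radians), translation (meters) -- made-up values
line = "123.456 0.01 -0.02 0.03 1.20 0.40 0.95"
ts, Rt = traj_string_to_matrix(line)  # assumed return order per the docstring
```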
Blender#
Data parser for the Blender dataset
- class nerfstudio.data.dataparsers.blender_dataparser.Blender(config: BlenderDataParserConfig)[source]#
Bases:
DataParser
Blender Dataset. Some of this code comes from https://github.com/yenchenlin/nerf-pytorch/blob/master/load_blender.py#L37.
- class nerfstudio.data.dataparsers.blender_dataparser.BlenderDataParserConfig(_target: ~typing.Type = <factory>, data: ~pathlib.Path = PosixPath('data/blender/lego'), scale_factor: float = 1.0, alpha_color: str = 'white')[source]#
Bases:
DataParserConfig
Blender dataset parser config
- alpha_color: str = 'white'#
Alpha color of the background.
- data: Path = PosixPath('data/blender/lego')#
Directory specifying location of data.
- scale_factor: float = 1.0#
How much to scale the camera origins by.
D-NeRF#
Data parser for the D-NeRF dataset
- class nerfstudio.data.dataparsers.dnerf_dataparser.DNeRF(config: DNeRFDataParserConfig)[source]#
Bases:
DataParser
DNeRF Dataset
- class nerfstudio.data.dataparsers.dnerf_dataparser.DNeRFDataParserConfig(_target: ~typing.Type = <factory>, data: ~pathlib.Path = PosixPath('data/dnerf/lego'), scale_factor: float = 1.0, alpha_color: str = 'white')[source]#
Bases:
DataParserConfig
D-NeRF dataset parser config
- alpha_color: str = 'white'#
Alpha color of the background.
- data: Path = PosixPath('data/dnerf/lego')#
Directory specifying location of data.
- scale_factor: float = 1.0#
How much to scale the camera origins by.
dycheck#
Data parser for the iPhone subset of the DyCheck (https://arxiv.org/abs/2210.13445) dataset
- class nerfstudio.data.dataparsers.dycheck_dataparser.Dycheck(config: DycheckDataParserConfig)[source]#
Bases:
DataParser
Dycheck (https://arxiv.org/abs/2210.13445) dataset, iPhone subset
- class nerfstudio.data.dataparsers.dycheck_dataparser.DycheckDataParserConfig(_target: ~typing.Type = <factory>, data: ~pathlib.Path = PosixPath('data/iphone/mochi-high-five'), scale_factor: float = 5.0, alpha_color: str = 'white', downscale_factor: int = 1, scene_box_bound: float = 1.5)[source]#
Bases:
DataParserConfig
Dycheck (https://arxiv.org/abs/2210.13445) dataset parser config
- alpha_color: str = 'white'#
Alpha color of the background.
- data: Path = PosixPath('data/iphone/mochi-high-five')#
Directory specifying location of data.
- downscale_factor: int = 1#
How much to downscale images.
- scale_factor: float = 5.0#
How much to scale the camera origins by.
- scene_box_bound: float = 1.5#
Boundary of scene box.
- nerfstudio.data.dataparsers.dycheck_dataparser.downscale(img, scale: int) ndarray [source]#
Function from DyCheck’s repo. Downscale an image.
- Parameters:
img – Input image
scale – Downscaling factor
- Returns:
New image
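A minimal sketch of the helper on a placeholder image (the output shape comment assumes a straightforward integer downscale):

```python
import numpy as np

from nerfstudio.data.dataparsers.dycheck_dataparser import downscale

img = np.zeros((480, 640, 3), dtype=np.uint8)  # placeholder RGB frame
small = downscale(img, scale=2)                # roughly (240, 320, 3)
```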
Instant-NGP#
Data parser for Instant-NGP data
- class nerfstudio.data.dataparsers.instant_ngp_dataparser.InstantNGP(config: InstantNGPDataParserConfig, includes_time: bool = False)[source]#
Bases:
DataParser
Instant NGP Dataset
- class nerfstudio.data.dataparsers.instant_ngp_dataparser.InstantNGPDataParserConfig(_target: ~typing.Type = <factory>, data: ~pathlib.Path = PosixPath('data/ours/posterv2'), scene_scale: float = 0.3333, eval_mode: ~typing.Literal['fraction', 'filename', 'interval', 'all'] = 'fraction', train_split_fraction: float = 0.9, eval_interval: int = 8)[source]#
Bases:
DataParserConfig
Instant-NGP dataset parser config
- data: Path = PosixPath('data/ours/posterv2')#
Directory or explicit json file path specifying location of data.
- eval_interval: int = 8#
The interval between frames to use for eval. Only used when eval_mode is "interval".
- eval_mode: Literal['fraction', 'filename', 'interval', 'all'] = 'fraction'#
The method to use for splitting the dataset into train and eval. Fraction splits based on a percentage for train and the remaining for eval. Filename splits based on filenames containing train/eval. Interval uses every nth frame for eval. All uses all the images for any split.
- scene_scale: float = 0.3333#
How much to scale the scene.
- train_split_fraction: float = 0.9#
The fraction of the dataset to use for training. Only used when eval_mode is "fraction".
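A sketch of configuring the parser for an instant-ngp style transforms.json (the path is a placeholder), holding out every 8th frame for eval:

```python
from pathlib import Path

from nerfstudio.data.dataparsers.instant_ngp_dataparser import InstantNGPDataParserConfig

config = InstantNGPDataParserConfig(
    data=Path("data/my_capture/transforms.json"),  # placeholder path
    eval_mode="interval",                          # use every nth frame for eval
    eval_interval=8,
)
outputs = config.setup().get_dataparser_outputs(split="train")
```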
Minimal#
Data parser for pre-prepared datasets for all cameras, with no additional processing needed. Optional fields: semantics, mask_filenames, cameras.distortion_params, cameras.times
- class nerfstudio.data.dataparsers.minimal_dataparser.MinimalDataParser(config: MinimalDataParserConfig, includes_time: bool = False)[source]#
Bases:
DataParser
Minimal DatasetParser
- class nerfstudio.data.dataparsers.minimal_dataparser.MinimalDataParserConfig(_target: ~typing.Type = <factory>, data: ~pathlib.Path = PosixPath('/home/nikhil/nerfstudio-main/tests/data/lego_test/minimal_parser'))[source]#
Bases:
DataParserConfig
Minimal dataset config
- data: Path = PosixPath('/home/nikhil/nerfstudio-main/tests/data/lego_test/minimal_parser')#
Directory specifying location of data.
NeRF-OSR#
Data parser for NeRF-OSR datasets
Presented in the paper: https://4dqv.mpi-inf.mpg.de/NeRF-OSR/
- class nerfstudio.data.dataparsers.nerfosr_dataparser.NeRFOSR(config: NeRFOSRDataParserConfig, includes_time: bool = False)[source]#
Bases:
DataParser
NeRFOSR Dataparser. Presented in the paper: https://4dqv.mpi-inf.mpg.de/NeRF-OSR/
Some of this code comes from https://github.com/r00tman/NeRF-OSR/blob/main/data_loader_split.py
- Source data convention is:
camera coordinate system: x → right, y → down, z → scene (opencv/colmap convention)
poses are camera-to-world
masks are 0 for dynamic content, 255 for static content
- class nerfstudio.data.dataparsers.nerfosr_dataparser.NeRFOSRDataParserConfig(_target: ~typing.Type = <factory>, data: ~pathlib.Path = PosixPath('data/NeRF-OSR/Data'), scene: str = 'stjacob', scene_scale: float = 1.0, scale_factor: float = 1.0, use_masks: bool = False, orientation_method: ~typing.Literal['pca', 'up', 'vertical', 'none'] = 'vertical', center_method: ~typing.Literal['poses', 'focus', 'none'] = 'focus', auto_scale_poses: bool = True)[source]#
Bases:
DataParserConfig
NeRF-OSR dataset parser config
- auto_scale_poses: bool = True#
Whether to automatically scale the poses to fit in +/- 1 bounding box.
- center_method: Literal['poses', 'focus', 'none'] = 'focus'#
The method to use for centering.
- data: Path = PosixPath('data/NeRF-OSR/Data')#
Directory specifying location of data.
- orientation_method: Literal['pca', 'up', 'vertical', 'none'] = 'vertical'#
The method to use for orientation.
- scale_factor: float = 1.0#
How much to scale the camera origins by.
- scene: str = 'stjacob'#
Which scene to load
- scene_scale: float = 1.0#
How much to scale the region of interest by.
- use_masks: bool = False#
Whether to use masks.
- nerfstudio.data.dataparsers.nerfosr_dataparser.get_camera_params(scene_dir: str, split: Literal['train', 'validation', 'test']) Tuple[Tensor, Tensor, int] [source]#
Load camera intrinsic and extrinsic parameters for a given scene split.
- Parameters:
scene_dir – The directory containing the scene data.
split – The split for which to load the camera parameters.
- Returns:
A tuple containing the intrinsic parameters (as a torch.Tensor of shape [N, 4, 4]), the camera-to-world matrices (as a torch.Tensor of shape [N, 4, 4]), and the number of cameras (N).
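A sketch of the call (the scene directory is a placeholder); the return order follows the description above:

```python
from nerfstudio.data.dataparsers.nerfosr_dataparser import get_camera_params

intrinsics, camera_to_worlds, num_cameras = get_camera_params(
    "data/NeRF-OSR/Data/stjacob/final",  # placeholder scene directory
    split="validation",
)
print(intrinsics.shape, camera_to_worlds.shape, num_cameras)  # [N, 4, 4], [N, 4, 4], N
```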
Nerfstudio#
Data parser for nerfstudio datasets.
- class nerfstudio.data.dataparsers.nerfstudio_dataparser.Nerfstudio(config: NerfstudioDataParserConfig, includes_time: bool = False, downscale_factor: Optional[int] = None)[source]#
Bases:
DataParser
Nerfstudio DatasetParser
- class nerfstudio.data.dataparsers.nerfstudio_dataparser.NerfstudioDataParserConfig(_target: ~typing.Type = <factory>, data: ~pathlib.Path = PosixPath('.'), scale_factor: float = 1.0, downscale_factor: ~typing.Optional[int] = None, scene_scale: float = 1.0, orientation_method: ~typing.Literal['pca', 'up', 'vertical', 'none'] = 'up', center_method: ~typing.Literal['poses', 'focus', 'none'] = 'poses', auto_scale_poses: bool = True, eval_mode: ~typing.Literal['fraction', 'filename', 'interval', 'all'] = 'fraction', train_split_fraction: float = 0.9, eval_interval: int = 8, depth_unit_scale_factor: float = 0.001)[source]#
Bases:
DataParserConfig
Nerfstudio dataset config
- auto_scale_poses: bool = True#
Whether to automatically scale the poses to fit in +/- 1 bounding box.
- center_method: Literal['poses', 'focus', 'none'] = 'poses'#
The method to use to center the poses.
- data: Path = PosixPath('.')#
Directory or explicit json file path specifying location of data.
- depth_unit_scale_factor: float = 0.001#
Scales the depth values to meters. Default value is 0.001 for a millimeter to meter conversion.
- downscale_factor: Optional[int] = None#
How much to downscale images. If not set, images are chosen such that the max dimension is <1600px.
- eval_interval: int = 8#
The interval between frames to use for eval. Only used when eval_mode is "interval".
- eval_mode: Literal['fraction', 'filename', 'interval', 'all'] = 'fraction'#
The method to use for splitting the dataset into train and eval. Fraction splits based on a percentage for train and the remaining for eval. Filename splits based on filenames containing train/eval. Interval uses every nth frame for eval. All uses all the images for any split.
- orientation_method: Literal['pca', 'up', 'vertical', 'none'] = 'up'#
The method to use for orientation.
- scale_factor: float = 1.0#
How much to scale the camera origins by.
- scene_scale: float = 1.0#
How much to scale the region of interest by.
- train_split_fraction: float = 0.9#
The fraction of the dataset to use for training. Only used when eval_mode is "fraction".
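A sketch of configuring the parser for a processed capture (the data path is a placeholder), downscaling images 2x and splitting train/eval by filename:

```python
from pathlib import Path

from nerfstudio.data.dataparsers.nerfstudio_dataparser import NerfstudioDataParserConfig

config = NerfstudioDataParserConfig(
    data=Path("data/my_capture"),  # placeholder: directory containing transforms.json
    downscale_factor=2,
    eval_mode="filename",          # frames whose filenames mark them as eval are held out
)
outputs = config.setup().get_dataparser_outputs(split="train")
```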
nuScenes#
Data parser for NuScenes dataset
- class nerfstudio.data.dataparsers.nuscenes_dataparser.NuScenes(config: NuScenesDataParserConfig, includes_time: bool = False)[source]#
Bases:
DataParser
NuScenes DatasetParser
- class nerfstudio.data.dataparsers.nuscenes_dataparser.NuScenesDataParserConfig(_target: ~typing.Type = <factory>, data: ~pathlib.Path = PosixPath('scene-0103'), data_dir: ~pathlib.Path = PosixPath('/mnt/local/NuScenes'), version: ~typing.Literal['v1.0-mini', 'v1.0-trainval'] = 'v1.0-mini', cameras: ~typing.Tuple[~typing.Literal['FRONT', 'FRONT_LEFT', 'FRONT_RIGHT', 'BACK', 'BACK_LEFT', 'BACK_RIGHT'], ...] = ('FRONT',), mask_dir: ~typing.Optional[~pathlib.Path] = None, train_split_fraction: float = 0.9, verbose: bool = False)[source]#
Bases:
DataParserConfig
NuScenes dataset config. NuScenes (https://www.nuscenes.org/nuscenes) is an autonomous driving dataset containing 1000 clips of 20 seconds each. Each clip was recorded with a suite of sensors including 6 surround cameras. It also includes 3D cuboid annotations around objects. We optionally use these cuboids to mask dynamic objects by specifying the mask_dir flag. To create these masks, use nerfstudio/scripts/datasets/process_nuscenes_masks.py.
- cameras: Tuple[Literal['FRONT', 'FRONT_LEFT', 'FRONT_RIGHT', 'BACK', 'BACK_LEFT', 'BACK_RIGHT'], ...] = ('FRONT',)#
Which cameras to use.
- data: Path = PosixPath('scene-0103')#
Name of the scene.
- data_dir: Path = PosixPath('/mnt/local/NuScenes')#
Path to NuScenes dataset.
- mask_dir: Optional[Path] = None#
Path to masks of dynamic objects.
- train_split_fraction: float = 0.9#
The fraction of images to use for training. The remaining images are for eval.
- verbose: bool = False#
Load dataset with verbose messaging.
- version: Literal['v1.0-mini', 'v1.0-trainval'] = 'v1.0-mini'#
Dataset version.
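A sketch of configuring the parser for the mini split with the three front-facing cameras (the data_dir path is a placeholder for a local NuScenes download):

```python
from pathlib import Path

from nerfstudio.data.dataparsers.nuscenes_dataparser import NuScenesDataParserConfig

config = NuScenesDataParserConfig(
    data=Path("scene-0103"),          # scene name, not a filesystem path
    data_dir=Path("/data/NuScenes"),  # placeholder dataset root
    version="v1.0-mini",
    cameras=("FRONT", "FRONT_LEFT", "FRONT_RIGHT"),
)
```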
Phototourism#
Phototourism dataset parser. Datasets and documentation here: http://phototour.cs.washington.edu/datasets/
- class nerfstudio.data.dataparsers.phototourism_dataparser.Phototourism(config: PhototourismDataParserConfig)[source]#
Bases:
DataParser
Phototourism dataset. This is based on https://github.com/kwea123/nerf_pl/blob/nerfw/datasets/phototourism.py and uses colmap’s utils file to read the poses.
- class nerfstudio.data.dataparsers.phototourism_dataparser.PhototourismDataParserConfig(_target: ~typing.Type = <factory>, data: ~pathlib.Path = PosixPath('data/phototourism/brandenburg-gate'), scale_factor: float = 3.0, alpha_color: str = 'white', train_split_fraction: float = 0.9, scene_scale: float = 1.0, orientation_method: ~typing.Literal['pca', 'up', 'vertical', 'none'] = 'up', center_method: ~typing.Literal['poses', 'focus', 'none'] = 'poses', auto_scale_poses: bool = True)[source]#
Bases:
DataParserConfig
Phototourism dataset parser config
- alpha_color: str = 'white'#
Alpha color of the background.
- auto_scale_poses: bool = True#
Whether to automatically scale the poses to fit in +/- 1 bounding box.
- center_method: Literal['poses', 'focus', 'none'] = 'poses'#
The method to use to center the poses.
- data: Path = PosixPath('data/phototourism/brandenburg-gate')#
Directory specifying location of data.
- orientation_method: Literal['pca', 'up', 'vertical', 'none'] = 'up'#
The method to use for orientation.
- scale_factor: float = 3.0#
How much to scale the camera origins by.
- scene_scale: float = 1.0#
How much to scale the region of interest by.
- train_split_fraction: float = 0.9#
The fraction of images to use for training. The remaining images are for eval.
ScanNet#
Data parser for ScanNet dataset
- class nerfstudio.data.dataparsers.scannet_dataparser.ScanNet(config: ScanNetDataParserConfig, includes_time: bool = False)[source]#
Bases:
DataParser
ScanNet DatasetParser
- class nerfstudio.data.dataparsers.scannet_dataparser.ScanNetDataParserConfig(_target: ~typing.Type = <factory>, data: ~pathlib.Path = PosixPath('data/scannet/scene0423_02'), scale_factor: float = 1.0, scene_scale: float = 1.0, center_method: ~typing.Literal['poses', 'focus', 'none'] = 'poses', auto_scale_poses: bool = True, train_split_fraction: float = 0.9, depth_unit_scale_factor: float = 0.001)[source]#
Bases:
DataParserConfig
ScanNet dataset config. ScanNet dataset (https://www.scan-net.org/) is a large-scale 3D dataset of indoor scenes. This dataparser assumes that the dense stream was extracted from .sens files. Expected structure of scene directory:
root/
├── color/
├── depth/
├── intrinsic/
├── pose/
- auto_scale_poses: bool = True#
Whether to automatically scale the poses to fit in +/- 1 bounding box.
- center_method: Literal['poses', 'focus', 'none'] = 'poses'#
The method to use to center the poses.
- data: Path = PosixPath('data/scannet/scene0423_02')#
Path to ScanNet folder with densely extracted scenes.
- depth_unit_scale_factor: float = 0.001#
Scales the depth values to meters. Default value is 0.001 for a millimeter to meter conversion.
- scale_factor: float = 1.0#
How much to scale the camera origins by.
- scene_scale: float = 1.0#
How much to scale the region of interest by.
- train_split_fraction: float = 0.9#
The fraction of images to use for training. The remaining images are for eval.
SDFStudio#
Data parser for SDFStudio-formatted data
- class nerfstudio.data.dataparsers.sdfstudio_dataparser.SDFStudio(config: SDFStudioDataParserConfig, includes_time: bool = False)[source]#
Bases:
DataParser
SDFStudio Dataset
- class nerfstudio.data.dataparsers.sdfstudio_dataparser.SDFStudioDataParserConfig(_target: ~typing.Type = <factory>, data: ~pathlib.Path = PosixPath('data/DTU/scan65'), include_mono_prior: bool = False, depth_unit_scale_factor: float = 0.001, include_foreground_mask: bool = False, downscale_factor: int = 1, scene_scale: float = 2.0, skip_every_for_val_split: int = 1, auto_orient: bool = True)[source]#
Bases:
DataParserConfig
SDFStudio dataset parser config
- data: Path = PosixPath('data/DTU/scan65')#
Directory specifying location of data.
- depth_unit_scale_factor: float = 0.001#
Scales the depth values to meters. Default value is 0.001 for a millimeter to meter conversion.
- include_foreground_mask: bool = False#
Whether or not to load the foreground mask.
- include_mono_prior: bool = False#
Whether or not to load monocular depth and normal priors.
- scene_scale: float = 2.0#
Sets the bounding cube to have edge length of this size. The longest dimension of the axis-aligned bbox will be scaled to this value.
- skip_every_for_val_split: int = 1#
Sub-sampling factor for validation images.
sitcoms3D#
Data parser for sitcoms3D dataset.
The dataset is from the paper [“The One Where They Reconstructed 3D Humans and Environments in TV Shows”](https://ethanweber.me/sitcoms3D/)
- class nerfstudio.data.dataparsers.sitcoms3d_dataparser.Sitcoms3D(config: Sitcoms3DDataParserConfig, includes_time: bool = False)[source]#
Bases:
DataParser
Sitcoms3D Dataset
- class nerfstudio.data.dataparsers.sitcoms3d_dataparser.Sitcoms3DDataParserConfig(_target: ~typing.Type = <factory>, data: ~pathlib.Path = PosixPath('data/sitcoms3d/TBBT-big_living_room'), include_semantics: bool = True, downscale_factor: int = 4, scene_scale: float = 2.0)[source]#
Bases:
DataParserConfig
sitcoms3D dataset parser config
- data: Path = PosixPath('data/sitcoms3d/TBBT-big_living_room')#
Directory specifying location of data.
- include_semantics: bool = True#
Whether or not to include loading of semantics data.
- scene_scale: float = 2.0#
Sets the bounding cube to have edge length of this size. The longest dimension of the Sitcoms3D axis-aligned bbox will be scaled to this value.