computational_config
Computational configuration for 2DTM.
BaseComputationalConfig
Bases: BaseModel
Base class for computational configuration with shared GPU device handling.
Source code in src/leopard_em/pydantic_models/config/computational_config.py
gpu_devices
property
Convert requested GPU IDs to torch device objects.
Returns:

| Type | Description |
|---|---|
| `list[device]` | The `torch.device` objects corresponding to the requested GPU IDs. |
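The actual conversion logic lives in the library source; as a rough, self-contained sketch of the mapping described above (the `normalize_gpu_ids` helper is hypothetical, not part of the leopard_em API, and plain device strings stand in for `torch.device` objects):

```python
# Hypothetical sketch of how gpu_ids values could be normalized to
# torch-style device specifier strings. Not the leopard_em implementation.
def normalize_gpu_ids(gpu_ids, cuda_device_count=0):
    """Map an accepted gpu_ids value to a list of device specifier strings."""
    if gpu_ids == "cpu":
        return ["cpu"]
    if gpu_ids == "all":
        # "all" expands to every visible CUDA device
        return [f"cuda:{i}" for i in range(cuda_device_count)]
    if not isinstance(gpu_ids, list):
        gpu_ids = [gpu_ids]
    devices = []
    for entry in gpu_ids:
        if isinstance(entry, int):
            devices.append(f"cuda:{entry}")
        elif isinstance(entry, str) and entry.startswith("cuda:"):
            devices.append(entry)
        else:
            raise ValueError(f"Unrecognized GPU specifier: {entry!r}")
    return devices
```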
ComputationalConfigMatch
Bases: BaseComputationalConfig
Serialization of computational resources allocated for 2DTM.
NOTE: The field `gpu_ids` is not validated at instantiation beyond checking that it is one of the valid types. For example, if `"cuda:0"` is specified but no CUDA device is available, instantiation will still succeed; an error is raised only when `gpu_ids` is translated into a list of `torch.device` objects. This allows configuration files to be loaded without requiring the actual hardware to be present at load time.
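This deferred-validation behavior can be illustrated with a minimal, self-contained sketch (plain Python, not the actual pydantic model; `DeferredGpuConfig` and `resolve_devices` are hypothetical names):

```python
# Minimal sketch of deferred GPU validation: construction only checks the
# value's type; hardware availability is checked when devices are resolved.
# This mirrors the behavior described above but is not the leopard_em code.
class DeferredGpuConfig:
    def __init__(self, gpu_ids):
        if not isinstance(gpu_ids, (int, str, list)):
            raise TypeError("gpu_ids must be int, str, or list")
        self.gpu_ids = gpu_ids  # no hardware check here

    def resolve_devices(self, cuda_device_count=0):
        """Resolve gpu_ids to device strings; fails if hardware is missing."""
        ids = self.gpu_ids if isinstance(self.gpu_ids, list) else [self.gpu_ids]
        devices = []
        for entry in ids:
            # Accept either an int (0) or a "cuda:N" specifier string
            index = entry if isinstance(entry, int) else int(str(entry).split(":")[1])
            if index >= cuda_device_count:
                raise RuntimeError(f"cuda:{index} requested but not available")
            devices.append(f"cuda:{index}")
        return devices

# Instantiation succeeds even with no CUDA hardware present...
config = DeferredGpuConfig("cuda:0")
# ...the error only surfaces when the devices are actually resolved:
try:
    config.resolve_devices(cuda_device_count=0)
except RuntimeError as err:
    print(err)  # cuda:0 requested but not available
```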
Attributes:

| Name | Type | Description |
|---|---|---|
| `gpu_ids` | `Optional[Union[int, list[int], str, list[str]]]` | Specifies which GPUs to use for computation. Allowed values: a single integer, e.g. `0` (use GPU 0); a list of integers, e.g. `[0, 2]` (use GPUs 0 and 2); a device specifier string, e.g. `"cuda:0"`; a list of device specifier strings, e.g. `["cuda:0", "cuda:1"]`; the string `"all"` (use all GPUs found by `torch.cuda.device_count()`); or the string `"cpu"` (use the CPU). |
| `num_cpus` | `int` | Total number of CPUs to use. Defaults to `1`. |
| `backend` | `Optional[str]` | The backend to use for match template. Must be `"streamed"` or `"batched"`. Defaults to `"streamed"`. |
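A configuration mapping using these fields might look like the following sketch. The field names come from the table above; the values are illustrative, and loading such a dict via `ComputationalConfigMatch(**match_config)` is assumed to follow pydantic's usual keyword-based construction rather than being shown from the library itself:

```python
# Sketch of a configuration mapping with the fields documented above.
# Field names are from the attributes table; values are illustrative.
match_config = {
    "gpu_ids": [0, 1],      # also valid: 0, "cuda:0", ["cuda:0", "cuda:1"], "all", "cpu"
    "num_cpus": 8,          # total CPUs to use (defaults to 1)
    "backend": "streamed",  # must be "streamed" or "batched"
}

# With leopard_em installed, this could presumably be loaded via
# ComputationalConfigMatch(**match_config); that call is not made here.
```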
Source code in src/leopard_em/pydantic_models/config/computational_config.py
ComputationalConfigRefine
Bases: BaseComputationalConfig
Serialization of computational resources allocated for 2DTM.
NOTE: The field `gpu_ids` is not validated at instantiation beyond checking that it is one of the valid types. For example, if `"cuda:0"` is specified but no CUDA device is available, instantiation will still succeed; an error is raised only when `gpu_ids` is translated into a list of `torch.device` objects. This allows configuration files to be loaded without requiring the actual hardware to be present at load time.
Attributes:

| Name | Type | Description |
|---|---|---|
| `gpu_ids` | `Optional[Union[int, list[int], str, list[str]]]` | Specifies which GPUs to use for computation. Allowed values: a single integer, e.g. `0` (use GPU 0); a list of integers, e.g. `[0, 2]` (use GPUs 0 and 2); a device specifier string, e.g. `"cuda:0"`; a list of device specifier strings, e.g. `["cuda:0", "cuda:1"]`; the string `"all"` (use all GPUs found by `torch.cuda.device_count()`); or the string `"cpu"` (use the CPU). |
| `num_cpus` | `int` | Total number of CPUs to use. Defaults to `1`. |
| `backend` | `Optional[str]` | The backend to use for match template. Must be `"streamed"` or `"batched"`. Defaults to `"streamed"`. |
Source code in src/leopard_em/pydantic_models/config/computational_config.py