Configuration
ropt.context
The ropt.context module provides the context class used by optimization workflows.
EnOptContext
Bases: BaseModel
The primary context object for a single optimization run.
EnOptContext holds all information needed to run an ensemble-based
optimization: variables, objectives, constraints, realizations, gradient
settings, samplers, filters, and the optimizer/backend. It is constructed
from plain Python dicts or config objects and validated on creation.
Index-based sharing
All tuple-based plugin fields (realization_filters, function_estimators,
samplers, variable_transforms, objective_transforms, and
nonlinear_constraint_transforms) are referenced by index from other config
fields. For example, the samplers field of
VariablesConfig is an integer array whose
values index into the samplers tuple — use all zeros when a single sampler
is shared across all variables, or distinct indices when different samplers
are needed per variable. The same pattern applies to transform indices in
VariablesConfig,
ObjectiveFunctionsConfig, and
NonlinearConstraintsConfig.
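As an illustration of this index pattern, here is a sketch using plain numpy and hypothetical sampler placeholders (not the actual ropt classes):

```python
import numpy as np

# Hypothetical stand-ins for two configured sampler plugin instances.
samplers = ("norm_sampler", "uniform_sampler")

# One entry per variable; each value indexes into the samplers tuple.
# All zeros: every variable shares samplers[0].
shared = np.zeros(4, dtype=np.intc)

# Mixed indices: variables 0 and 1 use samplers[0], variables 2 and 3
# use samplers[1].
mixed = np.array([0, 0, 1, 1])

per_variable = [samplers[i] for i in mixed]
```

The same indexing scheme applies to the transform and filter tuples: the integer array lives in the referencing config, and the tuple of plugin instances lives in the context object.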
Optional names
The names attribute maps axis types (see AxisName)
to ordered sequences of labels for variables, objectives, and constraints.
It is not required for the optimization itself, but when present it is used
to produce labelled multi-index results in exported data frames.
Plugin instances
The backend field and all tuple-based plugin fields (realization_filters,
function_estimators, samplers, variable_transforms,
objective_transforms, and nonlinear_constraint_transforms) store plugin
instances. Instead of constructing instances manually, these fields can be
initialized with a configuration object or a plain dict of settings — Pydantic
will resolve and instantiate the appropriate plugin automatically. Each config
class has a method field that selects the plugin implementation. The
configuration classes are defined in the ropt.config
sub-package.
Broadcasting
Many nested config classes represent per-variable or per-objective
properties (e.g., bounds, perturbation magnitudes) as numpy arrays. A
size-1 array is broadcast to all elements; otherwise the array length must
match the count of the corresponding entities.
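The broadcasting rule can be sketched with a small helper (a hypothetical stand-in for the validation ropt performs, not its actual code):

```python
import numpy as np

def broadcast_field(values, size):
    """Broadcast a size-1 array to `size`; otherwise require an exact match."""
    arr = np.asarray(values, dtype=np.float64)
    if arr.size == 1:
        return np.broadcast_to(arr, (size,)).copy()
    if arr.size != size:
        raise ValueError(f"expected {size} values, got {arr.size}")
    return arr

n_vars = 5
lower = broadcast_field([-1.0], n_vars)           # size-1 array repeated 5 times
upper = broadcast_field([1, 2, 3, 4, 5], n_vars)  # exact length passes through
```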
Warning
EnOptContext objects are immutable after construction. Do not attempt
to serialize and round-trip them (e.g., to/from JSON): numpy arrays
and plugin instances cannot survive a round-trip faithfully. Persist the
raw input dicts instead.
Attributes:

| Name | Type | Description |
|---|---|---|
| variables | VariablesConfig | Variable settings. |
| objectives | ObjectiveFunctionsConfig | Objective function settings. |
| linear_constraints | LinearConstraintsConfig \| None | Optional linear constraint settings. |
| nonlinear_constraints | NonlinearConstraintsConfig \| None | Optional nonlinear constraint settings. |
| realizations | RealizationsConfig | Ensemble realization settings. |
| optimizer | OptimizerConfig | Optimizer settings. |
| backend | BackendInstance | Backend plugin instance used for function evaluations. |
| gradient | GradientConfig | Gradient estimation settings. |
| realization_filters | tuple[RealizationFilterInstance, ...] | Tuple of realization filter plugin instances. |
| function_estimators | tuple[FunctionEstimatorInstance, ...] | Tuple of function estimator plugin instances. |
| samplers | tuple[SamplerInstance, ...] | Tuple of sampler plugin instances. |
| variable_transforms | tuple[VariableTransformInstance, ...] | Tuple of variable transform plugin instances. |
| objective_transforms | tuple[ObjectiveTransformInstance, ...] | Tuple of objective transform plugin instances. |
| nonlinear_constraint_transforms | tuple[NonlinearConstraintTransformInstance, ...] | Tuple of nonlinear constraint transform plugin instances. |
| names | dict[str, tuple[str \| int, ...]] | Optional mapping of axis names to label sequences. |
lock
Lock the object to prevent sharing and re-use.
Raises:

| Type | Description |
|---|---|
| RuntimeError | If the object is already locked. |
ropt.config
Configuration classes for ensemble-based optimization.
The ropt.config module provides Pydantic-based configuration classes that
collectively define a complete optimization setup. These classes are used to
construct an EnOptContext object, which serves
as the in-memory configuration for a single optimization run.
VariablesConfig
Configuration class for optimization variables.
VariablesConfig defines optimization variable settings for an
EnOptContext object: bounds, types, mask, and
perturbation settings.
The variable_count field is required and determines the number of variables,
including both free and fixed variables.
The lower_bounds and upper_bounds fields define the bounds for each
variable. These are also numpy arrays and are broadcast to match the
number of variables. By default, they are set to negative and positive
infinity, respectively. numpy.nan values in these arrays indicate
unbounded variables and are converted to numpy.inf with the appropriate
sign.
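The nan-to-infinity conversion can be sketched as follows (a hypothetical helper, not the actual ropt validator):

```python
import numpy as np

def normalize_bounds(lower, upper):
    """Replace nan entries with signed infinity, marking them unbounded."""
    lower = np.asarray(lower, dtype=np.float64).copy()
    upper = np.asarray(upper, dtype=np.float64).copy()
    lower[np.isnan(lower)] = -np.inf   # nan lower bound means no lower bound
    upper[np.isnan(upper)] = np.inf    # nan upper bound means no upper bound
    return lower, upper

lo, up = normalize_bounds([0.0, np.nan], [np.nan, 10.0])
```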
The optional types field allows assigning a
VariableType to each variable. If not provided,
all variables are assumed to be continuous real-valued
(VariableType.REAL).
The optional mask field is a boolean numpy array that indicates which
variables are free to change during optimization. True values in the mask
indicate that the corresponding variable is free, while False indicates a
fixed variable.
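For example, with plain numpy, a boolean mask splits the variable vector into free and fixed parts:

```python
import numpy as np

variables = np.array([0.1, 0.2, 0.3, 0.4])
mask = np.array([True, False, True, False])  # True = free to optimize

free_values = variables[mask]    # values the optimizer may change
fixed_values = variables[~mask]  # held constant during optimization
```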
Variable perturbations
The VariablesConfig class also stores information that is needed to
generate perturbed variables, for instance to calculate stochastic
gradients.
Perturbations are generated by Sampler instances
that are configured as a tuple in
EnOptContext.samplers. The samplers field
of VariablesConfig specifies, for each variable, the sampler to use by its
index into the samplers tuple. A random number generator is created to
support samplers that require random numbers.
The generated perturbation values are scaled by the values of the
perturbation_magnitudes field and can be modified based on the
perturbation_types. See PerturbationType
for details on available perturbation types.
Perturbed variables may violate the defined variable bounds. The
boundary_types field specifies how to handle such violations. See
BoundaryType for details on available boundary
handling methods.
The perturbation_types and boundary_types fields use values from the
PerturbationType and
BoundaryType enumerations, respectively.
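As a rough sketch of mirror-style boundary handling (one plausible implementation; the exact behavior of ropt's BoundaryType methods may differ):

```python
import numpy as np

def mirror(values, lower, upper):
    """Reflect out-of-bounds values back into [lower, upper].

    A plausible sketch only: fold the value into a sawtooth of period
    2 * (upper - lower), then reflect the upper half of the period.
    """
    span = upper - lower
    folded = np.mod(values - lower, 2.0 * span)
    return lower + np.where(folded > span, 2.0 * span - folded, folded)

# -0.5 and 1.5 both reflect back to 0.5 within [0, 1]; 0.5 is unchanged.
out = mirror(np.array([-0.5, 0.5, 1.5]), 0.0, 1.0)
```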
Seed for Samplers
The seed value ensures consistent results across repeated runs with
the same configuration. To obtain unique results for each optimization
run, modify the seed. A common approach is to use a tuple with a unique
ID as the first element, ensuring reproducibility across nested and
parallel evaluations.
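For example, numpy's default_rng accepts a sequence as its seed, so a unique run ID can be prepended to a base seed (the values below are hypothetical):

```python
import numpy as np

base_seed = 1234  # hypothetical base seed

# Prepending a unique run ID keeps streams reproducible but distinct per run.
rng_run1 = np.random.default_rng((1, base_seed))
rng_run2 = np.random.default_rng((2, base_seed))
rng_run1_again = np.random.default_rng((1, base_seed))

a = rng_run1.random(3)       # run 1
b = rng_run2.random(3)       # run 2: different stream
c = rng_run1_again.random(3) # run 1 repeated: identical stream
```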
Attributes:

| Name | Type | Description |
|---|---|---|
| types | ArrayEnum | Optional variable types. |
| variable_count | int | Number of variables. |
| lower_bounds | Array1D | Lower bounds for the variables (default: \(-\infty\)). |
| upper_bounds | Array1D | Upper bounds for the variables (default: \(+\infty\)). |
| mask | Array1DBool | Optional boolean mask indicating free variables. |
| perturbation_magnitudes | Array1D | Magnitudes of the perturbations for each variable (default: DEFAULT_PERTURBATION_MAGNITUDE). |
| perturbation_types | ArrayEnum | Type of perturbation for each variable (see PerturbationType). |
| boundary_types | ArrayEnum | How to handle perturbations that violate boundary conditions (see BoundaryType). |
| samplers | Array1DInt | Indices of the samplers to use for each variable. |
| seed | ItemOrTuple[int] | Seed for the random number generator used by the samplers. |
| transforms | Array1DInt | Indices of the variable transforms to apply for each variable. |
ObjectiveFunctionsConfig
Configuration class for objective functions.
ObjectiveFunctionsConfig defines objective function settings for an
EnOptContext object.
ropt supports multi-objective optimization. Multiple objectives are
combined into a single value by summing them after weighting. The weights
field, a numpy array, determines the weight of each objective function.
The length of this array defines the number of objective functions. The
weights are automatically normalized to sum to 1 (e.g., [1, 1] becomes
[0.5, 0.5]).
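For example, the normalization and weighted combination can be written as:

```python
import numpy as np

weights = np.array([1.0, 1.0, 2.0])
normalized = weights / weights.sum()     # [0.25, 0.25, 0.5]

# Hypothetical objective values, combined into a single scalar.
objectives = np.array([3.0, 5.0, 1.0])
total = float(normalized @ objectives)   # weighted sum
```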
Objective functions can optionally be processed using realization
filters, function
estimators, and
transforms objects. The
realization_filters, function_estimators, and transforms fields are
integer index arrays: each entry selects an object by its position in the
corresponding tuple defined in EnOptContext.
An out-of-range index means no object is applied to that objective. If a
field uses its default value of -1, no object is applied at all.
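The selection rule can be sketched as follows (hypothetical estimator names; not the actual ropt lookup code):

```python
# Hypothetical stand-ins for two configured function estimator instances.
estimators = ("mean_estimator", "stddev_estimator")

def lookup(index):
    """Return the estimator for a valid index, or None when out of range."""
    if 0 <= index < len(estimators):
        return estimators[index]
    return None  # out-of-range (including the default -1): nothing applied

selected = [lookup(i) for i in (-1, 0, 1, 5)]
```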
Attributes:

| Name | Type | Description |
|---|---|---|
| weights | Array1D | Weights for the objective functions (default: 1.0). |
| realization_filters | Array1DInt | Optional indices of realization filters. |
| function_estimators | Array1DInt | Optional indices of function estimators. |
| transforms | Array1DInt | Optional indices of objective transforms. |
LinearConstraintsConfig
Configuration class for linear constraints.
LinearConstraintsConfig defines linear constraints used as the
linear_constraints field of an
EnOptContext object.
Linear constraints are defined by a set of linear equations involving the
optimization variables. These equations can represent equality or inequality
constraints. The coefficients field is a 2D numpy array where each row
represents a constraint, and each column corresponds to a variable.
The lower_bounds and upper_bounds fields specify the bounds on the
right-hand side of each constraint equation. These fields are converted and
broadcast to numpy arrays with a length equal to the number of
constraint equations.
Less-than and greater-than inequality constraints can be specified by setting the lower bounds to \(-\infty\), or the upper bounds to \(+\infty\), respectively. Equality constraints are specified by setting the lower bounds equal to the upper bounds.
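For example, a mixed set of constraints on two variables could be expressed as (hypothetical values):

```python
import numpy as np

# Three constraints on A @ x:
#   x0 + x1 <= 1      (upper bound only)
#   x0 - x1 >= -2     (lower bound only)
#   2*x0    == 0.5    (equality: lower == upper)
coefficients = np.array([[1.0, 1.0], [1.0, -1.0], [2.0, 0.0]])
lower_bounds = np.array([-np.inf, -2.0, 0.5])
upper_bounds = np.array([1.0, np.inf, 0.5])

x = np.array([0.25, 0.5])
values = coefficients @ x
feasible = bool(np.all((values >= lower_bounds) & (values <= upper_bounds)))
```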
Attributes:

| Name | Type | Description |
|---|---|---|
| coefficients | Array2D | Matrix of coefficients for the linear constraints. |
| lower_bounds | Array1D | Lower bounds for the right-hand side of the constraint equations. |
| upper_bounds | Array1D | Upper bounds for the right-hand side of the constraint equations. |
Linear transformation of variables.
The set of linear constraints can be represented by a matrix equation: \(\mathbf{A} \mathbf{x} = \mathbf{b}\).
When linearly transforming variables to the optimizer domain, the coefficients (\(\mathbf{A}\)) and right-hand-side values (\(\mathbf{b}\)) must be converted to remain valid. If the linear transformation of the variables to the optimizer domain is given by the affine mapping:

\[
\hat{\mathbf{x}} = \mathbf{S} \mathbf{x} + \mathbf{o},
\]

then, substituting \(\mathbf{x} = \mathbf{S}^{-1}(\hat{\mathbf{x}} - \mathbf{o})\), the coefficients and right-hand-side values must be transformed as follows:

\[
\hat{\mathbf{A}} = \mathbf{A} \mathbf{S}^{-1}, \qquad
\hat{\mathbf{b}} = \mathbf{b} + \mathbf{A} \mathbf{S}^{-1} \mathbf{o}.
\]
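As a sketch, assuming the transformation to the optimizer domain is affine, \(\hat{\mathbf{x}} = \mathbf{S}\mathbf{x} + \mathbf{o}\), a quick numpy check confirms that \(\hat{\mathbf{A}} = \mathbf{A}\mathbf{S}^{-1}\) and \(\hat{\mathbf{b}} = \mathbf{b} + \mathbf{A}\mathbf{S}^{-1}\mathbf{o}\) keep the constraint system valid (hypothetical matrices):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((3, 4))
x = rng.random(4)
b = A @ x  # x satisfies A x = b by construction

# An assumed affine transformation to the optimizer domain: x_hat = S x + o.
S = np.diag([2.0, 0.5, 4.0, 1.0])
o = np.array([1.0, -1.0, 0.0, 2.0])
x_hat = S @ x + o

# Transformed system: A_hat x_hat = b_hat.
S_inv = np.linalg.inv(S)
A_hat = A @ S_inv
b_hat = b + A @ S_inv @ o

# The transformed point satisfies the transformed constraints.
residual = float(np.max(np.abs(A_hat @ x_hat - b_hat)))
```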
NonlinearConstraintsConfig
Configuration class for non-linear constraints.
NonlinearConstraintsConfig defines nonlinear constraints used as the
nonlinear_constraints field of an
EnOptContext object.
Nonlinear constraints are defined by comparing a constraint function to a
right-hand-side value, allowing for equality or inequality constraints. The
lower_bounds and upper_bounds fields, which are numpy arrays, specify
the bounds on these right-hand-side values. The length of these arrays
determines the number of constraint functions.
Less-than and greater-than inequality constraints can be specified by setting the lower bounds to \(-\infty\), or the upper bounds to \(+\infty\), respectively. Equality constraints are specified by setting the lower bounds equal to the upper bounds.
Constraint functions can optionally be processed using realization
filters and function
estimators objects. The
realization_filters and function_estimators fields are integer index
arrays: each entry selects a filter or estimator by its position in the
corresponding tuple defined in EnOptContext.
An out-of-range index means no filter or estimator is applied to that
constraint. If a field uses its default value of -1, no filter or
estimator is applied at all.
Attributes:

| Name | Type | Description |
|---|---|---|
| lower_bounds | Array1D | Lower bounds for the right-hand-side values. |
| upper_bounds | Array1D | Upper bounds for the right-hand-side values. |
| realization_filters | Array1DInt | Optional indices of realization filters. |
| function_estimators | Array1DInt | Optional indices of function estimators. |
| transforms | Array1DInt | Optional indices of constraint transforms. |
RealizationsConfig
Configuration class for realizations.
RealizationsConfig defines realization ensemble settings for an
EnOptContext object.
To optimize an ensemble of functions, a set of realizations is defined. When the optimizer requests a function value or a gradient, these are calculated for each realization and then combined into a single value. Typically, this combination is a weighted sum, but other methods are possible.
The weights field, a numpy array, determines the weight of each
realization. The length of this array defines the number of realizations. The
weights are automatically normalized to sum to 1 (e.g., [1, 1] becomes
[0.5, 0.5]).
If function value calculations for some realizations fail (e.g., due to a
simulation error), the total function and gradient values can still be
calculated by excluding the missing values. However, a minimum number of
successful realizations may be required. The realization_min_success field
specifies this minimum. By default, it is set equal to the number of
realizations, meaning no missing values are allowed.
Note
Setting realization_min_success to zero allows the optimization to
proceed even if all realizations fail. While some optimizers can handle
this, most will treat it as if the value were one, requiring at least
one successful realization.
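A sketch of this behavior (hypothetical threshold and weights; not the actual ropt implementation):

```python
import numpy as np

weights = np.array([0.25, 0.25, 0.25, 0.25])
values = np.array([3.0, np.nan, 5.0, 7.0])  # nan marks a failed realization
realization_min_success = 3                 # hypothetical threshold

success = ~np.isnan(values)
if int(success.sum()) < realization_min_success:
    raise RuntimeError("too few successful realizations")

# Renormalize the surviving weights and combine the remaining values.
active = weights[success] / weights[success].sum()
total = float(active @ values[success])
```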
Attributes:

| Name | Type | Description |
|---|---|---|
| weights | Array1D | Weights for the realizations (default: 1.0). |
| realization_min_success | NonNegativeInt \| None | Minimum number of successful realizations (default: equal to the number of realizations). |
OptimizerConfig
Configuration class for the optimization algorithm.
OptimizerConfig defines workflow-level settings for an optimization run,
configured as the optimizer field of
EnOptContext.
- max_batches: This limit restricts the total number of calls made to the evaluation function provided to ropt. An optimizer might request a batch containing multiple function and/or gradient evaluations within a single call. max_batches limits how many such batch requests are processed sequentially. This is particularly useful for managing resource usage when batches are evaluated in parallel (e.g., on an HPC cluster), as it controls the number of sequential submission steps. The number of batches does not necessarily correspond directly to the number of optimizer iterations, especially if function and gradient evaluations occur in separate batches.
- max_functions: This imposes a hard limit on the total number of individual objective function evaluations performed across all batches. Since a single batch evaluation (limited by max_batches) can involve multiple function evaluations, setting max_functions provides more granular control over the total computational effort spent on function calls. It can serve as an alternative stopping criterion if the backend optimizer doesn't support max_iterations, or if you need to strictly limit the function evaluation count. Note that exceeding this limit might cause the optimization to terminate mid-batch, potentially earlier than a corresponding max_batches limit would.
- output_dir: An optional output directory where the optimizer can store files.
- stdout: Redirect optimizer standard output to the given file.
- stderr: Redirect optimizer standard error to the given file.
Attributes:

| Name | Type | Description |
|---|---|---|
| max_functions | PositiveInt \| None | Maximum number of function evaluations (optional). |
| max_batches | PositiveInt \| None | Maximum number of batch evaluations (optional). |
| output_dir | Path \| None | Output directory for the optimizer (optional). |
| stdout | Path \| None | File to redirect optimizer standard output (optional). |
| stderr | Path \| None | File to redirect optimizer standard error (optional). |
BackendConfig
Configuration class for the optimization backend.
BackendConfig defines the configuration for the optimization algorithm
used by an optimization backend plugin. The method field selects the
algorithm using a "plugin/method" string (e.g. "scipy/default").
While optimization methods can have diverse parameters, this class provides a standardized set of settings that are commonly used and forwarded to the backend:
- max_iterations: The maximum number of iterations allowed. The exact definition depends on the optimizer backend, and it may be ignored.
- convergence_tolerance: The convergence tolerance used as a stopping criterion. The exact definition depends on the optimizer, and it may be ignored.
- parallel: If True, allows the optimizer to use parallelized function evaluations. This typically applies to gradient-free methods and may be ignored.
- options: A dictionary or list of strings for generic optimizer options. The required format and interpretation depend on the specific optimization method.
Attributes:

| Name | Type | Description |
|---|---|---|
| method | str | Name of the optimization method. |
| max_iterations | PositiveInt \| None | Maximum number of iterations (optional). |
| convergence_tolerance | NonNegativeFloat \| None | Convergence tolerance (optional). |
| parallel | bool | Allow parallelized function evaluations (default: False). |
| options | dict[str, Any] \| list[str] \| None | Generic options for the optimizer (optional). |
GradientConfig
Configuration class for gradient calculations.
GradientConfig specifies how gradients are estimated in gradient-based
optimizers. It is used as the gradient field of
EnOptContext.
Gradients are estimated using function values calculated from perturbed
variables and the unperturbed variables. The number_of_perturbations field
determines the number of perturbed variables used, which must be at least
one.
If function evaluations for some perturbed variables fail, the gradient may
still be estimated as long as a minimum number of evaluations succeed. The
perturbation_min_success field specifies this minimum. By default, it
equals number_of_perturbations.
Gradients are calculated for each realization individually and then combined
into a total gradient. If number_of_perturbations is low, or even just
one, individual gradient calculations may be unreliable. In this case,
setting merge_realizations to True directs the optimizer to combine the
results of all realizations directly into a single gradient estimate.
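As a hedged sketch of the per-realization approach, using hypothetical linear objectives and a least-squares fit (only one possible estimation scheme; the actual ropt estimators may differ):

```python
import numpy as np

rng = np.random.default_rng(42)
n_vars, n_perts, n_real = 3, 8, 4

# Hypothetical linear objectives: one coefficient vector per realization.
coeffs = rng.random((n_real, n_vars))
x0 = rng.random(n_vars)

# Perturbations shared across realizations for simplicity.
deltas = rng.normal(scale=0.1, size=(n_perts, n_vars))
real_weights = np.full(n_real, 1.0 / n_real)

def estimate_gradient(c):
    """Least-squares gradient estimate from perturbed-minus-base differences."""
    diffs = (x0 + deltas) @ c - x0 @ c
    grad, *_ = np.linalg.lstsq(deltas, diffs, rcond=None)
    return grad

# Default behavior: estimate per realization, then combine with the
# realization weights; merge_realizations would instead pool all samples
# into a single estimate.
per_realization = np.array([estimate_gradient(c) for c in coeffs])
combined = real_weights @ per_realization
```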
The evaluation_policy option controls how and when objective functions and
gradients are calculated. It accepts one of three string values:
"speculative": Evaluate the gradient whenever the objective function is requested, even if the optimizer hasn't explicitly asked for the gradient at that point. This approach can potentially improve load balancing on HPC clusters by initiating gradient work earlier, though its effectiveness depends on whether the optimizer can utilize these speculatively computed gradients."separate": Always launch function and gradient evaluations as distinct operations, even if the optimizer requests both simultaneously. This is particularly useful when employing realization filters (seeRealizationFilterConfig) that might disable certain realizations, as it can potentially reduce the number of gradient evaluations needed."auto": Evaluate functions and/or gradients strictly according to the optimizer's requests. Calculations are performed only when the optimization algorithm explicitly requires them.
Attributes:

| Name | Type | Description |
|---|---|---|
| number_of_perturbations | PositiveInt | Number of perturbations (default: DEFAULT_NUMBER_OF_PERTURBATIONS). |
| perturbation_min_success | PositiveInt \| None | Minimum number of successful function evaluations for perturbed variables (default: equal to number_of_perturbations). |
| merge_realizations | bool | Merge all realizations for the final gradient calculation (default: False). |
| evaluation_policy | Literal['speculative', 'separate', 'auto'] | How to evaluate functions and gradients. |
FunctionEstimatorConfig
Configuration class for function estimators.
FunctionEstimatorConfig configures a function estimator plugin, which
controls how objective and constraint function values (and their gradients)
are combined across realizations. By default, a weighted sum over
realizations is used; function estimators allow replacing that with a
different combination method.
The method field selects the estimator using a "plugin/method" string
(e.g. "default/default"). The options field passes additional
configuration to the selected method.
Attributes:

| Name | Type | Description |
|---|---|---|
| method | str | Name of the function estimator method. |
| options | dict[str, Any] | Dictionary of options for the function estimator. |
RealizationFilterConfig
Configuration class for realization filters.
RealizationFilterConfig configures a
RealizationFilter plugin that
adjusts per-realization weights. Realization filters are configured as a
tuple in the realization_filters field of
EnOptContext. Objectives and constraints
reference a specific filter by its index in that tuple.
By default, objective and constraint functions, as well as their gradients, are calculated as a weighted function of all realizations. Realization filters provide a way to modify the weights of individual realizations. For example, they can be used to select a subset of realizations for calculating the final objective and constraint functions and their gradients by setting the weights of the other realizations to zero.
The method field specifies the realization filter method to use for
adjusting the weights. The options field allows passing a dictionary of
key-value pairs to further configure the chosen method. The interpretation
of these options depends on the selected method.
Attributes:

| Name | Type | Description |
|---|---|---|
| method | str | Name of the realization filter method. |
| options | dict[str, Any] | Dictionary of options for the realization filter. |
SamplerConfig
Configuration class for samplers.
SamplerConfig configures a Sampler plugin that
generates variable perturbations for gradient estimation. Samplers are
configured as a tuple in the samplers field of
EnOptContext. Variables reference a specific
sampler by its index in that tuple via their samplers field.
Samplers generate perturbations added to variables for gradient calculations. These perturbations can be deterministic or stochastic.
The method field specifies the sampler method to use for generating
perturbations. The options field allows passing a dictionary of key-value
pairs to further configure the chosen method. The interpretation of these
options depends on the selected method.
By default, each realization uses a different set of perturbed variables.
Setting the shared flag to True directs the sampler to use the same set
of perturbed values for all realizations.
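The difference can be sketched with numpy (hypothetical shapes; not the actual sampler code):

```python
import numpy as np

n_real, n_perts, n_vars = 3, 2, 4
rng = np.random.default_rng(7)

# shared=False: an independent draw for every realization.
independent = rng.normal(size=(n_real, n_perts, n_vars))

# shared=True: one draw, reused by every realization.
single = rng.normal(size=(n_perts, n_vars))
shared = np.broadcast_to(single, (n_real, n_perts, n_vars))
```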
Attributes:

| Name | Type | Description |
|---|---|---|
| method | str | Name of the sampler method. |
| options | dict[str, Any] | Dictionary of options for the sampler. |
| shared | bool | Whether to share perturbation values between realizations (default: False). |
ropt.config.constants
Default values used by the configuration classes.
DEFAULT_SEED
module-attribute
Default seed for random number generators.
The seed is used as the base value for random number generators within various components of the optimization process, such as samplers. Using a consistent seed ensures reproducibility across multiple runs with the same configuration. To obtain unique results for each optimization run, modify this seed.
DEFAULT_NUMBER_OF_PERTURBATIONS
module-attribute
Default number of perturbations for gradient estimation.
This value defines the default number of perturbed variables used to estimate gradients. A higher number of perturbations can lead to more accurate gradient estimates but also increases the number of function evaluations required.
DEFAULT_PERTURBATION_MAGNITUDE
module-attribute
Default magnitude for variable perturbations.
This value specifies the default value of the scaling factor applied to the perturbation values generated by samplers. The magnitude can be interpreted as an absolute value or as a relative value, depending on the selected perturbation type.
See also: PerturbationType.
DEFAULT_PERTURBATION_BOUNDARY_TYPE
module-attribute
Default perturbation boundary handling type.
This value determines how perturbations that violate the defined variable bounds
are handled. The default, BoundaryType.MIRROR_BOTH, mirrors perturbations back
into the valid range if they exceed either the lower or upper bound.
See also: BoundaryType.
DEFAULT_PERTURBATION_TYPE
module-attribute
Default perturbation type.
This value determines how the perturbation magnitude is interpreted. The
default, PerturbationType.ABSOLUTE, means that the perturbation magnitude is
added directly to the variable value. Other options, such as
PerturbationType.RELATIVE, scale the perturbation magnitude based on the
variable's bounds.
See also: PerturbationType.
ropt.config.options
This module defines utilities for validating plugin options.
This module provides classes and functions to define and validate options for plugins. It uses Pydantic to create models that represent the schema of plugin options, allowing for structured and type-safe configuration.
Classes:

| Name | Description |
|---|---|
| OptionsSchemaModel | Represents the overall schema for plugin options. |
| MethodSchemaModel | Represents the schema for a specific method within a plugin, including its name and options. |
OptionsSchemaModel
Bases: BaseModel
Represents the overall schema for plugin options.
This class defines the structure for describing the methods and options
available for a plugin. The methods are described by
[MethodSchemaModel][ropt.config.options.MethodSchemaModel] objects, each
describing a method supported by the plugin.
Attributes:

| Name | Type | Description |
|---|---|---|
| methods | dict[str, MethodSchemaModel[Any]] | Schemas of the methods supported by the plugin. |
Example:

```python
from ropt.config.options import OptionsSchemaModel

schema = OptionsSchemaModel.model_validate(
    {
        "methods": [
            {"options": {"a": float}},
            {"options": {"b": int | str}},
        ]
    }
)

options = schema.get_options_model("method")
print(options.model_validate({"a": 1.0, "b": 1}))  # a=1.0 b=1
```
get_options_model
Creates a Pydantic model for validating options of a specific method.
This method dynamically generates a Pydantic model tailored to validate
the options associated with a given method. It iterates through the
defined methods, collecting option schemas from those matching the
specified method name. The resulting model can then be used to
validate dictionaries of options against the defined schema.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| method | str | The name of the method for which to create the options model. | required |
Returns:

| Type | Description |
|---|---|
| type[BaseModel] | A Pydantic model class capable of validating options for the specified method. |
Raises:

| Type | Description |
|---|---|
| ValueError | If the specified method is not found in the schema. |
MethodSchemaModel
Represents the schema for a specific method within a plugin.
This class defines the structure for describing a single method supported by a plugin. It contains a dictionary describing the options for this method.
Attributes:

| Name | Type | Description |
|---|---|---|
| options | dict[str, T] | Dictionary of option schemas for this method. |
| url | HttpUrl \| None | An optional URL for the plugin. |
gen_options_table
Generates a Markdown table documenting plugin options.
This function takes a schema dictionary, validates it against the
OptionsSchemaModel, and then
generates a Markdown table that summarizes the available methods and their
options. Each row in the table represents a method, and the columns list the
method's name and its configurable options. If a URL is provided for a
method, the method name will be hyperlinked to that URL in the table.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| schema | dict[str, Any] | A dictionary representing the schema of plugin options. | required |
Returns:

| Type | Description |
|---|---|
| str | A string containing a Markdown table that documents the plugin options. |