Optimizer Plugins

ropt.plugins.optimizer

Framework and Implementations for Optimizer Plugins.

This module provides the necessary components for integrating optimization algorithms into ropt via its plugin system. Optimizer plugins allow ropt to utilize various optimization techniques, either built-in or provided by third-party packages.

Core Concepts:

  • Plugin Interface: Optimizer plugins must inherit from the OptimizerPlugin base class. This class acts as a factory, defining a create method to instantiate optimizer objects.
  • Optimizer Implementation: The actual optimization logic resides in classes that inherit from the Optimizer abstract base class. These classes are initialized with the optimization configuration (EnOptConfig) and an OptimizerCallback, which the optimizer uses to request function and gradient evaluations from ropt. The optimization process is initiated by calling the optimizer's start method. A minimal sketch is shown after this list.
  • Discovery: The PluginManager discovers available OptimizerPlugin implementations (typically via entry points) and uses them to create Optimizer instances as needed during plan execution.
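To make this concrete, here is a minimal sketch of an optimizer plugin and its optimizer. The class names are hypothetical, and the import paths are assumptions based on the module layout documented on this page:

# Hypothetical sketch; MyOptimizer and MyOptimizerPlugin are not part of ropt,
# and the import paths are assumed from the module layout on this page.
import numpy as np
from numpy.typing import NDArray

from ropt.config.enopt import EnOptConfig
from ropt.plugins.optimizer.base import (
    Optimizer,
    OptimizerCallback,
    OptimizerPlugin,
)


class MyOptimizer(Optimizer):
    def __init__(
        self, config: EnOptConfig, optimizer_callback: OptimizerCallback
    ) -> None:
        # The base class is documented to take these same arguments.
        super().__init__(config, optimizer_callback)
        self._config = config
        # The callback is used to request evaluations from ropt.
        self._optimizer_callback = optimizer_callback

    def start(self, initial_values: NDArray[np.float64]) -> None:
        # The main optimization loop goes here, repeatedly using
        # self._optimizer_callback to obtain function and gradient values.
        ...


class MyOptimizerPlugin(OptimizerPlugin):
    @classmethod
    def create(
        cls, config: EnOptConfig, optimizer_callback: OptimizerCallback
    ) -> Optimizer:
        # Factory method called by the PluginManager during plan execution.
        return MyOptimizer(config, optimizer_callback)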

Utilities:

The ropt.plugins.optimizer.utils module offers helper functions for common tasks within optimizer plugins, such as validating constraint support and handling normalized constraints.

Built-in Optimizers:

ropt includes the following optimizers by default:

  • SciPyOptimizer: Provides access to various algorithms from the scipy.optimize library.
  • ExternalOptimizer: Enables running other optimizer plugins in a separate external process, useful for isolation or specific execution environments.

ropt.plugins.optimizer.base.OptimizerPlugin

Bases: Plugin

Abstract Base Class for Optimizer Plugins (Factories).

This class defines the interface for plugins responsible for creating Optimizer instances. These plugins act as factories for specific optimization algorithms or backends.

During plan execution, the PluginManager identifies the appropriate optimizer plugin based on the configuration and uses its create class method to instantiate the actual Optimizer object that will perform the optimization.

create abstractmethod classmethod

create(
    config: EnOptConfig,
    optimizer_callback: OptimizerCallback,
) -> Optimizer

Create an Optimizer instance.

This abstract class method serves as a factory for creating concrete Optimizer objects. Plugin implementations must override this method to return an instance of their specific Optimizer subclass.

The PluginManager calls this method when an optimization step requires an optimizer provided by this plugin.

Parameters:

  • config (EnOptConfig, required): The configuration object containing the optimization settings.
  • optimizer_callback (OptimizerCallback, required): The callback function used by the optimizer to request evaluations.

Returns:

  • Optimizer: An initialized instance of an Optimizer subclass.

validate_options classmethod

validate_options(
    method: str, options: dict[str, Any] | list[str] | None
) -> None

Validate the optimizer-specific options for a given method.

This class method is intended to check if the options dictionary, typically provided in the OptimizerConfig, contains valid keys and values for the specified optimization method supported by this plugin.

This default implementation performs no validation. Subclasses should override this method to implement validation logic specific to the methods they support, potentially using schema validation tools like Pydantic.

Any exception raised must be a ValueError or derive from ValueError.

Note

The optimizer is expected to receive either a dictionary or a list of options. This method should verify that the options are of the expected type and raise a ValueError with an appropriate message if they are not.

Parameters:

  • method (str, required): The specific optimization method name (e.g., "slsqp", "my_optimizer/variant1").
  • options (dict[str, Any] | list[str] | None, required): A dictionary or a list of strings containing the options.

Raises:

  • ValueError: If the provided options are invalid for the specified method.
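As suggested above, subclasses can implement the validation with a schema tool such as Pydantic. The following sketch extends the hypothetical plugin from the earlier example; the options model and its fields are made up for illustration:

# Hypothetical validate_options override; SlsqpOptions and its fields are
# illustrative and do not describe any actual ropt backend.
from typing import Any

from pydantic import BaseModel, ConfigDict, ValidationError


class SlsqpOptions(BaseModel):
    model_config = ConfigDict(extra="forbid")

    ftol: float = 1e-6
    maxiter: int = 100


class MyOptimizerPlugin(OptimizerPlugin):
    # create() omitted; see the earlier sketch.

    @classmethod
    def validate_options(
        cls, method: str, options: dict[str, Any] | list[str] | None
    ) -> None:
        if options is None:
            return
        if not isinstance(options, dict):
            # This hypothetical backend only accepts a dictionary of options.
            raise ValueError("optimizer options must be given as a dictionary")
        try:
            SlsqpOptions.model_validate(options)
        except ValidationError as exc:
            # Re-raise as a ValueError, as the interface requires.
            raise ValueError(str(exc)) from exc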

ropt.plugins.optimizer.base.Optimizer

Bases: ABC

Abstract Base Class for Optimizer Implementations.

This class defines the fundamental interface for all concrete optimizer implementations within the ropt framework. Optimizer plugins provide classes derived from Optimizer that encapsulate the logic of specific optimization algorithms.

Instances of Optimizer subclasses are created by their corresponding OptimizerPlugin factories. They are initialized with an EnOptConfig object detailing the optimization setup and an OptimizerCallback function. The callback is crucial as it allows the optimizer to request function and gradient evaluations from the ropt core during its execution.

The optimization process itself is initiated by calling the start method, which must be implemented by subclasses.

Subclasses must implement:

  • __init__: To accept the configuration and callback.
  • start: To contain the main optimization loop.

Subclasses can optionally override:

  • allow_nan: To indicate if the algorithm can handle NaN function values.
  • is_parallel: To indicate if the algorithm may perform parallel evaluations.

allow_nan property

allow_nan: bool

Indicate whether the optimizer can handle NaN function values.

If an optimizer algorithm can gracefully handle NaN (Not a Number) objective function values, its implementation should override this property to return True.

This is particularly relevant in ensemble-based optimization where evaluations might fail for all realizations. When allow_nan is True, setting realization_min_success to zero allows the evaluation process to return NaN instead of raising an error, enabling the optimizer to potentially continue.

Returns:

  • bool: True if the optimizer supports NaN function values.

is_parallel property

is_parallel: bool

Indicate whether the optimizer allows parallel evaluations.

If an optimizer algorithm is designed to evaluate multiple variable vectors concurrently, its implementation should override this property to return True.

This information can be used by ropt or other components to manage resources or handle parallel execution appropriately.

Returns:

  • bool: True if the optimizer allows parallel evaluations.
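A minimal sketch of overriding both properties; the class is hypothetical, and its required __init__ and start methods are omitted for brevity:

# Hypothetical subclass showing only the capability properties.
class ParallelNaNTolerantOptimizer(Optimizer):
    @property
    def allow_nan(self) -> bool:
        # This algorithm tolerates NaN objective values from failed evaluations.
        return True

    @property
    def is_parallel(self) -> bool:
        # This algorithm may evaluate several variable vectors concurrently.
        return True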

__init__

__init__(
    config: EnOptConfig,
    optimizer_callback: OptimizerCallback,
) -> None

Initialize an optimizer object.

The config object provides the desired configuration for the optimization process and should be used to set up the optimizer before the optimization is started. The optimization is initiated by the start method and repeatedly requires function and gradient evaluations at given variable vectors. The optimizer_callback argument provides the function that should be used to calculate the function and gradient values, which can then be forwarded to the optimizer.

Parameters:

  • config (EnOptConfig, required): The optimizer configuration to use.
  • optimizer_callback (OptimizerCallback, required): The optimizer callback.

start abstractmethod

start(initial_values: NDArray[float64]) -> None

Initiate the optimization process.

This abstract method must be implemented by concrete Optimizer subclasses to start the optimization process. It takes the initial set of variable values as input.

During execution, the implementation should use the OptimizerCallback (provided during initialization) to request necessary function and gradient evaluations from the ropt core.

Parameters:

  • initial_values (NDArray[float64], required): A 1D NumPy array representing the starting variable values for the optimization.

ropt.plugins.optimizer.utils

Utility functions for use by optimizer plugins.

This module provides utility functions to normalize non-linear constraints, validate constraint support for a given method, adjust linear constraints for masked variables, and construct unique output paths.

NormalizedConstraints

Class for handling normalized constraints.

This class can be used to normalize non-linear constraints into the form C(x) = 0, C(x) <= 0, or C(x) >= 0. By default, this is done by subtracting the right-hand-side value and, if necessary, multiplying by -1.

The right-hand sides are provided by the lower_bounds and upper_bounds values. If corresponding entries in these arrays are equal (within a 1e-15 tolerance), the constraint is treated as an equality constraint. Otherwise, it is treated as an inequality constraint if one or both values are finite: if the lower bound is finite, the constraint is added as is, after subtracting the lower bound; if the upper bound is finite, the same is done, but the constraint is multiplied by -1. If both are finite, both constraints are added, effectively splitting a two-sided constraint into two normalized constraints.

By default, inequality constraints are normalized to the form C(x) >= 0; setting the flip flag changes this to C(x) <= 0.

Usage:

  1. Initialize the class and set the lower and upper bounds using the set_bounds method.
  2. Before each new function/gradient evaluation with a new variable vector, reset the normalized constraints by calling the reset method.
  3. The constraint values are given by the constraints property. Before accessing it, call set_constraints with the raw constraint values; this calculates and caches the normalized values if necessary. Since the values are cached, calling this method and accessing constraints multiple times is cheap.
  4. Use the same procedure for gradients, using the gradients property and set_gradients. Raw gradients must be provided as a matrix, where the rows are the gradients of each constraint.
  5. Use the is_eq property to retrieve a vector of boolean flags to check which constraints are equality constraints.

See the scipy optimization backend in the ropt source code for a complete example; a minimal sketch follows below.
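A minimal sketch of this flow, assuming one equality constraint C0(x) = 1 and one upper-bounded inequality constraint C1(x) <= 2 (the raw values and gradients are made up):

# Usage sketch; bounds, raw constraint values, and gradients are hypothetical.
import numpy as np

from ropt.plugins.optimizer.utils import NormalizedConstraints

normalized = NormalizedConstraints()
normalized.set_bounds(
    lower_bounds=np.array([1.0, -np.inf]),  # equal bounds -> equality constraint
    upper_bounds=np.array([1.0, 2.0]),
)

# Step 2: reset before evaluating at a new variable vector.
normalized.reset()

# Step 3: feed the raw constraint values, then read the normalized ones.
normalized.set_constraints(np.array([1.5, 0.5]))
print(normalized.is_eq)        # expected: [True, False]
print(normalized.constraints)  # normalized values; see the class docs for layout

# Step 4: the same pattern applies to gradients (rows are constraints).
normalized.set_gradients(np.array([[1.0, 0.0], [0.0, 1.0]]))
print(normalized.gradients)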

Parallel evaluation:

The raw constraints may be a vector of constraints, or a matrix of constraints for multiple variable vectors to support parallel evaluation. In the latter case, the constraints for the different variable vectors are given by the columns of the matrix, and the constraints property has the same structure. Note that this is only supported for the constraint values, not for the gradients; hence, parallel evaluation of multiple gradients is not supported.

is_eq property

is_eq: list[bool]

Return flags indicating which constraints are equality constraints.

Returns:

  • list[bool]: A list of booleans, True for constraints that are equality constraints.

constraints property

constraints: NDArray[float64] | None

Return the cached normalized constraint values.

These are the constraint values after applying the normalization logic (subtracting RHS, potential sign flipping) based on the bounds provided during initialization.

This property should be accessed after calling set_constraints with the raw constraint values for the current variable vector. Returns None if set_constraints has not been called since the last reset.

Returns:

  • NDArray[float64] | None: A NumPy array containing the normalized constraint values.

gradients property

gradients: NDArray[float64] | None

Return the cached normalized constraint gradients.

These are the gradients of the constraints after applying the normalization logic (potential sign flipping) based on the bounds provided during initialization.

This property should be accessed after calling set_gradients with the raw constraint gradients for the current variable vector. Returns None if set_gradients has not been called since the last reset.

Returns:

  • NDArray[float64] | None: A 2D NumPy array containing the normalized constraint gradients.

__init__

__init__(*, flip: bool = False) -> None

Initialize the normalization class.

Parameters:

  • flip (bool, default False): Whether to flip the sign of the constraints.

set_bounds

set_bounds(
    lower_bounds: NDArray[float64],
    upper_bounds: NDArray[float64],
) -> None

Set the bounds of the normalization class.

Parameters:

  • lower_bounds (NDArray[float64], required): The lower bounds on the right-hand sides.
  • upper_bounds (NDArray[float64], required): The upper bounds on the right-hand sides.

reset

reset() -> None

Reset cached normalized constraints and gradients.

This must be called when the stored constraints and their gradients are no longer valid, typically because a new variable vector is about to be evaluated. The set_constraints and set_gradients methods can then be called to set the new values.

After calling this method, the constraints and gradients properties return None until new values are set. This can be used to check whether new values need to be calculated, as in the sketch below.
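A sketch of that pattern, continuing the usage example above; the evaluate_constraints helper and the _last_x bookkeeping are hypothetical:

# Hypothetical caching pattern around reset(); assumes the `normalized`
# instance from the usage sketch above and a made-up evaluate_constraints().
_last_x = None


def constraint_fn(x):
    global _last_x
    if _last_x is None or not np.array_equal(x, _last_x):
        normalized.reset()  # a new variable vector invalidates the cache
        _last_x = x.copy()
    if normalized.constraints is None:  # nothing cached since the last reset
        normalized.set_constraints(evaluate_constraints(x))
    return normalized.constraints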

set_constraints

set_constraints(values: NDArray[float64]) -> None

Calculate and cache normalized constraint values.

This method takes the raw constraint values (evaluated at the current variable vector) and applies the normalization logic defined during initialization (subtracting RHS, potential sign flipping). The results are stored internally and made available via the constraints property.

This supports parallel evaluation: values may be a 2D array containing the constraint values for multiple variable vectors, laid out as described in the class documentation.

If there are already cached values, this method will not overwrite them; the reset method must be called first.

Parameters:

  • values (NDArray[float64], required): A 1D or 2D NumPy array of raw constraint values; a 2D array contains the values for multiple variable vectors (see the class documentation for the layout).

set_gradients

set_gradients(values: NDArray[float64]) -> None

Calculate and cache normalized constraint gradients.

This method takes the raw constraint gradients (evaluated at the current variable vector) and applies the normalization logic defined during initialization (potential sign flipping). The results are stored internally and made available via the gradients property.

If there are already cached values, this method will not overwrite them; the reset method must be called first.

Note

Unlike set_constraints, this method does not support parallel evaluation; it expects gradients corresponding to a single variable vector.

Parameters:

  • values (NDArray[float64], required): A 2D NumPy array of raw constraint gradients (rows are gradients of the original constraints, columns are variables).

validate_supported_constraints

validate_supported_constraints(
    config: EnOptConfig,
    method: str,
    supported_constraints: dict[str, set[str]],
    required_constraints: dict[str, set[str]],
) -> None

Validate if the configured constraints are supported by the chosen method.

This function checks if the constraints defined in the config object (bounds, linear, non-linear) are compatible with the specified optimization method. It uses dictionaries mapping constraint types to sets of methods that support or require them.

Constraint types are identified by keys like "bounds", "linear:eq", "linear:ineq", "nonlinear:eq", and "nonlinear:ineq".

Example supported_constraints dictionary:

{
    "bounds": {"L-BFGS-B", "TNC", "SLSQP"},
    "linear:eq": {"SLSQP"},
    "linear:ineq": {"SLSQP"},
    "nonlinear:eq": {"SLSQP"},
    "nonlinear:ineq": {"SLSQP"},
}
A similar structure is used for required_constraints.
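A hypothetical call, for example from an optimizer's initialization code; the required_constraints entry and the chosen method are made up for illustration:

# Usage sketch; assumes `config` (an EnOptConfig) is in scope.
from ropt.plugins.optimizer.utils import validate_supported_constraints

supported_constraints = {
    "bounds": {"L-BFGS-B", "TNC", "SLSQP"},
    "linear:eq": {"SLSQP"},
    "linear:ineq": {"SLSQP"},
    "nonlinear:eq": {"SLSQP"},
    "nonlinear:ineq": {"SLSQP"},
}
required_constraints = {
    "bounds": {"differential_evolution"},  # hypothetical: this method needs bounds
}

validate_supported_constraints(
    config, "SLSQP", supported_constraints, required_constraints
)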

Parameters:

  • config (EnOptConfig, required): The optimization configuration object.
  • method (str, required): The name of the optimization method being used.
  • supported_constraints (dict[str, set[str]], required): Dict mapping constraint types to sets of methods that support them.
  • required_constraints (dict[str, set[str]], required): Dict mapping constraint types to sets of methods that require them.

Raises:

  • NotImplementedError: If a configured constraint is not supported by the method, or if a required constraint is missing.

create_output_path

create_output_path(
    base_name: str,
    base_dir: Path | None = None,
    name: str | None = None,
    suffix: str | None = None,
) -> Path

Construct a unique output path, appending an index if necessary.

This function generates a file or directory path based on the provided components. If the resulting path already exists, it automatically appends or increments a numerical suffix (e.g., "-001", "-002") to ensure uniqueness.

Parameters:

  • base_name (str, required): The core name for the path.
  • base_dir (Path | None, default None): Optional parent directory for the path.
  • name (str | None, default None): Optional identifier (e.g., step name) to include in the path.
  • suffix (str | None, default None): Optional file extension or suffix for the path.

Returns:

  • Path: A unique pathlib.Path object.
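A small usage sketch; the exact way the components are combined into the final name is an implementation detail, and only the uniqueness behavior described above is assumed:

# Usage sketch for create_output_path; the paths are made up.
from pathlib import Path

from ropt.plugins.optimizer.utils import create_output_path

path = create_output_path(
    "optimizer_output",
    base_dir=Path("results"),
    suffix=".log",
)
# If the constructed path already exists, a numeric index such as "-001"
# is appended (or incremented) to keep the path unique.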

get_masked_linear_constraints

get_masked_linear_constraints(
    config: EnOptConfig, initial_values: NDArray[float64]
) -> tuple[
    NDArray[np.float64],
    NDArray[np.float64],
    NDArray[np.float64],
]

Adjust linear constraints based on a variable mask.

When an optimization problem uses a variable mask (config.variables.mask) to optimize only a subset of variables, the linear constraints need to be adapted. This function performs that adaptation.

It removes columns from the constraint coefficient matrix (config.linear_constraints.coefficients) that correspond to the masked (fixed) variables. The contribution of these fixed variables (using their initial_values) is then calculated and subtracted from the original lower and upper bounds (config.linear_constraints.lower_bounds, config.linear_constraints.upper_bounds) to produce adjusted bounds for the optimization involving only the active variables.

Additionally, any constraint rows that originally involved only masked variables (i.e., all coefficients for active variables in that row are zero) are removed entirely, as they become trivial constants.
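To make the adaptation concrete, the following sketch mimics the described logic with plain NumPy; it illustrates, rather than calls, get_masked_linear_constraints, and all numbers are made up:

# Illustration of the described adjustment; `active` marks optimized variables.
import numpy as np

coefficients = np.array([[1.0, 2.0, 3.0], [0.0, 1.0, 0.0]])
lower_bounds = np.array([0.0, -1.0])
upper_bounds = np.array([10.0, 1.0])
active = np.array([True, False, True])  # the second variable is fixed
initial_values = np.array([0.5, 2.0, 1.5])

# The contribution of the fixed variables moves into the bounds:
offset = coefficients[:, ~active] @ initial_values[~active]
masked_coefficients = coefficients[:, active]
adjusted_lower = lower_bounds - offset
adjusted_upper = upper_bounds - offset

# Rows that involve only fixed variables become trivial and are dropped:
keep = np.any(masked_coefficients != 0.0, axis=1)
masked_coefficients = masked_coefficients[keep]
adjusted_lower = adjusted_lower[keep]
adjusted_upper = adjusted_upper[keep]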

Parameters:

  • config (EnOptConfig, required): The EnOptConfig object containing the variable mask and linear constraints.
  • initial_values (NDArray[float64], required): The initial values to use.

Returns:

  • tuple[NDArray[float64], NDArray[float64], NDArray[float64]]: The adjusted coefficients and bounds.