
Basic Optimizer

ropt.workflow._basic_optimizer

This module defines a basic optimization object.

ropt.workflow.BasicOptimizer

A class for executing single optimization runs.

The BasicOptimizer is designed to simplify the process of setting up and executing optimization workflows that consist primarily of a single optimization run.

This class provides a user-friendly interface for common optimization operations, including:

  • Initiating a Single Optimization: Easily start an optimization process with a provided configuration and evaluator.
  • Observing Optimization Events: Register observer functions to monitor and react to various events that occur during the optimization, such as the start of an evaluation or the availability of new results.
  • Abort Conditions: Define a callback function that can be used to check for abort conditions during the optimization.
  • Result Reporting: Define a callback function that will be called whenever new results become available.
  • Accessing Results: After the optimization is complete, the optimal results, corresponding variables, and the optimization's exit code are readily accessible.
  • Customizable ComputeSteps, Handlers, and Evaluators: While designed for single runs, it allows for the addition of custom compute steps and event handlers for more complex scenarios.

By encapsulating the core elements of an optimization run, the BasicOptimizer reduces the boilerplate code required for simple optimization tasks, allowing users to focus on defining the optimization problem and analyzing the results.

The following example demonstrates how to find the optimum of the Rosenbrock function using a BasicOptimizer object, combining it with a tracker to store the best result.

Example
import numpy as np
from numpy.typing import NDArray

from ropt.evaluator import EvaluatorContext, EvaluatorResult
from ropt.workflow import BasicOptimizer

DIM = 5
CONFIG = {
    "variables": {
        "variable_count": DIM,
        "perturbation_magnitudes": 1e-6,
    },
}
initial_values = 2 * np.arange(DIM) / DIM + 0.5


def rosenbrock(variables: NDArray[np.float64], _: EvaluatorContext) -> EvaluatorResult:
    objectives = np.zeros((variables.shape[0], 1), dtype=np.float64)
    for v_idx in range(variables.shape[0]):
        for d_idx in range(DIM - 1):
            x, y = variables[v_idx, d_idx : d_idx + 2]
            objectives[v_idx, 0] += (1.0 - x) ** 2 + 100 * (y - x * x) ** 2
    return EvaluatorResult(objectives=objectives)


optimizer = BasicOptimizer(CONFIG, rosenbrock)
optimizer.run(initial_values)

print(f"Optimal variables: {optimizer.results.evaluations.variables}")
print(f"Optimal objective: {optimizer.results.functions.weighted_objective}")

Customization

The optimization workflow executed by BasicOptimizer can be tailored in two main ways: by adding event handlers to the default workflow or by running an entirely different workflow:

  1. Adding Custom Event Handlers

    This method allows for custom processing of events emitted by the default optimization workflow, without replacing the workflow itself. This is useful for tasks like custom logging or data processing.

    Event handlers can be specified in two ways, and handlers from both sources will be combined:

    • Environment Variable: If the ROPT_HANDLERS environment variable contains a comma-separated list of event handler names, these handlers will be added to the default optimization workflow. Each name must correspond to a registered EventHandler.

    • JSON Configuration File: If a JSON configuration file is found at <prefix>/share/ropt/options.json (where <prefix> is the Python installation prefix or a system-wide data prefix; see footnote 1 below), BasicOptimizer looks for specific keys that define additional event handlers. If this JSON file contains a basic_optimizer key with a nested event_handlers key, the value of event_handlers should be a list of strings, each naming a registered EventHandler. These handlers will be added to those found via ROPT_HANDLERS.

      Example <prefix>/share/ropt/options.json:

      {
          "basic_optimizer": {
              "event_handlers": ["custom_logger", "extra/event_processor"]
          }
      }
      

    Note that if a custom optimization workflow is activated via the ROPT_SCRIPT environment variable (see below), these custom handlers will not be added. Both customization mechanisms are illustrated in the sketch after this list.

  2. Custom Workflow Execution

    If the ROPT_SCRIPT environment variable is set to an option of the form step-name=script.py (where script.py may be any file), the named custom compute step is executed instead of the standard optimization workflow, and it is passed the name of the script that defines the new workflow.

    The custom compute step (step-name) must adhere to the following (see the sketch after this list):

    • It must be a registered ComputeStep.
    • Its run method must accept
      1. An evaluator keyword argument, which will receive the evaluator function passed to BasicOptimizer.
      2. A script keyword argument, which will receive the name of the script passed via ROPT_SCRIPT.
    • This method must return a callable that returns an optimization ExitCode.

      This callable will then be executed by BasicOptimizer in place of its default workflow.

    As a shortcut, it is also possible to set ROPT_SCRIPT to just the name of the script (i.e. ROPT_SCRIPT=script.py). In this case, a compute step named run_script is assumed to exist and will be used.
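
The following sketch illustrates both customization mechanisms under stated assumptions: the handler names, the step name my_step, the script name my_workflow.py, and the import path of ExitCode are illustrative rather than part of the documented API, and the registration of the compute step (which must happen via ropt's plugin mechanism) is omitted:

import os
from collections.abc import Callable

from ropt.enums import ExitCode  # assumed import path for the exit code enum

# Mechanism 1: add extra (illustrative) event handlers to the default workflow.
os.environ["ROPT_HANDLERS"] = "custom_logger,extra/event_processor"


# Mechanism 2: replace the default workflow with a custom compute step. The
# class below only sketches the required run() interface; it must still be
# registered as a ComputeStep under the name "my_step".
class MyScriptStep:
    def run(self, *, evaluator, script) -> Callable[[], ExitCode]:
        # `evaluator` receives the evaluator passed to BasicOptimizer;
        # `script` receives the file name taken from ROPT_SCRIPT.
        def workflow() -> ExitCode:
            # Execute the workflow defined in `script`, using `evaluator`
            # for function evaluations, and return the resulting exit code.
            ...

        return workflow


# Activate the custom step for subsequent BasicOptimizer runs:
os.environ["ROPT_SCRIPT"] = "my_step=my_workflow.py"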


  1. The exact path of the Python installation prefix, or the system's data prefix, can be found using the Python sysconfig module:

    from sysconfig import get_paths
    print(get_paths()["data"])
    
     

results property

results: FunctionResults | None

Return the optimal result found during the optimization.

This property provides access to the best FunctionResults object discovered during the optimization process. It encapsulates the objective function value, constraint values, and other relevant information about the optimal solution.

Returns:

  • FunctionResults | None: The optimal result.
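
For example (continuing the Rosenbrock example above), the property can be guarded against None before use:

if optimizer.results is not None:
    print(optimizer.results.functions.weighted_objective)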

__init__

__init__(
    enopt_config: dict[str, Any],
    evaluator: EvaluatorCallback,
    *,
    transforms: OptModelTransforms | None = None,
    constraint_tolerance: float = 1e-10,
) -> None

Initialize a BasicOptimizer object.

This constructor sets up the necessary components for a single optimization run. It requires an optimization configuration, an evaluator, and optional domain transforms, which together define the optimization problem.

The constraint_tolerance is used to check constraints: if a constraint value is within this tolerance, it is considered satisfied.

Parameters:

  • enopt_config (dict[str, Any]): The configuration for the optimization. Required.
  • evaluator (EvaluatorCallback): The evaluator object. Required.
  • transforms (OptModelTransforms | None): Optional transforms to apply to the model. Default: None.
  • constraint_tolerance (float): The constraint violation tolerance. Default: 1e-10.
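
For example, reusing CONFIG and the rosenbrock evaluator from the example above, a custom constraint tolerance can be passed as a keyword-only argument (the value here is illustrative):

optimizer = BasicOptimizer(
    CONFIG,
    rosenbrock,
    constraint_tolerance=1e-8,  # constraint values within 1e-8 count as satisfied
)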

run

run(initial_values: ArrayLike) -> ExitCode

Run the optimization process.

This method initiates and executes the optimization workflow defined by the BasicOptimizer object. It manages the optimization, result handling, and event processing. After the optimization is complete, the optimal results, variables, and exit code can be accessed via the corresponding properties.

Returns:

  • ExitCode: The exit code returned by the optimization workflow.
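
For example, the returned exit code can be captured and reported (continuing the example above):

exit_code = optimizer.run(initial_values)
print(f"Optimization finished with exit code: {exit_code}")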

set_abort_callback

set_abort_callback(callback: Callable[[], bool]) -> None

Set a callback to check for abort conditions.

The provided callback function will be invoked repeatedly during the optimization process. If the callback returns True, the optimization will be aborted, and the BasicOptimizer will exit with an ExitCode.USER_ABORT.

The callback function should take no arguments and return a boolean value.

Parameters:

  • callback (Callable[[], bool]): The callable to check for abort conditions. Required.
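
A minimal sketch of a time-based abort condition, assuming a one-hour wall-clock budget and continuing the example above:

import time

start = time.monotonic()


def abort_after_one_hour() -> bool:
    # Returning True aborts the run, and BasicOptimizer exits with
    # ExitCode.USER_ABORT.
    return time.monotonic() - start > 3600.0


optimizer.set_abort_callback(abort_after_one_hour)
optimizer.run(initial_values)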

set_results_callback

set_results_callback(callback: Callable[..., None]) -> None

Set a callback to report new results.

The provided callback function will be invoked whenever new results become available during the optimization process. This allows for real-time monitoring and analysis of the optimization's progress.

The callback function should have the following signature:

def callback(results: tuple[FunctionResults, ...]) -> None:
    ...

Parameters:

  • callback (Callable[..., None]): The callable that will be invoked to report new results. Required.
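
A minimal sketch of a progress reporter matching the signature above, continuing the example; the import path of FunctionResults is an assumption:

from ropt.results import FunctionResults  # assumed import path


def report_progress(results: tuple[FunctionResults, ...]) -> None:
    # Invoked whenever new results become available during the run.
    for item in results:
        print("Objective:", item.functions.weighted_objective)


optimizer.set_results_callback(report_progress)
optimizer.run(initial_values)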