Configuration
ropt.config
The `ropt.config` module provides configuration classes for optimization workflows.
This module defines a set of classes that are used to configure various aspects of an optimization process, including variables, objectives, constraints, realizations, samplers, and more.
The central configuration class for optimization is `EnOptConfig`, which encapsulates the complete configuration for a single optimization step. It is designed to be flexible and extensible, allowing users to customize the optimization process to their specific needs.
These configuration classes are built using `pydantic`, which provides robust data validation and parsing capabilities. This ensures that the configuration data is consistent and adheres to the expected structure.
Configuration objects are typically created from dictionaries of configuration values using the `model_validate` method provided by `pydantic`.
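For illustration, a configuration might be assembled as a plain dictionary and then validated. The top-level keys below match the `EnOptConfig` fields documented in this module, but the nested values (including the method name) are hypothetical placeholders:

```python
# Hypothetical configuration dictionary; the nested values are illustrative
# only, but the top-level keys match the EnOptConfig fields described below.
config_dict = {
    "variables": {
        "variable_count": 3,
        "lower_bounds": 0.0,  # a single value is broadcast to all variables
        "upper_bounds": 1.0,
    },
    "objectives": {"weights": [1.0, 1.0]},
    "realizations": {"weights": [1.0] * 4},
    "optimizer": {"method": "slsqp", "max_iterations": 10},
}

# Parsing would then be (requires ropt to be installed):
#   from ropt.config import EnOptConfig
#   config = EnOptConfig.model_validate(config_dict)
```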
Key Features:
- Modular Design: The configuration is broken down into smaller, manageable components, each represented by a dedicated class.
- Validation: `pydantic` ensures that the configuration data is valid and consistent.
- Extensibility: The modular design allows for easy extension and customization of the optimization process.
- Centralized Configuration: The `EnOptConfig` class provides a single point of entry for configuring an optimization step.
Parsing and Validation
The configuration classes are built using `pydantic`, which provides robust data validation. The primary configuration class is `EnOptConfig`, and it contains nested configuration classes for various aspects of the optimization. To parse a configuration from a dictionary, use the `model_validate` class method.
Classes:
- `EnOptConfig`: The main configuration class for an optimization step.
- `VariablesConfig`: Configuration for variables.
- `ObjectiveFunctionsConfig`: Configuration for objective functions.
- `LinearConstraintsConfig`: Configuration for linear constraints.
- `NonlinearConstraintsConfig`: Configuration for non-linear constraints.
- `RealizationsConfig`: Configuration for realizations.
- `OptimizerConfig`: Configuration for the optimizer.
- `GradientConfig`: Configuration for gradient calculations.
- `FunctionEstimatorConfig`: Configuration for function estimators.
- `RealizationFilterConfig`: Configuration for realization filters.
- `SamplerConfig`: Configuration for samplers.
EnOptConfig
The primary configuration class for an optimization step.
`EnOptConfig` orchestrates the configuration of an entire optimization workflow. It contains nested configuration classes that define specific aspects of the optimization, such as variables, objectives, constraints, realizations, and the optimizer itself.
`realization_filters`, `function_estimators`, and `samplers` are configured as tuples. Other configuration fields reference these objects by their index within the tuples. For example, `VariablesConfig` has a `samplers` field, which is an array of indices specifying the sampler to use for each variable.
The optional `names` attribute is a dictionary that stores the names of the various entities, such as variables, objectives, and constraints. The supported name types are defined in the `AxisName` enumeration. This information is optional, as it is not strictly necessary for the optimization, but it can be useful for labeling and interpreting results. For instance, when present, it is used to create a multi-index for results that are exported as data frames.
Info
Many nested configuration classes use `numpy` arrays. These arrays typically have a size determined by a configured property (e.g., the number of variables) or a size of one. In the latter case, the single value is broadcasted to all relevant elements. For example, `VariablesConfig` defines properties like initial values and bounds as `numpy` arrays, which must either match the number of variables or have a size of one.
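The broadcasting rule can be sketched in plain Python (a simplified stand-in for the `numpy`-based behavior described above):

```python
def broadcast(values: list[float], size: int) -> list[float]:
    # A single value is repeated for all elements; otherwise the
    # length must already match the configured size.
    if len(values) == 1:
        return values * size
    if len(values) != size:
        raise ValueError(f"expected size 1 or {size}, got {len(values)}")
    return values

print(broadcast([0.5], 3))       # [0.5, 0.5, 0.5]
print(broadcast([1.0, 2.0], 2))  # [1.0, 2.0]
```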
Attributes:

Name | Type | Description |
---|---|---|
`variables` | `VariablesConfig` | Configuration for the optimization variables. |
`objectives` | `ObjectiveFunctionsConfig` | Configuration for the objective functions. |
`linear_constraints` | `LinearConstraintsConfig \| None` | Configuration for linear constraints. |
`nonlinear_constraints` | `NonlinearConstraintsConfig \| None` | Configuration for non-linear constraints. |
`realizations` | `RealizationsConfig` | Configuration for the realizations. |
`optimizer` | `OptimizerConfig` | Configuration for the optimization algorithm. |
`gradient` | `GradientConfig` | Configuration for gradient calculations. |
`realization_filters` | `tuple[RealizationFilterConfig, ...]` | Configuration for realization filters. |
`function_estimators` | `tuple[FunctionEstimatorConfig, ...]` | Configuration for function estimators. |
`samplers` | `tuple[SamplerConfig, ...]` | Configuration for samplers. |
`names` | `dict[str, tuple[str \| int, ...]]` | Optional mapping of axis types to names. |
VariablesConfig
Configuration class for optimization variables.
This class, `VariablesConfig`, defines the configuration for optimization variables. It is used in an `EnOptConfig` object to specify the initial values, bounds, types, and an optional mask for the variables.
The `variable_count` field is required and determines the number of variables, including both free and fixed variables.
The `lower_bounds` and `upper_bounds` fields define the bounds for each variable. These are also `numpy` arrays and are broadcasted to match the number of variables. By default, they are set to negative and positive infinity, respectively. `numpy.nan` values in these arrays indicate unbounded variables and are converted to `numpy.inf` with the appropriate sign.
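The `numpy.nan`-to-infinity conversion can be mimicked with the standard library (a sketch of the documented behavior, not the actual implementation):

```python
import math

def fill_unbounded(
    lower: list[float], upper: list[float]
) -> tuple[list[float], list[float]]:
    # nan entries mean "unbounded" and become -inf / +inf as appropriate.
    return (
        [-math.inf if math.isnan(v) else v for v in lower],
        [math.inf if math.isnan(v) else v for v in upper],
    )

lo, hi = fill_unbounded([math.nan, 0.0], [1.0, math.nan])
print(lo, hi)  # [-inf, 0.0] [1.0, inf]
```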
The optional `types` field allows assigning a `VariableType` to each variable. If not provided, all variables are assumed to be continuous real-valued (`VariableType.REAL`).
The optional `mask` field is a boolean `numpy` array that indicates which variables are free to change during optimization. `True` values in the mask indicate that the corresponding variable is free, while `False` indicates a fixed variable.
Variable perturbations
The `VariablesConfig` class also stores information that is needed to generate perturbed variables, for instance to calculate stochastic gradients.
Perturbations are generated by sampler objects that are configured separately as a tuple of `SamplerConfig` objects in the configuration object used by a plan step. For instance, the `EnOptConfig` object defines the available samplers in its `samplers` field. The `samplers` field of the `VariablesConfig` object specifies, for each variable, the index of the sampler to use. A random number generator is created to support samplers that require random numbers.
The generated perturbation values are scaled by the values of the `perturbation_magnitudes` field and can be modified based on the `perturbation_types`. See `PerturbationType` for details on available perturbation types.
Perturbed variables may violate the defined variable bounds. The `boundary_types` field specifies how to handle such violations. See `BoundaryType` for details on available boundary handling methods.
The `perturbation_types` and `boundary_types` fields use values from the `PerturbationType` and `BoundaryType` enumerations, respectively.
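As an illustration of these settings, the following sketch applies an absolute-magnitude perturbation and mirrors any bound violation back into range. The enum values are replaced by plain behavior; the real implementation operates on `numpy` arrays and may differ in detail:

```python
def perturb(value: float, magnitude: float, sample: float,
            lower: float, upper: float) -> float:
    # Absolute perturbation type: the scaled sample is added directly.
    perturbed = value + magnitude * sample
    # Mirror-style boundary handling: reflect violations back into bounds.
    span = upper - lower
    offset = (perturbed - lower) % (2.0 * span)
    return lower + (span - abs(offset - span))

print(perturb(0.9, 0.2, 1.0, 0.0, 1.0))  # ~0.9 (1.1 reflected at the upper bound)
print(perturb(0.5, 0.2, 1.0, 0.0, 1.0))  # ~0.7 (in range, unchanged)
```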
Seed for Samplers
The `seed` value ensures consistent results across repeated runs with the same configuration. To obtain unique results for each optimization run, modify the seed. A common approach is to use a tuple with a unique ID as the first element, ensuring reproducibility across nested and parallel plan evaluations.
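The tuple-based seeding strategy can be sketched with the standard library (`numpy.random.default_rng` accepts such integer sequences directly; here the tuple is hashed for use with `random.Random`):

```python
import random

def make_rng(run_id: int, base_seed: int = 1234) -> random.Random:
    # A unique run ID as the first element keeps runs distinct, while
    # repeated runs with the same ID remain reproducible.
    return random.Random(hash((run_id, base_seed)))

a = make_rng(1).random()
b = make_rng(1).random()  # same ID: identical stream
c = make_rng(2).random()  # different ID: different stream
print(a == b, a != c)  # True True
```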
Attributes:

Name | Type | Description |
---|---|---|
`types` | `ArrayEnum` | Optional variable types. |
`variable_count` | `int` | Number of variables. |
`lower_bounds` | `Array1D` | Lower bounds for the variables (default: \(-\infty\)). |
`upper_bounds` | `Array1D` | Upper bounds for the variables (default: \(+\infty\)). |
`mask` | `Array1DBool` | Optional boolean mask indicating free variables. |
`perturbation_magnitudes` | `Array1D` | Magnitudes of the perturbations for each variable (default: `DEFAULT_PERTURBATION_MAGNITUDE`). |
`perturbation_types` | `ArrayEnum` | Type of perturbation for each variable (see `PerturbationType`). |
`boundary_types` | `ArrayEnum` | How to handle perturbations that violate boundary conditions (see `BoundaryType`). |
`samplers` | `Array1DInt` | Indices of the samplers to use for each variable. |
`seed` | `ItemOrTuple[int]` | Seed for the random number generator used by the samplers. |
ObjectiveFunctionsConfig
Configuration class for objective functions.
This class, `ObjectiveFunctionsConfig`, defines the configuration for objective functions, for instance as part of an `EnOptConfig` object.
`ropt` supports multi-objective optimization. Multiple objectives are combined into a single value by summing them after weighting. The `weights` field, a `numpy` array, determines the weight of each objective function. The length of this array defines the number of objective functions. The weights are automatically normalized to sum to 1 (e.g., `[1, 1]` becomes `[0.5, 0.5]`).
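The normalization can be sketched as follows (a stand-in for the `numpy`-based implementation):

```python
def normalize(weights: list[float]) -> list[float]:
    # Weights are rescaled so that they sum to 1.
    total = sum(weights)
    return [w / total for w in weights]

print(normalize([1.0, 1.0]))  # [0.5, 0.5]
print(normalize([1.0, 3.0]))  # [0.25, 0.75]
```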
Objective functions can optionally be processed using realization filters and function estimators. The `realization_filters` and `function_estimators` attributes, if provided, must be arrays of integer indices. Each index in the `realization_filters` array corresponds to an objective (by position) and specifies which filter to use. The available filters must be defined elsewhere as a tuple of realization filter configurations. For instance, for optimization these are defined in the `EnOptConfig.realization_filters` field. The same logic applies to the `function_estimators` array. If an index is invalid (e.g., out of bounds for the corresponding tuple), no filter or estimator is applied to that specific objective. If these attributes are not provided (`None`), no filters or estimators are applied at all.
Attributes:

Name | Type | Description |
---|---|---|
`weights` | `Array1D` | Weights for the objective functions (default: 1.0). |
`realization_filters` | `Array1DInt` | Optional indices of realization filters. |
`function_estimators` | `Array1DInt` | Optional indices of function estimators. |
LinearConstraintsConfig
Configuration class for linear constraints.
This class, `LinearConstraintsConfig`, defines linear constraints used in an optimization, for instance as part of an `EnOptConfig` object.
Linear constraints are defined by a set of linear equations involving the optimization variables. These equations can represent equality or inequality constraints. The `coefficients` field is a 2D `numpy` array where each row represents a constraint, and each column corresponds to a variable.
The `lower_bounds` and `upper_bounds` fields specify the bounds on the right-hand side of each constraint equation. These fields are converted and broadcasted to `numpy` arrays with a length equal to the number of constraint equations.
Less-than and greater-than inequality constraints can be specified by setting the lower bounds to \(-\infty\), or the upper bounds to \(+\infty\), respectively. Equality constraints are specified by setting the lower bounds equal to the upper bounds.
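For example, the constraints \(x_0 + x_1 \le 1\) and \(x_0 - x_1 = 0\) could be expressed with the following field values (plain lists here; the actual fields are `numpy` arrays, and the feasibility check is only an illustration):

```python
import math

# Each row of `coefficients` is one constraint; columns are variables.
coefficients = [[1.0, 1.0],   # x0 + x1 <= 1   (lower bound is -inf)
                [1.0, -1.0]]  # x0 - x1  = 0   (lower bound == upper bound)
lower_bounds = [-math.inf, 0.0]
upper_bounds = [1.0, 0.0]

def feasible(x: list[float]) -> bool:
    # Check lower <= A @ x <= upper for every constraint row.
    for row, lo, hi in zip(coefficients, lower_bounds, upper_bounds):
        value = sum(a * xi for a, xi in zip(row, x))
        if not lo <= value <= hi:
            return False
    return True

print(feasible([0.3, 0.3]))  # True
print(feasible([0.7, 0.3]))  # False (violates x0 - x1 = 0)
```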
Attributes:

Name | Type | Description |
---|---|---|
`coefficients` | `Array2D` | Matrix of coefficients for the linear constraints. |
`lower_bounds` | `Array1D` | Lower bounds for the right-hand side of the constraint equations. |
`upper_bounds` | `Array1D` | Upper bounds for the right-hand side of the constraint equations. |
Linear transformation of variables.
The set of linear constraints can be represented by a matrix equation: \(\mathbf{A} \mathbf{x} = \mathbf{b}\).
When linearly transforming variables to the optimizer domain, the coefficients (\(\mathbf{A}\)) and right-hand-side values (\(\mathbf{b}\)) must be converted to remain valid. If the linear transformation of the variables to the optimizer domain is given by:
\[ \hat{\mathbf{x}} = \mathbf{S} \mathbf{x} + \mathbf{o}, \]
then the coefficients and right-hand-side values must be transformed as follows:
\[ \hat{\mathbf{A}} = \mathbf{A} \mathbf{S}^{-1}, \qquad \hat{\mathbf{b}} = \mathbf{b} + \mathbf{A} \mathbf{S}^{-1} \mathbf{o}. \]
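Assuming the transformation has the affine form \(\hat{\mathbf{x}} = \mathbf{S}\mathbf{x} + \mathbf{o}\) with a diagonal scaling \(\mathbf{S}\), the conversion \(\hat{\mathbf{A}} = \mathbf{A}\mathbf{S}^{-1}\), \(\hat{\mathbf{b}} = \mathbf{b} + \mathbf{A}\mathbf{S}^{-1}\mathbf{o}\) can be checked numerically in a small sketch:

```python
# Diagonal scaling and offset: x_hat = s * x + o (elementwise).
scales = [2.0, 4.0]
offsets = [1.0, 1.0]

coefficients = [[1.0, 1.0]]  # one constraint: x0 + x1
rhs = [1.0]

# Transformed coefficients (A @ S^-1) and right-hand side (b + A @ S^-1 @ o).
new_coefficients = [[a / s for a, s in zip(row, scales)] for row in coefficients]
new_rhs = [b + sum(a * o for a, o in zip(row, offsets))
           for b, row in zip(rhs, new_coefficients)]

# A point satisfying the original constraint, and its transformed image.
x = [0.5, 0.5]
x_hat = [s * xi + o for s, xi, o in zip(scales, x, offsets)]

original = sum(a * xi for a, xi in zip(coefficients[0], x))
transformed = sum(a * xi for a, xi in zip(new_coefficients[0], x_hat))
print(original == rhs[0], transformed == new_rhs[0])  # True True
```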
NonlinearConstraintsConfig
Configuration class for non-linear constraints.
This class, `NonlinearConstraintsConfig`, defines non-linear constraints, for instance as part of an `EnOptConfig` object.
Non-linear constraints are defined by comparing a constraint function to a right-hand-side value, allowing for equality or inequality constraints. The `lower_bounds` and `upper_bounds` fields, which are `numpy` arrays, specify the bounds on these right-hand-side values. The length of these arrays determines the number of constraint functions.
Less-than and greater-than inequality constraints can be specified by setting the lower bounds to \(-\infty\), or the upper bounds to \(+\infty\), respectively. Equality constraints are specified by setting the lower bounds equal to the upper bounds.
Non-linear constraints can optionally be processed using realization filters and function estimators. The `realization_filters` and `function_estimators` attributes, if provided, must be arrays of integer indices. Each index in the `realization_filters` array corresponds to a constraint function (by position) and specifies which filter to use. The available filters must be defined elsewhere as a tuple of realization filter configurations. For instance, for optimization these are defined in the `EnOptConfig.realization_filters` field. The same logic applies to the `function_estimators` array. If an index is invalid (e.g., out of bounds for the corresponding tuple), no filter or estimator is applied to that specific constraint function. If these attributes are not provided (`None`), no filters or estimators are applied at all.
Attributes:

Name | Type | Description |
---|---|---|
`lower_bounds` | `Array1D` | Lower bounds for the right-hand-side values. |
`upper_bounds` | `Array1D` | Upper bounds for the right-hand-side values. |
`realization_filters` | `Array1DInt` | Optional indices of realization filters. |
`function_estimators` | `Array1DInt` | Optional indices of function estimators. |
RealizationsConfig
Configuration class for realizations.
This class, `RealizationsConfig`, defines the configuration for realizations used when calculating objectives and constraints.
To optimize an ensemble of functions, a set of realizations is defined. When the optimizer requests a function value or a gradient, these are calculated for each realization and then combined into a single value. Typically, this combination is a weighted sum, but other methods are possible.
The `weights` field, a `numpy` array, determines the weight of each realization. The length of this array defines the number of realizations. The weights are automatically normalized to sum to 1 (e.g., `[1, 1]` becomes `[0.5, 0.5]`).
If function value calculations for some realizations fail (e.g., due to a simulation error), the total function and gradient values can still be calculated by excluding the missing values. However, a minimum number of successful realizations may be required. The `realization_min_success` field specifies this minimum. By default, it is set equal to the number of realizations, meaning no missing values are allowed.
Note
Setting `realization_min_success` to zero allows the optimization to proceed even if all realizations fail. While some optimizers can handle this, most will treat it as if the value were one, requiring at least one successful realization.
Attributes:

Name | Type | Description |
---|---|---|
`weights` | `Array1D` | Weights for the realizations (default: 1.0). |
`realization_min_success` | `NonNegativeInt \| None` | Minimum number of successful realizations (default: equal to the number of realizations). |
OptimizerConfig
Configuration class for the optimization algorithm.
This class, `OptimizerConfig`, defines the configuration for the optimization algorithm used in an `EnOptConfig` object.
While optimization methods can have diverse parameters, this class provides a standardized set of settings that are commonly used and forwarded to the optimizer:
- `max_iterations`: The maximum number of iterations allowed. The optimizer may choose to ignore this.
- `max_functions`: The maximum number of function evaluations allowed.
- `max_batches`: The maximum number of evaluation batches allowed. The optimizer callback may ask to evaluate a batch of multiple functions and gradients at once. This setting limits the number of those calls.
- `tolerance`: The convergence tolerance used as a stopping criterion. The exact definition depends on the optimizer, and it may be ignored.
- `parallel`: If `True`, allows the optimizer to use parallelized function evaluations. This typically applies to gradient-free methods and may be ignored.
- `output_dir`: An optional output directory where the optimizer can store files.
- `options`: A dictionary or list of strings for generic optimizer options. The required format and interpretation depend on the specific optimization method.
- `stdout`: Redirect optimizer standard output to the given file.
- `stderr`: Redirect optimizer standard error to the given file.
Differences between `max_iterations`, `max_functions`, and `max_batches`
These three parameters provide different ways to limit the duration or computational cost of the optimization process:
- `max_iterations`: This limit is passed directly to the backend optimization algorithm. Many optimizers define an "iteration" as a distinct step in their process, which might involve one or more function or gradient evaluations. The interpretation of `max_iterations` depends on the specific backend optimizer; it typically caps the number of these internal iterations. Some backends might ignore this setting if they don't have a clear concept of iterations.
- `max_batches`: This limit restricts the total number of calls made to the evaluation function provided to `ropt`. An optimizer might request a batch containing multiple function and/or gradient evaluations within a single call. `max_batches` limits how many such batch requests are processed sequentially. This is particularly useful for managing resource usage when batches are evaluated in parallel (e.g., on an HPC cluster), as it controls the number of sequential submission steps. The number of batches does not necessarily correspond directly to the number of optimizer iterations, especially if function and gradient evaluations occur in separate batches.
- `max_functions`: This imposes a hard limit on the total number of individual objective function evaluations performed across all batches. Since a single batch evaluation (limited by `max_batches`) can involve multiple function evaluations, setting `max_functions` provides more granular control over the total computational effort spent on function calls. It can serve as an alternative stopping criterion if the backend optimizer doesn't support `max_iterations` or if you need to strictly limit the function evaluation count. Note that exceeding this limit might cause the optimization to terminate mid-batch, potentially earlier than a corresponding `max_batches` limit would.
Attributes:

Name | Type | Description |
---|---|---|
`method` | `str` | Name of the optimization method. |
`max_iterations` | `PositiveInt \| None` | Maximum number of iterations (optional). |
`max_functions` | `PositiveInt \| None` | Maximum number of function evaluations (optional). |
`max_batches` | `PositiveInt \| None` | Maximum number of batch evaluations (optional). |
`tolerance` | `NonNegativeFloat \| None` | Convergence tolerance (optional). |
`parallel` | `bool` | Allow parallelized function evaluations. |
`output_dir` | `Path \| None` | Output directory for the optimizer (optional). |
`options` | `dict[str, Any] \| list[str] \| None` | Generic options for the optimizer (optional). |
`stdout` | `Path \| None` | File to redirect optimizer standard output (optional). |
`stderr` | `Path \| None` | File to redirect optimizer standard error (optional). |
GradientConfig
Configuration class for gradient calculations.
This class, `GradientConfig`, defines the configuration of gradient calculations. It is used in an `EnOptConfig` object as the `gradient` field to specify how gradients are calculated in gradient-based optimizers.
Gradients are estimated using function values calculated from perturbed variables and the unperturbed variables. The `number_of_perturbations` field determines the number of perturbed variables used, which must be at least one.
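A simplified ensemble gradient estimate in the spirit of this scheme (a one-sided simultaneous-perturbation style sketch; the actual estimator used by ropt may differ):

```python
def estimate_gradient(func, x: list[float],
                      perturbations: list[list[float]]) -> list[float]:
    # Average one-sided difference estimates over all perturbations.
    f0 = func(x)
    grad = [0.0] * len(x)
    for p in perturbations:
        df = func([xi + pi for xi, pi in zip(x, p)]) - f0
        for i, pi in enumerate(p):
            grad[i] += df / pi  # requires nonzero perturbation components
    return [g / len(perturbations) for g in grad]

# For f(x) = x0 + 2*x1 the estimate recovers the gradient [1, 2]:
grad = estimate_gradient(lambda x: x[0] + 2.0 * x[1], [0.0, 0.0],
                         [[0.1, 0.1], [0.1, -0.1]])
print([round(g, 6) for g in grad])  # [1.0, 2.0]
```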
If function evaluations for some perturbed variables fail, the gradient may still be estimated as long as a minimum number of evaluations succeed. The `perturbation_min_success` field specifies this minimum. By default, it equals `number_of_perturbations`.
Gradients are calculated for each realization individually and then combined into a total gradient. If `number_of_perturbations` is low, or even just one, individual gradient calculations may be unreliable. In this case, setting `merge_realizations` to `True` directs the optimizer to combine the results of all realizations directly into a single gradient estimate.
The `evaluation_policy` option controls how and when objective functions and gradients are calculated. It accepts one of three string values:
- `"speculative"`: Evaluate the gradient whenever the objective function is requested, even if the optimizer hasn't explicitly asked for the gradient at that point. This approach can potentially improve load balancing on HPC clusters by initiating gradient work earlier, though its effectiveness depends on whether the optimizer can utilize these speculatively computed gradients.
- `"separate"`: Always launch function and gradient evaluations as distinct operations, even if the optimizer requests both simultaneously. This is particularly useful when employing realization filters (see `RealizationFilterConfig`) that might disable certain realizations, as it can potentially reduce the number of gradient evaluations needed.
- `"auto"`: Evaluate functions and/or gradients strictly according to the optimizer's requests. Calculations are performed only when the optimization algorithm explicitly requires them.
Attributes:

Name | Type | Description |
---|---|---|
`number_of_perturbations` | `PositiveInt` | Number of perturbations (default: `DEFAULT_NUMBER_OF_PERTURBATIONS`). |
`perturbation_min_success` | `PositiveInt \| None` | Minimum number of successful function evaluations for perturbed variables (default: equal to `number_of_perturbations`). |
`merge_realizations` | `bool` | Merge all realizations for the final gradient calculation. |
`evaluation_policy` | `Literal['speculative', 'separate', 'auto']` | How to evaluate functions and gradients. |
FunctionEstimatorConfig
Configuration class for function estimators.
This class, `FunctionEstimatorConfig`, defines the configuration for function estimators. Function estimators are generally configured as a tuple of `FunctionEstimatorConfig` objects in a configuration class of a plan step. For instance, the `function_estimators` field of the `EnOptConfig` defines the available estimators for the optimization.
By default, objective and constraint functions, as well as their gradients, are calculated from individual realizations using a weighted sum. Function estimators provide a way to modify this default calculation.
The `method` field specifies the function estimator method to use for combining the individual realizations. The `options` field allows passing a dictionary of key-value pairs to further configure the chosen method. The interpretation of these options depends on the selected method.
Attributes:

Name | Type | Description |
---|---|---|
`method` | `str` | Name of the function estimator method. |
`options` | `dict[str, Any]` | Dictionary of options for the function estimator. |
RealizationFilterConfig
Configuration class for realization filters.
This class, `RealizationFilterConfig`, defines the configuration for realization filters. Realization filters are generally configured as a tuple in another configuration object. For instance, the `realization_filters` field of the `EnOptConfig` defines the available filters for the optimization.
By default, objective and constraint functions, as well as their gradients, are calculated as a weighted function of all realizations. Realization filters provide a way to modify the weights of individual realizations. For example, they can be used to select a subset of realizations for calculating the final objective and constraint functions and their gradients by setting the weights of the other realizations to zero.
The `method` field specifies the realization filter method to use for adjusting the weights. The `options` field allows passing a dictionary of key-value pairs to further configure the chosen method. The interpretation of these options depends on the selected method.
Attributes:

Name | Type | Description |
---|---|---|
`method` | `str` | Name of the realization filter method. |
`options` | `dict[str, Any]` | Dictionary of options for the realization filter. |
SamplerConfig
Configuration class for samplers.
This class, `SamplerConfig`, defines the configuration for samplers used in an `EnOptConfig` object. Samplers are configured as a tuple in the `samplers` field of the `EnOptConfig`, defining the available samplers for the optimization. The `samplers` field in the `VariablesConfig` specifies the index of the sampler to use for each variable.
Samplers generate perturbations added to variables for gradient calculations. These perturbations can be deterministic or stochastic.
The `method` field specifies the sampler method to use for generating perturbations. The `options` field allows passing a dictionary of key-value pairs to further configure the chosen method. The interpretation of these options depends on the selected method.
By default, each realization uses a different set of perturbed variables. Setting the `shared` flag to `True` directs the sampler to use the same set of perturbed values for all realizations.
Attributes:

Name | Type | Description |
---|---|---|
`method` | `str` | Name of the sampler method. |
`options` | `dict[str, Any]` | Dictionary of options for the sampler. |
`shared` | `bool` | Whether to share perturbation values between realizations (default: `False`). |
ropt.config.constants
Default values used by the configuration classes.
DEFAULT_SEED
module-attribute
Default seed for random number generators.
The seed is used as the base value for random number generators within various components of the optimization process, such as samplers. Using a consistent seed ensures reproducibility across multiple runs with the same configuration. To obtain unique results for each optimization run, modify this seed.
DEFAULT_NUMBER_OF_PERTURBATIONS
module-attribute
Default number of perturbations for gradient estimation.
This value defines the default number of perturbed variables used to estimate gradients. A higher number of perturbations can lead to more accurate gradient estimates but also increases the number of function evaluations required.
DEFAULT_PERTURBATION_MAGNITUDE
module-attribute
Default magnitude for variable perturbations.
This value specifies the default value of the scaling factor applied to the perturbation values generated by samplers. The magnitude can be interpreted as an absolute value or as a relative value, depending on the selected perturbation type.
See also: `PerturbationType`.
DEFAULT_PERTURBATION_BOUNDARY_TYPE
module-attribute
Default perturbation boundary handling type.
This value determines how perturbations that violate the defined variable bounds are handled. The default, `BoundaryType.MIRROR_BOTH`, mirrors perturbations back into the valid range if they exceed either the lower or upper bound.
See also: `BoundaryType`.
DEFAULT_PERTURBATION_TYPE
module-attribute
Default perturbation type.
This value determines how the perturbation magnitude is interpreted. The default, `PerturbationType.ABSOLUTE`, means that the perturbation magnitude is added directly to the variable value. Other options, such as `PerturbationType.RELATIVE`, scale the perturbation magnitude based on the variable's bounds.
See also: `PerturbationType`.
ropt.config.options
This module defines utilities for validating plugin options.
This module provides classes and functions to define and validate options for plugins. It uses Pydantic to create models that represent the schema of plugin options, allowing for structured and type-safe configuration.
Classes:

Name | Description |
---|---|
`OptionsSchemaModel` | Represents the overall schema for plugin options. |
`MethodSchemaModel` | Represents the schema for a specific method within a plugin, including its name and options. |
OptionsSchemaModel
Bases: `BaseModel`
Represents the overall schema for plugin options.
This class defines the structure for describing the methods and options available for a plugin. The methods are described by `MethodSchemaModel` objects, each describing a method supported by the plugin.
Attributes:

Name | Type | Description |
---|---|---|
`methods` | `dict[str, MethodSchemaModel[Any]]` | A mapping of method names to method schemas. |
Example:

```python
from ropt.config.options import OptionsSchemaModel

schema = OptionsSchemaModel.model_validate(
    {
        "methods": [
            {"options": {"a": float}},
            {"options": {"b": int | str}},
        ]
    }
)

options = schema.get_options_model("method")
print(options.model_validate({"a": 1.0, "b": 1}))  # a=1.0 b=1
```
get_options_model
Creates a Pydantic model for validating options of a specific method.
This method dynamically generates a Pydantic model tailored to validate the options associated with a given method. It iterates through the defined methods, collecting option schemas from those matching the specified `method` name. The resulting model can then be used to validate dictionaries of options against the defined schema.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`method` | `str` | The name of the method for which to create the options model. | required |

Returns:

Type | Description |
---|---|
`type[BaseModel]` | A Pydantic model class capable of validating options for the specified method. |
MethodSchemaModel
Represents the schema for a specific method within a plugin.
This class defines the structure for describing one or more methods supported by a plugin. It contains a dictionary describing the options for this method.
Attributes:

Name | Type | Description |
---|---|---|
`options` | `dict[str, T]` | A dictionary of option definitions. |
`url` | `HttpUrl \| None` | An optional URL for the plugin. |
gen_options_table
Generates a Markdown table documenting plugin options.
This function takes a schema dictionary, validates it against the `OptionsSchemaModel`, and then generates a Markdown table that summarizes the available methods and their options. Each row in the table represents a method, and the columns list the method's name and its configurable options. If a URL is provided for a method, the method name will be hyperlinked to that URL in the table.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
`schema` | `dict[str, Any]` | A dictionary representing the schema of plugin options. | required |

Returns:

Type | Description |
---|---|
`str` | A string containing a Markdown table that documents the plugin options. |