# run_gradient_free_optimization

`boulderopal.run_gradient_free_optimization(graph, cost_node_name, output_node_names, iteration_count=100, target_cost=None, optimization_count=4, cost_history_scope=HistoryScope.NONE, seed=None)`

Perform model-based optimization without using gradient values.

Use this function to determine a choice of variables that minimizes the value of a scalar real-valued cost function of those variables. You express that cost function as a graph describing how the input variables are transformed into the cost value.

This function provides an alternative to the gradient-based optimizer, and is useful when the gradient is either very costly to compute or inaccessible (for example, if the graph includes a node that does not support gradients).

### Parameters

**graph** (*Graph*) – The graph describing the cost $C(\mathbf v)$ and outputs $\{F_j(\mathbf v)\}$ as functions of the optimizable input variables $\mathbf v$. The graph must contain nodes with names $s$ (giving the cost function) and $\{s_j\}$ (giving the output functions).

**cost_node_name** (*str*) – The name $s$ of the real-valued scalar graph node that defines the cost function $C(\mathbf v)$ to be minimized.

**output_node_names** (*str or list[str]*) – The names $\{s_j\}$ of the graph nodes that define the output functions $\{F_j(\mathbf v)\}$. The function evaluates these using the optimized variables and returns them in the output.

**iteration_count** (*int, optional*) – The number of iterations the optimizer performs before it halts. Defaults to 100.

**target_cost** (*float or None, optional*) – A target value of the cost that you can set as an early stop condition for the optimizer. If the cost becomes less than or equal to this value, the optimization halts. Defaults to None, meaning that the optimizer runs for the full number of iterations.

**optimization_count** (*int, optional*) – The number $N$ of independent, randomly seeded optimizations to perform. The function returns the results from the best optimization (the one with the lowest cost). Defaults to 4. Depending on the landscape of the optimization problem, a larger value helps in finding lower costs, at the expense of longer computation time.

**cost_history_scope** (*HistoryScope, optional*) – Configuration for the scope of the returned cost history data. Use this to select whether you want the history data to be returned. Defaults to no cost history data returned.

**seed** (*int or None, optional*) – Seed for the random number generator used in the optimizer. If set, must be a non-negative integer. Use this option to generate deterministic results from the optimizer. Note that if your graph contains nodes generating random samples, you also need to set seeds for those nodes to ensure a reproducible result.

### Returns

A dictionary containing the optimization result, with the following keys:

`cost`

: The minimum cost function value $C(\mathbf v_\mathrm{optimized})$
achieved across all optimizations.

`output`

: The dictionary giving the value of each requested output node, evaluated
at the optimized variables, namely
$\{s_j: F_j(\mathbf v_\mathrm{optimized})\}$.
The keys of the dictionary are the names $\{s_j\}$ of the output nodes.

`cost_history`

: The evolution of the cost function across all optimizations and iterations.

`metadata`

: Metadata associated with the calculation.
No guarantees are made about the contents of this metadata dictionary;
the contained information is intended purely to help interpret the results of the
calculation on a one-off basis.

### Return type

dict

### SEE ALSO

`boulderopal.closed_loop.optimize`

: Run a closed-loop optimization to find a minimum of the given cost function.

`boulderopal.closed_loop.step`

: Perform a single step in a closed-loop optimization.

`boulderopal.execute_graph`

: Evaluate generic functions.

`boulderopal.run_optimization`

: Perform gradient-based deterministic optimization of generic real-valued functions.

`boulderopal.run_stochastic_optimization`

: Perform gradient-based stochastic optimization of generic real-valued functions.

## Notes

Given a cost function $C(\mathbf v)$ of variables $\mathbf v$, this function computes an estimate $\mathbf v_\mathrm{optimized}$ of $\mathrm{argmin}_{\mathbf v} C(\mathbf v)$, namely the choice of variables $\mathbf v$ that minimizes $C(\mathbf v)$. The function then calculates the values of arbitrary output functions $\{F_j(\mathbf v_\mathrm{optimized})\}$ with that choice of variables.

This function represents the cost and output functions as nodes of a graph. This graph defines the input optimization variables $\mathbf v$, and how these variables are transformed into the corresponding cost and output quantities. You build the graph from primitive nodes defined in the graph of the Boulder Opal package. Each such node, which can be identified by a name, represents a function of the previous nodes in the graph (and thus, transitively, a function of the input variables). You can use any named scalar real-valued node $s$ as the cost function, and any named nodes $\{s_j\}$ as outputs.

After you provide a cost function $C(\mathbf v)$ (via a graph), this function runs an optimization process $N$ times, each with random initial variables, to identify $N$ local minima of the cost function, and then takes the variables corresponding to the best such minimum as $\mathbf v_\mathrm{optimized}$.
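This multi-start strategy can be sketched in plain NumPy. The snippet below is illustrative only (the function names and the compass-search local optimizer are stand-ins, not part of the Boulder Opal API): it runs $N$ randomly seeded gradient-free local searches over bounded variables and keeps the result with the lowest cost.

```python
import numpy as np

def compass_search(cost, x0, step=0.25, tol=1e-6, max_iter=1000):
    """Gradient-free local search: probe +/- step along each axis,
    accept improvements greedily, and halve the step when stuck."""
    x = np.array(x0, dtype=float)
    fx = cost(x)
    for _ in range(max_iter):
        improved = False
        for i in range(len(x)):
            for delta in (step, -step):
                trial = x.copy()
                trial[i] += delta
                f_trial = cost(trial)
                if f_trial < fx:
                    x, fx, improved = trial, f_trial, True
        if not improved:
            step /= 2.0
            if step < tol:
                break
    return x, fx

def multi_start_minimize(cost, bounds, optimization_count=4, seed=0):
    """Run several randomly seeded local searches; return the best minimum."""
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(bounds, dtype=float).T
    best_x, best_f = None, np.inf
    for _ in range(optimization_count):
        x0 = rng.uniform(lower, upper)  # random initial variables
        x, f = compass_search(cost, x0)
        if f < best_f:
            best_x, best_f = x, f
    return best_x, best_f
```

For a multimodal cost landscape, increasing `optimization_count` raises the chance that at least one start lands in the basin of the global minimum, mirroring the role of the `optimization_count` parameter above.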

The optimizer will carry out the gradient-free optimization for the number of iterations set by `iteration_count`. If a `target_cost` is passed and the cost becomes less than or equal to this value, the optimizer will terminate early.

The gradient-free optimizer used here is based on the covariance matrix adaptation evolution strategy (CMA-ES). For more detail, see the CMA-ES article on Wikipedia.
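To give a feel for how such an optimizer works, here is a minimal, simplified CMA-ES loop in NumPy. This is a sketch, not Boulder Opal's implementation: it keeps only the rank-$\mu$ covariance update and cumulative step-size adaptation (the rank-one update and its evolution path are omitted), and the `iteration_count`/`target_cost` arguments mimic the early-stopping behavior described above.

```python
import numpy as np

def cma_es_minimize(cost, x0, sigma=0.5, iteration_count=100,
                    target_cost=None, seed=0):
    """Simplified CMA-ES: sample a population from N(mean, sigma^2 C),
    recombine the best candidates, and adapt sigma and C."""
    rng = np.random.default_rng(seed)
    mean = np.asarray(x0, dtype=float)
    n = mean.size
    lam = 4 + int(3 * np.log(n))               # population size
    mu = lam // 2                              # number of selected parents
    weights = np.log(mu + 0.5) - np.log(np.arange(1, mu + 1))
    weights /= weights.sum()                   # recombination weights
    mu_eff = 1.0 / np.sum(weights ** 2)        # variance-effective parent number
    c_mu = min(1.0, mu_eff / n ** 2)           # covariance learning rate
    c_sigma = (mu_eff + 2) / (n + mu_eff + 5)  # step-size learning rate
    d_sigma = 1 + c_sigma                      # step-size damping
    chi_n = np.sqrt(n) * (1 - 1 / (4 * n) + 1 / (21 * n ** 2))
    C = np.eye(n)                              # covariance of the search distribution
    p_sigma = np.zeros(n)                      # step-size evolution path
    for _ in range(iteration_count):
        A = np.linalg.cholesky(C)
        z = rng.standard_normal((lam, n))      # isotropic samples
        y = z @ A.T                            # samples shaped by C
        candidates = mean + sigma * y
        order = np.argsort([cost(x) for x in candidates])
        z_sel, y_sel = z[order[:mu]], y[order[:mu]]
        mean = mean + sigma * weights @ y_sel  # recombine the best parents
        # Cumulative step-size adaptation.
        p_sigma = ((1 - c_sigma) * p_sigma
                   + np.sqrt(c_sigma * (2 - c_sigma) * mu_eff) * weights @ z_sel)
        sigma *= np.exp(c_sigma / d_sigma
                        * (np.linalg.norm(p_sigma) / chi_n - 1))
        # Rank-mu covariance update.
        C = (1 - c_mu) * C + c_mu * (y_sel.T * weights) @ y_sel
        if target_cost is not None and cost(mean) <= target_cost:
            break                              # early stop, as with target_cost
    return mean, cost(mean)
```

Because the algorithm only ranks candidate costs and never differentiates them, it works even when the cost function is non-smooth or its gradient is unavailable, which is exactly the setting this function targets.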

## Examples

Perform a simple optimization with variables that are initialized to given values.

```python
>>> graph = bo.Graph()
>>> variables = graph.optimization_variable(
... 2, -1, 1, initial_values=np.array([0.6, 0.]), name="variables"
... )
>>> x, y = variables[0], variables[1]
>>> cost = (x - 0.5) ** 2 + (y - 0.1) ** 2
>>> cost.name = "cost"
>>> result = bo.run_gradient_free_optimization(
... graph=graph, cost_node_name="cost", output_node_names="variables", seed=0
... )
>>> result["cost"], result["output"]
(5.166836148557636e-22, {'variables': {'value': array([0.5, 0.1])}})
```

See also the *How to optimize controls using gradient-free optimization* user guide.