# optimize

`boulderopal.closed_loop.optimize(cost_function, optimizer, initial_parameters=None, target_cost=None, max_iteration_count=100, callback=None, verbose=True)`

Run a closed-loop optimization to find a minimum of the given cost function.

This is an iterative process, where the optimizer generates and tests a set of points. After several iterations the distribution of generated test points should converge to low values of the cost function. You can use this approach when your system is too complicated to model.

The provided cost function must take a 2D array of shape `(test_point_count, parameter_count)` as input and return a 1D array of costs of length `test_point_count`. Alternatively, it can return a 2D array of shape `(2, test_point_count)`, where the first row represents the costs and the second row represents their associated uncertainties.

For best results, you should provide a set of `initial_parameters` to start the optimization.
The performance and convergence of the optimizer might change depending on these values.
If you don't pass `initial_parameters`, randomly sampled values inside the bounds are used.
The number of initial values is set to the population size for CMA-ES and to `2 * parameter_count` for other optimizers.
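As a minimal sketch of the cost-function contract described above (pure NumPy, no server call; the quadratic target below is a made-up stand-in for a real experiment):

```python
import numpy as np

# Hypothetical toy problem: the cost of each test point is its squared
# distance to a fixed target vector.
TARGET = np.array([0.5, -0.2, 1.0])

def cost_function(parameters):
    # parameters has shape (test_point_count, parameter_count);
    # return a 1D array of costs of length test_point_count.
    return np.sum((parameters - TARGET) ** 2, axis=1)

def cost_function_with_uncertainties(parameters):
    # Alternative contract: shape (2, test_point_count), with costs in the
    # first row and their uncertainties in the second.
    costs = np.sum((parameters - TARGET) ** 2, axis=1)
    uncertainties = np.full_like(costs, 0.01)  # illustrative constant noise
    return np.stack([costs, uncertainties])

points = np.zeros((4, 3))  # 4 test points, 3 parameters each
print(cost_function(points).shape)                     # (4,)
print(cost_function_with_uncertainties(points).shape)  # (2, 4)
```

Remember that a given optimization should use one contract or the other consistently, never a mix.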

### Parameters

- **cost_function** (*Callable*) – The cost function to minimize, as a callable that takes a 2D NumPy array of parameters and returns either a 1D NumPy array of costs, or a 2D NumPy array of costs and uncertainties. The cost function should always return the same type of output (either always return uncertainties or never return them).
- **optimizer** (*ClosedLoopOptimizer* or *str*) – The optimizer to be used in the minimization of the cost function, or an optimizer state. If this is the first optimization step, pass an instance of a closed-loop optimizer class. If you want to resume an optimization, pass the optimizer state as of the last step.
- **initial_parameters** (*np.ndarray* or *None*, *optional*) – The initial values of the parameters to use in the optimization, as a 2D NumPy array of shape `(test_point_count, parameter_count)`. If not passed, random values uniformly sampled inside the optimizer bounds are used. If you provide an optimizer state, you must provide an array of initial parameters.
- **target_cost** (*float* or *None*, *optional*) – The target cost. If passed, the optimization halts if the best cost falls below the given value.
- **max_iteration_count** (*int*, *optional*) – The maximum number of iterations. Defaults to 100.
- **callback** (*Callable* or *None*, *optional*) – A function that takes in the current set of parameters, a 2D NumPy array of shape `(test_point_count, parameter_count)`, and returns a bool. The function is evaluated once during each iteration with the current parameters. If it returns True, the optimization is halted.
- **verbose** (*bool*, *optional*) – Whether to print out information about the optimization cycle. Defaults to True.
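A halting callback consistent with the signature described above might look like this (the unit-ball stopping criterion is an arbitrary illustration, not a recommended condition):

```python
import numpy as np

def callback(parameters):
    # parameters has shape (test_point_count, parameter_count).
    # Returning True halts the optimization; here we stop once every test
    # point lies inside an (arbitrary, illustrative) unit ball.
    return bool(np.all(np.linalg.norm(parameters, axis=1) < 1.0))

print(callback(np.zeros((3, 2))))      # True
print(callback(np.full((3, 2), 5.0)))  # False
```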

### Returns

A dictionary containing the optimization result, with the following keys:

`cost`
: The lowest cost found during the optimization.

`parameters`
: The optimal parameters associated with the lowest cost.

`cost_history`
: The history of best cost values up to each optimization step.

`step`
: A dictionary containing the information about the last optimization step,
to be used to resume the optimization.
It contains the optimizer `state` and the `test_points` at which
the cost function should be evaluated next.

### Return type

dict
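The structure of the returned dictionary can be sketched with placeholder values (everything below is made up for illustration; real values come from the optimization):

```python
import numpy as np

# Placeholder result mimicking the documented structure.
result = {
    "cost": 0.02,
    "parameters": np.array([0.49, -0.21, 0.98]),
    "cost_history": [0.80, 0.31, 0.05, 0.02],
    "step": {
        "state": "<opaque optimizer state>",
        "test_points": np.zeros((4, 3)),
    },
}

best_cost = result["cost"]
best_parameters = result["parameters"]

# cost_history tracks the best cost so far, so it is non-increasing.
assert all(a >= b for a, b in zip(result["cost_history"], result["cost_history"][1:]))

# To resume, evaluate the cost at result["step"]["test_points"] and pass
# result["step"]["state"] as the optimizer in the next call.
```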

### SEE ALSO

`boulderopal.closed_loop.step`
: Perform a single step in a closed-loop optimization.

`boulderopal.execute_graph`
: Evaluate generic functions.

`boulderopal.run_gradient_free_optimization`
: Perform model-based optimization without using gradient values.

`boulderopal.run_optimization`
: Perform gradient-based deterministic optimization of generic real-valued functions.

`boulderopal.run_stochastic_optimization`
: Perform gradient-based stochastic optimization of generic real-valued functions.

## Notes

At each iteration, the cost function will be called with the same number of data points as initially provided in `initial_parameters`. However, in some situations the optimizer might request more points (for example, if a certain number of points is required in order to move the algorithm to the next state) or, occasionally, fewer (for example, if moving the algorithm to the next state requires the evaluation of a specific point and nothing more).

If the optimization loop is halted via a `KeyboardInterrupt`, the function returns the best results obtained in the optimization thus far.

This function will make a server call at each iteration. Each call will be associated with a different `action_id`. The `action_id` of the final iteration can be found in `result["step"]["metadata"]["action_id"]`, where `result` is the dictionary returned by the function.
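Retrieving that final action ID amounts to a nested-key lookup (the dictionary and ID below are placeholders, not real server output):

```python
# Placeholder mimicking the documented structure; the action ID value
# is made up for illustration.
result = {"step": {"metadata": {"action_id": "1234567"}}}

final_action_id = result["step"]["metadata"]["action_id"]
print(final_action_id)
```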