# NeuralNetworkInitializer

class NeuralNetworkInitializer(*, bounds, seed=None, rng_seed=None)

Configuration for the neural-network-based optimizer. The optimizer builds and trains a neural network to fit the cost landscape from the data you provide. It then returns a set of test points that minimize the network's fitted cost landscape: a gradient-based optimizer minimizes this landscape, with the search starting from different random initial values. This method is recommended when you can provide a large amount of data about your system. The network architecture used by this optimizer was chosen for its good performance on a variety of quantum control tasks. Note that you must pass a non-empty list of results in the input to the initial step when using this initializer. For best results, sample these evenly over the whole parameter space.
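The search strategy described above can be illustrated with a minimal, self-contained sketch using only NumPy (this is not the Q-CTRL implementation; the quadratic `surrogate_cost` below is a stand-in for the trained neural network's fitted landscape):

```python
import numpy as np

# Stand-in for the network's fitted cost landscape; its minimum is at (0.3, -0.2).
def surrogate_cost(x):
    return np.sum((x - np.array([0.3, -0.2])) ** 2)

def surrogate_grad(x):
    return 2.0 * (x - np.array([0.3, -0.2]))

def multi_start_minimize(lower, upper, n_starts=8, steps=200, lr=0.1, seed=0):
    """Gradient descent from several random starts inside box bounds."""
    rng = np.random.default_rng(seed)  # a fixed seed makes the search deterministic
    best_x, best_cost = None, np.inf
    for _ in range(n_starts):
        x = rng.uniform(lower, upper)  # random initial value inside the box
        for _ in range(steps):
            # Step along the negative gradient, clipping to respect the bounds.
            x = np.clip(x - lr * surrogate_grad(x), lower, upper)
        cost = surrogate_cost(x)
        if cost < best_cost:
            best_x, best_cost = x, cost
    return best_x, best_cost

x_min, c_min = multi_start_minimize(np.array([-1.0, -1.0]), np.array([1.0, 1.0]))
```

Here all starts converge to the same minimum because the surrogate is convex; on a multimodal fitted landscape, the multiple random starts are what give the search a chance of escaping poor local minima.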

Variables
• bounds (List[qctrl.dynamic.types.closed_loop_optimization_step.BoxConstraint]) – The per-parameter bounds on the test points. The bounds are defined by imposing a box constraint on each parameter individually. That is, for each parameter $$x_j$$, the optimizer only searches for the next test point subject to the constraint $$x^{\rm lower}_j \leq x_j \leq x^{\rm upper}_j$$. These constraints must be in the same order as the parameters in CostFunctionResult.

• seed (int, optional) – Seed for the random number generator. If set, must be non-negative. Use this option to generate deterministic results from the optimizer.

• rng_seed (int, optional) – Deprecated; this parameter will be removed in a future release. Use seed instead.
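The effect of the seed parameter can be shown with a small sketch (plain NumPy, not the Q-CTRL package itself; `draw_test_points` is a hypothetical helper): seeding the generator that draws the random starting points makes the optimizer's output reproducible.

```python
import numpy as np

def draw_test_points(lower, upper, n_points, seed=None):
    """Draw candidate points uniformly inside the box bounds.

    With seed=None, each call uses fresh entropy; with a fixed
    non-negative seed, the points are deterministic."""
    rng = np.random.default_rng(seed)
    return rng.uniform(lower, upper, size=(n_points, len(lower)))

a = draw_test_points([0.0, -1.0], [1.0, 1.0], n_points=4, seed=42)
b = draw_test_points([0.0, -1.0], [1.0, 1.0], n_points=4, seed=42)
# With the same seed, both calls return identical test points.
```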