@@ -9,48 +9,114 @@ internal optimizer interface.
The advantages of using the algorithm with optimagic over using it directly are:

+ - You can collect the optimizer history and create criterion_plots and params_plots.
+ - You can use flexible formats for your start parameters (e.g. nested dicts or
+   namedtuples).
- optimagic turns unconstrained optimizers into constrained ones.
- You can use logging.
- You get great error handling for exceptions in the criterion function or gradient.
- - You get a parallelized and customizable numerical gradient if the user did not provide
-   a closed form gradient.
- - You can compare your optimizer with all the other optimagic optimizers by changing
-   only one line of code.
+ - You get a parallelized and customizable numerical gradient if you don't have a closed
+   form gradient.
+ - You can compare your optimizer with all the other optimagic optimizers on our
+   benchmark sets.

All of this functionality is achieved by transforming a more complicated user provided
problem into a simpler problem and then calling "internal optimizers" to solve the
transformed problem.
- ## The internal optimizer interface
+ (functions_and_classes_for_internal_optimizers)=

- (to be written)
+ ## Functions and classes for internal optimizers
+
+ The functions and classes below are everything you need to know to add an optimizer to
+ optimagic. To see them in action, look at
+ [this guide](../how_to/how_to_add_optimizers.ipynb).
+
+ ```{eval-rst}
+ .. currentmodule:: optimagic.mark
+ ```
+
+ ```{eval-rst}
+ .. dropdown:: mark.minimizer
+
+     The `mark.minimizer` decorator is used to provide algorithm-specific information to
+     optimagic. This information is used in the algorithm selection tool, for better
+     error handling, and for processing the user-provided optimization problem.
+
+     .. autofunction:: minimizer
+ ```
+
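+ As a rough sketch (not copied from optimagic itself), applying the decorator to an
+ algorithm class could look as follows. Apart from `disable_history`, which is discussed
+ further below, the decorator arguments and the method name `_solve_internal_problem` are
+ assumptions for illustration; the exact signature is documented in `minimizer` above.
+
+ ```python
+ from dataclasses import dataclass
+
+ import optimagic as om
+ from optimagic.optimization.algorithm import Algorithm
+
+
+ @om.mark.minimizer(
+     name="my_gradient_descent",  # assumed argument: name used for algorithm selection
+     needs_jac=True,  # assumed argument: optimagic should provide a derivative
+     disable_history=False,  # keep optimagic's automatic history collection
+ )
+ @dataclass(frozen=True)
+ class MyGradientDescent(Algorithm):
+     # Tuning parameters of the algorithm become dataclass fields with defaults.
+     learning_rate: float = 0.1
+     stopping_maxiter: int = 1_000
+
+     def _solve_internal_problem(self, problem, x0):  # assumed method name
+         ...  # the actual solver loop; see the sketches below
+ ```
+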
+ ```{eval-rst}
+ .. currentmodule:: optimagic.optimization.internal_optimization_problem
+ ```
+
+ ```{eval-rst}
+ .. dropdown:: InternalOptimizationProblem
+
+     The `InternalOptimizationProblem` is optimagic's internal representation of objective
+     functions, derivatives, bounds, constraints, and more. This representation is already
+     pretty close to what most algorithms expect (e.g. parameters and bounds are flat
+     numpy arrays, no matter which format the user provided).
+
+     .. autoclass:: InternalOptimizationProblem()
+         :members:
+ ```
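+
+ To give a feeling for how an algorithm consumes this object, here is a minimal sketch of
+ a gradient-descent loop. The attribute names `fun`, `jac`, `bounds.lower`, and
+ `bounds.upper` are assumptions for illustration; the class documentation above lists the
+ actual members.
+
+ ```python
+ import numpy as np
+
+
+ def solve_sketch(problem, x0, learning_rate=0.1, stopping_maxiter=1_000):
+     x = np.asarray(x0, dtype=float)
+     for _ in range(stopping_maxiter):
+         grad = problem.jac(x)  # derivative at the current flat parameter vector (assumed name)
+         x = x - learning_rate * grad
+         lower, upper = problem.bounds.lower, problem.bounds.upper  # assumed attributes; may be None
+         if lower is not None or upper is not None:
+             x = np.clip(x, lower, upper)  # stay inside the internal bounds
+     return x, problem.fun(x)  # assumed name for the scalar objective
+ ```
+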
- ## Output of internal optimizers
+ ```{eval-rst}
+ .. currentmodule:: optimagic.optimization.algorithm
+ ```
+
+ ```{eval-rst}
+ .. dropdown:: InternalOptimizeResult
+
+     This is what you need to create from the output of a wrapped algorithm.
+
+     .. autoclass:: InternalOptimizeResult
+         :members:
+ ```
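+
+ A sketch of how the raw output of a hypothetical wrapped solver might be converted; the
+ field names used below are assumptions, the autoclass entry above lists the actual
+ (and optional) fields.
+
+ ```python
+ from optimagic.optimization.algorithm import InternalOptimizeResult
+
+
+ def process_raw_result(raw):
+     # `raw` is a hypothetical dict returned by the wrapped solver.
+     return InternalOptimizeResult(
+         x=raw["solution_x"],  # best parameter vector found
+         fun=raw["solution_criterion"],  # objective value at the best parameters
+         success=raw["converged"],  # assumed field name
+         message=raw["message"],  # assumed field name
+         n_iterations=raw["n_iterations"],  # assumed field name
+     )
+ ```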
+
+ ```{eval-rst}
+ .. dropdown:: Algorithm
+
+     .. autoclass:: Algorithm
+         :members:
+         :exclude-members: with_option_if_applicable
+ ```
(naming-conventions)=
## Naming conventions for algorithm specific arguments
- Many optimizers have similar but slightly different names for arguments that configure
- the convergence criteria, other stopping conditions, and so on. We try to harmonize
- those names and their default values where possible.
-
- Since some optimizers support many tuning parameters we group some of them by the first
- part of their name (e.g. all convergence criteria names start with `convergence`). See
- {ref}`list_of_algorithms` for the signatures of the provided internal optimizers.
+ To make switching between different algorithms as simple as possible, we align the names
+ of commonly used convergence and stopping criteria. We also align their default values
+ as much as possible.

- The preferred default values can be imported from `optimagic.optimization.algo_options`
- which are documented in {ref}`algo_options`. If you add a new optimizer to optimagic you
- should only deviate from them if you have good reasons.
+ You can find the harmonized names and values [here](algo_options_docs).

- Note that a complete harmonization is not possible nor desirable, because often
- convergence criteria that clearly are the same are implemented slightly differently for
- different optimizers. However, complete transparency is possible and we try to document
- the exact meaning of all options for all optimizers.
+ To align the names of other tuning parameters as closely as possible with what is already
+ there, simply have a look at the optimizers we have already wrapped. For example, if you
+ are wrapping a bfgs or lbfgs algorithm from some library, look at the existing wrappers
+ of bfgs algorithms and use the same names for the same options.
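+
+ For illustration only: if two hypothetical wrappers both expose an iteration limit and a
+ relative tolerance on the objective value, they should use the same harmonized field
+ names and, where possible, the same defaults. The specific names below are placeholders,
+ not the authoritative list.
+
+ ```python
+ from dataclasses import dataclass
+
+
+ @dataclass(frozen=True)
+ class WrappedBfgsFromLibraryA:  # hypothetical wrapper; decorator and solve method omitted
+     stopping_maxiter: int = 1_000  # placeholder harmonized name for the iteration limit
+     convergence_ftol_rel: float = 1e-8  # placeholder harmonized name for the relative f-tolerance
+
+
+ @dataclass(frozen=True)
+ class WrappedBfgsFromLibraryB:  # hypothetical wrapper of a different library
+     # Same concepts, therefore the same names and defaults as in WrappedBfgsFromLibraryA.
+     stopping_maxiter: int = 1_000
+     convergence_ftol_rel: float = 1e-8
+ ```
+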
## Algorithms that parallelize
- (to be written)
+ Algorithms that evaluate the objective function or derivatives in parallel should only
+ do so via `InternalOptimizationProblem.batch_fun`,
+ `InternalOptimizationProblem.batch_jac`, or
+ `InternalOptimizationProblem.batch_fun_and_jac`.
+
+ If you parallelize in any other way, the automatic history collection will stop working.
+
+ In that case, call `om.mark.minimizer` with `disable_history=True`. You can then either
+ collect your own history and add it to the `InternalOptimizeResult`, or users have to
+ rely on logging.
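+
+ As a sketch, a population-based algorithm might evaluate one generation per batch. The
+ exact signature of `batch_fun` is documented above; in particular, the `n_cores`
+ argument used below is an assumption.
+
+ ```python
+ import numpy as np
+
+
+ def evaluate_generation(problem, population, n_cores=4):
+     # population: list of flat parameter vectors (numpy arrays), one candidate per entry.
+     # Going through batch_fun keeps optimagic's automatic history collection working.
+     fvals = problem.batch_fun(list(population), n_cores=n_cores)  # n_cores is assumed
+     best = int(np.argmin(fvals))
+     return population[best], fvals[best]
+ ```
+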
## Nonlinear constraints