* Switch tutorial to dependency/ies that exist on Maven
* Improve Clojure Module tutorial
* Add namespace docstring
* Bring verbiage up to date with https://mxnet.incubator.apache.org/api/clojure/module.html
* Add newlines for readability and to keep line length <80
* Nix duplicated section in Clojure Symbol API docs
"Multiple Outputs" is a (deprecated) repeat of "Group Multiple
Symbols".
* Improve Clojure Symbol tutorial
* Add namespace docstring
* Bring verbiage up to date with https://mxnet.incubator.apache.org/api/clojure/symbol.html
* Add newlines for readability and to keep line length <80
* Fix missing end-code-block in Clojure NDArray API docs
* Improve Clojure NDArray tutorial
* Add namespace docstring
* Bring verbiage up to date with https://mxnet.incubator.apache.org/api/clojure/ndarray.html
* Add newlines for readability and to keep line length <80
* Improve Clojure KVStore tutorial
* Add namespace docstring
* Bring verbiage up to date with https://mxnet.incubator.apache.org/api/clojure/kvstore.html
* Add newlines for readability and to keep line length <80
* [MXNET-1017] Updating the readme file for cpp-package and adding readme file for example directory. (apache#12773)
* Updating the readme file for cpp-package and adding readme file for example directory.
* Updating the readme file for cpp-package and adding readme file for example directory.
* Addressed the review comments.
* Addressed the review comments
* Fail the broken link job when broken links are found (apache#12905)
* Fix typo in formula in docstring for GRU cell and layer and add clarification to description (gluon.rnn) (apache#12896)
* Fix typo in GRU cell and layers (gluon.rnn) docstring
* empty
* fix the paths issue for downloading script (apache#12913)
;; The data that you want to push can be stored on any
;; device. Furthermore, you can push multiple values into the same
;; key, where KVStore first sums all of these values, and then pushes
;; the aggregated value, as follows:

;;; Pull

;; You’ve already seen how to pull a single key-value pair. Similar to
;; the way that you use the push command, you can pull the value into
;; several devices with a single call.

;;;; List Key-Value Pairs

;; All of the operations that we’ve discussed so far are performed on
;; a single key. KVStore also provides the interface for generating a
;; list of key-value pairs. For a single device, use the following:
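
;; The code these comments introduce is not part of this diff hunk. As
;; a point of reference, here is a minimal sketch of the push/pull and
;; list-of-keys flows, assuming the tutorial's usual aliases
;; (`kvstore` and `ndarray` for org.apache.clojure-mxnet.kvstore and
;; org.apache.clojure-mxnet.ndarray) and that init/push/pull take
;; vectors of keys and NDArrays; treat the exact signatures as
;; assumptions.

(def shape [2 3])
(def kv (kvstore/create "local")) ;; a local, single-process kvstore

;; init a key, push a new value, and pull the result back out
(kvstore/init kv ["3"] [(ndarray/* (ndarray/ones shape) 2)])
(kvstore/push kv ["3"] [(ndarray/* (ndarray/ones shape) 8)])
(def a (ndarray/zeros shape))
(kvstore/pull kv ["3"] [a])
(ndarray/->vec a) ;=> [8.0 8.0 8.0 8.0 8.0 8.0]

;; push several values into the same key: KVStore sums them before
;; storing, so the pulled value is the aggregate
(kvstore/push kv ["3" "3" "3"] [(ndarray/ones shape)
                                (ndarray/ones shape)
                                (ndarray/ones shape)])
(kvstore/pull kv ["3"] [a])
(ndarray/->vec a) ;=> [3.0 3.0 3.0 3.0 3.0 3.0]

;; the same calls accept a list of keys, giving key-value-pair batches
(kvstore/init kv ["5" "7" "9"] [(ndarray/ones shape)
                                (ndarray/ones shape)
                                (ndarray/ones shape)])
(def outs [(ndarray/zeros shape) (ndarray/zeros shape) (ndarray/zeros shape)])
(kvstore/pull kv ["5" "7" "9"] outs)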
;; The module API provides an intermediate and high-level interface
;; for performing computation with neural networks in MXNet. A module
;; wraps a Symbol and one or more Executors, and it has both a
;; high-level and an intermediate-level API.

;;;; Preparing a module for Computation

;; To construct a module, we need to have a symbol as input. This
;; symbol takes input data in the first layer and then has subsequent
;; layers of fully connected and relu activation layers, ending up in
;; a softmax layer for output.
(let [data (sym/variable "data")
      fc1 (sym/fully-connected "fc1" {:data data :num-hidden 128})
      ;; ...
      ])

;; You can also write this with the `as->` threading macro.

(def out (as-> (sym/variable "data") data
           ;; ...
           ))
;=> #'tutorial.module/out
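
;; Both definitions above are truncated by the diff hunk. As a hedged
;; sketch, the complete `as->` form plausibly looks like the
;; following, matching the description of fully connected and relu
;; layers ending in a softmax output; the layer names and hidden sizes
;; beyond `fc1` are assumptions, not taken from this hunk.

(def out (as-> (sym/variable "data") data
           (sym/fully-connected "fc1" {:data data :num-hidden 128})
           (sym/activation "relu1" {:data data :act-type "relu"})
           (sym/fully-connected "fc2" {:data data :num-hidden 64})
           (sym/activation "relu2" {:data data :act-type "relu"})
           (sym/fully-connected "fc3" {:data data :num-hidden 10})
           (sym/softmax-output "softmax" {:data data})))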
;; By default, context is the CPU. If you need data parallelization,
;; you can specify a GPU context or an array of GPU contexts, like
;; this: `(m/module out {:contexts [(context/gpu)]})`

;; Before you can compute with a module, you need to call `bind` to
;; allocate the device memory and `init-params` or `set-params` to
;; initialize the parameters. If you simply want to fit a module, you
;; don’t need to call `bind` and `init-params` explicitly, because the
;; `fit` function automatically calls them if they are needed.
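
;; The hunk skips the code that follows this paragraph. A minimal
;; sketch of the explicit path, assuming `train-data` is the data
;; iterator defined earlier in the tutorial and that `bind` accepts
;; the shapes reported by `mx-io/provide-data` and
;; `mx-io/provide-label` (both assumptions):

;; create a module from the symbol defined above (CPU context by default)
(def mod (m/module out))

;; allocate memory and initialize parameters explicitly; `fit` would
;; otherwise do this for you
(-> mod
    (m/bind {:data-shapes (mx-io/provide-data train-data)
             :label-shapes (mx-io/provide-label train-data)})
    (m/init-params))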
;; You can pass in batch-end callbacks using batch-end-callback and
;; epoch-end callbacks using epoch-end-callback in the
;; `fit-params`. You can also set other options in the fit-params,
;; such as the optimizer and the eval-metric. To learn more, see the
;; `fit-params` function options.
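
;; No fit-params example appears in this hunk. A hedged sketch of what
;; such a call might look like, assuming `fit-params` accepts
;; `:optimizer`, `:eval-metric`, and `:batch-end-callback` keys and
;; that `optimizer/sgd` and `callback/speedometer` are the relevant
;; constructors (all names here are assumptions, not taken from this
;; diff):

(m/fit mod {:train-data train-data
            :eval-data test-data
            :num-epoch 1
            :fit-params
            (m/fit-params
             {:optimizer (optimizer/sgd {:learning-rate 0.1})
              :eval-metric (eval-metric/accuracy)
              ;; assumed args: batch size and logging frequency
              :batch-end-callback (callback/speedometer 10 100)})})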
;; To predict with a module, call `predict` with a DataIter:

(def results
  (m/predict mod {:eval-data test-data}))
;; The module collects and returns all of the prediction results. For
;; more details about the format of the return values, see the
;; documentation for the `predict` function.

;; When prediction results might be too large to fit in memory, use
;; the `predict-every-batch` API.
(let [preds (m/predict-every-batch mod {:eval-data test-data})]
  (mx-io/reduce-batches test-data
                        (fn [i batch]
                          ;;; do something
                          (inc i))))
;; If you need to evaluate on a test set and don’t need the prediction
;; output, call the `score` function with a data iterator and an eval
;; metric:

(m/score mod {:eval-data test-data
              :eval-metric (eval-metric/accuracy)}) ;=> ["accuracy" 0.2227]

;; This runs predictions on each batch in the provided DataIter and
;; computes the evaluation score using the provided EvalMetric. The
;; evaluation results are stored in the metric so that you can query
;; them later.
;;;; Saving and Loading

;; To save the module parameters in each training epoch, use the
;; `save-checkpoint` function:

(let [save-prefix "my-model"]
  (doseq [epoch-num (range 3)]
    (mx-io/do-batches train-data (fn [batch
                                      ;; do something
                                      ]))
    (m/save-checkpoint mod {:prefix save-prefix
                            :epoch epoch-num
                            :save-opt-states true})))

;; INFO org.apache.mxnet.module.Module: Saved checkpoint to my-model-0000.params
;; INFO org.apache.mxnet.module.Module: Saved optimizer state to my-model-0000.states
;; ...
;; INFO org.apache.mxnet.module.Module: Saved optimizer state to my-model-0002.states
;; To load the saved module parameters, call the `load-checkpoint`
;; function:
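
;; The call itself is cut off by the hunk. A minimal sketch, assuming
;; `load-checkpoint` takes the checkpoint prefix, an epoch number, and
;; a flag for restoring optimizer state (the key names are
;; assumptions):

(def new-mod (m/load-checkpoint {:prefix "my-model"
                                 :epoch 1
                                 :load-optimizer-states true}))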
;; To initialize parameters, bind the symbols to construct executors
;; first with the `bind` function. Then, initialize the parameters and
;; auxiliary states by calling the `init-params` function.
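
;; The corresponding code is also outside this hunk; a short sketch
;; mirroring the earlier `bind` call, under the same assumptions about
;; `train-data` and the shape helpers:

(-> new-mod
    (m/bind {:data-shapes (mx-io/provide-data train-data)
             :label-shapes (mx-io/provide-label train-data)})
    (m/init-params))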
;; To resume training from a saved checkpoint, pass the loaded
;; parameters to the `fit` function. This will prevent `fit` from
;; initializing randomly.

;; (First, reset the training data before calling `fit` or you will
;; get an error.)
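
;; The resumed `fit` call is cut off here. A hedged sketch, assuming
;; `mx-io/reset` rewinds the iterator and that `fit-params` accepts a
;; `:begin-epoch` key marking where training resumes (both
;; assumptions):

;; rewind the iterator so `fit` starts from the first batch
(mx-io/reset train-data)

;; resume from the checkpointed parameters rather than a fresh init
(m/fit new-mod {:train-data train-data
                :num-epoch 2
                :fit-params (m/fit-params {:begin-epoch 1})})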