@@ -179,11 +179,11 @@ instance as :meth:`aiohttp.ClientSession.request` ``data`` argument::

     await session.post('http://example.com', data=mpwriter)

-Behind the scenes :meth:`MultipartWriter.serialize` will yield chunks of every
+Behind the scenes :meth:`MultipartWriter.write` will yield chunks of every
 part and if a body part has `Content-Encoding` or `Content-Transfer-Encoding`
 they will be applied on the streaming content.

-Please note, that on :meth:`MultipartWriter.serialize` all the file objects
+Please note that on :meth:`MultipartWriter.write` all the file objects
 will be read until the end and there is no way to repeat a request without
 rewinding their pointers to the start.
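A minimal sketch (plain `io`, not aiohttp itself) of why that rewinding matters: once a file-like payload has been read to EOF, a second pass yields nothing unless you seek back to the start. The `drain` helper is a hypothetical stand-in for the writer reading a part to the end:

```python
import io

payload = io.BytesIO(b"hello world")

def drain(fobj):
    # Stands in for MultipartWriter.write reading the part to EOF.
    return fobj.read()

first = drain(payload)
assert drain(payload) == b""  # already at EOF: a repeated request would send an empty body
payload.seek(0)               # rewind before repeating the request
second = drain(payload)
assert first == second == b"hello world"
```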
@@ -206,9 +206,17 @@ using chunked transfer encoding by default. To overcome this issue, you have
 to serialize a :class:`MultipartWriter` on your own in order to calculate its
 size::

-    body = b''.join(mpwriter.serialize())
+    class Writer:
+        def __init__(self):
+            self.buffer = bytearray()
+
+        async def write(self, data):
+            self.buffer.extend(data)
+
+    writer = Writer()
+    await mpwriter.write(writer)

     await aiohttp.post('http://example.com',
-                       data=body, headers=mpwriter.headers)
+                       data=writer.buffer, headers=mpwriter.headers)
 Sometimes the server response may not be well formed: it may or may not
 contain nested parts. For instance, we request a resource which returns
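The buffering pattern in the second hunk can be exercised standalone. Below is a self-contained sketch: `BufferWriter` mirrors the `Writer` class from the diff (an object with an async ``write(data)`` method), and the `serialize` helper with its sample chunks is illustrative, standing in for ``await mpwriter.write(writer)``:

```python
import asyncio

class BufferWriter:
    """In-memory sink that collects every chunk written to it."""
    def __init__(self):
        self.buffer = bytearray()

    async def write(self, data):
        self.buffer.extend(data)

async def serialize(chunks):
    writer = BufferWriter()
    # In real code this loop is replaced by: await mpwriter.write(writer)
    for chunk in chunks:
        await writer.write(chunk)
    return bytes(writer.buffer)

body = asyncio.run(serialize([b"--b\r\n", b"data\r\n", b"--b--\r\n"]))
# len(body) is now known, so the request can carry an explicit
# Content-Length header instead of using chunked transfer encoding.
```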