@@ -39,7 +39,7 @@ runtime (queried at current best imported block).
Since the blockchain is not always linear, forks need to be correctly handled by
the transaction pool as well. In case of a fork, some blocks are *retracted*
from the canonical chain, and some other blocks get *enacted* on top of some
- common ancestor. The transactions from retrated blocks could simply be discarded,
+ common ancestor. The transactions from retracted blocks could simply be discarded,
but it's desirable to make sure they are still considered for inclusion in case they
are deemed valid by the runtime state at the best, recently enacted block (the fork
the chain re-organized to).
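
To make the fork handling concrete, here is a minimal sketch of a re-org handler. The `Block`, `Transaction`, and `Pool` types and the `resubmit`/`handle_reorg` names are illustrative assumptions, not the actual API:

```rust
// Hypothetical sketch of re-org handling; `Block`, `Transaction`, and
// `Pool` are illustrative stand-ins, not the actual types.
use std::collections::HashSet;

struct Transaction {
    hash: [u8; 32],
    // ...encoded extrinsic, etc.
}

struct Block {
    transactions: Vec<Transaction>,
}

struct Pool;

impl Pool {
    fn resubmit(&mut self, _tx: Transaction) {
        // Re-validate against the new best block and re-import.
    }

    /// On a re-org, transactions from retracted blocks are not discarded:
    /// unless an enacted block already contains them, they are resubmitted
    /// so the runtime at the new best block can decide their validity.
    fn handle_reorg(&mut self, retracted: Vec<Block>, enacted: Vec<Block>) {
        let enacted_hashes: HashSet<[u8; 32]> = enacted
            .iter()
            .flat_map(|b| b.transactions.iter().map(|t| t.hash))
            .collect();

        for block in retracted {
            for tx in block.transactions {
                if !enacted_hashes.contains(&tx.hash) {
                    self.resubmit(tx);
                }
            }
        }
    }
}
```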
@@ -49,7 +49,7 @@ pool, it's broadcasting status, block inclusion, finality, etc.

## Transaction Validity details

- Information retrieved from the the runtime are encapsulated in `TransactionValidity`
+ Information retrieved from the runtime is encapsulated in the `TransactionValidity`
type.

```rust
@@ -147,7 +147,7 @@ choosing the ones with highest priority to include to the next block first.

- `priority` of transaction may change over time
- on-chain conditions may affect `priority`
- - Given two transactions with overlapping `provides` tags, the one with higher
+ - given two transactions with overlapping `provides` tags, the one with higher
`priority` should be preferred. However, we can also look at the total priority
of a subtree rooted at that transaction and compare that instead (i.e. even though
the transaction itself has lower `priority` it "unlocks" other high priority
transactions); a sketch of this comparison follows below.
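
A minimal sketch of that subtree comparison. The `TxNode` type and the `subtree_priority`/`should_replace` functions are illustrative assumptions, not the actual implementation:

```rust
// Hypothetical sketch: comparing a single transaction's priority against
// the cumulative priority of the subtree it would displace.
struct TxNode {
    priority: u64,
    // Transactions whose `requires` tags are satisfied by this one's `provides`.
    unlocks: Vec<TxNode>,
}

/// Total priority of a transaction plus everything it (transitively) unlocks.
fn subtree_priority(node: &TxNode) -> u64 {
    node.priority + node.unlocks.iter().map(subtree_priority).sum::<u64>()
}

/// Prefer the incoming transaction only if it beats the whole subtree
/// rooted at the transaction it conflicts with (overlapping `provides`).
fn should_replace(incoming: &TxNode, existing: &TxNode) -> bool {
    subtree_priority(incoming) > subtree_priority(existing)
}
```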
@@ -163,7 +163,7 @@ the transaction is valid all that time though.

- `longevity` of transaction may change over time
- on-chain conditions may affect `longevity`
- - After `longevity` lapses the transaction may still be valid
+ - after `longevity` lapses, the transaction may still be valid (see the sketch below)
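
A tiny sketch of how a pool might treat `longevity` as a staleness bound rather than a hard expiry. The function and its parameters are assumptions for illustration:

```rust
// Hypothetical sketch: a cached validity result is only trusted for
// `longevity` blocks after the block it was verified at.
fn is_stale(verified_at: u64, longevity: u64, best_block: u64) -> bool {
    // Past this point the transaction is not necessarily invalid;
    // it merely has to be re-verified against the current runtime state.
    best_block.saturating_sub(verified_at) > longevity
}
```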

### `propagate`
@@ -231,15 +231,16 @@ to instead of gossiping everyting have other peers request transactions they
are interested in.

Since the pool is expected to store more transactions than what can fit
- to a single block. Validating the entire pool on every block might not be
- feasible, so the actual implementation might need to take some shortcuts.
+ in a single block, validating the entire pool on every block might not be
+ feasible. This means that the actual implementation might need to take some
+ shortcuts.
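
One possible shortcut, sketched below under assumed names (`Pool`, `revalidate`, and `BATCH_SIZE` are illustrative), is to re-check only a bounded batch of the least recently verified transactions per imported block instead of the whole pool:

```rust
// Hypothetical sketch of a revalidation shortcut: on each imported block,
// only a bounded batch of transactions is re-checked against the runtime.
use std::collections::VecDeque;

const BATCH_SIZE: usize = 20; // illustrative bound, not a real constant

struct Tx;

struct Pool {
    // Transactions ordered by how long ago they were last verified.
    revalidation_queue: VecDeque<Tx>,
}

impl Pool {
    fn revalidate(&mut self, _tx: &Tx) -> bool {
        // A real implementation would call into the runtime here.
        true
    }

    /// Called on every block import.
    fn on_block_import(&mut self) {
        for _ in 0..BATCH_SIZE {
            let Some(tx) = self.revalidation_queue.pop_front() else { break };
            if self.revalidate(&tx) {
                // Still valid: goes to the back of the queue to be
                // re-checked again later.
                self.revalidation_queue.push_back(tx);
            }
            // Invalid transactions are simply dropped.
        }
    }
}
```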

## Suggestions & caveats

- 1. The validity of transaction should not change significantly from block to
+ 1. The validity of a transaction should not change significantly from block to
block. I.e. changes in validity should happen predictably, e.g. `longevity`
decrements by 1, `priority` stays the same, `requires` changes if a transaction
- that provided a tag was included in block. `provides` does not change, etc.
+ that provided a tag was included in block, `provides` does not change, etc.

1. That means we don't have to revalidate every transaction after every block
import, but we need to take care of removing potentially stale transactions.
@@ -253,9 +254,9 @@ feasible, so the actual implementation might need to take some shortcuts.
1. In the past there were many issues found when running small networks with a
lot of re-orgs. Make sure that transactions are never lost.

- 1. UTXO model is quite challenging. The transaction becomes valid right after
- it's included in block, however it is waiting for exactly the same inputs to
- be spent, so it will never really be included again.
+ 1. The UTXO model is quite challenging. A transaction becomes valid right after
+ it's included in a block; however, it is waiting for exactly the same inputs
+ to be spent, so it will never really be included again.

1. Note that in a non-ideal implementation the state of the pool will most
likely always be a bit off, i.e. some transactions might be still in the pool,
@@ -277,25 +278,25 @@ feasible, so the actual implementation might need to take some shortcuts.

1. We periodically validate all transactions in the pool in batches.

- 1. To minimize runtime calls, we introduce batch-verify call. Note it should reset
- the state (overlay) after every verification.
+ 1. To minimize runtime calls, we introduce the batch-verify call. Note it should
+ reset the state (overlay) after every verification.

1. Consider leveraging finality. Maybe we could verify against the latest finalised
block instead. With this, the pool in different nodes can be more similar,
which might help with gossiping (see set reconciliation). Note that finality
is not a strict requirement for a Substrate chain to have though.

1. Perhaps we could avoid maintaining ready/future queues as currently, but
- rather if transaction doesn't have all requirements satisfied by existing
+ rather if a transaction doesn't have all requirements satisfied by existing
transactions, we attempt to re-import it in the future.

1. Instead of maintaining a full pool with total ordering, we attempt to maintain
a set of the next (couple of) blocks. We could introduce a batch-validate runtime
- api method that pretty much attempts to simulate actual block inclusion of
+ api method that pretty much attempts to simulate actual block inclusion of
a set of such transactions (without necessarily fully running/dispatching
them). Importing a transaction would consist of figuring out which next block
- this transaction have a chance to be included in and then attempting to
- either push it back or replace some of existing transactions.
+ this transaction has a chance to be included in and then attempting to
+ either push it back or replace some existing transactions.

1. Perhaps we could use some immutable graph structure to easily add/remove
transactions. We need some traversal method that takes priority and
@@ -320,7 +321,7 @@ The pool consists of basically two independent parts:

The pool is split into `ready` pool and `future` pool. The latter contains
transactions that don't have their requirements satisfied, and the former holds
transactions that can be used to build a graph of dependencies. Note that the
- graph is build ad-hoc during the traversal process (getting the `ready`
+ graph is built ad-hoc during the traversal process (using the `ready`
iterator). This makes the importing process cheaper (we don't need to find the
exact position in the queue or graph), but the traversal process slower
(logarithmic). However, most of the time we will only need the beginning of the
@@ -342,26 +343,26 @@ to limit number of runtime verification calls.
Each time a transaction is imported, we first verify its validity and later
find if the tags it `requires` can be satisfied by transactions already in the
`ready` pool. In case the transaction is imported to the `ready` pool we
- additionally *promote* transactions from `future` pool if the transaction
+ additionally *promote* transactions from the `future` pool if the transaction
happened to fulfill their requirements.
- Note we need to cater for cases where transaction might replace a already
+ Note we need to cater for cases where a transaction might replace an already
existing transaction in the pool. In such a case we check the entire sub-tree of
transactions that we are about to replace and compare their cumulative priority
to determine which subtree to keep.
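
A rough sketch of that import-and-promote flow. The `Pool`/`ValidTx` types, the `Tag` alias, and the method names are assumptions for illustration, not the actual implementation:

```rust
// Hypothetical sketch of the import flow: a transaction goes to `ready`
// if all its `requires` tags are already provided, otherwise to `future`.
use std::collections::HashSet;

type Tag = Vec<u8>;

struct ValidTx {
    requires: Vec<Tag>,
    provides: Vec<Tag>,
}

struct Pool {
    ready: Vec<ValidTx>,
    future: Vec<ValidTx>,
    // Tags provided by everything currently in `ready`.
    provided: HashSet<Tag>,
}

impl Pool {
    fn import(&mut self, tx: ValidTx) {
        if tx.requires.iter().all(|t| self.provided.contains(t)) {
            for tag in &tx.provides {
                self.provided.insert(tag.clone());
            }
            self.ready.push(tx);
            self.promote_future();
        } else {
            self.future.push(tx);
        }
    }

    /// Move over `future` transactions whose requirements became satisfied.
    fn promote_future(&mut self) {
        loop {
            let Some(pos) = self
                .future
                .iter()
                .position(|tx| tx.requires.iter().all(|t| self.provided.contains(t)))
            else {
                return;
            };
            let tx = self.future.remove(pos);
            for tag in &tx.provides {
                self.provided.insert(tag.clone());
            }
            self.ready.push(tx);
        }
    }
}
```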

- After a block is imported we kick-off pruning procedure. We first attempt to
- figure out what tags were satisfied by transaction in that block. For each block
- transaction we either call into runtime to get it's `ValidTransaction` object,
+ After a block is imported we kick off the pruning procedure. We first attempt to
+ figure out what tags were satisfied by the transactions in that block. For each block
+ transaction we either call into the runtime to get its `ValidTransaction` object,
or we check the pool if that transaction is already known, to spare the runtime
- call. From this we gather full set of `provides` tags and perform pruning of
- `ready` pool based on that. Also we promote all transactions from `future` that
- have their tags satisfied.
+ call. From this we gather the full set of `provides` tags and perform pruning of
+ the `ready` pool based on that. Also, we promote all transactions from `future`
+ that have their tags satisfied.
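
The pruning step could look roughly like this; `prune_by_tags` and the surrounding types are assumed names for the sketch:

```rust
// Hypothetical sketch of pruning after block import: every transaction in
// `ready` that provides a tag satisfied by the new block is removed.
use std::collections::HashSet;

type Tag = Vec<u8>;

struct ValidTx {
    provides: Vec<Tag>,
}

struct Pool {
    ready: Vec<ValidTx>,
}

impl Pool {
    /// `satisfied` is the union of `provides` tags of all transactions
    /// in the imported block (fetched from the runtime or the pool cache).
    fn prune_by_tags(&mut self, satisfied: &HashSet<Tag>) {
        self.ready
            .retain(|tx| !tx.provides.iter().any(|t| satisfied.contains(t)));
        // A real implementation would also promote `future` transactions
        // whose `requires` tags are now satisfied by the block.
    }
}
```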

In case we remove transactions that we are unsure were already included
- in current block or some block in the past, it is being added to revalidation
- queue and attempted to be re-imported by the background task in the future.
+ in the current block or some block in the past, they get added to the revalidation
+ queue and the background task attempts to re-import them in the future.

Runtime calls to verify transactions are performed from a separate (limited)
- thread pool to avoid interferring too much with other subsystems of the node. We
+ thread pool to avoid interfering too much with other subsystems of the node. We
definitely don't want to have all cores validating network transactions, because
all of these transactions need to be considered untrusted (potentially DoS).
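
A minimal sketch of such a bounded verification pool, using the standard shared-receiver worker pattern; the names, the channel-based design, and `VERIFIER_THREADS` are illustrative, not the node's actual code:

```rust
// Hypothetical sketch: transaction verification is confined to a small,
// fixed number of worker threads so that untrusted network input cannot
// saturate all cores of the node.
use std::sync::{mpsc, Arc, Mutex};
use std::thread;

const VERIFIER_THREADS: usize = 2; // illustrative bound

struct Tx(Vec<u8>);

fn verify(tx: &Tx) -> bool {
    // A real node would call into the runtime here.
    !tx.0.is_empty()
}

fn main() {
    let (job_tx, job_rx) = mpsc::channel::<Tx>();
    let (res_tx, res_rx) = mpsc::channel::<bool>();
    // Standard shared-receiver pattern for a fixed-size worker pool.
    let job_rx = Arc::new(Mutex::new(job_rx));

    for _ in 0..VERIFIER_THREADS {
        let rx = Arc::clone(&job_rx);
        let out = res_tx.clone();
        thread::spawn(move || loop {
            // The lock guard is a temporary, released once `recv` returns.
            let job = rx.lock().unwrap().recv();
            match job {
                Ok(tx) => {
                    let _ = out.send(verify(&tx));
                }
                Err(_) => break, // channel closed, shut the worker down
            }
        });
    }

    // Untrusted transactions from the network enter via the channel.
    job_tx.send(Tx(vec![1, 2, 3])).unwrap();
    println!("valid: {}", res_rx.recv().unwrap());
}
```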