Re-work rent to not require in-order index and pubkey #18233
cc @ryoqun

The other idea, meant to prevent grinding attacks, is for rent collection to keep a memory of the last pubkey it collected rent from, and then collect rent from the N pubkeys following that pubkey. This implies not having a fixed schedule, but the runtime can still start reading accounts into memory preemptively, on an as-needed basis and at a stable rate. It would also avoid having to reshuffle rent collection each epoch. Why is this good enough?
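To make the cursor idea concrete, here is a minimal sketch, assuming accounts live in a simple ordered map from pubkey to lamports; `RentCursor`, `collect_next`, and the one-lamport placeholder charge are all hypothetical, not Solana APIs:

```rust
use std::collections::BTreeMap;
use std::ops::Bound;

type Pubkey = [u8; 32];
type Lamports = u64;

/// Remembers where the previous collection pass stopped.
struct RentCursor {
    last_collected: Option<Pubkey>,
}

impl RentCursor {
    /// Collect rent from up to `n` pubkeys that follow the cursor in key
    /// order, wrapping to the start of the key space when the end is hit.
    fn collect_next(&mut self, accounts: &mut BTreeMap<Pubkey, Lamports>, n: usize) {
        let mut keys: Vec<Pubkey> = match self.last_collected {
            Some(last) => accounts
                .range((Bound::Excluded(last), Bound::Unbounded))
                .take(n)
                .map(|(k, _)| *k)
                .collect(),
            None => accounts.keys().take(n).copied().collect(),
        };
        let missing = n.saturating_sub(keys.len());
        if missing > 0 {
            // Wrap around; with fewer than `n` accounts this can revisit
            // keys, which a real implementation would guard against.
            keys.extend(accounts.keys().take(missing).copied());
        }
        for key in &keys {
            if let Some(lamports) = accounts.get_mut(key) {
                // Placeholder: charge one lamport of rent per pass.
                *lamports = lamports.saturating_sub(1);
            }
        }
        self.last_collected = keys.last().copied();
    }
}
```

The forward-only cursor is what gives the stable prefetch rate mentioned above: the runtime always knows the next N keys it will touch, and no attacker-chosen key placement can force a particular account to be collected at a chosen time.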
Problem
Today, rent collection runs at the end of a slot as a blocking operation over a set of keys that share a pubkey prefix, potentially looking up account state from disk and incurring page faults.
The in-order requirement also limits the index design space to structures that can iterate elements in order efficiently, which may slow the index down for other operations such as writing new account states or performing lookups.
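For context, a simplified sketch of the prefix-range scheme described above, assuming the 64-bit prefix space is divided evenly across the slots of an epoch; the actual partitioning in solana-runtime differs in detail:

```rust
/// Return the inclusive range of 8-byte pubkey prefixes that the slot at
/// `slot_index` (0-based within the epoch) is responsible for scanning.
fn prefix_range_for_slot(slot_index: u64, slots_per_epoch: u64) -> (u64, u64) {
    assert!(slot_index < slots_per_epoch);
    let width = u64::MAX / slots_per_epoch;
    let start = slot_index * width;
    let end = if slot_index + 1 == slots_per_epoch {
        u64::MAX // the last slot absorbs the rounding remainder
    } else {
        start + width - 1
    };
    (start, end)
}
```

Every account whose key falls in a slot's range must be visited in key order, which is exactly the in-order iteration requirement this issue proposes to drop.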
Proposed Solution
Find a way to collect rent from a random or pseudo-random set of keys: for example, an RNG seeded with the block hash could pick N pubkeys per slot, tuned so that on average every account is rent-collected once per epoch, though without the per-epoch guarantee that exists today.
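A minimal sketch of that selection, assuming the `rand` 0.8 and `rand_chacha` crates and using a 64-bit pubkey prefix as the unit of selection; the real seeding scheme would need scrutiny for grinding resistance, since the block hash is partly producer-influenced:

```rust
use rand::{Rng, SeedableRng};
use rand_chacha::ChaCha8Rng;

/// Deterministically derive the N pubkey prefixes to rent-collect this
/// slot. Every validator computes the same sample from the same hash.
fn sample_rent_prefixes(blockhash: [u8; 32], n: usize) -> Vec<u64> {
    let mut rng = ChaCha8Rng::from_seed(blockhash);
    (0..n).map(|_| rng.gen::<u64>()).collect()
}
```

With `n` sized to roughly the account count divided by slots per epoch, each account is sampled about once per epoch in expectation, but an unlucky account can go longer without being charged, which is the guarantee the proposal is willing to relax.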
A somewhat related idea is to have a special rent update which does not write the whole account state again, but instead references a previous update together with a new lamports value to indicate that the balance changed. This could keep the storage and write cost of rent collection low.
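A hypothetical shape for such a rent-only update record (all type names invented for illustration):

```rust
type Slot = u64;

/// A full account write, as stored today (fields elided to the essentials).
struct FullAccountUpdate {
    lamports: u64,
    data: Vec<u8>,
}

/// A rent-only delta: references the last full update and records just the
/// new balance, so the account data itself is never rewritten for rent.
struct RentOnlyUpdate {
    previous_update_slot: Slot,
    new_lamports: u64,
}

/// What the accounts store would persist per update.
enum StoredUpdate {
    Full(FullAccountUpdate),
    RentOnly(RentOnlyUpdate),
}
```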