PoS add re-delegation #34
There's a potential problem with slashes that are applied after re-delegation: the re-delegated amount would be unbonded and bonded to the new target at the unbonding offset, but unlike tokens that are unbonded for withdrawal, which get slashed when they are withdrawn, the current re-delegation design requires no further action at which such a slash could be applied. When we re-visit re-delegation, we should add a spec for it, since it has to be accepted by the validity predicate, and we should also re-visit the integration spec in https://dev.anoma.net/master/explore/design/ledger/pos-integration.html?highlight=redelegate#delegator-transactions - this is not very clear, because we're re-using
At the point the re-delegated amount is bonded to the new target, can we check whether any slashes have since been discovered that need to be applied? (This may require some sort of queue, but the processing required should be low and thus safe, I think?)
Yeah, we can do that with a queue that is processed by the protocol - it will need to update data in the re-delegation target if and when some slash(es) for the source validator are found.
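A minimal sketch of what such a protocol-processed queue could look like, assuming slashes are expressed as a rate applied to the re-delegated amount (all names here - `RedelegationRecord`, `process_redelegation_queue`, etc. - are hypothetical, not from the actual codebase):

```rust
use std::collections::VecDeque;

// Plain integers for the sketch.
type Epoch = u64;
type Amount = u64;

/// A re-delegation that still has to be checked against slashes
/// discovered for the source validator (hypothetical type).
struct RedelegationRecord {
    source_validator: String,
    dest_validator: String,
    amount: Amount,
    /// Epoch from which the re-delegated stake counts towards the
    /// destination validator.
    start_epoch: Epoch,
    /// Epoch after which no further slashes of the source can affect
    /// this re-delegation (e.g. start_epoch + unbonding length).
    end_epoch: Epoch,
}

/// A slash recorded for a validator at some infraction epoch.
struct Slash {
    validator: String,
    infraction_epoch: Epoch,
    rate_bps: u64, // slash rate in basis points
}

/// Run once per epoch by the protocol: apply newly discovered slashes
/// of the source validator to the destination validator's bond, and
/// drop records that are past their liability window.
fn process_redelegation_queue(
    queue: &mut VecDeque<RedelegationRecord>,
    new_slashes: &[Slash],
    current_epoch: Epoch,
    mut reduce_dest_bond: impl FnMut(&str, Amount),
) {
    queue.retain(|rec| current_epoch <= rec.end_epoch);
    for rec in queue.iter() {
        for slash in new_slashes {
            // The re-delegated tokens are liable for infractions that
            // happened while they were still bonded to the source.
            let liable = slash.validator == rec.source_validator
                && slash.infraction_epoch < rec.start_epoch;
            if liable {
                let slashed = rec.amount * slash.rate_bps / 10_000;
                reduce_dest_bond(&rec.dest_validator, slashed);
            }
        }
    }
}
```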
We're putting this back in scope for Namada as it's an important feature for users: having to wait for the unbonding period before re-delegating is not only cumbersome, it also makes users forfeit the delegation rewards they could have received during the unbonding period. We discussed a potential solution to re-delegation and we'll sketch it out in the spec next.
A high-level sketch of re-delegation functionality:
Wait, why? Can we at least reduce this to iterations proportional to validator pairs (of redelegations) per epoch?
We can queue these calculations and do them when the redelegation completes (at unbonding_len), perhaps? Then we can avoid separate logic for delegations-which-were-once-redelegations.
Hmm, is this a special bond or just any bond from the validator's address to itself? We should probably allow the latter to be re-delegated? Not sure I understand the implied distinction here.
I think we can only reduce it to re-delegations that are not more than
Yeah, that sounds good, agree it should make it simpler.
Self-bond is from the validator's address to itself, but we don't allow validators to delegate to other validators in general. I think another thing we still need to figure out in more detail is the rewards for re-delegation. With auto-bonding, a re-delegation should still receive rewards from the original validator up until the pipeline offset, but we're not processing individual bonds when distributing rewards. The original validator's rewards should probably be "transferred" somehow with the re-delegation to the new validator's bond, so we might need to do a bit more for re-delegations on both the old and the new validator's stake until the pipeline offset.
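For illustration only, a tiny helper showing the intended reward attribution over time, assuming rewards follow whichever validator the stake counts towards at each epoch, with the switch at the pipeline offset (the name and the per-bond view are hypothetical; actual rewards distribution aggregates per validator):

```rust
type Epoch = u64;

/// For a re-delegation requested in `request_epoch`, return which
/// validator the re-delegated stake should earn rewards with in
/// `epoch` (hypothetical helper).
fn rewards_accrue_to<'a>(
    epoch: Epoch,
    request_epoch: Epoch,
    pipeline_len: u64,
    old_validator: &'a str,
    new_validator: &'a str,
) -> &'a str {
    if epoch < request_epoch + pipeline_len {
        // Before the pipeline offset, the stake (and its rewards)
        // still belong with the original validator.
        old_validator
    } else {
        // From the pipeline offset on, they belong with the new one.
        new_validator
    }
}
```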
But can't we treat all redelegations from A to B in epoch e (A, B, e arbitrary) the same? We shouldn't need to iterate over them separately, no? (if not, why would we?)
We can perform processing once per redelegation at the pipeline offset delay to update the voting power / rewards / bond amount, I think.
Ah right, yes that should be fine. We should only need as many iterations as the number of unique re-delegation target validators in each relevant epoch.
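A minimal sketch of that aggregation, assuming re-delegated amounts are accumulated per epoch and per (source, target) validator pair so that later processing only iterates over unique pairs (all names here are hypothetical):

```rust
use std::collections::BTreeMap;

type Epoch = u64;
type Amount = u64;
type Address = String;

/// Re-delegated amounts aggregated per epoch and (source, target)
/// validator pair, so processing at the offset is proportional to the
/// number of unique pairs rather than to individual re-delegations.
#[derive(Default)]
struct RedelegationTotals {
    totals: BTreeMap<(Epoch, Address, Address), Amount>,
}

impl RedelegationTotals {
    /// Record one re-delegation request; repeated (source, target)
    /// pairs in the same epoch just add to the same entry.
    fn record(&mut self, epoch: Epoch, source: Address, target: Address, amount: Amount) {
        *self.totals.entry((epoch, source, target)).or_insert(0) += amount;
    }

    /// The aggregated entries for one epoch, e.g. to process when that
    /// epoch's re-delegations reach the relevant offset.
    /// (A keyed range query could avoid the full scan; kept simple here.)
    fn entries_for(&self, epoch: Epoch) -> Vec<(&Address, &Address, Amount)> {
        self.totals
            .iter()
            .filter(|((e, _, _), _)| *e == epoch)
            .map(|((_, src, tgt), amt)| (src, tgt, *amt))
            .collect()
    }
}
```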
When a re-delegation is requested in epoch However, we actually need to apply this in advance at
That all sounds reasonable to me!
Add a transaction for re-delegation (using a version with a longer delay, which has to wait for the unbonding epoch, but is simpler to implement than the faster re-delegation).
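A rough sketch of what this slower variant amounts to, against a hypothetical interface (`PosActions`, `start_slow_redelegation`, `complete_slow_redelegation` are placeholders, not the actual implementation): the delegator unbonds now and can only bond to the new validator once the unbonding epoch is reached, which is exactly the delay (and forfeited rewards) that the faster design discussed above avoids.

```rust
type Epoch = u64;
type Amount = u64;
type Address = String;

/// Minimal, hypothetical view of the PoS actions this flow needs
/// (not the actual Namada API).
trait PosActions {
    fn current_epoch(&self) -> Epoch;
    fn unbonding_len(&self) -> u64;
    /// Unbond `amount` of `delegator`'s stake from `source`.
    fn unbond(&mut self, delegator: &Address, source: &Address, amount: Amount);
    /// Bond `amount` of `delegator`'s tokens to `target`.
    fn bond(&mut self, delegator: &Address, target: &Address, amount: Amount);
}

/// Step 1: unbond from the source validator now; returns the epoch
/// from which step 2 can be submitted.
fn start_slow_redelegation<P: PosActions>(
    pos: &mut P,
    delegator: &Address,
    source: &Address,
    amount: Amount,
) -> Epoch {
    let request_epoch = pos.current_epoch();
    pos.unbond(delegator, source, amount);
    request_epoch + pos.unbonding_len()
}

/// Step 2: once the unbonding epoch has been reached, bond the same
/// amount to the new target validator.
fn complete_slow_redelegation<P: PosActions>(
    pos: &mut P,
    delegator: &Address,
    target: &Address,
    amount: Amount,
    rebond_epoch: Epoch,
) {
    assert!(
        pos.current_epoch() >= rebond_epoch,
        "unbonding period not over yet"
    );
    pos.bond(delegator, target, amount);
}
```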
impl depends on #124