Conversation
contracts/Staking.sol
Outdated
```solidity
v6_randomSamplingStorage = V6_RandomSamplingStorage(hub.getContractAddress("V6_RandomSamplingStorage"));
chronos = Chronos(hub.getContractAddress("Chronos"));
epochStorage = EpochStorage(hub.getContractAddress("EpochStorageV8"));
epochStorageV6 = EpochStorage(hub.getContractAddress("EpochStorageV6"));
```
Are we sure this exists in the current hub on mainnet? Can you check?
No, we need to check.
contracts/Staking.sol
Outdated
```solidity
bytes32 delegatorKey = _getDelegatorKey(msg.sender);
// settle all pending score changes for the node's delegator
_prepareForStakeChange(chronos.getCurrentEpoch(), identityId, delegatorKey);
v6_claim.prepareForStakeChangeV6External(chronos.getCurrentEpoch(), identityId, delegatorKey);
```
This makes sense. We should also call _validateDelegatorEpochClaimsV6 in each of these functions to ensure all possible rewards have been claimed.
contracts/Staking.sol
Outdated
```solidity
v6_claim.prepareForStakeChangeV6External(currentEpoch, fromIdentityId, delegatorKey);
// settle all pending score changes for the node's delegator
_prepareForStakeChange(currentEpoch, toIdentityId, delegatorKey);
v6_claim.prepareForStakeChangeV6External(currentEpoch, toIdentityId, delegatorKey);
```
Bug: V6 Logic Applied Incorrectly Across Nodes
The Staking contract unconditionally calls v6_claim.prepareForStakeChangeV6External() in addStake, transferStake, startWithdrawal, cancelWithdrawal, addOperatorFeeToStake, and claimDelegatorRewards. This V6-specific logic should only apply to V6 nodes, and its unconditional execution can lead to incorrect data operations for non-V6 nodes. This contrasts with V6_Claim.claimDelegatorRewardsV6, which correctly includes the V6 node validation.
Locations (6)
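A minimal sketch of the guard this comment asks for, gating the V6-specific settlement behind a legacy-node check. The helper name `_isV6Node` is illustrative; the cutoff comparison mirrors the `getOperatorFeeEffectiveDateByIndex(identityId, 0) < v6NodeCutoffTs()` pattern cited later in this review, and the exact getter names may differ from the actual contracts:

```solidity
// Sketch only: wrap every stake-change settlement so the V6 path runs
// only for nodes that existed in the V6 system (names are assumptions).
function _isV6Node(uint72 identityId) internal view returns (bool) {
    // Nodes whose first operator-fee effective date predates the V6 cutoff
    // are treated as legacy V6 nodes.
    return profileStorage.getOperatorFeeEffectiveDateByIndex(identityId, 0) <
        v6_claim.v6NodeCutoffTs();
}

function _settleScores(uint256 epoch, uint72 identityId, bytes32 delegatorKey) internal {
    // Main-store settlement always runs.
    _prepareForStakeChange(epoch, identityId, delegatorKey);
    // V6 settlement only for legacy V6 nodes.
    if (_isV6Node(identityId)) {
        v6_claim.prepareForStakeChangeV6External(epoch, identityId, delegatorKey);
    }
}
```

With such a helper, addStake, transferStake, startWithdrawal, cancelWithdrawal, addOperatorFeeToStake, and claimDelegatorRewards could all call `_settleScores` instead of invoking the V6 path unconditionally.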
contracts/V6_Claim.sol
Outdated
```solidity
) external profileExists(identityId) {
    stakingMain.claimDelegatorRewards(identityId, epoch, delegator);
    claimDelegatorRewardsV6(identityId, epoch, delegator);
}
```
Bug: Missing Error Handling & Identity Validation
The claimDelegatorRewardsCombined function has two issues:
- It lacks a try-catch block, so claimDelegatorRewardsV6() is never executed if stakingMain.claimDelegatorRewards() reverts, contradicting the function's comment.
- It does not explicitly validate that the identityId is a V6 node at its entry point, despite being designed for V6-specific operations.
Locations (1)
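One way both issues could be addressed, sketched under the assumption that `stakingMain.claimDelegatorRewards` is an external call (Solidity's try/catch only applies to external calls) and that a cutoff check like the one used elsewhere in this review is available; all names beyond those in the diff are illustrative:

```solidity
// Sketch only: combined claim that validates the node is a V6 node up front
// and tolerates a revert from the main store (e.g. nothing left to claim)
// instead of blocking the V6 claim.
function claimDelegatorRewardsCombined(
    uint72 identityId,
    uint256 epoch,
    address delegator
) external profileExists(identityId) {
    // Entry-point validation: only legacy V6 nodes qualify (assumed getter names).
    require(
        profileStorage.getOperatorFeeEffectiveDateByIndex(identityId, 0) < v6NodeCutoffTs,
        "Not a V6 node"
    );
    try stakingMain.claimDelegatorRewards(identityId, epoch, delegator) {
        // main-store claim succeeded
    } catch {
        // swallow the revert so the V6 claim below can still run
    }
    claimDelegatorRewardsV6(identityId, epoch, delegator);
}
```

Whether swallowing *all* main-store reverts is safe depends on which revert reasons are benign; a stricter variant could match only the expected "already claimed / nothing to claim" errors.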
```solidity
uint96 totalNodeStakeAfter = totalNodeStakeBefore + addedStake;
// TODO: Maybe it's best to create a withdrawal request if addedStake pushes the node stake over 2m limit. Otherwise delegators might be never able to claim
require(totalNodeStakeAfter <= parametersStorage.maximumStake(), "Max stake exceeded");
```
Bug: Migration Rewards Claiming Blocked by Stake Limits
The increaseDelegatorStakeBase function reverts if claiming pre-filled migration rewards would cause the node's total stake to exceed the maximumStake limit (2M). This prevents delegators from claiming their legitimate rewards, potentially locking them permanently. A TODO comment in the code acknowledges this issue, suggesting a withdrawal request mechanism as a more appropriate solution.
Locations (1)
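The TODO's withdrawal-request idea could look roughly like this: stake rewards up to the cap and route any excess into a withdrawal request instead of reverting. `_createWithdrawalRequest` is a hypothetical helper, not an existing function in the contracts:

```solidity
// Sketch only: replace the hard revert with a partial-stake-plus-withdrawal
// fallback so migration rewards are never permanently locked.
uint96 maxStake = parametersStorage.maximumStake();
if (totalNodeStakeBefore + addedStake > maxStake) {
    // Stake only what fits under the cap.
    uint96 stakeable = maxStake > totalNodeStakeBefore ? maxStake - totalNodeStakeBefore : 0;
    uint96 excess = addedStake - stakeable;
    addedStake = stakeable;
    // Excess rewards become withdrawable instead of being unclaimable.
    _createWithdrawalRequest(identityId, delegator, excess);
}
```

This preserves the maximumStake invariant while keeping the delegator's full reward reachable.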
contracts/Staking.sol
Outdated
```solidity
    lastClaimedV6 = v812Epoch - 1;
}

require(lastClaimed <= lastClaimedV6 + 1, "DelegatorsInfo advanced too far compared to V6 store");
```
Bug: Staking Rewards Claim Errors
A compilation error exists in Staking.sol: the claimDelegatorRewards function references an undefined variable lastClaimedMain. Additionally, the _validateDelegatorEpochClaims function applies V6-specific validation to all nodes, including those created after the V6 cutoff, which can cause erroneous reverts for non-V6 nodes during stake operations. Finally, in claimDelegatorRewards, lastClaimed and lastClaimedV6 are initialized before the cross-store validation runs, so the check may validate values that were just (re)initialized rather than the previously stored state.
Locations (1)
contracts/Staking.sol
Outdated
```solidity
bytes32 delegatorKey = _getDelegatorKey(msg.sender);
// settle all pending score changes for the node's delegator
_prepareForStakeChange(chronos.getCurrentEpoch(), identityId, delegatorKey);
v6_claim.prepareForStakeChangeV6External(chronos.getCurrentEpoch(), identityId, delegatorKey);
```
Bug: V6 Logic Called Incorrectly for All Nodes
The Staking contract unconditionally calls v6_claim.prepareForStakeChangeV6External() for all nodes in multiple stake-changing functions. This V6-specific function is executed without checking if the identityId is actually a V6 node, leading to incorrect V6 storage entries and unnecessary computation. A check based on the node's creation timestamp is required before calling V6-specific logic.
Locations (1)
```solidity
            stakingMain.claimDelegatorRewards(identityId, epochs[i], delegators[j]);
        }
    }
}
```
Bug: Batch Claim Inconsistency and Order Issue
The batchClaimDelegatorRewardsCombined function is inconsistent with claimDelegatorRewardsCombined. It unconditionally calls claimDelegatorRewardsV6 for all nodes, missing the V6_NODE_CUTOFF_TS check, which can lead to V6 claim logic being attempted on nodes created after the V6 cutoff. Furthermore, its order of operations is reversed, calling V6 claim before main claim, potentially causing incorrect behavior and failed claims.
Locations (1)
```solidity
        DelegatorsInfo(address(v6_delegatorsInfo)),
        RandomSamplingStorage(address(v6_randomSamplingStorage))
    );
}
```
Bug: V6 Logic Applied Incorrectly to All Nodes
The Staking contract unconditionally applies V6-specific logic to all nodes. This includes calling claimV6Helper.prepareForStakeChangeV6External() during various stake operations (e.g., addStake, transferStake, createWithdrawalRequest, claimDelegatorRewards). Additionally, the _validateDelegatorEpochClaims function performs V6 validation by checking both main and V6 delegator stores for all nodes. This behavior is incorrect; V6 logic should only apply to nodes that existed in the V6 system (i.e., created before a specific cutoff timestamp), as indicated by PR discussions and patterns in other contracts like StakingManager.sol. Applying V6 logic to non-V6 nodes results in unnecessary state changes, inefficient operations, and incorrect validation constraints.
```solidity
}

require(lastClaimed <= lastClaimedV6 + 1, "DelegatorsInfo advanced too far compared to V6 store");
```
Bug
V6-specific logic is unconditionally applied to all nodes, instead of only legacy V6 nodes. This includes calls to claimV6Helper.prepareForStakeChangeV6External() in V8_1_1_Rewards_Period.sol's increaseDelegatorStakeBase and several stake-related functions in Staking.sol (e.g., stake, restake, withdrawStake, restakeWithdrawal, restakeOperatorFee, claimDelegatorRewards). Additionally, V6 delegator info checks within Staking.sol's _validateDelegatorEpochClaims are also applied universally. This V6 logic should be guarded by a check for legacy V6 nodes (profileStorage.getOperatorFeeEffectiveDateByIndex(identityId, 0) < claimV6Helper.v6NodeCutoffTs()), as its universal application can lead to incorrect V6 state updates or unnecessary interactions with V6 storage for non-V6 nodes.
contracts/V6_Claim.sol
Outdated
```solidity
    v6_delegatorsInfo.setLastClaimedEpoch(identityId, delegator, v812ReleaseEpoch - 1);
    lastClaimed = v812ReleaseEpoch - 1;
}
require(lastClaimedMain == lastClaimed + 1, "V6 store not one epoch behind main store");
```
Bug: Claim Validation Timing Issue
In V6_Claim.sol::claimDelegatorRewardsV6, the V6 delegator's lastClaimedEpoch is set before its consistency with the main store is validated. This allows the subsequent require statement to check against a value that may have just been modified, rendering the validation ineffective. If the check fails, the V6 store is left in an inconsistent state due to the prior modification.
Locations (1)
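A check-before-write ordering could fix this: compute the value the validation should see, validate it, and only then touch the V6 store. This sketch assumes an unset lastClaimedEpoch reads as 0 and that the getter is named `getLastClaimedEpoch`; both are assumptions about the storage contract:

```solidity
// Sketch only: perform the cross-store consistency check before mutating
// the V6 store, so a failed check leaves state untouched.
uint256 lastClaimed = v6_delegatorsInfo.getLastClaimedEpoch(identityId, delegator);
// The value the original code would have compared against after initialization.
uint256 expected = lastClaimed == 0 ? v812ReleaseEpoch - 1 : lastClaimed;
require(lastClaimedMain == expected + 1, "V6 store not one epoch behind main store");
if (lastClaimed == 0) {
    // Only initialize after validation has passed.
    v6_delegatorsInfo.setLastClaimedEpoch(identityId, delegator, v812ReleaseEpoch - 1);
}
```

With this ordering, a revert from the require leaves the V6 store exactly as it was, instead of half-updated.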
```solidity
        );
    }
}
```
Bug
The Profile.sol and Staking.sol contracts incorrectly apply V6-specific logic unconditionally to all nodes, instead of only to V6 legacy nodes.
- Profile.sol::setOperatorFee: this function includes a check requiring v6_delegatorsInfo to have claimed operator fees for the previous epoch. The check is applied to all nodes, even those that were never V6 nodes, preventing legitimate operator fee updates.
- Staking.sol stake-related functions: several functions (e.g., addStake, moveStake, withdrawStake, claimDelegatorRewards) unconditionally call claimV6Helper.prepareForStakeChangeV6External().
Both issues stem from missing conditional checks (e.g., using v6NodeCutoffTs) to determine if a node is a V6 legacy node, leading to unexpected reverts and hindering normal contract operations for non-V6 nodes.
Locations (6)
```solidity
            stakingMain.claimDelegatorRewards(identityId, epochs[i], delegators[j]);
        }
    }
}
```
Bug: Batch Claim Fails on Mixed Node Versions
The batchClaimDelegatorRewardsCombined function unconditionally calls claimDelegatorRewardsV6 for all nodes. This is inconsistent with claimDelegatorRewardsCombined, which only calls V6 logic for nodes created before a specific cutoff timestamp. As claimDelegatorRewardsV6 reverts for non-V6 nodes, batchClaimDelegatorRewardsCombined will fail when processing any non-V6 nodes, making it unusable for mixed node types.
Locations (1)
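A consistent batch loop would mirror the single-claim path: main claim first, then the V6 claim, and the V6 claim only for nodes created before the cutoff. The parameter shapes and cutoff getters here are assumptions based on the snippets in this review:

```solidity
// Sketch only: batch claim aligned with claimDelegatorRewardsCombined —
// correct order (main before V6) and per-node cutoff gating, so mixed
// batches of V6 and non-V6 nodes no longer revert.
bool isV6Node =
    profileStorage.getOperatorFeeEffectiveDateByIndex(identityId, 0) < v6NodeCutoffTs;
for (uint256 i; i < epochs.length; i++) {
    for (uint256 j; j < delegators.length; j++) {
        stakingMain.claimDelegatorRewards(identityId, epochs[i], delegators[j]);
        if (isV6Node) {
            claimDelegatorRewardsV6(identityId, epochs[i], delegators[j]);
        }
    }
}
```

Hoisting the cutoff check out of the loop also avoids repeating the storage read per iteration.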
```solidity
uint96 totalNodeStakeAfter = totalNodeStakeBefore + addedStake;
// TODO: Maybe it's best to create a withdrawal request if addedStake pushes the node stake over 2m limit. Otherwise delegators might be never able to claim
require(totalNodeStakeAfter <= parametersStorage.maximumStake(), "Max stake exceeded");
```
Bug: Delegator Rewards Blocked by Stake Limits
Delegators are permanently unable to claim their migration rewards via the increaseDelegatorStakeBase function if adding the reward amount would cause the node's total stake to exceed its maximum limit. The function reverts with "Max stake exceeded", making these rewards permanently inaccessible. This issue is acknowledged by a TODO comment in the code.
Locations (1)
contracts/V8_1_1_Rewards_Period.sol
Outdated
```solidity
uint96 totalNodeStakeBefore = stakingStorage.getNodeStake(identityId);
uint96 totalNodeStakeAfter = totalNodeStakeBefore + addedStake;
// TODO: Maybe it's best to create a withdrawal request if addedStake pushes the node stake over 2m limit. Otherwise delegators might be never able to claim
require(totalNodeStakeAfter <= parametersStorage.maximumStake(), "Max stake exceeded");
```
Bug: Stake Overflow Prevents Reward Claiming
The increaseDelegatorStakeBase function reverts with "Max stake exceeded" when adding a delegator's reward pushes the node's total stake above the maximum limit. This permanently prevents delegators from claiming their pre-filled rewards, effectively locking their funds, despite a TODO comment suggesting a withdrawal request should be created instead.
```solidity
    if (reward811 > 0 && !claimed811) {
        v8_1_1_rewards_period.increaseDelegatorStakeBase(identityId, delegator);
    }
}
```
Bug
The claimDelegatorRewardsCombined function in StakingManager calls stakingMain.claimDelegatorRewards before v6Claim.claimDelegatorRewardsV6. This order causes v6Claim.claimDelegatorRewardsV6 to fail its require(lastClaimedMain == lastClaimed + 1) check because lastClaimedMain is advanced by the main claim before the V6 store's lastClaimed is updated. This issue is compounded by potential discrepancies in the initial lastClaimedEpoch values, as the V6 store uses v6_delegatorsInfo.v812ReleaseEpoch() while the main store uses parametersStorage.v81ReleaseEpoch(), leading to permanent desynchronization. Additionally, within v6Claim.claimDelegatorRewardsV6, the require(lastClaimedMain == lastClaimed + 1) check is incorrectly placed after a block that modifies lastClaimed, causing it to validate against a potentially altered state.
Additional Locations (1)
@FilipJoksimovic29
No description provided.