The Eclipse Score
The Eclipse Score, or how to identify validators who are not effectively representing their delegators through their voting
INTRODUCTION AND METHODOLOGY
The purpose of this work is to build a ranking of validators, based on at least three metrics, that surfaces validators who are not effectively representing their delegators via voting.
Database: the terra.core.fact_governance_votes and terra.core.ez_staking tables of the Flipside Crypto database.
Merge. Each validator has an Operator address and an Account address. To merge staking and voting metrics, I removed the "valoper" characters from the VALIDATOR_ADDRESS column in the terra.core.ez_staking table, and also removed the last 5 characters of the validator address in both tables; the remaining characters of the Operator address and the Account address are identical, so the two tables can be joined on them.
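The snippet below is a minimal sketch of this key construction in pandas. The column names and the sample addresses are hypothetical placeholders, not the actual Flipside schema.

```python
import pandas as pd

def merge_key(address: str) -> str:
    # Drop the "valoper" infix (present only in operator addresses)
    # and the last 5 characters, as described above.
    return address.replace("valoper", "")[:-5]

# Fabricated addresses for illustration only; real Terra addresses are
# bech32 strings ("terra1..." / "terravaloper1...").
staking = pd.DataFrame({"VALIDATOR_ADDRESS": ["terravaloper1examplecoreabcde"]})
votes = pd.DataFrame({"VOTER": ["terra1examplecorefghij"]})

staking["key"] = staking["VALIDATOR_ADDRESS"].map(merge_key)
votes["key"] = votes["VOTER"].map(merge_key)

merged = staking.merge(votes, on="key")  # both keys reduce to "terra1examplecore"
```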
Stage 1. Calculation of metrics
To create a coefficient for ranking validators, I took the following metrics (a sketch of how they could be computed follows the list):
- Staked amount: the total amount of LUNA delegated to the validator.
- Self-bonded: the amount of LUNA a validator bonds to its own staking pool. A validator with a higher amount of self-bonded LUNA has more involvement in the process, which also makes it more accountable. Self-bonding is not mandatory, but it increases trust in the validator.
- Involvement: the percentage of governance votes the validator has participated in. Validators should always be up to date with the current state of the ecosystem so that they can easily adapt to any change.
- Timelife: the longer a validator has existed, the more time it has had to raise or lower its reputation and prove its responsibility to the community.
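As a hedged sketch, the four raw metrics could be derived from the merged data roughly as below. All column names (validator, delegator, account, amount, block_time, proposal_id) and the total_proposals and as_of parameters are my assumptions, not the post's actual query.

```python
import pandas as pd

def stage1_metrics(staking: pd.DataFrame, votes: pd.DataFrame,
                   total_proposals: int, as_of: pd.Timestamp) -> pd.DataFrame:
    # Staked amount: total LUNA delegated to each validator.
    staked = staking.groupby("validator")["amount"].sum().rename("staked_amount")
    # Self-bonded: delegations where the delegator is the validator's own account.
    self_mask = staking["delegator"] == staking["account"]
    self_bonded = (staking[self_mask].groupby("validator")["amount"]
                   .sum().rename("self_bonded"))
    # Involvement: share of all governance proposals the validator voted on.
    involvement = (votes.groupby("validator")["proposal_id"].nunique()
                   .div(total_proposals).mul(100).rename("involvement_pct"))
    # Timelife: days since the validator's first recorded staking event.
    timelife = ((as_of - staking.groupby("validator")["block_time"].min())
                .dt.days.rename("timelife_days"))
    return pd.concat([staked, self_bonded, involvement, timelife], axis=1).fillna(0)
```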
Stage 2. Score calculation for each metric
For each metric, a score from 0 to 10 was calculated. For the Staked amount (st_luna_score) and Self-bonded (self_st_score) metrics, the score was assigned by ranking: the top ten validators received 10 points, and the validators at the bottom of the list received 0. For the Involvement metric, the percentage of participation in voting was calculated first; its score, like the Timelife score, was then assigned depending on the range the value falls into. A sketch of both scoring schemes is given below.
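One way to implement the two schemes, as a sketch only: the rank-based score is approximated here with a percentile rank, and the bucket edges for the range-based score are illustrative assumptions, since the post does not list the exact ranges used.

```python
import pandas as pd

def rank_score(values: pd.Series) -> pd.Series:
    # Rank-based 0-10 score: the largest values approach 10,
    # the smallest approach 0 (an approximation of the scheme above).
    return (values.rank(pct=True) * 10).round()

def range_score(values: pd.Series, edges: list, points: list) -> pd.Series:
    # Range-based 0-10 score: bucket each value and map buckets to points.
    return pd.cut(values, bins=edges, labels=points,
                  include_lowest=True).astype(float)

# Illustrative 10%-wide buckets for the involvement percentage (assumed).
edges = [0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100]
points = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
# involvement_score = range_score(df["involvement_pct"], edges, points)
```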
Stage 3. Adding weight to each score
Each metric now has a score from 0 to 10. However, the metrics should not contribute equally to the final coefficient. Self-bonding, for example, is an optional process; it adds a little weight to trust and signals the validator's responsibility, but no more than that. The Involvement metric, on the contrary, is the main metric in this calculation, since the goal is to rank validators who are not effectively representing their delegators via voting.
Therefore, I ranked these metrics by importance and applied multipliers to increase or decrease their weight (from most important to least; see the sketch after the list):
- involvement_score (×1.75)
- st_luna_score (×1.5)
- timelife_score (×1.25)
- self_st_score (×1.1)
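Combining the pieces, the weighted coefficient might look like the sketch below. Summing the weighted scores is my assumption of how the coefficients are combined; the post states the multipliers but not the exact formula.

```python
import pandas as pd

WEIGHTS = {
    "involvement_score": 1.75,
    "st_luna_score": 1.50,
    "timelife_score": 1.25,
    "self_st_score": 1.10,
}

def weighted_total(scores: pd.DataFrame) -> pd.Series:
    # Multiply each 0-10 score by its weight and sum per validator.
    return sum(scores[name] * w for name, w in WEIGHTS.items())
```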
Stage 4. Adjustment
In this ranking, a validator's participation in voting carries the most weight. Therefore, as a final step to keep validators with low involvement out of the top of the ranking, I added a penalty: if the validator participated in less than 10% of the votes, the final value of the metric is multiplied by 0.5; if less than 20%, by 0.6; if less than 30%, by 0.7; if less than 40%, by 0.8; and if less than 50%, by 0.9.
Finally, I filtered the table so that validators who never voted are excluded from the ranking entirely. A sketch of this adjustment step is shown below.
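The sketch directly encodes the thresholds above; the DataFrame column names are hypothetical.

```python
def involvement_penalty(pct: float) -> float:
    # Penalty multiplier for low participation in governance votes,
    # applied to the weighted total from Stage 3.
    if pct < 10:
        return 0.5
    if pct < 20:
        return 0.6
    if pct < 30:
        return 0.7
    if pct < 40:
        return 0.8
    if pct < 50:
        return 0.9
    return 1.0

# Validators that never voted are removed before the penalty is applied:
# df = df[df["votes_cast"] > 0]
# df["eclipse_score"] = df["weighted_total"] * df["involvement_pct"].map(involvement_penalty)
```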