Agreeing To Disagree: Proposing a Weakly-Subjective Finality Solution for Ethereum Classic


ETC Core, in consultation with OpenRelay and ChainSafe, has developed and proposes a finality algorithm called MESS for Ethereum Classic that, if adopted by the honest majority, would raise chain state finality near-exponentially with negligible risk of network bifurcation.


After enduring several massive reorganizations, Ethereum Classic is under pressure to change.

A low hashrate has caused Ethereum Classic’s consensus algorithms to yield inconvenient and undesirable finality rates. Relative to Ethereum Mainnet and hashrate-for-hire services, a low hashrate means that it is relatively cheap to dominate the chain’s recent history, implying that transactions made on chain should not be considered permanent until some undesirably long window has elapsed.

In order for the chain to regain its utility in this respect, this “confidence rate” needs to shrink from many hours (or even days) to, ideally, minutes.

This finality characteristic, as a product of hashrate, is a function of deeply embedded assumptions about the essential behavior of the network and its blockchain. One of the core challenges of a solution is to achieve a “fix” for the network which, from the perspective of the theory and code on the network, addresses a problem that doesn’t exist! We need to modify the behavior of the chain such that we get the properties we want — namely, quicker finality — without breaking anything along the way.

This is the challenge that I, together with consulting partners from OpenRelay and ChainSafe, have taken on in what we’ve called an “Artificial Finality” working group.

After about a month of some seriously sleepless nights and one or two new gray hairs, we’re happy to propose what we consider a best solution.

Modified Exponential Subjective Scoring (MESS)

For those who’ve been around the block once or twice, this name may ring a bell. Deep in the archives of the Ethereum Foundation’s blog, Vitalik Buterin describes what he calls “Exponential Subjective Scoring.”

We have evolved this 6-year-old idea toward what we anticipate to be a robust, efficient, and uncompromising solution.

We’ve proposed a specification at the Ethereum Classic Improvement Proposals repository and forum, and are, this week, in the process of putting a dedicated test network of 150 nodes through its paces.


I think our (and the network’s!) criteria for an acceptable solution have been aptly phrased by one of ETC’s long-time community members, Cody Burns:

Small reorgs are normal and healthy, large reorgs are suspicious. Given a reorg of unusual length, nodes should evaluate the current chain head they have with preference over the incoming chain with longer length or higher difficulty (cited).

This is not, of course, without nuance, as we’ll see, but speaks clearly to the general aim.

General Case

One of the general heuristics learned in our research is that “rough edges” consistently present severely exploitable focal points that an adversary can target. A simple example of this is clear with a naive “disallow N-block reorganizations” solution. As discussed at more length in an earlier post, this presents what we termed an “Edge-of-Eternity” scenario, where the adversary’s victims are segregated by virtue of being momentarily asynchronous to some other set of peers, and their differential rejection or acceptance of the contested segment at that discrete window — a “rough edge” — becomes, by definition (saving external recovery mechanisms), a permanent bifurcation.

The subjective view can be understood as what a node knows within some time frame about global state, and the objective view as all that a node may (eventually) know of the global state. Rough edges can be generally classified as conditions where a subjective view of state is handled with a level of consequence that ought to be deferred to the objective. Solutions that over-confidently weight the subjective view are liable to vulnerabilities that derive from a potentially imperfect view at or within some time period, where that imperfection may be the work of a malicious adversary.

The strength of the network comes from the ability of a node to revise its consideration of state based on new information. This fluidity enables the healthy and natural competition that is at the very heart of chain growth, as miners race and jostle to provide a next best block — or blocks. Mechanisms that forbid this fluidity work in direct antagonism to this essential life cycle. As such, the risk they pose to the network is typically in the form of network bifurcation (i.e. chain splits, partitions).

Our solution should be as “smooth” as possible.

Specific Case

In developing a solution for ETC, it was important to us that it be:

  • Of low or no impact on existing consensus or sealing rules. This mitigates risk that might otherwise be assumed, both in theoretical scope and in implementation complexity.
  • Concise enough to implement and test quickly and comprehensively. Urgency is a priority, and in order to achieve something reliable, it should avoid undue complexity where possible.
  • Unlikely to cause damage or expose the network to risk outweighing its benefit. Solutions with reasonably achievable exploits would replace one weakness with another.

Alternatives considered

We’ve considered and developed an exhaustive (or at least exhausting!) set of options. These include existing or externally proposed options like PirlGuard, VeriBlock, block-sealing algorithm modifications, and various checkpointing solutions. We have also designed and evaluated our own solutions: changes to the monetary policy; systems we called Permapointing and Flashpointing; a dynamic probability-based reorg acceptance algorithm we called Continuously Probable Reorganizations; and a solution using what we called Endorser Transactions, which democratized chain preference publication and observation.

All of our work will eventually be made public with the hope that our designs and experiments, though imperfect, may add some value to the larger domain of public distributed systems research.


We find that MESS offers unique and compelling characteristics that, if applied at the network scale, offer a satisfactorily high rate of finality with little additional risk or cost.

Under MESS, nodes make a subjective decision to prefer chain segments they see first over segments they see later. This preference is defined by establishing what Vitalik terms a “gravity” assigned to segments present in a node’s view relative to those not yet seen. The scale of this preference is parameterized by the relative (total) difficulty ratios of the competing segments and the age of their common ancestor, weighting the first-occurring segment with a “gravity” that the competing segment must overcome in order to be accepted.

Our evolution of this concept proposes that the gravity curve be defined by a bounded sine function of the form 15 * sin((x + 12000π)/8000) + 15 + 1, shown below relative to the "original" ESS and an intermediate proposition at both macro and micro scales. The x value is measured as the age of the common ancestor in seconds, and the y value represents the assigned (anti)gravity.



This function can be interpreted as: “For a proposed reorganization spanning 600 seconds, the (new) competitor chain segment needs to achieve 1.04x the total difficulty of its predecessor.” Segments spanning more than a few handfuls of minutes are forced to achieve a rapidly growing multiple of the incumbent difficulty.
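As a concrete sketch, the curve and acceptance rule described above can be written in a few lines of Python. This is a simplified illustration: the function and parameter names are ours, and the capping behavior follows our reading of the curve rather than the canonical Core-Geth implementation.

```python
import math

def antigravity(ancestor_age_seconds: float) -> float:
    """Total-difficulty multiplier a proposed segment must exceed,
    as a function of the common ancestor's age in seconds.

    Bounded sine: f(x) = 15 * sin((x + 12000*pi) / 8000) + 15 + 1,
    held at its first peak, f(8000*pi) = 31 (x ~= 25132 seconds)."""
    x = min(ancestor_age_seconds, 8000 * math.pi)  # differentiable ceiling
    return 15 * math.sin((x + 12000 * math.pi) / 8000) + 15 + 1

def accept_reorg(local_td: int, proposed_td: int,
                 ancestor_age_seconds: float) -> bool:
    """A proposed segment replaces the local head only if its total
    difficulty clears the incumbent's, scaled by the antigravity."""
    return proposed_td > local_td * antigravity(ancestor_age_seconds)
```

Note that at age zero the multiplier is exactly 1, so freshly competing blocks are arbitrated by plain total difficulty; at 600 seconds the multiplier is roughly 1.04, matching the interpretation above.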

In addition to extending preferential lenience to short-term (and healthy!) competition, the function provides a differentiable ceiling transition at f(x=25132)=31. Although this transition is, in practice, certainly a far-edge case, it removes an unnecessary rough edge and avoids an impractical, skyrocketing gravity that would only serve to permanently destitute the unfortunate victims of an eclipse attack.

The real benefit of this proposition over the alternatives is that the opportunity for network partition is determined by the ratio of relative difficulty: in order to maintain a partition, an adversary needs to maintain a precise and near-constant competing difficulty ratio, and if (and when) they fail, the probability of network resolution rises dramatically with each next block.

From Buterin’s 2014 post:

ESS has the property that, unlike more naive approaches at subjectivity, it mostly avoids permanent network splits; if the time between the first node on the network hearing about block B and the last node on the network hearing about block B is an interval of k blocks, then a fork is unsustainable unless the lengths of the two forks remain forever within roughly k percent of each other (if that is the case, then the differing gravities of the forks will ensure that half of the network will forever see one fork as higher-scoring and the other half will support the other fork). Hence, ESS is weakly subjective with X roughly corresponding to how close to a 50/50 network split the attacker can create (eg. if the attacker can create a 70/30 split, then X = 0.29). cited

The Ethereum Classic massive reorganizations of August 2020, each spanning on the order of handfuls of hours, would have required the upper bound of 31 times the network’s then-current hashrate of 5 terahash per second — roughly 155 TH/s.

Costs (Risks)

Abstention from selfish mining deterrence

A perspective is presented below comparing reorganization acceptance with segments of varying difficulty and depth against a constant 200-block chain segment of average difficulty.

Of note here are the randomly distributed blocks in green and, respectively, red.

Their random distribution is derived from the work of Eyal and Sirer, which addresses a PoW network’s vulnerability to selfish mining in circumstances where a mining entity controls at least 33% of the total hashrate. As a mitigation of that strategy’s profitability, their work proposes a coin toss to arbitrate equal-difficulty segments.

The very-far upper-left (say, x <= 4, -1 <= y <= 7) blocks in red and green are in question here, since we can reasonably consider that magnitude of depth within the bounds of honesty and probability. As visible, the (anti)gravity imposition of the MESS algorithm causes these short, exactly-equivalent-difficulty sections — which would otherwise be randomly accepted — to be consistently rejected.

As such, and as the proposed specification currently stands, we would expect to sacrifice the benefit of actively deterring this marginally “unfair” strategy by dominant miners.

There are options to avoid this cost, including disabling the algorithm for segments shorter than N with equivalent difficulty (allowing the coin toss to prevail) or adding another layer of probability that would diminish as equivalent-difficulty segments grow in length or age.
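The first of these options — exempting short, equal-difficulty segments from the antigravity test so the coin toss can prevail — might be sketched as follows. This is hypothetical: the threshold N (here `COIN_TOSS_DEPTH`) and the arbitration function are illustrative and are not part of the current specification.

```python
import math
import random

COIN_TOSS_DEPTH = 4  # hypothetical N: reorg depths within "honest" bounds

def antigravity(age_seconds: float) -> float:
    # Bounded sine curve from the specification discussion, capped at 31.
    x = min(age_seconds, 8000 * math.pi)
    return 15 * math.sin((x + 12000 * math.pi) / 8000) + 15 + 1

def arbitrate(local_td: int, proposed_td: int, depth: int,
              age_seconds: float, rng=random) -> bool:
    """Prefer the heavier segment under MESS, but let Eyal and Sirer's
    uniform coin toss decide short equal-difficulty ties, preserving
    the selfish-mining deterrent for shallow reorgs."""
    if depth <= COIN_TOSS_DEPTH and proposed_td == local_td:
        return rng.random() < 0.5  # uniform tie-breaking
    return proposed_td > local_td * antigravity(age_seconds)
```

Deep equal-difficulty segments still fall through to the antigravity test, where a multiplier strictly greater than 1 rejects them consistently.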

This is an open door, and may be addressed and pursued in the future.

Eclipse vulnerabilities

For a node to make any subjective decision about chain acceptance is to expose itself to eclipse by an adversary.

Consideration for this scenario has yielded the following safety mechanisms adjacent to MESS’s implementation in Core-Geth:

  • MESS is only enabled once the node has completed synchronization with the network. Nodes coming online for the first time or after a long period of absence are most vulnerable, and this strategy mitigates the risk significantly.
  • MESS is only enabled while the node has >= 10 peers. This works to ensure that the node has a sufficiently diverse view to be able to make healthy subjective decisions.
  • A node having an unchanged head block for a period of 130 seconds disables MESS. This prevents an eclipsed node from being permanently stuck on an abandoned attacker chain.
  • The gravity cap forces an eclipse attacker to maintain a minimum difficulty. An eclipsed node will “escape the trap” if it is able to have a view of a heavier (main, honest) chain with 31x relative difficulty. For the majority honest network relative to an attacker, this presents a cumbersome, and costly, criterion.
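Taken together, the first three safeguards above amount to a small gating predicate. The sketch below is a simplified illustration with parameter names of our own choosing, not the Core-Geth code itself.

```python
def mess_enabled(synced: bool, peer_count: int,
                 seconds_since_new_head: float) -> bool:
    """MESS arbitration applies only when the node has a healthy,
    current view of the network; otherwise it falls back to the
    plain heaviest-chain rule."""
    if not synced:
        return False  # fresh or long-absent nodes are most vulnerable
    if peer_count < 10:
        return False  # need a sufficiently diverse view of the network
    if seconds_since_new_head >= 130:
        return False  # stale head: possibly eclipsed, so disable MESS
    return True
```

Each condition fails safe: in any doubtful situation the node simply behaves as an unmodified client would.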

Agreeing To Disagree

As is (or would be) the case with any subjective acceptance arbitration algorithm, network-level efficacy will be driven by the level and consistency of node adoption, particularly by the honest mining majority. Observable variances in arbitration schemes of any kind between nodes would create a potential vulnerability.

Status and Next Steps

The specification is under review and discussion at Ethereum Classic’s Improvement Proposals repository. We encourage you to review it yourself and join the conversation.

The implementation is complete and proposed at Core-Geth, and work has begun on Besu.

Core-Geth has published a “Pre-Release” activating the feature on Ethereum Classic’s Mordor testnet, and for nodes that have upgraded there, it will be live now. All are welcome to help test this public network.

Testing has begun and will continue through the week on the dedicated “MESSNet” testnet, and we’ll provide a summary of tests and their results once that’s complete.

For background on earlier, related considerations of finality in this context, please find these earlier documents:

Written by: Isaac A., ETC Core Protocol Lead and Client Developer

Agreeing To Disagree: Proposing a Weakly-Subjective Finality Solution for Ethereum Classic was originally published in etc_core on Medium, where people are continuing the conversation by highlighting and responding to this story.

Publication date: 
09/18/2020 - 16:11
