
Get rid of reference node proposal:

crowning

  • Writing technology for serializing the Masternode Payments history (to improve the reference node's accuracy)

This one has been bugging me for a while now, because why serialize the payments history when it's already in the blockchain?

So I wrote a little script and invite everyone to pick the algorithm apart for flaws and/or things a Masternode can't do internally:
  1. masternode list | grep "ENABLED": we now have the IPs of all enabled Masternodes
  2. masternode list pubkey: we now have the IPs and pubkeys of all Masternodes. Throw away the ones not in 1.
  3. Scan the blockchain backwards for a certain window of blocks, e.g. 3x number_of_masternodes or whatever is doable with acceptable performance. Select the one with the oldest last payment as payee. All Masternodes should easily find consensus in this case.
    1. if one or more Masternodes (new ones) don't have a payment in that window, randomly choose one of them as payee. If it's TOO hard to find consensus this way we can simply use the one with the lowest IP/pubkey/whatever. It's statistically not an advantage.
( BTW my script works perfectly this way, however it cheats because it uses the blockchain explorer, which has the advantage of already having the interesting information in its database )
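Roughly, the selection in step 3 could look like the following Python sketch. The masternode list and the block window are simulated here; a real implementation would pull them from the daemon ("masternode list", "masternode list pubkey") or, like the script above, from a blockchain explorer.

```python
def select_payee(enabled_masternodes, payment_window):
    """
    enabled_masternodes: dict mapping pubkey -> IP of all ENABLED masternodes
    payment_window: list of (block_height, payee_pubkey) covering roughly
                    3x number_of_masternodes blocks, newest first
    Returns the pubkey that should be paid next.
    """
    last_paid = {}  # pubkey -> height of its most recent payment in the window
    for height, payee in payment_window:
        if payee in enabled_masternodes and payee not in last_paid:
            last_paid[payee] = height

    never_paid = [pk for pk in enabled_masternodes if pk not in last_paid]
    if never_paid:
        # New masternodes with no payment in the window: deterministic
        # tie-break (lowest pubkey) so every node reaches the same result.
        return min(never_paid)

    # Otherwise pick the one whose last payment is the oldest.
    return min(last_paid, key=last_paid.get)


# Example with made-up data:
mns = {"pubA": "1.2.3.4", "pubB": "5.6.7.8", "pubC": "9.9.9.9"}
window = [(1000, "pubA"), (999, "pubB"), (998, "pubA")]
print(select_payee(mns, window))  # -> "pubC" (no payment in the window yet)
```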

1 and 2 are more or less trivial to implement inside a Masternode; the tricky part is doing 3 with acceptable performance.

Well, Evan announced somewhere that the proof of service (PoSe) for Masternodes could include tests for available RAM, CPU speed and whatnot...so keeping this blockchain window in memory to speed up 3 would be a useful PoSe test.

Another point which might be a problem: how to avoid "pubkey hopping" (© crowning), meaning someone simply changes the pubkey of a Masternode after each payment to look like a new Masternode?

Well, with a new pubkey you'd have to move your 1000 DASH payment there. And the vin of this payment has a timestamp. We could just check the timestamp of the vin. If it's inside our history window (or newer than all other payee candidates, whatever works best) the Masternode will not (yet) be selected as payee because it's obviously too new.
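As a rough sketch (field names are illustrative, not Dash Core code), the anti-"pubkey hopping" check would just drop candidates whose collateral vin is newer than the history window:

```python
def filter_too_new(candidates, window_start_time):
    """
    candidates: list of dicts like {"pubkey": ..., "vin_time": unix_timestamp}
    window_start_time: timestamp of the oldest block in our history window
    Returns only candidates whose 1000 DASH collateral vin predates the window.
    """
    return [c for c in candidates if c["vin_time"] < window_start_time]
```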

Thoughts? Flaws? Free beer for crowning?
 
Using the blockchain to get masternode payment history is a pretty nice idea. Basically all we need is to fill the vWinning vector https://github.com/darkcoin/darkcoin/blob/master/src/masternode.h#L291 during startup, simply by looping back through block history.

The rest of the described payment logic is already there: https://github.com/darkcoin/darkcoin/blob/master/src/masternode.cpp#L413-L498
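A minimal Python sketch of that startup loop, assuming bitcoind-style getblockcount/getblockhash/getblock/getrawtransaction access and that the masternode share is the second coinbase output - both are assumptions for illustration, the real thing would of course be C++ inside the daemon:

```python
def rebuild_winners(rpc, window_size):
    """Walk back from the chain tip and record each block's masternode payee."""
    winners = []  # analogous to the vWinning vector, newest first
    height = rpc("getblockcount")
    for h in range(height, height - window_size, -1):
        block = rpc("getblock", rpc("getblockhash", h), True)
        coinbase = rpc("getrawtransaction", block["tx"][0], True)
        # Assumption: the masternode share is the second output of the coinbase.
        if len(coinbase["vout"]) > 1:
            payee = coinbase["vout"][1]["scriptPubKey"]["addresses"][0]
            winners.append((h, payee))
    return winners
```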

Select the one with the oldest last payment as payee. All Masternodes should easily find consensus in this case.
That's where the problem hides. The Masternode list is not the same on all nodes - there is always at least a 1-2 MN difference. So some nodes will verify one node as the winner and some will verify another one. There is no way to verify the MN list that another node has (unless it's in a blockchain-like structure). And if we let every node use the MN list as it is now as a consensus basis, that means continuous multi-forking.
 
Beyond what Udjin said, pulling the info from the chain poses another issue with the donation system. You can only see the address that was paid and multiple masternodes will commonly donate to the same address, so they'll continue to get paid over and over. That's why I was thinking about just dumping the vWinners vector and loading it up at boot. Later on we can come up with a better blockchain based system that takes all of this into account.
 
I didn't know that the donation system was implemented this way... but yes, in that case the blockchain is pretty useless :-/

mncache.dat could be abused for this temporary solution...
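Something along these lines - a hedged sketch of the "dump vWinners and load it at boot" stopgap, by analogy with mncache.dat (Dash Core would use its own serializer rather than JSON, and the file name here is made up):

```python
import json

def save_winners(winners, path="mnpayments.dat"):
    """Persist the winners list to disk on shutdown."""
    with open(path, "w") as f:
        json.dump(winners, f)

def load_winners(path="mnpayments.dat"):
    """Read the winners list back on startup; start empty if no cache exists."""
    try:
        with open(path) as f:
            return json.load(f)
    except (FileNotFoundError, ValueError):
        return []
```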
 
Using the blockchain to get masternode payment history is a pretty nice idea. Basically all we need is to fill the vWinning vector https://github.com/darkcoin/darkcoin/blob/master/src/masternode.h#L291 during startup, simply by looping back through block history.

The rest of the described payment logic is already there: https://github.com/darkcoin/darkcoin/blob/master/src/masternode.cpp#L413-L498


That's where the problem hides. The Masternode list is not the same on all nodes - there is always at least a 1-2 MN difference. So some nodes will verify one node as the winner and some will verify another one. There is no way to verify the MN list that another node has (unless it's in a blockchain-like structure). And if we let every node use the MN list as it is now as a consensus basis, that means continuous multi-forking.

I'm not sure this is a real problem... This isn't as complicated as it seems. It's not unusual for multiple winning blocks to conflict, but the network's majority finds consensus and tosses the others. The root matter is multiple strings being proposed as a "winner" and one has to be picked. Doesn't matter if it's a masternode pubkey or a block... It may not be the cleanest solution, but a simple race to 51% does the job... If 89% of the network picks the same winner, the one that 11% picked gets tossed. Whatever the reason is doesn't really matter. So a small part of the network didn't have this particular MN on its list, the majority did.

If a node is perpetually finding itself outcast, we're facing a bandwidth problem or a firewall/gateway/router problem that would also negatively impact PoSe. Seems to me that said node being outcast repeatedly is exactly how it should be...

It could be a temporary mempool-type system. As other nodes add their pick, it spreads just like unconfirmed transactions. Similar to block validation, but simpler: once a supermajority percentage is reached, the pick is declared the only valid choice, and we move on just like with a validated block... Including this end result in the blockchain would be optional, as the fact that it's there is already the proof from a historical perspective... My Neptunes don't store every failed hash they push... There's no need...
 
.........The root matter is multiple strings being proposed as a "winner" and one has to be picked. Doesn't matter if it's a masternode pubkey or a block...
I can't agree on this. It does matter and there is a fundamental difference imo. You don't pick a block out of 2000 already-known ones, you hash through millions of possible blocks to find the yet-unknown one that a) is connected to the previous one and b) fits the current difficulty. The chain built with the most work is the one that is "true". So blocks are hard to find, easy to verify, while an MN in the MN list is easy to find (hash the list once and grab the one with the highest "score") and hard [impossible] to verify (you don't know the list on another node for sure).
 
you don't know the list on another node for sure.

It looks like this is the point at issue. Call me stupid, but who cares if the other nodes have it or don't have it on their list? We're trying to pick something that IS there, not something that isn't.

My suggestion is crude, but why wouldn't it work?

Just start picking stuff. It could be totally random as a starting point. Consensus achieved by nothing more than the number of IDs picked the most. If one came up four times, another came up twice, just start spreading those until one hits a supermajority and call it the winner... There isn't anything we have to verify or authenticate in advance, propagate first, THEN pick one...

Just start building a mempool of MNID == MNITPICKED

Sort all identical picks to the top.

When the mempool hits 95% of the MN count, pick the top 2 which, by pure random chance, ended up being selected by a few nodes.

Spread them... Race to supermajority. Two elements of randomness involved now.

A control for age could be applied by the individual nodes themselves during their own selection process in the mempool-forming stage.

I know this is crude, but why wouldn't it work?

It would have some bell-curve randomness to it, but not much, as the age control would generally keep things rolling in a circle. Nodes with such bad connectivity as to fail the list, or fail to update their own list, would be marginalized as PoSe already attempts to do... Same essential outcome.
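As a toy model of that "mempool of picks" idea (just the tally, not a network protocol): every node contributes its (node_id, picked MN) vote, and once one pick crosses a supermajority it is declared the winner.

```python
from collections import Counter

def tally(votes, mn_count, supermajority=0.67):
    """
    votes: dict mapping voting node_id -> the masternode id it picked
    mn_count: total number of masternodes we expect votes from
    Returns the winning pick once one crosses the supermajority, else None.
    """
    if not votes:
        return None
    pick, n = Counter(votes.values()).most_common(1)[0]
    return pick if n >= supermajority * mn_count else None

# Example: 10 masternodes, 8 of them picked "mnA".
votes = {f"node{i}": "mnA" for i in range(8)}
votes.update({"node8": "mnB", "node9": "mnC"})
print(tally(votes, 10))  # -> "mnA"
```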
 
Beyond what Udjin said, pulling the info from the chain poses another issue with the donation system. You can only see the address that was paid and multiple masternodes will commonly donate to the same address, so they'll continue to get paid over and over. That's why I was thinking about just dumping the vWinners vector and loading it up at boot. Later on we can come up with a better blockchain based system that takes all of this into account.

Would it be feasible to add 2 new fields to a block which could hold the address of one Masternode each?
 
Would it be feasible to add 2 new fields to a block which could hold the address of one Masternode each?
I'm not sure I get it. Do you mean we could add it to the block header, bump the block version and perform a hard fork? Or...?

EDIT: Or add another txes-like structure to the block, make a hash of it (like the merkle tree for txes), add this hash to the block header, bump the block version and perform a hard fork?
EDIT2: just to clarify: "txes-like" for "donation" (split MN payment)
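Purely as an illustration of that EDIT idea (real consensus code would be C++ and need the block-version bump; the structure and field names here are invented): keep the MN payees in their own list inside the block, hash that list, and commit the hash in the header next to, not inside, the existing merkle root.

```python
import hashlib

def mn_list_hash(mn_payees):
    """Double-SHA256 over the payee list, committed in the block header."""
    data = ",".join(mn_payees).encode()
    return hashlib.sha256(hashlib.sha256(data).digest()).hexdigest()

block = {
    "header": {"version": 4, "merkle_root": "...", "mn_payee_hash": None},
    "txes": ["tx1", "tx2"],
    "mn_payees": ["XwinnerAddr", "XdonationAddr"],  # the split MN payment
}
block["header"]["mn_payee_hash"] = mn_list_hash(block["mn_payees"])
```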
 
EDIT: Or add another txes-like structure to the block, make a hash of it (like the merkle tree for txes), add this hash to the block header, bump the block version and perform a hard fork?

This! I even think it would be possible to just include the hashes in the existing merkle_root to make things easier, but there may be side effects I'm not aware of.

EDIT2: just to clarify: "txes-like" for "donation" (split MN payment)

I did not think of donation-splitting (but that would be an option of course), I just want to persist the winning Masternode address in field_1 (no matter where a possible donation goes), and optionally, if there are NEW Masternode addresses, a random new one in field_2.

This way, once the bootstrapping is done (it could easily be done by the reference node), we'll be able to add one new Masternode per block, 576 per day (if that's not enough we could even add additional fields to speed things up, but IMO that's not needed).

With those 2 fields (one for existing payment winners, one for new Masternodes entering the network) we would have a public payment list, secured by the miners, with minimal overhead/bloat.

I did a simulation on paper and it seems to work.

If I should be _really_ bored this weekend :smile: I may even write a little simulation script for this.
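A rough sketch of what such a simulation could look like (entirely made-up data structures): each block pays the oldest-paid Masternode from the field_1 queue and, if there are new Masternodes waiting, registers a random one via field_2.

```python
import random

def next_block(paid_queue, waiting):
    """
    paid_queue: list of known Masternode addresses, oldest-paid first
    waiting: list of new Masternode addresses not yet in the queue
    Returns (field_1, field_2) for the next block and updates both lists.
    """
    field_1 = paid_queue.pop(0)     # oldest last payment wins
    paid_queue.append(field_1)      # ...and goes to the back of the queue
    field_2 = None
    if waiting:
        field_2 = waiting.pop(random.randrange(len(waiting)))
        paid_queue.append(field_2)  # one new Masternode enters per block
    return field_1, field_2

queue, new = ["mn1", "mn2", "mn3"], ["mn4", "mn5"]
for _ in range(5):
    print(next_block(queue, new))
```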
 
This! I even think it would be possible to just include the hashes in the existing merkle_root to make things easier, but there may be side effects I'm not aware of.

I wouldn't touch it. Just in case. And it would be easier to keep up to date with btc code too.

I did not think of donation-splitting (but that would be an option of course), I just want to persist the winning Masternode address in field_1 (no matter where a possible donation goes), and optionally, if there are NEW Masternode addresses, a random new one in field_2.

This way, once the bootstrapping is done (it could easily be done by the reference node), we'll be able to add one new Masternode per block, 576 per day (if that's not enough we could even add additional fields to speed things up, but IMO that's not needed).

With those 2 fields (one for existing payment winners, one for new Masternodes entering the network) we would have a public payment list, secured by the miners, with minimal overhead/bloat.

I did a simulation on paper and it seems to work.

If I should be _really_ bored this weekend :smile: I may even write a little simulation script for this.

Hmm... Look:
- an array of winner MN-associated addresses (and only them) should be in the block, and a hash of it should be in the block header
- the sum of these "outputs" should be equal to the MN part of the block reward
See? These things can be strictly enforced and checked later. That's why I thought this could be used for a) splitting MN payments, b) constructing the winners list by scanning the blockchain back.
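In sketch form (amounts and field names are illustrative), those two checks would be: the payee list must sum to exactly the MN share of the block reward, and that same list is all you need to rebuild the winners history by scanning back.

```python
def check_mn_payees(block, mn_reward):
    """Consensus-style check: the MN 'outputs' must add up to the MN reward."""
    return sum(amount for _addr, amount in block["mn_payees"]) == mn_reward

def winners_from_chain(blocks):
    """Rebuild the winners list (newest first) purely from block data."""
    return [(b["height"], [addr for addr, _amt in b["mn_payees"]])
            for b in blocks]
```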

And on another note, let's see what we have for the MN list itself:
- you can't stop anyone from moving 1000s and flooding this single slot (disrupting the queue of legit MNs)
- you can't require that field to always be filled, and why would miners fill it at all then? They can simply ignore it.
Unless there is a fee for entering that list... hmm...:rolleyes:
 
- an array of winner MN-associated addresses (and only them) should be in the block, and a hash of it should be in the block header
- the sum of these "outputs" should be equal to the MN part of the block reward
See? These things can be strictly enforced and checked later. That's why I thought this could be used for a) splitting MN payments, b) constructing the winners list by scanning the blockchain back.

But how would new Masternodes enter the list?

And on another note, let's see what we have for the MN list itself:
- you can't stop anyone from moving 1000s and flooding this single slot (disrupting the queue of legit MNs)

The queue of legit Masternodes is in field_1. Someone could only try to flood field_2 and reduce the probability for other new Masternodes to be chosen for that field. The incentives to do something useful with those 1000s are higher.

- you can't require that field to always be filled, and why would miners fill it at all then? They can simply ignore it.
Unless there is a fee for entering that list... hmm...:rolleyes:

That's indeed a weak point, I admit I don't know enough about the mining end of blockchain technology to know what can be done there.
 
Well, first of all, I would propose to think of arrays rather than fields. Why limit yourself to fields only when we can use arrays?

......
The incentives to do something useful with those 1000s are higher.
.......
Hmmm, it depends on what your goals are. If I had the goal of killing Dash and I could do this by buying just a few 1k to disrupt/destroy the whole 3000k infrastructure, would I do this?..


But how would new Masternodes enter the list?
......

That's a good question. Another good one imo is how they would leave the list. And how to let everyone else know that they are still there. :rolleyes:

.....
Someone could only try to flood field_2 and reduce the probability for other new Masternodes to be chosen for that field.
.....
That's indeed a weak point, I admit I don't know enough about the mining end of blockchain technology to know what can be done there.

That's why I started thinking about a fee :smile: I think we can actually build this utilizing standard txes. Starting an MN on a new vin (one yet unknown in the current payment cycle) should cost some DASH: say only unspent vins that were received in txes with some predefined fee are legit for starting MNs, and this fee is 0.05 DASH for example. For normal masternoders it would be a one-time entry payment and not a big deal. Starting an MN on a known vin (for protocol bumps) should stay off-chain (like it is now) and free. So, back to the new one: if it's just a tx with some meaningful fee then there is no real way to spam it, and you are still able to build a kind of MN list out of the blockchain (and you need to scan blocks on start anyway to do much heavier operations, so there should be little to no impact on sync time). We can even add the data we need with OP_CODEs I guess.
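A sketch of how the startup scan could pick out the vins that are legit for starting MNs under that model (the 0.05 DASH figure is the example above; the decoded-tx "fee" field and the rest of the layout are assumptions for illustration, not Dash Core code):

```python
ENTRY_FEE = 0.05   # example entry fee from the post
COLLATERAL = 1000  # the 1000 DASH masternode collateral

def eligible_collateral_vins(decoded_txes):
    """
    decoded_txes: iterable of dicts with "txid", "fee" and "vout" fields
    Returns the set of (txid, vout_index) outputs allowed to start a masternode.
    """
    eligible = set()
    for tx in decoded_txes:
        if abs(tx.get("fee", 0) - ENTRY_FEE) < 1e-8:
            for i, out in enumerate(tx["vout"]):
                if out["value"] == COLLATERAL:
                    eligible.add((tx["txid"], i))
    return eligible
```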

But anyway, I see no way to tell from the blockchain whether an MN is active or not with that model. So no real benefit over the current system in that regard imo...:sad:
 
But anyway, I see no way to tell from the blockchain whether an MN is active or not with that model. So no real benefit over the current system in that regard imo...:sad:

I guess using mn.Check() and mn.isEnabled() is again in the hands of the miners/pools, and the greedy ones would just label everything but their own Masternodes as disabled...

Looks like having-the-Masternodes-properly-persisted and avoid-blockchain-bloat are mutually exclusive :sad:
 