Some of those are mine, and they have multiple entries even though I never attempted it. They disappear quickly. This is a mess... After hundreds of start-over-from-scratch attempts, I finally got 1 masternode to "stick" to the network. No idea why since I did nothing differently. MNs just halt comms and disappear. At least for me... Hundreds of attempts doing the exact same thing, and only one stuck...
I'm about to try adding a second one, but I'm pretty sure start-many is going to kick the working one off by switching pubkeys for no reason, and leave me with nothing.
EDIT: as suspected, any attempt to start another MN results in the previously working MN going into 70min fail.
Clearly, there is something very important missing from the "guides." MNs just don't stay on the network.
That could explain what's going on actually... A scenario for them to appear twice or more could be something like this (just an example):
1. You started everything you have at the moment with start-many
2. Then you added a few at the end of masternode.conf, or some of the started masternodes failed, so you had to restart them
3. You issued start-many again (?), but this time a few vins are already locked by the previous start-many, so for the first IP + masternodeprivkey pair the wallet will choose another vin and fire a dsee message to the network (see the masternode.conf sketch after this list)
4. Because you have the correct masternodeprivkey on your remote masternode, it will not spawn a "ghost" masternode (so it's not the result of the bug I suggested before - that would be the issue if someone else were trying to use your IP while providing their own masternodeprivkey), but instead it will override the vin on the remote masternode, which starts sending dseep with the new signature
5. As your MNs are now sending new signatures with dseep, the old entries will surely fall out of the list
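To make the vin point concrete: if I remember the format right, each masternode.conf line ties an alias and IP:port to one masternodeprivkey and one specific collateral output (txid + output index), roughly like below (every value here is a made-up placeholder):

  # format: alias IP:port masternodeprivkey collateral_txid collateral_output_index
  mn1 203.0.113.10:9999 93HaY...yourMNprivkey1... 2bcd3c...collateraltxid1... 0
  mn2 203.0.113.11:9999 92xYz...yourMNprivkey2... 7f1ab...collateraltxid2... 1

The duplicate-entry situation above would just be the wallet signing the same alias/IP/key combination with a different txid + index than before, because the original output was still locked by the earlier start-many.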
What to do now:
1. I would wait ~2 hours until none of them shows up twice (not everyone is updated, so the "masternode drift" bug could still have some effect and maybe you'll have to wait longer)
2. Then find out which of them are still left on the list and clean those out of masternode.conf (temporarily - back up the original file somewhere)
3. Issue start-many with the new masternode.conf containing only the stopped MNs (a rough sketch of these steps follows below)
If I got the scenario and the "to do" right, you should have the full list of running MNs at the end
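In case it helps, the cleanup could look something like this on the box with the controlling wallet - a rough sketch only, assuming a darkcoind-style CLI, the default ~/.darkcoin datadir, and that masternode.conf is only read at wallet startup; adjust names and paths for your own setup:

  # back up the original masternode.conf before trimming it
  cp ~/.darkcoin/masternode.conf ~/.darkcoin/masternode.conf.bak

  # check which of your MNs are still showing in the network list
  darkcoind masternode list

  # now edit masternode.conf so it only contains the MNs that dropped off,
  # restart the local wallet so it rereads the file, then:
  darkcoind masternode start-many

Once everything is back on the list you can restore the backed-up masternode.conf.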
............
Damn, I was writing too long....
And I totally forgot you are not using start-many....
Sorry, no idea so far then
Have to think on this more...
.............
Anyway, I guess the scenario I wrote could be real, so I'll keep it here just in case anyone hits it