
Multiplexing masternodes on single VPS

xkcd

Well-known member
Masternode Owner/Operator
This guide will attempt to show you how you can run more than one masternode (MN) on a single VPS, a technique I call multiplexing. To save disk space otherwise lost to data redundancy, the masternodes will share a common blockchain. The guide assumes you have set up your masternode as per the guide https://www.dash.org/forum/threads/...e-setup-with-systemd-auto-re-start-rfc.39460/ and uses the VULTR hosting provider, because they offer additional IPs for your hosting.

Starting with your existing running masternode created from my guide, go to your admin panel on VULTR and add another IPv4 address.


upload_2020-3-14_13-46-33.png


Next, as the dashadmin user, click on 'networking configuration' on Vultr and copy and paste the text for the latest Ubuntu build, as described on that page for your OS.

upload_2020-3-14_15-30-1.png

After updating /etc/netplan/10-ens3.yaml with your new IP, apply the changes.

Code:
sudo netplan apply

To enable the new IP, now restart the VPS from the Vultr admin panel; a sudo reboot from within the VPS won't do it.

SSH back into the machine and test the new IP by trying to SSH to it, e.g.

Code:
ssh dashadmin@<new IP>

You should see a prompt asking you to confirm the security fingerprint of the server (yes/no). If you see that, then your new IP is working correctly; otherwise, troubleshoot your VPS before continuing with this guide....

Next, we create two new users: one for the second Dash MN and the other for the common files (blocks) that both MNs will share.

As dashadmin,

Code:
sudo useradd -m -c dash02 -s /bin/bash dash02
sudo useradd -m -c dash-common -s /usr/sbin/nologin dash-common
sudo usermod -a -G dash02,dash-common dashadmin
sudo usermod -a -G dash-common dash
sudo usermod -a -G dash-common dash02

Now we set a strong password for each of the new users; you do not have to write these down, you will never need them.

Code:
# Generate a random string and paste it in when passwd prompts for the new password.
< /dev/urandom tr -dc A-Za-z0-9 | head -c${1:-32};echo
sudo passwd dash02
< /dev/urandom tr -dc A-Za-z0-9 | head -c${1:-32};echo
sudo passwd dash-common

Next, we create directories and set permissions.

Code:
sudo mkdir -p /home/dash-common/.dashcore/blocks
sudo chown -v -R dash-common:dash-common /home/dash-common/
sudo chmod -v -R g+wxr /home/dash-common/

Next, shut down the running dashd, move the common files to the shared user, create a few links back to these files, and restart the node...

Run the below as the dashadmin user
Code:
sudo systemctl stop dashd

Sudo to the dash user and run the rest.

Code:
sudo su - dash

Code:
# Create a variable listing the block files we need to move over, excluding the most recent one, which is still being written to.
files=$(ls /home/dash/.dashcore/blocks/blk*dat|head -$(($(ls /home/dash/.dashcore/blocks/blk*dat|wc -l)-1)))

# Append the list of rev files.
files+=$(echo;ls /home/dash/.dashcore/blocks/rev*dat|head -$(($(ls /home/dash/.dashcore/blocks/rev*dat|wc -l)-1)))

# Move the blocks over to the common location
for f in $files;do mv -v $f /home/dash-common/.dashcore/blocks/;done

# Now the dash user will create symlinks back to those files to replace the moved ones.

cd ~/.dashcore/blocks/
for f in /home/dash-common/.dashcore/blocks/*;do ln -vs $f $(basename $f);done

Now, while dashd is still down, we need to change the dash.conf file slightly.
Ensure externalip is set as normal, but also set a new parameter called
bind= to the same IP; this is your original node. Then add a new parameter as below:
rpcport=9998

Code:
nano ~/.dashcore/dash.conf
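
For illustration only, the relevant lines of the original node's dash.conf might then look something like the below (the angle-bracket placeholder is your first/original IP; keep your other existing settings as they are):

Code:
# Original (first) node -- illustrative values only.
externalip=<original-IP>
bind=<original-IP>
rpcport=9998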

As the dashadmin user, update the file permissions once more.

Code:
sudo chown -v -R dash-common:dash-common /home/dash-common/
sudo chmod -v -R g+wrx /home/dash-common/.dashcore/blocks/

Restart the dashd as the dashadmin user....

Code:
sudo systemctl start dashd

Verify that the dashd has restarted successfully before moving on, otherwise troubleshoot your changes.
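
One way to check, my suggestion rather than part of the original steps, is to ask systemd and the log directly:

Code:
# Confirm the service is active and peek at the last few log lines.
sudo systemctl status dashd --no-pager
sudo tail -n 20 /home/dash/.dashcore/debug.log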

Initialise the second masternode. This will be a clone of the first masternode, but we will update the details in its dash.conf shortly.

Stop the dashd daemon once more and copy the files over. :)
Remove some stale cache files that would conflict with the original node; these will be rebuilt. Note that the rm command below will remove wallets. You should not have any Dash stored on a MN anyway, but make sure that is the case before proceeding. o_O

Code:
sudo systemctl stop dashd
sudo cp -va /home/dash/.dashcore /home/dash02
sudo chown -v -R dash02:dash02 /home/dash02/
sudo rm -fr /home/dash02/.dashcore/{.lock,d*.log,*.dat} /home/dash02/.dashcore/backups/


Now that the data has been copied over, you can again restart the original node as the dashadmin user....
Code:
sudo systemctl start dashd

Since dash02 is a clone of the first node, we need to enter the specifics for this node; edit the dash.conf file like so.

Code:
sudo nano /home/dash02/.dashcore/dash.conf

and the things you need to change are listed below.

rpcuser
rpcpassword
externalip
bind
masternodeblsprivkey
rpcport


Change the RPC port to 9997, although you can choose another number larger than 1024 that the system is not currently using... Make sure bind and externalip are both set to your new (second) IP, that rpcuser/rpcpassword are set to something different from the original node's, and that masternodeblsprivkey holds the BLS key for the new masternode.
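
As an illustration only, the edited parts of /home/dash02/.dashcore/dash.conf might end up looking something like the below (angle-bracket values are placeholders you must replace with your own):

Code:
# Second node -- illustrative values only.
rpcuser=<unique-rpc-user>
rpcpassword=<unique-rpc-password>
rpcport=9997
externalip=<second-IP>
bind=<second-IP>
masternodeblsprivkey=<bls-secret-key-of-second-MN>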


Now we set up a systemd unit file for this second node so it starts and shuts down automatically.
The below should be run as the dashadmin user.

Code:
sudo mkdir -p /etc/systemd/system&&\
sudo bash -c "cat >/etc/systemd/system/dashd02.service<<\"EOF\"
[Unit]
Description=Dash Core Daemon (2)
After=syslog.target network-online.target


[Service]
Type=forking
User=dash02
Group=dash02

OOMScoreAdjust=-1000

ExecStart=/opt/dash/bin/dashd -pid=/home/dash02/.dashcore/dashd.pid
TimeoutStartSec=10m

ExecStop=/opt/dash/bin/dash-cli stop

TimeoutStopSec=120

Restart=on-failure
RestartSec=120

StartLimitInterval=300
StartLimitBurst=3

[Install]
WantedBy=multi-user.target

EOF"

I have removed the comments from the above file to condense it, but it is very similar to the one used to start/stop the other dashd.

Next, we register the file with systemd and start the daemon.

Code:
sudo systemctl daemon-reload &&\
sudo systemctl enable dashd02 &&\
sudo systemctl start dashd02 &&\
echo "Dash02 is now installed as a system service and initializing..."
 
Make sure you install sentinel :mad:

Run the below as the dash02 user!

Code:
sudo su - dash02

Code:
cd &&\
git clone https://github.com/dashpay/sentinel &&\
cd sentinel &&\
virtualenv venv &&\
venv/bin/pip install -r requirements.txt &&\
venv/bin/py.test test &&\
venv/bin/python bin/sentinel.py

# Add a crontab entry.
echo "*/10 * * * * { test -f ~/.dashcore/dashd.pid&&cd ~/sentinel && venv/bin/python bin/sentinel.py;} >> \
 ~/sentinel/sentinel-cron.log 2>&1" \
|crontab -&&echo "Successfully installed cron job."




Verify the daemons are running in top. Also, at this time, make sure you have executed the 3-part protx registration to register your new MN on the Dash network.
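
For reference, the 3-part protx registration is run from the wallet holding the collateral (not on the VPS) and looks roughly like the below; the argument placeholders are mine, so check the protx help in your own wallet for the exact signature.

Code:
protx register_prepare <collateralHash> <collateralIndex> <second-IP:9999> <ownerKeyAddr> <operatorBLSPubKey> <votingKeyAddr> <operatorReward> <payoutAddress>
signmessage <collateralAddress> <signMessage-output-from-prepare>
protx register_submit <tx-from-prepare> <signature>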

upload_2020-3-14_20-12-58.png

Let's check the nodes are bound to the correct IPs.
As the dashadmin user...
Code:
echo "dash....."
sudo grep "Bound\|AddLocal" /home/dash/.dashcore/debug.log
echo "dash02....."
sudo grep "Bound\|AddLocal" /home/dash02/.dashcore/debug.log

What you should see is that the second node is bound to a different (secondary) IP.

On my system I have the following disk space being used.

upload_2020-3-14_20-15-56.png


Notice that there is 14 GB which is common to both installations, so I effectively saved 14 GB of disk space; if I were to spin up a third node, I would only need another 5 GB to do so.
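
If you want to see the same breakdown on your own system, something like the below (my suggestion) shows how much sits in the shared area versus each node's private data:

Code:
sudo du -sh /home/dash-common/.dashcore /home/dash/.dashcore /home/dash02/.dashcore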

upload_2020-3-14_20-18-2.png


I still have 45 GB to go, plenty for growth and another node. o_O

RAM usage is looking fine too; if I run low, I would increase the swap space rather than adding costly RAM. :cool:

upload_2020-3-14_20-19-40.png


Dash is generally not a CPU-bound app; it tends to be memory and disk hungry.

I hope you've enjoyed the guide!
Leave a like and a comment. :D
 
Update to this guide: I have been running three instances of dashd on the VPS above without issues for weeks now, but a couple of days ago the VPS ran out of memory; both RAM and swap were full (6 GB) and a hard restart was required to sort it out. To figure out what happened, I now have a cron job running every minute that records the free RAM available and the RAM used by each of the running dashd instances; I aim to see if the escalation in RAM usage is gradual or sudden. I have also increased my swap to 4 GB, so I now have 4+4 = 8 GB available on the VPS. I have also pre-emptively written a script that will monitor the RAM usage of the system and restart the dashd using the most RAM before it crashes the system, but I would rather avoid such workarounds if at all possible.
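
For reference, a minimal sketch of such a logger as a crontab entry for the dashadmin user (an assumed form, not the exact script I run); it appends a timestamp, the memory summary and the resident size of every dashd process once a minute:

Code:
* * * * * { date; free -m | grep '^Mem'; ps -C dashd -o user=,rss=,etime=; } >> ~/dashd-mem.log 2>&1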

If you face out-of-memory issues, let us know; it could help the devs figure out if this is normal behaviour or some kind of memory leak.
 
I have been running multiplexed masternodes for many months now without any issues at all; however, over time, any new blocks created will be stored by each node, resulting in redundancy. The script below will save disk space by moving the duplicate blocks into the dash-common user's directory and adjusting each node accordingly.

To do this, run the block below as root, be careful not to make errors, and do it all in one sitting without logging out and back in!

To become root, as the dashadmin user issue sudo su -

Code:
# Shutdown all the nodes!  This is important otherwise the data will not be consistent!
systemctl stop dashd dashd02 dashd03

files=$(find /home/dash/.dashcore/blocks -type f -name "blk*"|sort|head -$(($(find /home/dash/.dashcore/blocks -type f -name "blk*"|wc -l)-1)))
files+=$(echo;find /home/dash/.dashcore/blocks -type f -name "rev*"|sort|head -$(($(find /home/dash/.dashcore/blocks -type f -name "rev*"|wc -l)-1)))
# Only do the below if the $files variable contains elements.
for f in $files;do mv -v $f /home/dash-common/.dashcore/blocks/;done
chmod -v -R g+wrx /home/dash-common/.dashcore/blocks/
chown -vR dash-common:dash-common /home/dash-common/.dashcore/blocks/
cd /home/dash/.dashcore/blocks/
for f in $files;do ln -vs "/home/dash-common/.dashcore/blocks/$(basename $f)" $(basename $f);done
chown -Rv dash:dash /home/dash/.dashcore/blocks/

cp -v /home/dash02/.dashcore/dash.conf /tmp/
rm -fr /home/dash02/.dashcore/
cp -va /home/dash/.dashcore /home/dash02
rm -fr /home/dash02/.dashcore/{.lock,.walletlock,d*.log,*.dat} /home/dash02/.dashcore/backups/
cp -v /tmp/dash.conf /home/dash02/.dashcore/
chown -v -R dash02:dash02 /home/dash02/

# This block is the same as the above block duplicated for this node.
cp -v /home/dash03/.dashcore/dash.conf /tmp/
rm -fr /home/dash03/.dashcore/
cp -va /home/dash/.dashcore /home/dash03
rm -fr /home/dash03/.dashcore/{.lock,.walletlock,d*.log,*.dat} /home/dash03/.dashcore/backups/
cp -v /tmp/dash.conf /home/dash03/.dashcore/
chown -v -R dash03:dash03 /home/dash03/

# Just reboot and all the nodes will come back online themselves.
reboot
 
Dear MultiDashers! I have updated the OP and the maintenance post just above this one to now also archive the rev*dat files, which saves us an additional 3 GB of disk space per instance!!! o_O

1626012599026.png


I have tested a full reindex and dashd was perfectly happy with it, so I consider this stable. To take advantage of this change, simply follow the steps in the post above and the rev files will be moved over and replaced with links. :cool:
 
OK, an update on this guide, in particular for the v18 upgrade.
This is for people upgrading from v0.17 to v18.

Follow these steps carefully for a successful upgrade (worked for me).

  1. Shut down all nodes, e.g. sudo systemctl stop dashd01 dashd02 dashd03
  2. Ideally, do your system updates at this point.
  3. Update the Dash binaries to v18.
  4. Start dashd01 ONLY, i.e. your first instance, e.g. sudo systemctl start dashd01
  5. When it has done its upgrade, shut it down.
  6. Now run the refactor script, from above or from my GitHub page; ensure the username prefix variable matches your setup and the number of nodes is correct. https://github.com/kxcd/Masternode-Zeus/blob/main/multiplexing/refactor-nodes.sh
  7. After the reboot, you should have all nodes working correctly.

The key point to note here: the idea is to get the first node, i.e. the reference node, to do the updates, indexing and whatever, and then propagate that to the rest of the nodes, so they just start on a chain that was already updated to v18.
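
Condensed into commands, the sequence above looks roughly like this (unit names assumed to match the earlier posts; adjust to your own):

Code:
sudo systemctl stop dashd01 dashd02 dashd03
# ...do your OS updates and install the v18 Dash binaries here...
sudo systemctl start dashd01
# wait for the first node to finish its upgrade, then stop it again
sudo systemctl stop dashd01
# run refactor-nodes.sh as root; it ends with a reboot and all nodes come back up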
 
Thanks for this guide. The idea is very interesting, but isn't it better to spread the risk of a VPS crashing?
Same question for Evonodes: their payment ratio may be better (at least for now), but losing 1 MN payment when 1 VPS crashes may be a better outcome than losing the equivalent of 4 MN payments? :unsure:
 
Sure, there is some additional risk of an outage impacting several nodes, but this reduces maintenance tasks for the multi-MNO and reduces cost for such individuals. Choose a good hosting service that is reliable to avoid downtime.
 
What services would you recommend? I have used LunaNode and they are quite good, but even they have rare random outages.
 
Vultr is still very good and I believe some new plans from Hetzner are good too. All my nodes are multiplexed and I am finding good uptime, and they are easier to manage, which is a blessing for me as my time is invaluable.
 
Thank you very much for your guides and resources xkcd, I find them valuable! I would like to contribute to this guide by explaining how to run two masternodes on a single DigitalOcean droplet utilizing the Reserved IP address feature.

Start this guide with a running zeus masternode. Make sure to have a backup or recent snapshot. Let's go right ahead and create a secondary public IP address for your droplet. Skip forward to the multiplexing guide if you already have a Reserved IP assigned to your droplet.

Navigate to https://cloud.digitalocean.com/networking/reserved_ips. Select your droplet from the input field and click Assign Reserved IP. I'll refer to this address later as <reserved-IP-address>. You can now SSH into your droplet using either the Droplet or Reserved IP address, give it a try. All firewall rules for your droplet are also active for the Reserved IP.

Now let's follow xkcd's multiplexing guide. Skip the networking part and start off where you create the new users. You can come back to this guide when configuring the dash.conf values. The following instructions will help you determine some of the required values.

Outbound traffic for the second dashd will need to be mapped to the public Reserved IP. But the public Reserved IP address is not directly exposed inside the droplet; you will need to use the Anchor IP that belongs to the Reserved IP for the internal configuration. Let's find this Anchor IP address; this example is on Ubuntu or Debian. If you run a different or older OS, then read digitalocean.com/outbound-traffic. Don't confuse it with the <anchor-gateway-IP-address> mentioned there, as that belongs to the droplet, not the Reserved IP.

SSH into your droplet:
Code:
ssh dashadmin@<reserved-IP-address>
nano /etc/netplan/50-cloud-init.yaml

Find the two IP addresses listed under network/ethernets/eth0/addresses. The first address is the standard public IP address of your droplet. We will use it later as <droplet-IP-address>, disregard the /20 or similar. The second address is the Anchor IP gateway address for the public Reserved IP or <anchor-reserved-IP-address>.
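
If you prefer the command line, something like the below (my suggestion) prints the address block from that file and shows what is actually configured on the interface:

Code:
grep -A3 'addresses:' /etc/netplan/50-cloud-init.yaml
ip -4 addr show eth0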

Let's use these addresses to bind both dashd's to their public IP addresses. Open the configuration for your first dashd and set the following properties:
externalip=<droplet-IP-address>
bind=<droplet-IP-address>:9999

Code:
nano /home/dash/.dashcore/dash.conf

Generate a masternodeblsprivkey for <reserved-IP-address> with port 9999. Then open the configuration for your second dashd to set these properties:
masternodeblsprivkey=<generated-key>
externalip=<reserved-IP-address>
bind=<anchor-reserved-IP-address>:9999

Code:
nano /home/dash02/.dashcore/dash.conf

Restart the daemon after you have updated the configuration.
Code:
sudo systemctl restart dashd dashd02

Using mnowatch, we can test if both daemons are exposed correctly, both commands should return OPEN.
Code:
curl --interface <droplet-IP-address> https://mnowatch.org/9999/
curl --interface <anchor-reserved-IP-address> https://mnowatch.org/9999/

Continue xkcd's multiplexing guide to complete your setup. After finishing you can monitor your masternodes using DashNinja.

I hope you will find this addition useful!
 
Very interesting way of operating nodes! :)

What would be a good rule of thumb for RAM, say, for every MN added? Perhaps 3 is a good limit per VM too..?

Guessing it would be hard to consolidate nodes without having to change the public IP too..
 
About +3 GB per additional node; +2 GB might do it, but make sure you increase the swap file too. Changing IPs is easy with the protx update_service ... invocation, with no loss of place in the payment queue either.
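
For anyone who has not grown a swap file before, a minimal sketch of adding one (the size and path are my own placeholders; adjust to taste):

Code:
sudo fallocate -l 2G /swapfile2
sudo chmod 600 /swapfile2
sudo mkswap /swapfile2
sudo swapon /swapfile2
echo '/swapfile2 none swap sw 0 0' | sudo tee -a /etc/fstab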
 