This is documentation for MapR Version 5.0. You can also refer to MapR documentation for the latest release.

You can remove a node using the node remove command, or in the MapR Control System using the following procedure. Removing a node detaches the node from the cluster, but does not remove the MapR software from the cluster.

To remove a node using the MapR command-line interface:

Before you start, drain the node of data by moving the node to the /decommissioned physical topology. All the data on a node in the /decommissioned topology is migrated to volumes and nodes in the /data topology. Use the node remove command to remove one or more server nodes from the cluster. To run this command, you must have full control (fc) or administrator (a) permission. The syntax is:
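As a sketch of the syntax (the hostnames below are placeholders, not nodes from this document):

```shell
# Remove one or more server nodes from the cluster.
# Pass the node names as a comma-separated list to -nodes.
maprcli node remove -nodes node1.example.com,node2.example.com
```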

See node remove for a full explanation of the syntax. After you issue the node remove command, wait several minutes to ensure that the node has been completely removed.

To remove a node using the MapR Control System:

Before you start, drain the node of data by moving the node to the /decommissioned physical topology. All the data on a node in the /decommissioned topology is migrated to volumes and nodes in the /data topology.

Run the following command to check if a given volume is present on the node:

Run this command for each non-local volume in your cluster to verify that the node being removed is not storing any volume data.
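One way to perform this check (the volume name and node IP below are placeholders) is to dump the list of nodes that store data for a volume and search it for the node being removed:

```shell
# List the storage locations for a volume, then search for the node.
# No matching output means the node holds no data for that volume.
maprcli dump volumenodes -volumename myvolume -json | grep 10.10.1.15
```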

  1. In the Navigation pane, expand the Cluster group and click the Nodes view.
  2. Select the checkbox beside the node or nodes you wish to remove.
  3. Click Manage Services and stop all services on the node.
  4. Wait 5 minutes. The Forget Node button becomes active.
  5. Click the Forget Node button to display the Forget Node dialog.
  6. Click Forget Node to remove the node.

If you are using Ganglia, restart all gmetad and gmond daemons in the cluster. See Ganglia.

You can also remove a node by clicking Forget Node in the Node Properties view.

Decommissioning a Node

Use the following procedures to remove a node and uninstall the MapR software. This procedure detaches the node from the cluster and removes the MapR packages, log files, and configuration files, but does not format the disks.

Before Decommissioning a Node


Make sure any data on the node is replicated and any needed services are running elsewhere. If the node you are decommissioning runs a critical service such as CLDB or ZooKeeper, verify that enough instances of that service are running on other nodes in the cluster. See Planning the Cluster for recommendations on service assignment to nodes.

To decommission a node permanently:

Before you start, drain the node of data by moving the node to the /decommissioned physical topology. All the data on a node in the /decommissioned topology is migrated to volumes and nodes in the /data topology.

Run the following command to check if a given volume is present on the node:

Run this command for each non-local volume in your cluster to verify that the node being decommissioned is not storing any volume data.
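As with node removal, one way to verify this (the volume name and node IP below are placeholders) is to dump the volume's storage locations and search for the node:

```shell
# List the storage locations for a volume, then search for the node.
# No matching output means the node holds no data for that volume.
maprcli dump volumenodes -volumename myvolume -json | grep 10.10.1.15
```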

  1. Change to the root user (or use sudo for the following commands).
  2. Stop the Warden:
    service mapr-warden stop
  3. If ZooKeeper is installed on the node, stop it:
    service mapr-zookeeper stop
  4. Determine which MapR packages are installed on the node:
    • dpkg --list | grep mapr (Ubuntu)
    • rpm -qa | grep mapr (Red Hat or CentOS)
  5. Remove the packages by issuing the appropriate command for the operating system, followed by the list of services. Examples:
    • apt-get purge mapr-core mapr-cldb mapr-fileserver (Ubuntu)
    • yum erase mapr-core mapr-cldb mapr-fileserver (Red Hat or CentOS)
  6. Remove the /opt/mapr directory to remove any instances of hostid, hostname, zkdata, and zookeeper left behind by the package manager.
  7. Remove any MapR cores in the /opt/cores directory.
  8. If the node you have decommissioned is a CLDB node or a ZooKeeper node, then run configure.sh on all other nodes in the cluster (see Configuring the Node).
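For step 8, the remaining nodes need the updated list of CLDB and ZooKeeper hosts. A minimal sketch, assuming placeholder hostnames for the surviving CLDB and ZooKeeper nodes:

```shell
# Re-run configure.sh on each remaining node, passing the current
# CLDB nodes (-C) and ZooKeeper nodes (-Z) as comma-separated lists.
/opt/mapr/server/configure.sh \
  -C cldb1.example.com,cldb2.example.com \
  -Z zk1.example.com,zk2.example.com,zk3.example.com
```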

If you are using Ganglia, restart all gmetad and gmond daemons in the cluster. See Ganglia.
