This is documentation for MapR Version 5.0. You can also refer to MapR documentation for the latest release.


The CapacityScheduler is a pluggable scheduler for Hadoop that allows multiple tenants to securely share a large cluster. Resources are allocated to each tenant's applications in a way that fully utilizes the cluster, governed by the constraints of allocated capacities.

Queues are typically set up by administrators to reflect the economics of the shared cluster. The Capacity Scheduler supports hierarchical queues to ensure that resources are shared among the sub-queues of an organization before other queues are allowed to use free resources.

Capacity Scheduler Features

The CapacityScheduler supports these features:

  • Hierarchical Queues
    Hierarchical queues ensure that resources are shared among the sub-queues of an organization before other queues are allowed to use free resources, thereby providing more control and predictability.
  • Capacity Guarantees
    Queues are allocated a fraction of the capacity of the grid, which means that a certain capacity of resources are at their disposal. All applications submitted to a queue have access to the capacity allocated to the queue. Administrators can configure soft limits and optional hard limits on the capacity allocated to each queue.
  • Security
    Each queue has strict Access Control Lists (ACLs). The ACLs control which users can submit applications to individual queues. Also, safeguards ensure that users cannot view or modify applications from other users. Per-queue and system administrator roles are also supported.
  • Elasticity
    Free resources can be allocated to any queue beyond its capacity allocation. As tasks scheduled on these resources complete, the resources become available to be reassigned to applications on queues running below their capacity. (Note that pre-emption is not supported.) This ensures that resources are available in a predictable and elastic manner to queues, thus preventing artificial silos of resources in the cluster and improving cluster utilization.
  • Multi-tenancy
    A comprehensive set of limits is provided to prevent a single application, user, or queue from monopolizing resources of the queue or the cluster as a whole. This ensures that the cluster is not overwhelmed.
  • Operability
    • Runtime Configuration
      The queue definitions and properties, such as capacity or ACLs, can be changed in a secure manner by administrators at runtime, which minimizes disruption to users. Also, a console is provided for users and administrators to view the current allocation of resources to various queues in the system. Administrators can add queues at runtime, but queues cannot be deleted at runtime.
    • Drain applications
      Administrators can stop queues at runtime to ensure that while existing applications run to completion, no new applications can be submitted. If a queue is in the STOPPED state, new applications cannot be submitted to that queue or any of its child queues. Existing applications continue to completion, so the queue can be drained gracefully. Administrators can also start the stopped queues.
  • Resource-based Scheduling
    Support for resource-intensive applications, where an application can optionally specify higher resource requirements than the default, thereby accommodating applications with differing resource requirements. Currently, memory is the only supported resource requirement.

Setting up ResourceManager to use CapacityScheduler

To configure the ResourceManager to use the CapacityScheduler, set the following property in the yarn-site.xml file:

Property Name: yarn.resourcemanager.scheduler.class
Value: org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler
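The standard YARN property for selecting the scheduler implementation is yarn.resourcemanager.scheduler.class; a minimal yarn-site.xml entry looks like this:

```xml
<property>
  <name>yarn.resourcemanager.scheduler.class</name>
  <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
</property>
```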

Setting up Queues

The ResourceManager uses the configuration file capacity-scheduler.xml, where you can configure various scheduling parameters related to queues. These parameters include:

  • the short queue name
  • the full queue path name
  • a list of associated child queues and applications
  • the guaranteed capacity (expressed as a percentage of slots in the cluster) available to the jobs in the queue
  • the maximum capacity of the queue
  • a list of active users and their resource allocation limits
  • the state of the queue (running or stopped)
  • access control lists that determine who can access the queue

The CapacityScheduler has a pre-defined queue called root. All queues in the system are children of the root queue. Further queues can be set up by configuring yarn.scheduler.capacity.root.queues with a list of comma-separated child queues.
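For example, a minimal sketch of this property, assuming two hypothetical top-level queues named prod and dev:

```xml
<property>
  <name>yarn.scheduler.capacity.root.queues</name>
  <value>prod,dev</value>
</property>
```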

Queue Properties

The capacity-scheduler.xml file includes three types of queue properties:

  • Resource allocation
  • Running and pending application limits
  • Queue administration and permissions

Resource Allocation Properties

The following resource allocation properties are available:


yarn.scheduler.capacity.<queue-path>.capacity

Queue capacity in percentage (%) expressed as a float (for example, 12.5). The sum of capacities for all queues, at each level, must equal 100.

Applications in the queue may consume more resources than the queue's capacity if there are free resources, which provides elasticity.


yarn.scheduler.capacity.<queue-path>.maximum-capacity

Maximum queue capacity in percentage (%) expressed as a float.

This property limits the elasticity for applications in the queue. The default is -1 which disables it.


yarn.scheduler.capacity.<queue-path>.minimum-user-limit-percent

Sets a floor, expressed as an integer percentage, on the share of the queue's resources allocated to each user when there is demand for resources.

A value of 100 implies no user limits are imposed. The default is 100.

The maximum value depends on the number of users who have submitted applications. For example, if this property is set to 25 and two users
have submitted applications to a queue, the maximum percent of queue resources for each user is 50%. If a third user submits an application, no single user
can use more than 33% of the queue resources. With four or more users, no user can use more than 25% of the queue resources.


yarn.scheduler.capacity.<queue-path>.user-limit-factor

The multiple of the queue capacity that can be configured to allow a single user to acquire more resources.

Value is specified as a float.

The default is 1, which ensures that a single user can never take more than the queue's configured capacity no matter how idle the cluster is.


yarn.scheduler.capacity.resource-calculator

Specifies the ResourceCalculator implementation to be used to compare resources in the scheduler. The default value is DiskBasedResourceCalculator, which uses memory, CPU, and disk. Other values for this parameter include:

  • DefaultResourceCalculator, which uses memory only
  • DominantResourceCalculator, which uses dominant-resource to compare multi-dimensional resources such as memory and CPU
  • DiskBasedDominantResourceCalculator, which uses dominant-resource to compare multi-dimensional resources, such as memory, CPU and disk

Running and Pending Application Limits

Applications are considered active if they are either running or pending. The following properties specify running and pending application limits:


yarn.scheduler.capacity.maximum-applications

Maximum number of applications in the system that can be concurrently active, both running and pending.

Limits on each queue are directly proportional to their queue capacities and user limits.

This is a hard limit; any applications submitted when this limit is reached will be rejected. The default is 10000. This applies to all queues.

yarn.scheduler.capacity.<queue-path>.maximum-applications

Overrides yarn.scheduler.capacity.maximum-applications on a per-queue basis.

yarn.scheduler.capacity.maximum-am-resource-percent

Maximum percent of resources in the cluster that can be used to run application masters; this controls the number of concurrently active applications.

Limits on each queue are directly proportional to their queue capacities and user limits. Specified as a float. For example, 0.5 = 50%.

The default is 0.1. This can be set for all queues with yarn.scheduler.capacity.maximum-am-resource-percent.

yarn.scheduler.capacity.<queue-path>.maximum-am-resource-percent

Overrides yarn.scheduler.capacity.maximum-am-resource-percent on a per-queue basis.

Queue Administration and Permissions

The following queue administration and permission properties are available:


yarn.scheduler.capacity.<queue-path>.state

The state of the queue. Possible values are RUNNING or STOPPED. If a queue is in the STOPPED state, new applications cannot be submitted to that queue or any of its child queues.

If the root queue is STOPPED, no applications can be submitted to the entire cluster. Existing applications continue to completion, so the queue can be drained gracefully.


yarn.scheduler.capacity.<queue-path>.acl_submit_applications

The ACL that controls who can submit applications to the given queue. If the given user or group belongs to the ACL on a given queue or on one of the parent queues in the hierarchy, they can submit applications.

ACLs for this property are inherited from the parent queue if not specified.


yarn.scheduler.capacity.<queue-path>.acl_administer_queue

The ACL that controls who can administer applications on the given queue. If the given user or group has the necessary ACLs on the given queue or on one of the parent queues in the hierarchy, they can administer applications.

ACLs for this property are inherited from the parent queue if not specified.

Setting up a Hierarchy of Queues

The CapacityScheduler uses a concept called a queue path to configure a hierarchy of queues. The queue path is the full path of the queue's hierarchy, starting at root. The following example has three top-level child queues, a, b, and c, and some sub-queues under a and b:
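One possible hierarchy of this shape (the sub-queue names a1, a2, b1, b2, and b3 are illustrative assumptions):

```
root
├── a
│   ├── a1
│   └── a2
├── b
│   ├── b1
│   ├── b2
│   └── b3
└── c
```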


Queue paths are defined for each level under the root queue. A queue's children are defined with the parameter yarn.scheduler.capacity.<queue-path>.queues, where <queue-path> takes the form root.<child>, root.<child>.<child>, and so on. For example, the queue path to a2 is designated as root.a.a2.


Children do not inherit properties directly from the parent unless otherwise noted.

The corresponding queue definition block of the capacity-scheduler.xml file is shown below.
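A sketch of such a queue definition block, assuming top-level queues a, b, and c with illustrative sub-queues a1 and a2 under a, and b1, b2, and b3 under b:

```xml
<property>
  <name>yarn.scheduler.capacity.root.queues</name>
  <value>a,b,c</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.a.queues</name>
  <value>a1,a2</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.b.queues</name>
  <value>b1,b2,b3</value>
</property>
```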

Changing Queue Configuration

You can change queue properties and add new queues by editing capacity-scheduler.xml. Make sure that the updated queue configuration is valid and that the queue-capacity at each level equals 100%.

For the changes to take effect, run the following command:
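The standard YARN administrative command for reloading queue configuration without restarting the ResourceManager is:

```shell
yarn rmadmin -refreshQueues
```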


Queues cannot be deleted, only added.

Queue Access Control Lists

Queue Access Control Lists (ACLs) allow administrators to control who may take actions on particular queues. They are configured with the following properties:

  • yarn.scheduler.capacity.<queue-path>.acl_submit_applications
  • yarn.scheduler.capacity.<queue-path>.acl_administer_queue

These properties can be set per queue. Currently, the only supported administrative action is killing an application. Anyone who has permission to administer a queue may also submit applications to it. These properties take values of the form user1,user2 group1,group2, where a single space separates the user list from the group list; to specify only groups, start the value with a space, as in " group1,group2". An action on a queue is permitted if the user or group appears in the ACL of that queue or in the ACL of any of that queue's ancestors. So if queue2 is inside queue1, and user1 is in queue1's ACL, and user2 is in queue2's ACL, then both users may submit to queue2.

The root queue's ACLs are "*" by default which, because ACLs are passed down, means that everyone may submit to and kill applications from every queue. To restrict access, change the root queue's ACLs to something other than "*".
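For example, to prevent anyone from submitting to queues that do not grant access explicitly, the root queue's submit ACL can be set to a single space, which in YARN ACL syntax means no users or groups:

```xml
<property>
  <name>yarn.scheduler.capacity.root.acl_submit_applications</name>
  <value> </value>
</property>
```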

By default, the yarn.admin.acl property in yarn-site.xml is also set to "*", which means any user can be the administrator. If queue ACLs are enabled, you also need to set the yarn.admin.acl property to the correct admin user for the YARN cluster. For example:
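A yarn-site.xml entry of this kind might look as follows (the user name mapr is illustrative; substitute the actual admin user for your cluster):

```xml
<property>
  <name>yarn.admin.acl</name>
  <value>mapr</value>
</property>
```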

If you do not set this property correctly, users will be able to kill YARN jobs even when they do not have access to the queues for those jobs. 
