Using Warewulf* with OpenLDAP*
User management and authentication are critical for cluster systems. Ensuring that users have access to the nodes and that important data is available on all of the nodes can be complicated. The process is relatively straightforward on systems where only local authentication is needed. However, in large labs and multi-cluster/system environments that make use of the Lightweight Directory Access Protocol (LDAP) or Network Information Services (NIS), there are extra considerations. This article focuses on some of the additional considerations required when using OpenLDAP1 along with Warewulf*.
Managing users on a Warewulf cluster generally involves two stages: 1) ensuring there is a consistent user environment across the compute nodes, and 2) enabling user authentication on the nodes. By default, Warewulf mounts the /home directory from the head node on all of the compute nodes via NFS. This is generally acceptable when only local users exist, but when /home is not local to the head node, the setup becomes more complicated.
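As an illustration, the default arrangement amounts to an NFS export on the head node and a corresponding mount in the compute node image. The network, host name, and export options below are placeholders and may differ from what your Warewulf installation generates:

# On the head node, /etc/exports (illustrative network and options):
/home 192.168.0.0/255.255.255.0(rw,no_root_squash)

# In the compute node image, /etc/fstab (head node name is a placeholder):
head-node:/home /home nfs defaults 0 0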
In configurations where /home is provided to the cluster head node in conjunction with a remote LDAP server, the head node must act both as a slave to the external server and as the master to the clustered compute nodes. One way to make this configuration work is to set up the LDAP servers using the N-Way Multi-Master Replication technique outlined on the OpenLDAP website. Some of the other replication techniques described there can also be used, depending on how the main LDAP server is set up in your environment.
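A minimal sketch of the cn=config changes involved in N-Way Multi-Master Replication is shown below. The server ID, host names, database entry ({1}hdb), bind DN, and credentials are all placeholders; the OpenLDAP Administrator's Guide remains the authoritative procedure:

# Give this server a unique ID (URI is a placeholder)
dn: cn=config
changetype: modify
replace: olcServerID
olcServerID: 2 ldap://head-node.example.com

# Pull changes from the central server and enable multi-master writes
dn: olcDatabase={1}hdb,cn=config
changetype: modify
add: olcSyncRepl
olcSyncRepl: rid=001 provider=ldap://ldap-master.example.com
  binddn="cn=replicator,dc=example,dc=com" bindmethod=simple credentials=secret
  searchbase="dc=example,dc=com" type=refreshAndPersist retry="5 5 300 +"
-
add: olcMirrorMode
olcMirrorMode: TRUE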
In addition to the LDAP server replication setup, it may be desirable to set up password-less ssh and to distribute user credentials from a central place on the head node. These procedures can be found on the Warewulf website. The documentation portion of the Warewulf website offers much additional information to guide the setup of clusters.
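As one possible illustration of distributing credentials with Warewulf's file provisioning (the node range and file names below are placeholders; consult the Warewulf documentation for the exact workflow in your version):

>> wwsh file import /etc/passwd
>> wwsh file import /etc/group
>> wwsh provision set n00[00-99] --fileadd passwd,group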
*Other names and brands may be claimed as the property of others.
1 OpenLDAP is an open-source implementation of LDAP. You can find additional information about OpenLDAP at http://www.openldap.org.
How to Set Up Intel® Xeon Phi™ Coprocessor Cards Using Warewulf* 3.4
The upcoming release of the open source provisioning system Warewulf* 3.4 will include full support for Intel® Xeon Phi™ coprocessors. Besides being fully compliant with the Intel® Cluster Ready specification, Warewulf 3.4 will provide a simple way to install the Intel® Manycore Platform Software Stack (Intel® MPSS) for Intel Xeon Phi coprocessors and configure the compute nodes to enable them during their boot process.
With the CentOS* 6 server operating system installed on the cluster head node, follow the standard Warewulf installation and include the packages for the Intel Xeon Phi coprocessor, which are named after the coprocessor architecture, Intel® Many Integrated Core Architecture (Intel® MIC Architecture). Install warewulf-mic-3.4-1.el6.x86_64.rpm on the head node and copy warewulf-mic-node-3.4-1.el6.x86_64.rpm to the /root directory so that wwinit can install it as explained below.
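For example, assuming the RPM files are in the current directory (the exact file names may differ for your build):

>> rpm -ivh warewulf-mic-3.4-1.el6.x86_64.rpm
>> cp warewulf-mic-node-3.4-1.el6.x86_64.rpm /root/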
Using the Warewulf initialization utility (wwinit), it is possible to automate the Intel MPSS installation steps on the head node and in the compute node image. Just copy the Intel MPSS installer to the /root directory and run:
>> OFEDFORCE=1 CHROOTDIR=<path to the compute node file system> wwinit MIC
This will install and configure Intel MPSS on the head node and in the compute node image, enabling InfiniBand* support. Additionally, it will prepare the warewulf-mic-node package to set up and start the Intel Xeon Phi coprocessor cards in the compute nodes during the operating system boot process. If InfiniBand* support is not desired, remove OFEDFORCE=1 from the command. At this point the compute node image and bootstrap capsules must be re-built:
>> wwvnfs --chroot <path to the compute node file system>
>> wwbootstrap `uname -r`
Assuming that the compute node objects have already been created and the physical nodes have been assigned, the next step is to define the number of Intel Xeon Phi coprocessor cards per node. In this example we assume that there are two cards per node.
>> wwsh mic set --mic=2
The next step is to set the card IP addresses. In this example, we configure the IP addresses for the cards without specifying particular nodes, letting Warewulf assign successive IP addresses to the cards across all the nodes. Note that for the cards to communicate, their IP addresses must be on the same network as the cluster communication network. In this example, we use 192.168.0.0/24.
>> wwsh node set --netdev=mic0 --ipaddr=192.168.0.52 --netmask=255.255.255.0
>> wwsh node set --netdev=mic1 --ipaddr=192.168.0.102 --netmask=255.255.255.0
On subsequent boots of the compute nodes, the Intel Xeon Phi coprocessor cards will automatically be configured and brought to an operational state. You can review the /etc/hosts file on the head node to see the names assigned to each card in the cluster.
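The entries will look roughly like the following; the exact host names Warewulf generates depend on your node naming scheme and are shown here only as placeholders:

192.168.0.52    n0000-mic0
192.168.0.102   n0000-mic1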
For a complete and detailed guide on how to build and deploy a cluster with Intel Xeon Phi coprocessors using Warewulf 3.4, please refer to the Intel Cluster Ready Reference Designs.
* Other names and brands may be claimed as the property of others.