In this part, we'll configure Heartbeat to manage IP address failover on the two storage interfaces. We'll also install and configure an iSCSI target to provide block-level storage to clients.
IP address failover
We want Heartbeat to manage the two IP addresses we will be providing iSCSI services over. Looking back at our original network plan, we can see that these are 192.168.1.1 and 192.168.2.1. They are on two separate subnets to ensure that packets leave through the correct interface when our clients connect to them using multipathing (which we'll set up in the next part). There are other ways of accomplishing this (such as policy-based IP routing), but this is the easiest.
Edit your /etc/ha.d/haresources configuration file on both nodes, so that it looks like the following :
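(The haresources listing itself didn't survive here; a minimal sketch of what it would contain, assuming the primary node is named node1 and a /24 netmask on each storage interface - node name and netmasks are assumptions, adjust for your setup - is :)

node1 \
        IPaddr::192.168.1.1/24/eth2 \
        IPaddr::192.168.2.1/24/eth3 \
        arp_filter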
You can see that it's using the built-in IPaddr script (in /etc/ha.d/resource.d) to bring up the IP addresses on the designated interfaces. The last line, arp_filter, calls a script we'll now create. Put the following contents in the file /etc/init.d/arp_filter :
#!/bin/bash

for FILTER in /proc/sys/net/ipv4/conf/eth*/arp_filter; do
        echo "$FILTER was $(cat $FILTER), setting to 1..."
        echo 1 > $FILTER
done
And then make sure it is executable :
# chmod +x /etc/init.d/arp_filter
The reason we need this additional script is documented at http://linux-ip.net/html/ether-arp.html#ether-arp-flux-arpfilter. It makes the nodes perform a route lookup to determine the interface through which to send ARP replies, instead of the default behavior, which is to reply from all Ethernet interfaces. This is needed because our cluster nodes are connected to several different networks.
Now, restart Heartbeat on both nodes, and you should see your eth2 and eth3 interfaces come up with an IP address assigned to them. Try stopping and starting Heartbeat on each node in turn to observe the resources migrating between them. We can now move on to setting up the iSCSI server.
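To confirm the addresses are up, you can inspect the interfaces directly with iproute2, for example (interface names as per the plan above) :

# ip addr show eth2
# ip addr show eth3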
A great overview of iSCSI is on Wikipedia : http://en.wikipedia.org/wiki/ISCSI. Essentially, it allows you to run SCSI commands over an IP network, which lets you create a low-cost SAN without having to invest in expensive Fibre Channel cabling and switches. The shared block devices we'll create appear to the clients as regular SCSI devices - you can partition, format and mount them the same as you would a regular directly attached device. iSCSI clients are called "initiators", and the server part is called a "target".
Out of these four, the STGT and IET targets seem to be the most commonly used. The STGT target in particular is worth investigating, as it is included in Red Hat Enterprise Linux and its derivatives. We'll be using the IET target, however: it is one of the more popular iSCSI target implementations, builds cleanly on Debian, and, critically, allows the service to be stopped while initiators are still logged in - which we need in a failover scenario.
Note 1 : Check the README.vmware if you are going to use the target as a backing store for VMware!
Note 2 : As the IET target includes a kernel module, you will need to rebuild and reinstall it each time you install or update a kernel. Remember to double-check this each time you run a system update!
First, we'll make sure we have the necessary tools to build the target :
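(The exact commands are missing above; on Debian, something along these lines would pull in the compiler and the kernel headers needed to build the IET module - the package names are assumptions based on a standard Debian setup :)

# apt-get install build-essential linux-headers-$(uname -r)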
You should now have two empty files under /proc/net/iet, session and volume, and output similar to the following will show up in /var/log/messages :
Feb  9 14:09:40 otter kernel: iSCSI Enterprise Target Software - version 0.4.17
Feb  9 14:09:40 otter kernel: iscsi_trgt: Registered io type fileio
Feb  9 14:09:40 otter kernel: iscsi_trgt: Registered io type blockio
Feb  9 14:09:40 otter kernel: iscsi_trgt: Registered io type nullio
We'll create the first target using the LVM volume we created earlier (/dev/storage/test) as its backing store (the underlying storage). On the master node, with the DRBD device and LVM volume active, run the following commands :
# ietadm --op new --tid=1 --params Name=iqn.2009-02.com.example:test
# ietadm --op new --tid=1 --lun=0 --params Type=fileio,Path=/dev/storage/test
The first command creates a new target (ID 1) with an iSCSI Qualified Name. The second command adds a LUN (ID 0) to this target, assigns the LVM volume /dev/storage/test as the backing store, and tells the target to provide access to this device via the "fileio" method. Check the ietd.conf man page for details on the various options you can use - in particular, you may want to benchmark the fileio and blockio types, and try write-back caching.
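One simple way to compare those options is a raw sequential-write test from an initiator once it has logged in - assuming the exported disk appears there as /dev/sdb (the device name is an assumption; double-check it before writing, as this destroys the disk's contents!) :

# dd if=/dev/zero of=/dev/sdb bs=1M count=512 oflag=direct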
If you now check the contents of /proc/net/iet/volume, you'll see the target listed :
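(The listing isn't reproduced above; typical /proc/net/iet/volume output looks roughly like the following - the exact fields vary between IET versions :)

tid:1 name:iqn.2009-02.com.example:test
        lun:0 state:0 iotype:fileio iomode:wt path:/dev/storage/test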
However, if you restart the target daemon, you'll see the target disappear. To make a permanent entry, edit /etc/ietd.conf and add the following :
Target iqn.2009-02.com.example:test
        Lun 0 Path=/dev/storage/test,Type=fileio
        Alias test
See the /etc/ietd.default file created earlier for some of the other options you can set - although you can safely stick to the bare minimum defaults for the moment. Now, when you restart the daemon, you'll see your volumes being created at startup.
We'll now add this to our Heartbeat configuration. Make sure the iSCSI service is stopped and that /etc/ietd.conf is identical on both nodes, then edit /etc/ha.d/haresources to manage the iscsi-target init script :
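(As a sketch, the resulting haresources entry simply appends iscsi-target to the existing resource group - the node name node1 and the /24 netmasks are assumptions, as before :)

node1 \
        IPaddr::192.168.1.1/24/eth2 \
        IPaddr::192.168.2.1/24/eth3 \
        arp_filter \
        iscsi-target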