In general, multiple nodes can be used to perform backup operations through the TSM scheduler. By granting proxy authority to the agent nodes, they can perform scheduled backup operations on behalf of the target node. Each agent node must use the asnodename option within its schedule to perform multiple node backup on behalf of the target node.
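For example, a proxy relationship is created with the GRANT PROXYNODE administrative command and can be checked afterwards with QUERY PROXYNODE (a sketch only; target_node and agent_node are placeholder names, not part of the examples below):

    GRANT PROXYNODE TARGET=target_node AGENT=agent_node
    QUERY PROXYNODE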

 

The following examples show the administrative client-server commands that use the scheduler to back up a GPFS™ file system, with three nodes in the GPFS cluster participating in the backup.

  • The administrator defines four nodes on the Tivoli® Storage Manager server: node_1, node_2, node_3 and node_gpfs. node_1, node_2 and node_3 are only used for authentication; all file spaces are stored with node_gpfs.
    REGISTER NODE node_1 mysecretpa5s 
    REGISTER NODE node_2 mysecretpa5s 
    REGISTER NODE node_3 mysecretpa5s 
    REGISTER NODE node_gpfs mysecretpa5s
  • The administrator defines a proxynode relationship between the nodes:
    GRANT PROXYNODE TARGET=node_gpfs AGENT=node_1,node_2,node_3
  • The administrator defines the node name and asnodename for each of the machines in their respective dsm.sys files:
    nodename      node_1 
    asnodename    node_gpfs
  • The administrator defines a schedule for only node_1 to do the work:
    DEFINE SCHEDULE STANDARD GPFS_SCHEDULE ACTION=MACRO OBJECTS="gpfs_script" 
    DEFINE ASSOCIATION STANDARD GPFS_SCHEDULE node_gpfs
  • To execute the schedule on node node_gpfs, enter the client command:
    DSMC SCHED
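The gpfs_script macro named in the schedule above is not shown in this example; its contents are installation-specific. A minimal sketch, assuming the entire /gpfs file system is backed up with a single incremental command, might be:

    incremental /gpfs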

Another way to back up GPFS is to use Tivoli Storage Manager to look for incremental changes. The GPFS file system can be divided into three branches, with each branch statically assigned to one of the nodes using the virtualmountpoint option. In the following example, a file system called /gpfs has three branches: /gpfs/branch_1, /gpfs/branch_2, and /gpfs/branch_3.

  • The administrator defines four nodes on the Tivoli Storage Manager server: node_1, node_2, node_3 and node_gpfs. node_1, node_2 and node_3 are only used for authentication; all file spaces are stored with node_gpfs.
    REGISTER NODE node_1 mysecretpa5s 
    REGISTER NODE node_2 mysecretpa5s 
    REGISTER NODE node_3 mysecretpa5s 
    REGISTER NODE node_gpfs mysecretpa5s
  • The administrator defines a proxynode relationship between the nodes:
    GRANT PROXYNODE TARGET=node_gpfs AGENT=node_1,node_2,node_3
  • The administrator defines the node name, virtualmountpoint and domain for each of the three machines in their respective dsm.sys files:
    nodename          node_1 
    virtualmountpoint /gpfs/branch_1
    domain            /gpfs/branch_1
    Note: asnodename is deliberately not defined in the options file. Instead, asnodename must be specified on the schedule so that each node can have its own schedule associated with its real node name.
  • The administrator defines a schedule for all three nodes (node_1, node_2 and node_3):
    DEFINE SCHEDULE STANDARD GPFS_SCHEDULE OPTIONS="-asnode=node_gpfs" 
    DEFINE ASSOCIATION STANDARD GPFS_SCHEDULE node_1,node_2,node_3
  • To start the scheduler on the three nodes, enter the client command:
    DSMC SCHED
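
Only node_1's dsm.sys entries are shown above; under the assumptions of this example, the other two machines would carry the analogous stanzas:

    nodename          node_2 
    virtualmountpoint /gpfs/branch_2
    domain            /gpfs/branch_2

    nodename          node_3 
    virtualmountpoint /gpfs/branch_3
    domain            /gpfs/branch_3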