.\" $Id: nsr_storage_node.5,v 1.2 2005/05/12 14:15:24 dravan Exp $ Copyright (c) 2005, Legato Systems Incorporated" .\" Copyright (c) 2005 Legato Systems, Incorporated. .\" All rights reserved. .TH NSR_STORAGE_NODE 5 "Aug 23, 06" "StorEdge EBS 7.3.2" .SH NAME nsr_storage_node \- description of the storage node feature .SH SYNOPSIS The .I storage node feature provides central server control of distributed devices for saving and recovering client data. .SH DESCRIPTION A .I storage node is a host that has directly attached devices that are used and controlled by a Sun StorEdge EBS server. These devices are called .I remote devices, because they are remote from the server. Clients may save and recover to these .I remote devices by altering their "storage nodes" attribute (see .BR nsr_client (5)). A .I storage node may also be a client of the server, and may save to its own devices. .LP The main advantages provided by this feature are central control of remote devices, reduction of network traffic, use of faster local saves and recovers on a storage node, and support of heterogeneous server and storage node architectures. .LP There are several attributes which affect this function. Within the NSR resource (see .BR nsr_service (5)) there are the "nsrmmd polling interval", "nsrmmd restart interval" and "nsrmmd control timeout" attributes. These attributes control how often the remote media daemons (see .BR nsrmmd (8)) are polled, how long between restart attempts, and how long to wait for remote requests to complete. .LP Within the "NSR device" resource (see .BR nsr_device (5)) the resource's name will accept the "rd=hostname:dev_path" format when defining a .I remote device. The "hostname" is the hostname of the storage node and "dev_path" is the device path of the device attached to that host. There are also hidden attributes called "save mount timeout" and "save lockout," which allow a pending save mount request to timeout, and a storage node to be locked out for upcoming save requests. .LP Within the "NSR client" resource (see .BR nsr_client (5)), there are "storage nodes", "clone storage nodes", and "recover storage nodes" attributes: .RS 3 .br The "storage nodes" attribute is used by the server in selecting a storage node when the client is saving data. .br During a cloning operation (which is essentially a recover whose output data is directed straight into another save operation), the "clone storage node" attribute of the (first) client whose data is being cloned is consulted to determine where to direct the data for the save side of the operation. .br The "recover storage nodes" attribute is used by the server in selecting a storage node to be used when the client performs a recover (or the recover side of a clone operation). Note that if the volume in question is already mounted, it will be used from its current location rather than being unmounted and remounted on a system that is in the "recover storage node" list. If the volume in question is in a jukebox, and the jukebox has a value set for its "read hostname" attribute then that designated system will be used instead of consulting the "recover storage node" list, unless the environment variable FORCE_REC_AFFINITY is set to "yes". .RE .LP The "NSR jukebox" resource (see .BR nsr_jukebox (5)), contains the "read hostname" attribute. When all of a jukebox's devices are not attached to the same host, this attribute specifies the hostname that is used in selecting a storage node for recover and read-side clone requests. 
.SH INSTALL AND CONFIGURE
To install a storage node, choose the client and storage node packages,
where given the choice.
For those platforms that do not offer a choice, the storage node
binaries are included in the client package.
In addition, install any appropriate device driver packages.
If not running in evaluation mode, a storage node enabler must be
configured on the server for each node.
.LP
As with a client, ensure that the
.BR nsrexecd (8)
daemon is started on the storage node.
To define a device on a storage node, define the device from the
controlling server using the "rd=" syntax mentioned above.
For a remote jukebox (on a storage node), run
.BR jbconfig (8)
from the node, after adding root@storage_node to the server's
administrator list (where root is the user running
.BR jbconfig (8)
and storage_node is the hostname of the storage node).
This administrator list entry may be removed after
.BR jbconfig (8)
completes.
.LP
In addition to
.BR jbconfig (8),
when running
.BR scanner (8)
on a storage node, root@storage_node must be on the administrator list.
.LP
When a device is defined (or enabled) on a storage node, the server
attempts to start a media daemon (see
.BR nsrmmd (8))
on the node.
In order for the server to know whether the node is alive, it polls the
node every "nsrmmd polling interval" minutes.
When the server detects a problem with the node's daemon or the node
itself, it attempts to restart the daemon every "nsrmmd restart
interval" minutes, until either the daemon is restarted or the device
is disabled (by setting the device's "enabled" attribute to "no").
.LP
In addition to a storage node enabler for each storage node, each
jukebox needs its own jukebox enabler.
.SH OPERATION
A storage node is assignable for work when it is considered functional
by the server:
.BR nsrexecd (8)
is running, the device is enabled,
.BR nsrmmd (8)
is running, and the node is responding to the server's polls.
When a client save starts, the client's "storage nodes" attribute is
used to select a storage node.
This attribute is a list of storage node hostnames, which are
considered in order for assignment to the request.
.LP
The exception to this node assignment approach is when the server's
index or bootstrap is being saved: these save sets are always directed
to the server's local devices, regardless of the server's "storage
nodes" attribute.
Hence, the server always needs at least one local device to back up
such data.
These save sets can later be cloned to a storage node, as can any save
set.
.LP
If a storage node is created first (by defining a device on the host),
and a client resource for that host is then added, the host's name is
added to the client's "storage nodes" attribute.
This addition means the client will back up to its own devices.
However, if a client resource already exists, and a device is later
defined on that host, then the client's hostname must be added manually
to the client's "storage nodes" attribute.
This attribute is an ordered list of hostnames; add the client's own
name as the first entry.
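.LP
As a sketch of such an ordered list (the hostnames "storage_node" and
"server_host" are placeholders), a client that should back up to its
own devices first, and otherwise to devices on the server, might carry
a client resource roughly like the following:
.RS 3
.nf
type: NSR client;
name: storage_node;
storage nodes: storage_node, server_host;
.fi
.RE
.LP
With this ordering, the node's own devices are considered before the
server's devices when the server assigns the save.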
.LP
The volume's location field is used to determine the host location of
an unmounted volume.
The server looks for a device or jukebox name in this field, as would
be added when a volume resides in a jukebox.
Volumes in a jukebox are considered to be located on the host to which
the jukebox is connected.
The location field can be used to bind a stand-alone volume to a
particular node by manually setting this field to any device on that
node (using the "rd=" syntax).
For jukeboxes that do not have all of their devices attached to the
same host, see the previous description of the "read hostname"
attribute.
.LP
There are several commands that interact directly with a device, and so
must run on a storage node.
These include
.BR jbconfig (8),
.BR nsrjb (8)
and
.BR scanner (8),
in addition to those in the device driver package.
Invoke these commands directly on the storage node rather than on the
server, and use the server option ("-s server_host", where server_host
is the controlling server's hostname).
.SH CLONING FUNCTION
A single clone request may be divided into multiple sub-requests, one
for each different source machine (the host from which save sets will
be read).
For example, suppose a clone request must read data from volumeA and
volumeB, which are located on storage nodes A and B, respectively.
Such a request would be divided into two sub-requests, one to read
volumeA from storage node A and another to read volumeB from storage
node B.
.LP
A clone request involves two sides, the source that reads data and the
target that writes data.
These two sides may be on the same host or on different hosts,
depending on the configuration.
The source host is determined first, and then the target host.
If the volume is mounted, the source host is determined by its current
mount location.
If the volume is not mounted at the time of the clone request and it
resides in a jukebox, then the source host is determined by the value
of the jukebox's "read hostname" attribute.
.LP
Once the source host is known, the target host is determined by
examining the "clone storage nodes" attribute of the client resource of
the source host.
If this attribute has no value, the "clone storage nodes" attribute of
the server's client resource is consulted.
If that attribute also has no value, the "storage nodes" attribute of
the server's client resource is used.
.SH LIMITATIONS
A server cannot be a storage node of another server.
.SH SEE ALSO
.BR jbconfig (8),
.BR mmlocate (8),
.BR nsr_client (5),
.BR nsr_device (5),
.BR nsr_jukebox (5),
.BR nsr_service (5),
.BR nsrclone (8),
.BR nsrexecd (8),
.BR nsrjb (8),
.BR nsrmmd (8),
.BR nsrmon (8),
.BR scanner (8).