Problem-solving resources for Tivoli Storage Manager administrators


Abstract

A compiled list of links from the technical support team to documents that aid you in administering Tivoli Storage Manager.

Content

Tivoli Storage Manager communities
Communities provide a place where product users, business partners, and developers can collaborate and exchange information about Tivoli Storage Manager products.

Product Support
Start at the support portal for Tivoli Storage Manager to get answers to questions and for all your support needs.
RSS feed for IBM Software Support

Use an RSS feed to stay up to date with the latest content created for Tivoli Storage Manager: http://www.ibm.com/software/support/rss/tivoli/663.xml?rss=s663&ca=rsstivoli

Recommended fixes

Read about recommended fixes for Tivoli Storage Manager.

Supported operating systems and requirements

The overview topic has links to detailed information about hardware and software requirements.

Collecting troubleshooting data

Read about how to collect data that can aid in problem determination in Collecting Data: Read First for Tivoli Storage Manager products.

Tivoli Storage Manager product documentation

NEW! IBM Knowledge Center consolidates product documentation from all IBM information centers. It’s ready for you to use now in an open beta:

  1. Learn more
  2. Try it out: www.ibm.com/support/knowledgecenter/SSGSG7/
  3. Take the survey

Known Beta limitations

      • You might experience some minor functional issues because fine tuning is in progress.
      • Content is still being configured and added, so the content you see might not be exactly what you expect.
      • Search results might not be what you expect, or might not be in all the languages that you expect. Configuring and indexing of content for search is in progress.

Information Centers

      The information centers contain documentation for IBM Tivoli Storage Management products.

Tip: When IBM Knowledge Center exits beta status, information center pages will be redirected to the corresponding pages in the IBM Knowledge Center.

V7.1: http://pic.dhe.ibm.com/infocenter/tsminfo/v7r1

V6.4: http://pic.dhe.ibm.com/infocenter/tsminfo/v6r4

V6.3: http://pic.dhe.ibm.com/infocenter/tsminfo/v6r3

V6.2: http://pic.dhe.ibm.com/infocenter/tsminfo/v6r2

V6.1: http://publib.boulder.ibm.com/infocenter/tsminfo/v6 – End of support April 30, 2014

V5.5 and earlier: http://publib.boulder.ibm.com/infocenter/tivihelp/v1r1/index.jsp – End of support April 30, 2014

Problem Determination Information

V7.1: http://pic.dhe.ibm.com/infocenter/tsminfo/v7r1/topic/com.ibm.itsm.tsm.doc/t_tshoot_tsm.html

V6.4: http://pic.dhe.ibm.com/infocenter/tsminfo/v6r4/topic/com.ibm.itsm.nav.doc/t_probdeterm.html

V6.3: http://pic.dhe.ibm.com/infocenter/tsminfo/v6r3/topic/com.ibm.itsm.nav.doc/t_probdeterm.html

V6.2: http://pic.dhe.ibm.com/infocenter/tsminfo/v6r2/topic/com.ibm.itsm.nav.doc/t_probdeterm.html

V6.1: http://publib.boulder.ibm.com/infocenter/tsminfo/v6/topic/com.ibm.itsm.nav.doc/t_probdeterm.html – End of support April 30, 2014

V5.5: http://www.ibm.com/support/docview.wss?&uid=pub1sc32014201 – End of support April 30, 2014

Optimizing performance

V7.1: http://pic.dhe.ibm.com/infocenter/tsminfo/v7r1/topic/com.ibm.itsm.perf.doc/c_performance.html

V6.4: http://pic.dhe.ibm.com/infocenter/tsminfo/v6r4/topic/com.ibm.itsm.nav.doc/c_performance.html

Completely revised performance information was published in June 2013 for V6.4 and V6.3 servers and clients. The performance information in the V6.4 and V6.3 information centers is identical.

V6.3: http://pic.dhe.ibm.com/infocenter/tsminfo/v6r3/topic/com.ibm.itsm.nav.doc/c_performance.html

V6.2: http://pic.dhe.ibm.com/infocenter/tsminfo/v6r2/topic/com.ibm.itsm.nav.doc/c_performance.html

V6.1: http://publib.boulder.ibm.com/infocenter/tsminfo/v6/topic/com.ibm.itsm.nav.doc/c_performance.html – End of support April 30, 2014

V5.5: http://www.ibm.com/support/docview.wss?&uid=pub1sc32014101 – End of support April 30, 2014

Monthly newsletter
Stay informed with the monthly Tivoli Storage Manager newsletter.

Featured documents for Tivoli Storage Manager
The Tivoli Storage Manager support team maintains a list of featured documents that they recommend.

IBM Education Assistant for Tivoli Storage Manager
IBM Education Assistant has short, task-focused audio and visual presentations to help you learn key information about the product.

IBM Support Assistant plug-in for Tivoli Storage Manager
IBM Support Assistant is a complimentary software offering that provides you with a workbench to help you with problem determination. Product add-ons customize the IBM Support Assistant for focused, product-specific help resources, search capabilities, and automated data collection.

Sign up to receive weekly mailings for all products via the My Notifications tool
By registering for this service you can receive information about downloads, flashes, forums and discussions, problem-solving information, publications, and support sites, for the products that you choose. Start here: http://www.ibm.com/software/support/einfo.html


TSM Linux Backup-Archive client “best effort” supported distributions


Question

Is there some type of support for the IBM Tivoli Storage Manager (TSM) 6.2, 6.3, 6.4, and 7.1 Linux x86/x86_64 Backup-Archive client on Linux distributions other than Red Hat (RHEL) and SUSE (SLES)?

Cause

The fully supported Linux client distributions are listed in the Linux x86/x86_64 Hardware and Software Requirements: http://www.ibm.com/support/docview.wss?&uid=swg21052223

Answer

Yes. “Best effort” support is provided for the TSM Linux x86/x86_64 Backup-Archive client and API client on the Linux distributions listed below, for as long as the TSM version and the Linux distribution remain in regular support.

“BEST EFFORT” SUPPORT DEFINITION

  • IBM Support will accept calls for TSM Linux x86/x86_64 Backup-Archive and API client issues on the distributions and release levels listed below.
  • IBM Support will not require that the customer recreate the problem on fully supported RHEL or SLES distributions before calling in for support.
  • IBM Support will cease problem determination if the problem is determined to be unique to the distributions or release levels listed below. The problem must then be pursued with that distribution’s provider or community.

“Best effort” support is not applicable to the:

  • TSM Space Management client
  • TSM server
  • TSM storage agent
  • TSM FastBack
  • FlashCopy Manager
  • any of the TSM data protection products (e.g. TSM for Databases, TSM for Mail, TSM for Enterprise Resource Planning) except TSM for Virtual Environments (see this TSM for VE technote: http://www.ibm.com/support/docview.wss?uid=swg21474116)

For information about Linux support for other TSM components, please see the appropriate component support page.

7.1 “Best Effort” Support

6.4 “Best Effort” Support

6.3 “Best Effort” Support

6.2 “Best Effort” Support

Support Considerations

7.1 DISTRIBUTION AND RELEASE LEVELS ELIGIBLE FOR “BEST EFFORT” SUPPORT FOR THE BACKUP-ARCHIVE AND API CLIENTS (on Linux x86_64):

  • CentOS 5 and 6
  • Debian* 6 and 7
  • Fedora 19 and 20 (requires 7.1.0.3 or higher client)
  • OpenSUSE 13
  • Scientific Linux 5 and 6
  • Ubuntu* 12, 13, and 14.04

“Best Effort” support is not available for the automatic client deployment and Journal Based Backup functions on Linux x86_64.

* Note for Debian-based distributions (Debian and Ubuntu): The TSM Linux x86_64 client is made available in RPM package format. Third party tools can be used to convert the package to Debian format for installation. IBM Support will not accept calls related to package conversion or installation of converted packages.

6.4 DISTRIBUTION AND RELEASE LEVELS ELIGIBLE FOR “BEST EFFORT” SUPPORT FOR THE BACKUP-ARCHIVE AND API CLIENTS (on Linux x86_64): 

  • CentOS 5 and 6
  • Debian* 6
  • Fedora 16 and 17
  • Scientific Linux 5 and 6
  • Mandriva Linux 2011
  • Ubuntu* 10.04, 11.10, and 12

“Best Effort” support is not available for the automatic client deployment and Journal Based Backup functions on Linux x86_64.

* Note for Debian-based distributions (Debian and Ubuntu): The TSM Linux x86_64 client is made available in RPM package format. Third party tools can be used to convert the package to Debian format for installation. IBM Support will not accept calls related to package conversion or installation of converted packages.

6.3 DISTRIBUTION AND RELEASE LEVELS ELIGIBLE FOR “BEST EFFORT” SUPPORT FOR THE BACKUP-ARCHIVE AND API CLIENTS (on Linux x86_64):

  • CentOS 5 and 6
  • Debian* 5 and 6
  • Fedora 14, 15, 16, and 17
  • Scientific Linux 5 and 6
  • Asianux 3
  • Mandriva Linux 2010 and 2011
  • Ubuntu* 10, 11, and 12.04

“Best Effort” support is not available for the automatic client deployment and Journal Based Backup functions on Linux x86_64.

* Note for Debian-based distributions (Debian and Ubuntu): The TSM Linux x86_64 client is made available in RPM package format. Third party tools can be used to convert the package to Debian format for installation. IBM Support will not accept calls related to package conversion or installation of converted packages.

6.2 DISTRIBUTION AND RELEASE LEVELS ELIGIBLE FOR “BEST EFFORT” SUPPORT FOR THE BACKUP-ARCHIVE AND API CLIENTS:

  • CentOS 4 and 5
  • Debian* 4 and 5
  • Fedora 11 and 12
  • Scientific Linux 4 and 5
  • Asianux 3
  • Mandriva Linux 2010
  • Ubuntu* 8.04, 9, and 10

“Best Effort” support is not available for the automatic client deployment function on Linux x86/x86_64.

* Note for Debian-based distributions (Debian and Ubuntu): The TSM Linux x86/x86_64 client is made available in RPM package format. Third party tools can be used to convert the package to Debian format for installation. IBM Support will not accept calls related to package conversion or installation of converted packages.

SUPPORT CONSIDERATIONS

  • 6.2, 6.3, 6.4, and 7.1 Linux x86/x86_64 Backup-Archive client and API client only
  • Support for currently supported Linux Backup-Archive client functionality and file systems
  • SELinux support for distributions based on RHEL (CentOS and Scientific Linux)
  • Automounter support for AutoFS only
  • Automatic client deployment and Journal Based Backup functions are not supported

Note: TSM Version 6.1 has reached End of Support.


Using mping command to verify multicast communication

In PowerHA 7.1 clusters, mping is a very useful tool for checking that multicast communication works for Cluster Aware AIX (CAA).

By default, PowerHA® SystemMirror® uses unicast communications for heartbeat. For cluster communication, you can optionally configure a multicast address, or have CAA automatically select one if your network is configured to support multicast communication. If you choose to use multicast communication, do not attempt to create a cluster until you verify that multicast packets can be sent successfully across all nodes that are part of the cluster.

To test end-to-end multicast communication for all nodes used to create the cluster on your network, run the mping command to send and receive packets between nodes.

If you are running PowerHA SystemMirror 7.1.1 or later, you cannot create a cluster if the mping command fails. A failure means that your network is not set up correctly for multicast communication; review the documentation for your switches and routers to enable it.

You can run the mping command with a specific multicast address; otherwise, the command uses a default multicast address. Use the same multicast address that will be used to create the cluster as input to the mping command.

Note: The mping command uses the interface that has the default route. To use the mping command to test multicast communication on a different interface that does not have the default route, you must temporarily add a static route with the required interface to the multicast IP address.

The following example shows a success case and a failure case for the mping command, where node A is the receiver and node B is the sender.

Success case:

Receiver

root@nodeA:/# mping -r -R -c 5
mping version 1.1
Listening on 227.1.1.1/4098:

Replying to mping from 9.3.207.195 (nodeB.aus.stglabs.ibm.com) bytes=32 seqno=0 ttl=1
Replying to mping from 9.3.207.195 (nodeB.aus.stglabs.ibm.com) bytes=32 seqno=1 ttl=1
Replying to mping from 9.3.207.195 (nodeB.aus.stglabs.ibm.com) bytes=32 seqno=2 ttl=1
Replying to mping from 9.3.207.195 (nodeB.aus.stglabs.ibm.com) bytes=32 seqno=3 ttl=1
Replying to mping from 9.3.207.195 (nodeB.aus.stglabs.ibm.com) bytes=32 seqno=4 ttl=1

Sender

root@nodeB:/# mping -R -s -c 5
mping version 1.1
mpinging 227.1.1.1/4098 with ttl=1:

32 bytes from 9.3.207.190 (nodeA.aus.stglabs.ibm.com) seqno=0 ttl=1 time=0.985 ms
32 bytes from 9.3.207.190 (nodeA.aus.stglabs.ibm.com) seqno=1 ttl=1 time=0.958 ms
32 bytes from 9.3.207.190 (nodeA.aus.stglabs.ibm.com) seqno=2 ttl=1 time=0.998 ms
32 bytes from 9.3.207.190 (nodeA.aus.stglabs.ibm.com) seqno=3 ttl=1 time=0.863 ms
32 bytes from 9.3.207.190 (nodeA.aus.stglabs.ibm.com) seqno=4 ttl=1 time=0.903 ms

--- 227.1.1.1 mping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max = 0.863/0.941/0.998 ms


Failure case:

Receiver

root@nodeA:/# mping -r -R -c 5 -6
mping version 1.1
Listening on ff05::7F01:0101/4098:

Replying to mping from fe80::18ae:19ff:fe72:1a15 bytes=48 seqno=0 ttl=1
Replying to mping from fe80::18ae:19ff:fe72:1a15 bytes=48 seqno=1 ttl=1
Replying to mping from fe80::18ae:19ff:fe72:1a15 bytes=48 seqno=2 ttl=1
Replying to mping from fe80::18ae:19ff:fe72:1a15 bytes=48 seqno=3 ttl=1
Replying to mping from fe80::18ae:19ff:fe72:1a15 bytes=48 seqno=4 ttl=1

Sender

root@nodeB:/# mping -R -s -c 5 -6
mping version 1.1
mpinging ff05::7F01:0101/4098 with ttl=1:


--- ff05::7F01:0101 mping statistics ---
5 packets transmitted, 0 packets received, 100% packet loss
round-trip min/avg/max = 0.000/0.000/0.000 ms


Note: To verify a result, check only the sender side of the mping command, and note the percentage of packet loss. To verify whether multicast is working on a network, run the mping tests with each node acting as both the sender and the receiver. Typically, the non-verbose output provides the necessary information; interpreting the verbose output (the -v flag) correctly requires a good knowledge of the program's internals. You can also check the return code from the sender side of the mping command: it returns 0 on success and 255 if an error occurs.
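The sender-side check can be scripted. The sketch below parses the packet-loss percentage from a captured summary line; the sample string is taken from the success case transcript above, and the live-capture command in the comment is only a suggested way to obtain it.

```shell
# Sample sender-side summary line (from the success case above).
# To capture a live line instead, something like:
#   summary=$(mping -s -c 5 2>&1 | grep 'packet loss')
summary="5 packets transmitted, 5 packets received, 0% packet loss"

# Extract the loss percentage from the summary line.
loss=$(printf '%s\n' "$summary" | sed -n 's/.*[ ,]\([0-9][0-9]*\)% packet loss.*/\1/p')

# 0% loss means multicast worked end to end for this sender/receiver pair.
if [ "$loss" -eq 0 ]; then
  echo "multicast OK"
else
  echo "multicast FAILED: ${loss}% packet loss"
fi
```

Remember to repeat the test in both directions, with each node taking the sender role once.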

Cluster Aware AIX (CAA) selects a default multicast address if you do not specify one when you create the cluster. The default multicast address is formed by a logical OR of the value 228.0.0.0 with the lower 24 bits of the IP address of the node. For example, if the IP address is 9.3.199.45, the default multicast address is 228.3.199.45.
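Because only the lower 24 bits of the node IP take part in the OR, the rule amounts to keeping the last three octets of the address and replacing the first octet with 228. A minimal shell sketch, using the sample address from the text:

```shell
# Derive the CAA default multicast address: 228.0.0.0 OR'd with the
# lower 24 bits of the node IP, i.e. "228." plus the last three octets.
node_ip="9.3.199.45"                 # sample node address from the text
default_mcast="228.${node_ip#*.}"    # strip the first octet, prefix 228
echo "$default_mcast"                # prints 228.3.199.45
```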

Internet Protocol version 6 (IPv6) addresses are supported by PowerHA SystemMirror 7.1.2 or later. When IPv6 addresses are configured in the cluster, Cluster Aware AIX (CAA) activates heartbeating for the IPv6 addresses with an IPv6 multicast address. Verify that the IPv6 connections in your environment can communicate with multicast addresses.

To verify that IPv6 multicast communications are configured correctly in your environment, run the mping command with the -6 option; the command then uses the default IPv6 multicast address. To use a specific IPv6 multicast address, run the mping command with the -a option and specify that address. You do not need to specify the -6 option when using the -a option, because the mping command automatically determines the address family of the value passed with -a.
