Archive for the 'AIX' Category

Products EOL / Life Cycle

—————–

AIX support lifecycle information

 

 

 

 

Product lifecycle

 

Abstract

Lists the duration of fix support availability and end of fix support for AIX Technology Levels (TL).

Content

AIX TL fix support

Release        | Release Date   | Fixes Available Until
AIX 5.2 TL8    | February 2006  | February 2007
AIX 5.2 TL9    | July 2006      | November 2007
AIX 5.2 TL10   | June 2007      | April 2009 (End of Service)
AIX 5.3 TL5    | August 2006    | November 2007
AIX 5.3 TL6    | June 2007      | 30 May 2009
AIX 5.3 TL7    | November 2007  | 30 November 2009
AIX 5.3 TL8    | April 2008     | 30 April 2010
AIX 5.3 TL9    | November 2008  | 30 November 2010
AIX 5.3 TL10   | May 2009       | 31 May 2011
AIX 5.3 TL11   | October 2009   | 2 years later
AIX 5.3 TL12   | April 2010     | 2 years later
AIX 6.1 TL0    | November 2007  | 30 November 2009
AIX 6.1 TL1    | May 2008       | 31 May 2010
AIX 6.1 TL2    | November 2008  | 30 November 2010
AIX 6.1 TL3    | May 2009       | 31 May 2011
AIX 6.1 TL4    | November 2009  | 2 years later
AIX 6.1 TL5    | April 2010     | 2 years later
AIX 6.1 TL6    | September 2010 | 3 years later
AIX 6.1 TL7    | October 2011   | 3 years later
AIX 7.1 TL0    | September 2010 | 3 years later
AIX 7.1 TL1    | October 2011   | 3 years later

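To check which Technology Level and Service Pack a given system is actually running (and so where it sits in the table above), AIX provides the oslevel command; a quick sketch, with sample output that is only illustrative:

#oslevel -s
  7100-01-03-1207   (AIX 7.1, TL1, Service Pack 3)
#oslevel -r
  7100-01           (release and TL only)
#instfix -i | grep ML
  (lists which maintenance/technology levels are completely installed)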

Red Hat Enterprise Linux versions 5 and 6 share a multiphase life cycle that can span 13 years, while versions 3 and 4 share a life cycle of 10 years. During the first five and a half years of the life cycle (“production 1”), there is full support and software and hardware drivers are updated. In later phases, support and updates are gradually reduced, with only critical and security-related bug fixes being provided to customers who pay for support in the last three years (“extended life cycle”).[16]

Version: Red Hat Enterprise Linux 2.1
Date of Release: 2002-03-26 (AS); 2003-05-01 (ES)
End of Support: 2009-05-31 [17]

Version: Red Hat Enterprise Linux 3
Date of Release: 2003-10-23
End of Support: 2006-07-20 (End of Production 1); 2007-06-30 (End of Production 2); 2010-10-31 [18] (End of Production 3 / End of Regular Life Cycle); 2014-01-30 (End of Extended Life Cycle)

Version: Red Hat Enterprise Linux 4
Date of Release: 2005-02-14
End of Support: 2009-03-31 (End of Production 1); 2011-03-31 (End of Production 2); 2012-02-29 (End of Production 3 / End of Regular Life Cycle); 2015-02-28 (End of Extended Life Cycle)

Version: Red Hat Enterprise Linux 5
Date of Release: 2007-03-15
End of Support: Q4 2012 (End of Production 1); Q1 2014 (End of Production 2); 2017-03-31 (End of Production 3 / End of Regular Life Cycle); 2020-03-31 (End of Extended Life Cycle)

Version: Red Hat Enterprise Linux 6
Date of Release: 2010-11-10
End of Support: Q2 2016 (End of Production 1); Q2 2017 (End of Production 2); 2020-11-30 (End of Production 3 / End of Regular Life Cycle); 2023-11-30 (End of Extended Life Cycle)

Version: Red Hat Enterprise Linux 7
Date of Release: ?
End of Support: ?


Note: A version outside of its Regular Life Cycle is normally unsupported, but support can still be obtained from Red Hat while the release is in its Extended Life Cycle through an add-on subscription, Extended Life Cycle Support.

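To line an existing server up against the dates above, the release file records which RHEL version and update is installed; a minimal check (the output shown is just an example):

#cat /etc/redhat-release
Red Hat Enterprise Linux Server release 6.2 (Santiago)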
IBM WebSphere Application Server

 

This table is derived from IBM Information Center: Specifications and API documentation and WebSphere product lifecycle dates.

WebSphere version | Release date    | End of support | J2SE/Java SE | Java EE | Servlet   | JSP                 | EJB     | JDBC
8.5               | 15 Jun 2012 [4] | -              | [5]          | 6       | 3.0       | 2.2                 | 3.1     | 4.1
8.0               | 17 Jun 2011     | -              | 6            | 6       | 3.0       | 2.2                 | 3.1     | 4.0
7.0               | 17 Oct 2008     | -              | 6            | 5       | 2.5       | 2.1                 | 3.0     | 4.0
6.1               | 30 Jun 2006     | 30 Sept 2012   | 5            | 1.4     | 2.4       | 2.0                 | 3.0 [6] | 3.0
6.0               | 31 Dec 2004     | 30 Sept 2010   | 1.4          | 1.4     | 2.4       | 2.0                 | 2.1     | 3.0
5.1               | 16 Jan 2004     | 30 Sept 2008   | 1.4          | 1.3     | 2.3       | 1.2                 | 2.0     | -
5.0               | 03 Jan 2003     | 30 Sept 2006   | 1.3          | 1.3     | 2.3       | 1.2                 | 2.0     | -
4.0               | 15 Aug 2001     | 30 April 2005  | 1.3          | 1.2     | 2.2       | 1.1                 | 1.1     | -
3.5               | 31 Aug 2000     | 30 Nov 2003    | 1.2          | 1.2 (not fully compliant) | 2.1 & 2.2 | 0.91 and 1.0 & 1.1 | 1.0 | -

(A dash means no value was listed in the source.)

IBM has shipped several versions and editions of WebSphere Application Server.
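To confirm which WebSphere Application Server version and fix pack is installed on a particular machine (so it can be matched against the table above), the versionInfo script in the product bin directory prints a report; the install path below is only an example:

#/opt/IBM/WebSphere/AppServer/bin/versionInfo.sh
  (use versionInfo.bat on Windows; the report lists the product name, the version, e.g. 8.0.0.3, and the installed fix packs)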

 

Worldwide Server Market Share

Top 5 Corporate Family, Worldwide Server Systems Factory Revenue, Second Quarter of 2011 (Revenues are in Millions)

Vendor      | 2Q11 Revenue | 2Q11 Market Share | 2Q10 Revenue | 2Q10 Market Share | 2Q11/2Q10 Revenue Growth
1. IBM      | $4,008       | 30.5%             | $3,219       | 28.9%             | 24.5%
1. HP       | $3,922       | 29.8%             | $3,589       | 32.2%             | 9.3%
3. Dell     | $1,814       | 13.8%             | $1,725       | 15.5%             | 5.1%
4. Oracle   | $941         | 7.2%              | $903         | 8.1%              | 4.2%
4. Fujitsu  | $849         | 6.5%              | $363         | 3.3%              | 133.6%
Others      | $1,621       | 12.3%             | $1,355       | 12.1%             | 19.7%
All Vendors | $13,156      | 100.0%            | $11,154      | 100.0%            | 17.9%

RHEL 6 Vs RHEL 5

Advantages of RHEL6 over RHEL5

 
 
Red Hat Enterprise Linux 6 (RHEL6)

Red Hat Enterprise Linux (RHEL) is an open source, Linux-based operating system developed by Red Hat, Inc. It is popularly used as a server operating system. Its first release, RHEL 2.1, came out in 2002, and new and better versions quickly followed: RHEL 3, 4, 5 and so on. The newest version, RHEL 6, was released in 2010. In this post let's discuss the main advantages of RHEL6 over RHEL5.

 

As the latest release, RHEL6 naturally has a lot of new features. The main advantages are:

 
 
  •   A new level of virtualisation
          RHEL6 introduces KVM (Kernel-based Virtual Machine) as its hypervisor; earlier releases used the Xen hypervisor. The main advantage of KVM is that it does not require installing a separate kernel, as Xen does. It supports many guest operating systems, such as Windows, Linux and Solaris, and it is easy to manage.
  •   Ext4 as the default filesystem
          Ext4 has many advantages over Ext3, which is used in earlier versions of RHEL. Ext4 is comparatively faster and easier to manage, and it supports filesystems of up to 100 TB with the Scalable File System Add-On.
  •   Improved security
          RHEL6 has an advanced level of security. SELinux (Security-Enhanced Linux) features are improved, and a new set of SELinux rules has been added to protect virtual machines from hackers and attackers. This new feature is called sVirt.
  •   New networking features
          RHEL6 ships with improved and new networking features. It supports IPv6, uses NFSv4 (Network File System version 4) rather than NFSv3 for sharing files over the network, and supports iSCSI (Internet Small Computer Systems Interface) partitions. The network manager in RHEL6 also supports Wi-Fi.
  •   Improved drivers
          RHEL6 includes drivers that speed up operation under KVM, VMware and Xen.
  •   A longer support period from Red Hat
          RHEL6 has a long support period: Red Hat provides updates for 7 years, plus an extra 3 years as a paid service. That is roughly twice the support period offered by other Linux distributions such as Ubuntu and Debian.
  •   Improved minor updates
          Red Hat releases minor versions such as 6.1 and 6.2, which are accumulated updates to the major version. New minor releases contain not only bug fixes but also significant changes and new features.

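A few standard commands can be used to confirm these RHEL6 features on a running system; a rough sketch (output varies by machine):

#uname -r              (RHEL 6 ships a 2.6.32-based kernel)
#lsmod | grep kvm      (kvm plus kvm_intel or kvm_amd means the KVM modules are loaded)
#df -T /               (shows the root filesystem type; ext4 by default on RHEL 6)
#getenforce            (reports the SELinux mode: Enforcing, Permissive or Disabled)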
RHEL6 has been released with many new features that make it more useful than RHEL5. RHEL6 is somewhat similar to Fedora 12, so Fedora users should find it familiar. For all these reasons, the release of RHEL6 is a big step forward and an achievement for open source.

       

nfs in AIX


 

Server side:

1. Start the NFS service: #mknfs -N

2. Start the NFS daemons: #startsrc -g nfs (or #startsrc -s nfsd)

3. Edit the /etc/exports file to list the filesystem or directory to be shared with clients, or run exportfs -i /test for a temporary export.

  Ex: 1. #vi /etc/exports
      2. /test (the fs/directory to share with clients)
      3. :wq!

4. Export the filesystem: #exportfs -a (or #/usr/sbin/exportfs -a)

5. Verify that the filesystem has been exported: #showmount -e servername

 
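For reference, an /etc/exports entry can also carry export options; a small sketch with made-up client names (apply the change with #exportfs -a afterwards):

/test                                        (export /test to all clients with default options)
/test -access=client1:client2,root=client1   (restrict the export to two named clients and allow root access from client1)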

Client Side:

 

1. Create a mount point: #mkdir /test

2. Mount the remote filesystem: #mount 10.11.80.14:/test /test

  10.11.80.14 — the remote server's IP address
  /test (first) — the filesystem shared by the server
  /test (second) — the mount point the client uses to access the NFS share

  Or use SMIT: #smitty mknfsmnt

3. Check the mounted filesystem: #df -g

4. Change into the directory: #cd /test

5. List the files: #ls

  Now we can access and modify the files from the server, subject to the file permissions and the NFS share permissions (set in step 3 on the server side).

6. (Optional) Edit the /etc/filesystems file and add an entry so the filesystem is mounted at boot time (a sample stanza follows).

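A sample /etc/filesystems stanza for step 6, assuming the same server and paths used above (the mount options shown are common choices, not the only valid ones):

/test:
        dev             = "/test"
        vfs             = nfs
        nodename        = 10.11.80.14
        mount           = true
        options         = bg,hard,intr
        account         = false

With this stanza in place the filesystem is mounted automatically at boot, and a plain "#mount /test" also works without typing the server name.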
IBM PowerHA SystemMirror for AIX

PowerHA® SystemMirror has many benefits.

PowerHA SystemMirror helps you with the following:

  • The PowerHA SystemMirror planning process and documentation include tips and advice on the best practices for installing and maintaining a highly available PowerHA SystemMirror cluster.
  • Once the cluster is operational, PowerHA SystemMirror provides the automated monitoring and recovery for all the resources on which the application depends.
  • PowerHA SystemMirror provides a full set of tools for maintaining the cluster while keeping the application available to clients.

PowerHA SystemMirror allows you to:

  • Quickly and easily set up a basic two-node cluster by using the typical initial cluster configuration SMIT path, application configuration assistants (Smart Assists), or the Create a Cluster wizard and the Add Application wizard in PowerHA SystemMirror for IBM® Systems Director (a command-line sketch follows this list).
  • Test your PowerHA SystemMirror configuration by using the Cluster Test Tool. You can evaluate how a cluster behaves under a set of specified circumstances, such as when a node becomes inaccessible, a network becomes inaccessible, and so forth.
  • Ensure high availability of applications by eliminating single points of failure in a PowerHA SystemMirror environment.
  • Leverage high availability features available in AIX®.
  • Manage how a cluster handles component failures.
  • Secure cluster communications.
  • Set up fast disk takeover for volume groups managed by the Logical Volume Manager (LVM).
  • Monitor PowerHA SystemMirror components and diagnose problems that may occur.

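As a rough illustration of the "quick two-node cluster" path mentioned above, recent PowerHA SystemMirror releases include the clmgr command line. The cluster and node names below are invented, and exact attribute names vary by release, so treat this as a sketch rather than a recipe:

#clmgr add cluster demo_cl NODES="nodeA nodeB"   (define a two-node cluster)
#clmgr verify cluster                            (run cluster verification)
#clmgr sync cluster                              (synchronize the configuration to all nodes)
#clmgr online cluster                            (bring cluster services online)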
WebSphere Administration

WebSphere Application Server V8

WebSphere Application Server can help businesses offer richer user experiences through the rapid delivery of innovative applications. Developers can jumpstart their development efforts and leverage existing skills by selecting from the comprehensive set of open standards-based programming models supported, allowing developers to better align project needs with programming model capabilities and developer skills. WebSphere Application Server also speeds application delivery through encouraging reuse and extending the life of existing application assets.

Improve operational efficiency and reliability

WebSphere Application Server helps businesses reduce costs through industry leading performance, operational efficiency, and reliability. Companies can take advantage of WebSphere Application Server’s high performance to consolidate workloads and administrative overhead to reduce total cost of ownership without sacrificing system reliability.
WebSphere Application Server’s proven transactional support helps companies maintain transaction integrity and overall reliability to minimize the likelihood of lost business opportunities due to failed transactions or system down time.

 

  • Improve total cost of ownership (TCO) with performance enhancements to Java EE applications, OSGi applications, SOA applications, product startup times, application server creation times and installation times for typical business-critical large applications.

  • Simplify local and centralized install and maintenance with automated prerequisite and interdependency checking for both distributed and z/OS environments using IBM Installation Manager.

  • Reduce disk footprint requirements through enhanced component install granularity to optionally select whether to install WebSphere Application Server components, such as thin clients, EJB deploy and language packs

  • Reduce developer time and effort during the edit-compile-debug development lifecycle with monitored directory-based install, update and uninstall of Java EE applications.

  • Enhanced operational efficiency and business agility through ability to administratively extend OSGi Applications-based running applications with new functionality without changing the application artifact, to enable independent application extension and evolution as business needs change. Additionally, update a running OSGi Applications-based application by only impacting those bundles affected by the change, enabling rapid update of deployed OSGi Applications.

  • Save time on problem determination and uncover potential performance bottlenecks by using the new High Performance Extensible Logging (HPEL) binary log and trace framework (a brief usage sketch follows this list).

  • Increase administrator productivity through automated cloning of nodes based on existing node configuration, job manager updates to manage profiles, and improved ease of locating and editing configurations

  • Achieve enhanced high availability for messaging applications through improved integration with IBM WebSphere MQ and enhanced transactional integrity through tighter IBM DB2 integration.

  • Reduce the cost of deploying and managing geographically dispersed applications through Flexible Management features in WebSphere Application Server Network Deployment to centrally administer applications across WebSphere Application Server, WebSphere Application Server – Express and WebSphere Application Server Network Deployment environments.

  • Simplify administrative tasks related to multi-component applications through WebSphere Business Level Applications.

  • Improve time to value and reduce risk of down time through migration support for WebSphere Application Server V6.0, V6.1 and V7.0 configuration information using command-line tools and the GUI-based Configuration Migration Tool. Additionally, accelerate application migration from alternative application servers and from WebSphere Application Server V5.1, V6.0, V6.1, or V7.0 to WebSphere Application Server V8.0 using the separately available, no charge, WebSphere Application Server Migration Toolkit.

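As a hedged illustration of the HPEL point above: once HPEL is enabled for a server, the binary log and trace repository is read with the logViewer tool in the profile's bin directory (the profile path here is only an example):

#cd /opt/IBM/WebSphere/AppServer/profiles/AppSrv01/bin
#./logViewer.sh -monitor            (tail the HPEL repository as new records arrive)
#./logViewer.sh -minLevel WARNING   (show only WARNING and more severe records)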
Increase security and control

WebSphere Application Server offers world-class security and control to help businesses confidently reduce costs and increase business agility. WebSphere Application Server’s rich support for security specifications and granular security controls help administrators productively secure application environments their businesses depend on.

  • Lower risks through end-to-end security hardening enhancements including security updates defined in the Java EE 6 specifications, additional security features enabled by default and improved security configuration reporting.

  • Improve ease of use and control through automation that allows administrators to copy a security domain, along with users and groups, at the global level

  • Enhance security and auditability for applications requiring distributed and z/OS system access through the ability to use z/OS System Authorization Facility (SAF) security to associate a SAF user ID with a distributed identity.

  • Improve security and ease of use through the simplified exchange of user identity and attributes in Web Services using Security Assertion Markup Language (SAML) Token through WS-Security SAML Token Profile 1.1.

  • Achieve faster time-to-value when delivering single sign-on, web services-based applications through the ability to generate SAML tokens, request SAML tokens from an external Security Token Service (STS) and propagate SAML tokens in SOAP messages using the Web Services Security application programming interfaces (WSS API).

  • Improve component reuse and agility through ability to configure a web service application to interoperate with multiple endpoint services, each with potentially different configuration requirements, through specifying multiple policy sets and bindings specific to each endpoint service.

  • Improve security and interoperability through the ability to generate and consume tokens using WS-Trust Issue and WS-Trust Validate requests for JAX-WS Web services that use Web Services Security. As a result of these requests, the login module issues, validates, or exchanges tokens with WS-Trust Security Token Service providers, such as IBM Tivoli Federated Identity Manager.

  • Minimize cross-site scripting vulnerabilities using new HTTPOnly browser attribute for single sign-on applications to prevent client-side applications from accessing cookies.

  • Achieve greater ease of use through administration, and security enhancements for Java API for XML Web Services (JAX-WS) based applications.

 

IBM PowerVM: Virtualization without limits

PowerVM™ provides the industrial-strength virtualization solution for IBM Power Systems™ servers and blades. Based on more than a decade of evolution and innovation, PowerVM represents the state of the art in enterprise virtualization and is broadly deployed in production environments worldwide by most Power Systems owners.

The IBM Power Systems family of servers includes proven workload consolidation platforms that help clients control costs while improving overall performance, availability and energy efficiency. With these servers and IBM PowerVM virtualization solutions, an organization can consolidate large numbers of applications and servers, fully virtualize its system resources, and provide a more flexible, dynamic IT infrastructure. In other words, IBM Power Systems with PowerVM deliver the benefits of virtualization without limits.

 

Employing virtualization

 

You can employ virtualization in many ways to achieve improvements in efficiency and flexibility:

 

  • Consolidation of multiple workloads, including those on underutilized servers and systems with varied and dynamic resource requirements
  • Rapid deployment and scaling of workloads to meet changing business demands
  • Aggregation of system resources such as CPUs, memory and storage into shared pools for dynamic reallocation between multiple workloads (see the sketch after this list)
  • Application development and testing in secure, independent domains
  • Live mobility of active workloads between servers to support platform upgrades, systems balancing, or to avoid planned maintenance downtime

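For example, the dynamic reallocation of resources mentioned in the list above is typically driven from the HMC command line; a small sketch, with an invented managed-system name and partition name:

#lssyscfg -r lpar -m Server-8233-E8B-SN123456                                (list the partitions on a managed system)
#chhwres -r mem -m Server-8233-E8B-SN123456 -o a -p lpar1 -q 1024            (add 1024 MB of memory to lpar1 while it runs)
#chhwres -r proc -m Server-8233-E8B-SN123456 -o r -p lpar1 --procunits 0.5   (remove 0.5 processing units from lpar1)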
Getting Started with Shared Storage Pools

The new shared storage pool functionality is enabled with the latest PowerVM 2.2 service pack, and is a feature of PowerVM Standard and Enterprise. If you already have PowerVM, simply download the VIO server fixpack to obtain these new features. (Note: Because this TL is based on AIX 6.1 TL7, your NIM server must be at AIX 6.1 TL7 or AIX 7.1 TL1 to work with your VIO server.)

 One thing to note, as Nigel points out in the presentation, is that the most common VIOS storage options have been around for some time:

1) Logical volumes, created from a volume group and presented to client LPARs
2) Whole local disks
3) SAN LUNs
4) File-backed storage, either from a file system on local disk or a file system on SAN disks
5) NPIV LUNs from SAN

Nigel then discusses the newest option: using SAN LUN disks that are placed into a shared storage pool. This new option, he emphasizes, doesn’t eliminate any of the other options. It does not portend the death of NPIV. It’s just an additional VIOS storage choice we now have.

Listen to the replay or look over the slides to gather Nigel’s thoughts on the benefits of shared storage pools. He explains that fibre channel LUNs and NPIV can be complex. They require knowledge of the SAN switch and the SAN disk subsystem. If you need to make changes, it might take your SAN guys awhile to implement them. This can slow overall responsiveness. That’s to say nothing of smaller organizations that don’t have dedicated SAN guys. Live partition mobility can be tough work if your disks aren’t pre-zoned to the different frames.

With a shared storage pool you pre-allocate the disk to the VIO servers. Then it's under your control. You can more easily allocate the space to your virtual machines.

POWER6 and POWER7 servers (including blades) are needed to use shared storage pools. At minimum you should allocate a 1 GB LUN for your repository and another 1 GB LUN for data, but to be useful you'll usually need much larger LUNs (think terabytes of disk) if you plan to do much with it.

Your VIOS must have the hostname set correctly to resolve the other hostnames. In Nigel’s experience he couldn’t use short hostnames — they had to be changed to their fully qualified names.

He also recommends giving your VIOS a CPU and at least 4 GB of memory. “Skinny” VIOS servers aren’t advisable with shared storage pools. Currently, the maximum number of nodes is four, the maximum physical disks in a pool is 256, the maximum virtual disks in a cluster is 1,024, and the number of clients is 40. A pool can have 5 GB to 4 TB of individual disks, and storage pool totals can range from 20 GB to 128 TB. Virtual disk capacity (LU) can be from 1 GB to 4 TB, with only one repository disk.

If you played around with phase one, you’ll find that many of your limits have been removed. Now you can use shared storage pools for live partition mobility, perform non-disruptive cluster upgrades and use third party multi-pathing software.

You cannot have active memory sharing paging disks on shared storage pool disks.

Nigel covers the relevant terminology (clusters, pools, logical units, etc.).  He also demonstrates how to actually prepare and set up your disks. In a nutshell you must get your LUNs online and zoned to your VIO servers, and you need to set your reserve policy to no_reserve on your LUNs.

After covering the commands for managing clusters — cluster -create, cluster -list, cluster -status, cluster -addnode, cluster -rmnode and cluster -delete — he recommends creating the cluster on one of the VIO servers and then adding additional VIO servers to the shared storage pool cluster. From there, you can allocate space to the VM clients.

Next week I’ll have more information from Nigel’s presentation, including scripts and cheat sheets. In the meantime, why not upgrade your test machine’s VIO servers to the latest level so you can try this functionality?

“The basic idea behind this technology… is that [VIO servers] across machines can be clustered together and allocate disk blocks from large LUNs assigned to all of them rather than having to do this at the SAN storage level. This uses the vSCSI interface rather than the pass through NPIV method. It also reduces SAN admin required for Live Partition Mobility — you get the LUN available on all the VIOS and they organise access from there on. It also makes cloning LPARs, disk snapshots and rapid provisioning possible. Plus thin provisioning — i.e., disk blocks — are added as and when required, thus saving lots of disk space.”

Here's more from RWX's presentation.

Since shared storage pools are built on top of cluster-aware AIX, the lscluster command also provides more information, including: lscluster -c (configuration), lscluster -d (list all hdisks), lscluster -i (network interfaces), lscluster -s (network stats).

In the demo, RWX also discusses adding disk space and assigning it to client VMs. Keep in mind that you cannot remove a LUN from the pool. You can replace a LUN but you can’t remove one.

RWX also covers thin and thick provisioning using shared storage pools and shows you how to conduct monitoring. Run topas on your VIOS and then enter D (make sure it’s upper-case) so you can watch the disk I/O get spread across your disks in 64 MB chunks. From there, RWX covers how to set up alerts on your disk pool. If you’re using thin provisioning, you must ensure you don’t run out of space.

RWX also shares his script, called lspool. It’s designed to do the work of multiple scripts by presenting all of the critical information at one time instead of running multiple commands:

# lspool - list each cluster and, for each one, its pools and pool details
# Source the padmin profile so the VIOS CLI commands (cluster, lssp, alert) are on the PATH
. ~/.profile

# Cluster names (skip the header line of "cluster -list")
clusters=`cluster -list | sed '1d' | awk -F " " '{ printf $1 " " }'`
echo "Cluster list: " $clusters

for clust in $clusters
do
  # Pool names in this cluster (skip the header line)
  pools=`lssp -clustername $clust | sed '1d' | awk -F " " '{ printf $1 " " }'`
  echo Pools in $clust are: $pools
  for pool in $pools
  do
    # Pool name, total size (MB), free space (MB), allocated LU space (MB) and LU count
    lssp -clustername $clust | sed '1d' | grep $pool | read p size free totalLU numLUs junk
    let freepc=100*$free/$size
    let used=$size-$free
    let usedpc=100*$used/$size
    echo $pool Pool-Size: $size MB
    echo $pool Pool-Free: $free MB Percent Free $freepc
    echo $pool Pool-Used: $used MB Percent Used $usedpc
    echo $pool Allocated: $totalLU MB for $numLUs Logical Units
    # Alert threshold (percent) configured for this pool, if any
    alert -list -clustername $clust -spname $pool | sed '1d' | grep $pool | read p poolid percent
    echo $pool Alert-Percent: $percent
    # Numeric comparison (-gt); allocated LU space can exceed the pool size with thin provisioning
    if [[ $totalLU -gt $size ]]
    then
      let over=$totalLU-$size
      echo $pool OverCommitted: yes by $over MB
    else
      echo $pool OverCommitted: no
    fi
  done
done

RWX examines snapshots and cloning with shared storage pools, noting that the different commands — snapshot -create, snapshot -delete, snapshot -rollback and snapshot -list — use different syntax. Sometimes it asks for a -spname flag, other times it asks for a -sp flag. Pay attention so you know the flags that are needed with the commands you're running. RWX also demonstrates how some of this management can be handled using the HMC GUI.

The viosbr command is also covered. I discussed it here.

RWX recommends that you get started by asking the SAN team to hand over a few TB that you can use for testing. Also make sure your POWER6 and POWER7 servers are at the latest VIOS 2.2 level. It’s worth the effort. This technology will save time, boost efficiency and increase your overall responsiveness to users.

Finally, here’s RWX’s shared storage pools cheat sheet:

1. chdev -dev <device name> -attr reserve_policy=no_reserve
2. cluster -create -clustername rwx -repopvs hdisk2 -spname atlantic -sppvs hdisk3 hdisk5 -hostname bluevios1.ibm.com
3. cluster -list
4. cluster -status -clustername rwx
5. cluster -addnode -clustername rwx -hostname redvios1.ibm.com
6. cluster -rmnode [-f] -clustername rwx -hostname redvios1.ibm.com
7. cluster -delete -clustername rwx
8. lscluster -s or -d or -c or -i = CAA command
9. chsp -add -clustername rwx -sp atlantic hdisk8 hdisk9
10. chsp -replace -clustername rwx -sp atlantic -oldpv hdisk4 -newpv hdisk24
11. mkbdsp -clustername rwx -sp atlantic 16G -bd vdisk_red6a -vadapter vhost2 [-thick]
12. rmbdsp -clustername rwx -sp atlantic -bd vdisk_red6a
13. lssp -clustername rwx -sp atlantic -bd
14. lssp -clustername rwx
15. alert -set -clustername rwx -spname atlantic -value 80
16. alert -list -clustername rwx -spname atlantic
17. errlog -ls
18. snapshot -create name -clustername rwx -spname atlantic -lu LUs
19. snapshot -delete name -clustername rwx -spname atlantic -lu LUs
20. snapshot -rollback name -clustername rwx -spname atlantic -lu LUs
21. snapshot -list -clustername rwx -spname atlantic
22. viosbr -backup -clustername rwx -file Daily -frequency daily -numfiles 10
23. viosbr -view -file File -clustername Name …
24. viosbr -restore -clustername Name …
25. lsmap -clustername rwx -all

 

 

AIX

The AIX® operating system is designed to deliver outstanding scalability, reliability, and manageability. Best of all, the AIX operating system comes from IBM, the world’s leading technology company.

IBM has broad experience in providing solutions to businesses of every size, in every industry, in every corner of the world. IBM has an excellent reputation for service and support. Whether you’re looking for planning services, integration and installation, tuning, migration, or everyday support, IBM provides service and support to help keep your business running.

AIX is an open, standards-based operating system that conforms to The Open Group’s Single UNIX Specification Version 3. It provides fully integrated support for 32- and 64-bit applications. The AIX operating system provides binary compatible support for the entire IBM UNIX product line including IBM Power™ Systems, System p™, System i™, pSeries®, iSeries™ servers as well as the BladeCenter® blade servers. AIX also supports qualified systems offered by hardware vendors participating in the AIX Multiple Vendor Program. So, as you move to newer versions of the AIX operating system, its excellent history of binary compatibility provides confidence that your critical applications will continue to run.

The latest version of AIX, Version 7, provides the capability to run AIX 5.2 inside of a Workload Partition or AIX 5.3 inside of a Workload Partition in addition to standard AIX 7-based workload partitions. Workload partitions are software-based virtualization available in AIX 7 or AIX 6. This virtualization capability is in addition to the advanced hypervisor-based virtualization provided by the IBM Power Systems platform with PowerVM.
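As a brief, hedged illustration of workload partitions (the WPAR name below is made up; versioned AIX 5.2/5.3 WPARs additionally need the corresponding mksysb image and product):

#mkwpar -n testwpar      (create a simple system WPAR called testwpar)
#startwpar testwpar      (start it)
#lswpar                  (list WPARs and their states)
#clogin testwpar         (log in to the WPAR from the global environment)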