PowerVM™ provides the industrial-strength virtualization solution for IBM Power Systems™ servers and blades. Based on more than a decade of evolution and innovation, PowerVM represents the state of the art in enterprise virtualization and is broadly deployed in production environments worldwide by most Power Systems owners.
The IBM Power Systems family of servers includes proven workload consolidation platforms that help clients control costs while improving overall performance, availability and energy efficiency. With these servers and IBM PowerVM virtualization solutions, an organization can consolidate large numbers of applications and servers, fully virtualize its system resources, and provide a more flexible, dynamic IT infrastructure. In other words, IBM Power Systems with PowerVM deliver the benefits of virtualization without limits.
Employing virtualization
You can employ virtualization in many ways to achieve improvements in efficiency and flexibility:
- Consolidation of multiple workloads, including those on underutilized servers and systems with varied and dynamic resource requirements
- Rapid deployment and scaling of workloads to meet changing business demands
- Aggregation of system resources such as CPUs, memory and storage into shared pools for dynamic reallocation between multiple workloads
- Application development and testing in secure, independent domains
- Live mobility of active workloads between servers to support platform upgrades, systems balancing, or to avoid planned maintenance downtime
Getting Started with Shared Storage Pools
The new shared storage pool functionality is enabled with the latest PowerVM 2.2 service pack, and is a feature of PowerVM Standard and Enterprise. If you already have PowerVM, simply download the VIO server fixpack to obtain these new features. (Note: Because this fixpack is based on AIX 6.1 TL7, your NIM server must be at AIX 6.1 TL7 or AIX 7.1 TL1 before you can use it to manage your VIO server.)
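If you want to check where you stand before and after the update, here's a minimal sketch from the padmin command line (the fixpack staging directory is my assumption, not from the post):

$ ioslevel                                      # show the current VIOS level
$ updateios -dev /home/padmin/fixpack -install -accept
$ ioslevel                                      # confirm the new level once the update completes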
One thing to note, as Nigel points out in the presentation, is that the most common VIOS storage options have been around for some time (a rough sketch of how the first few are mapped appears after the list):
1) Logical volumes, created from a volume group and presented to client LPARs
2) Whole local disks
3) SAN LUNs
4) File-backed storage, either from a file system on local disk or a file system on SAN disks
5) NPIV LUNs from SAN
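For context, here's how the first few options might be mapped from the VIOS padmin command line. The volume group, logical volume, hdisk and vhost names are placeholders I've chosen:

$ mkvg -vg datavg hdisk4                      # option 1: create a volume group on a disk...
$ mklv -lv lv_client1 datavg 20G              # ...carve a logical volume out of it...
$ mkvdev -vdev lv_client1 -vadapter vhost0    # ...and present the LV to a client LPAR
$ mkvdev -vdev hdisk5 -vadapter vhost1        # options 2/3: map a whole local disk or SAN LUN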
Nigel then discusses the newest option: using SAN LUN disks that are placed into a shared storage pool. This new option, he emphasizes, doesn’t eliminate any of the other options. It does not portend the death of NPIV. It’s just an additional VIOS storage choice we now have.
Listen to the replay or look over the slides to gather Nigel’s thoughts on the benefits of shared storage pools. He explains that Fibre Channel LUNs and NPIV can be complex. They require knowledge of the SAN switch and the SAN disk subsystem. If you need to make changes, it might take your SAN guys a while to implement them. This can slow overall responsiveness. That’s to say nothing of smaller organizations that don’t have dedicated SAN guys. Live partition mobility can be tough work if your disks aren’t pre-zoned to the different frames.
With a shared storage pool, you preallocate the disk to the VIO servers. Then it’s under your control, and you can more easily allocate the space to your virtual machines.
POWER6 and POWER7 servers (including blades) are needed to use shared storage pools. At minimum you should allocate a 1 GB LUN for your repository and another 1 GB LUN for data, but in most cases you’ll need much larger LUNs (think terabytes of disk) if you plan to do much with it.
Your VIOS must have its hostname set correctly and must be able to resolve the hostnames of the other nodes. In Nigel’s experience, short hostnames didn’t work; they had to be changed to their fully qualified names.
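A quick sanity check on each node might look like this (using the hostnames from the cheat sheet at the end of this post; if you rely on local host tables, hostmap is the padmin way to inspect them):

$ hostname                                    # should return the fully qualified name
$ hostmap -ls                                 # verify entries exist for the other cluster nodes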
He also recommends giving your VIOS a CPU and at least 4 GB of memory; “skinny” VIOS servers aren’t advisable with shared storage pools. Currently, the maximum number of nodes is four, the maximum number of physical disks in a pool is 256, the maximum number of virtual disks in a cluster is 1,024, and the maximum number of clients is 40. Individual disks in a pool can range from 5 GB to 4 TB, and total pool capacity can range from 20 GB to 128 TB. A virtual disk (LU) can be from 1 GB to 4 TB, and there is only one repository disk.
If you played around with phase one, you’ll find that many of its restrictions have been removed: you can now use shared storage pools with live partition mobility, perform non-disruptive cluster upgrades and use third-party multi-pathing software.
You cannot, however, put Active Memory Sharing paging disks on shared storage pool disks.
Nigel covers the relevant terminology (clusters, pools, logical units, etc.). He also demonstrates how to actually prepare and set up your disks. In a nutshell you must get your LUNs online and zoned to your VIO servers, and you need to set your reserve policy to no_reserve on your LUNs.
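On each VIOS that boils down to something like this; the hdisk numbers are placeholders that match the cheat sheet examples below:

$ lsdev -type disk                                    # confirm the zoned LUNs are visible
$ chdev -dev hdisk3 -attr reserve_policy=no_reserve
$ chdev -dev hdisk5 -attr reserve_policy=no_reserve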
After covering the commands for managing clusters — cluster -create, cluster -list, cluster -status, cluster -addnode, cluster -rmnode and cluster -delete — he recommends creating the cluster on one of the VIO servers and then adding additional VIO servers to the shared storage pool cluster. From there, you can allocate space to the VM clients.
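Using the cluster, pool and host names from the cheat sheet at the end of this post, the workflow looks roughly like this:

$ cluster -create -clustername rwx -repopvs hdisk2 -spname atlantic -sppvs hdisk3 hdisk5 -hostname bluevios1.ibm.com
$ cluster -addnode -clustername rwx -hostname redvios1.ibm.com
$ cluster -status -clustername rwx
$ mkbdsp -clustername rwx -sp atlantic 16G -bd vdisk_red6a -vadapter vhost2    # carve out space for a client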
Next week I’ll have more information from Nigel’s presentation, including scripts and cheat sheets. In the meantime, why not upgrade your test machine’s VIO servers to the latest level so you can try this functionality?
“The basic idea behind this technology… is that [VIO servers] across machines can be clustered together and allocate disk blocks from large LUNs assigned to all of them rather than having to do this at the SAN storage level. This uses the vSCSI interface rather than the pass-through NPIV method. It also reduces SAN admin required for Live Partition Mobility — you get the LUN available on all the VIOS and they organise access from there on. It also makes cloning LPARs, disk snapshots and rapid provisioning possible. Plus thin provisioning — i.e., disk blocks are added as and when required — thus saving lots of disk space.”
Here’s more from RWX’s presentation.
Since shared storage pools are built on top of Cluster Aware AIX (CAA), the lscluster command also provides more information: lscluster -c (configuration), lscluster -d (list all hdisks), lscluster -i (network interfaces) and lscluster -s (network stats).
In the demo, RWX also discusses adding disk space and assigning it to client VMs. Keep in mind that you cannot remove a LUN from the pool. You can replace a LUN but you can’t remove one.
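Both operations use chsp; these examples come straight from the cheat sheet below:

$ chsp -add -clustername rwx -sp atlantic hdisk8 hdisk9                        # grow the pool
$ chsp -replace -clustername rwx -sp atlantic -oldpv hdisk4 -newpv hdisk24     # swap a LUN, don't remove it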
RWX also covers thin and thick provisioning using shared storage pools and shows you how to conduct monitoring. Run topas on your VIOS and then enter D (make sure it’s upper-case) so you can watch the disk I/O get spread across your disks in 64 MB chunks. From there, RWX covers how to set up alerts on your disk pool. If you’re using thin provisioning, you must ensure you don’t run out of space.
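The provisioning and alert commands are short; these follow the cheat sheet forms, with vdisk names of my own (thin is the default, -thick fully backs the LU, and 80 is the percent-full alert threshold):

$ mkbdsp -clustername rwx -sp atlantic 16G -bd vdisk_red6b -vadapter vhost2           # thin-provisioned
$ mkbdsp -clustername rwx -sp atlantic 16G -bd vdisk_red6c -vadapter vhost2 -thick    # thick-provisioned
$ alert -set -clustername rwx -spname atlantic -value 80
$ alert -list -clustername rwx -spname atlantic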
RWX also shares his script, called lspool. It’s designed to do the work of multiple commands, presenting all of the critical information at one time instead of making you run each command separately:
#!/bin/ksh
# lspool: list each cluster and, for each cluster, its pools and pool details
# Source the padmin profile so cluster, lssp and alert are on the PATH
. ~/.profile

clusters=`cluster -list | sed '1d' | awk -F " " '{ printf $1 " " }'`
echo "Cluster list: " $clusters
for clust in $clusters
do
    pools=`lssp -clustername $clust | sed '1d' | awk -F " " '{ printf $1 " " }'`
    echo Pools in $clust are: $pools
    for pool in $pools
    do
        # lssp columns: pool name, size (MB), free (MB), total LU size, LU count
        lssp -clustername $clust | sed '1d' | grep $pool | read p size free totalLU numLUs junk
        let freepc=100*$free/$size
        let used=$size-$free
        let usedpc=100*$used/$size
        echo $pool Pool-Size: $size MB
        echo $pool Pool-Free: $free MB Percent Free $freepc
        echo $pool Pool-Used: $used MB Percent Used $usedpc
        echo $pool Allocated: $totalLU MB for $numLUs Logical Units
        alert -list -clustername $clust -spname $pool | sed '1d' | grep $pool | read p poolid percent
        echo $pool Alert-Percent: $percent
        # Numeric comparison (-gt), not the string ">" test
        if [[ $totalLU -gt $size ]]
        then
            let over=$totalLU-$size
            echo $pool OverCommitted: yes by $over MB
        else
            echo $pool OverCommitted: no
        fi
    done
done
RWX examines snapshots and cloning with shared storage pools, noting that the different commands — snapshot -create, snapshot -delete, snapshot -rollback and snapshot -list — use different syntax. Sometimes a command asks for a -spname flag, other times a -sp flag. Pay attention so you know the flags that are needed with the commands you’re running. RWX also demonstrates how some of this management can be handled using the HMC GUI.
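As an example, creating and then rolling back a snapshot of a single LU (the snapshot name is one I made up; the rest follows the cheat sheet, which uses -spname throughout):

$ snapshot -create mysnap -clustername rwx -spname atlantic -lu vdisk_red6a
$ snapshot -list -clustername rwx -spname atlantic
$ snapshot -rollback mysnap -clustername rwx -spname atlantic -lu vdisk_red6a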
The viosbr command is also covered. I discussed it here.
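From the cheat sheet below, the rolling nightly backup form looks like this (Daily is the backup file name, and ten copies are kept):

$ viosbr -backup -clustername rwx -file Daily -frequency daily -numfiles 10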
RWX recommends that you get started by asking the SAN team to hand over a few TB that you can use for testing. Also make sure your POWER6 and POWER7 servers are at the latest VIOS 2.2 level. It’s worth the effort. This technology will save time, boost efficiency and increase your overall responsiveness to users.
Finally, here’s RWX’s shared storage pools cheat sheet:
1. chdev -dev <device name> -attr reserve_policy=no_reserve
2. cluster -create -clustername rwx -repopvs hdisk2 -spname atlantic -sppvs hdisk3 hdisk5 -hostname bluevios1.ibm.com
3. cluster -list
4. cluster -status -clustername rwx
5. cluster -addnode -clustername rwx -hostname redvios1.ibm.com
6. cluster -rmnode [-f] -clustername rwx -hostname redvios1.ibm.com
7. cluster -delete -clustername rwx
8. lscluster -s or -d or -c or -i = CAA command
9. chsp -add -clustername rwx -sp atlantic hdisk8 hdisk9
10. chsp -replace -clustername rwx -sp atlantic -oldpv hdisk4 -newpv hdisk24
11. mkbdsp -clustername rwx -sp atlantic 16G -bd vdisk_red6a -vadapter vhost2 [-thick]
12. rmbdsp -clustername rwx -sp atlantic -bd vdisk_red6a
13. lssp -clustername rwx -sp atlantic -bd
14. lssp -clustername rwx
15. alert -set -clustername rwx -spname atlantic -value 80
16. alert -list -clustername rwx -spname atlantic
17. errlog -ls
18. snapshot -create name -clustername rwx -spname atlantic -lu LUs
19. snapshot -delete name -clustername rwx -spname atlantic -lu LUs
20. snapshot -rollback name -clustername rwx -spname atlantic -lu LUs
21. snapshot -list -clustername rwx -spname atlantic
22. viosbr -backup -clustername rwx -file Daily -frequency daily -numfiles 10
23. viosbr -view -file File -clustername Name …
24. viosbr -restore -clustername Name …
25. lsmap -clustername rwx -all