Products EOL / Life Cycle

AIX support lifecycle information

Lists the duration of fix support availability and end of fix support for AIX Technology Levels (TL).

AIX TL fix support

Release        Release Date     Fixes Available Until
AIX 5.2 TL8    February 2006    February 2007
AIX 5.2 TL9    July 2006        November 2007
AIX 5.2 TL10   June 2007        April 2009 (End of Service)
AIX 5.3 TL5    August 2006      November 2007
AIX 5.3 TL6    June 2007        30 May 2009
AIX 5.3 TL7    November 2007    30 November 2009
AIX 5.3 TL8    April 2008       30 April 2010
AIX 5.3 TL9    November 2008    30 November 2010
AIX 5.3 TL10   May 2009         31 May 2011
AIX 5.3 TL11   October 2009     2 years later
AIX 5.3 TL12   April 2010       2 years later
AIX 6.1 TL0    November 2007    30 November 2009
AIX 6.1 TL1    May 2008         31 May 2010
AIX 6.1 TL2    November 2008    30 November 2010
AIX 6.1 TL3    May 2009         31 May 2011
AIX 6.1 TL4    November 2009    2 years later
AIX 6.1 TL5    April 2010       2 years later
AIX 6.1 TL6    September 2010   3 years later
AIX 6.1 TL7    October 2011     3 years later
AIX 7.1 TL0    September 2010   3 years later
AIX 7.1 TL1    October 2011     3 years later

Red Hat Enterprise Linux versions 5 and 6 share a multiphase life cycle that can span 13 years, while versions 3 and 4 share a life cycle of 10 years. During the first five and a half years of the life cycle (“production 1”), there is full support and software and hardware drivers are updated. In later phases, support and updates are gradually reduced, with only critical and security-related bug fixes being provided to customers who pay for support in the last three years (“extended life cycle”).[16]

Red Hat Enterprise Linux 2.1
  Released: 2002-03-26 (AS); 2003-05-01 (ES)
  End of support: 2009-05-31 [17]

Red Hat Enterprise Linux 3
  Released: 2003-10-23
  End of Production 1: 2006-07-20
  End of Production 2: 2007-06-30
  End of Production 3 / End of Regular Life Cycle: 2010-10-31 [18]
  End of Extended Life Cycle: 2014-01-30

Red Hat Enterprise Linux 4
  Released: 2005-02-14
  End of Production 1: 2009-03-31
  End of Production 2: 2011-03-31
  End of Production 3 / End of Regular Life Cycle: 2012-02-29
  End of Extended Life Cycle: 2015-02-28

Red Hat Enterprise Linux 5
  Released: 2007-03-15
  End of Production 1: Q4 2012
  End of Production 2: Q1 2014
  End of Production 3 / End of Regular Life Cycle: 2017-03-31
  End of Extended Life Cycle: 2020-03-31

Red Hat Enterprise Linux 6
  Released: 2010-11-10
  End of Production 1: Q2 2016
  End of Production 2: Q2 2017
  End of Production 3 / End of Regular Life Cycle: 2020-11-30
  End of Extended Life Cycle: 2023-11-30

Red Hat Enterprise Linux 7
  Released: ?
  End of support: ?


Note: A version outside its Regular Life Cycle is normally unsupported, but while a release is in its Extended Life Cycle phase, support can still be obtained from Red Hat through an add-on subscription, Extended Life Cycle Support.

IBM WebSphere Application Server

 

This table is derived from the IBM Information Center (Specifications and API documentation) and the WebSphere product lifecycle dates.

Version   Release date      End of support   J2SE/Java SE   Java EE   Servlet     JSP                EJB       JDBC
8.5       15 Jun 2012 [4]   -                [5]            6         3.0         2.2                3.1       4.1
8.0       17 Jun 2011       -                6              6         3.0         2.2                3.1       4.0
7.0       17 Oct 2008       -                6              5         2.5         2.1                3.0       4.0
6.1       30 Jun 2006       30 Sept 2012     5              1.4       2.4         2.0                3.0 [6]   3.0
6.0       31 Dec 2004       30 Sept 2010     1.4            1.4       2.4         2.0                2.1       3.0
5.1       16 Jan 2004       30 Sept 2008     1.4            1.3       2.3         1.2                2.0       -
5.0       03 Jan 2003       30 Sept 2006     1.3            1.3       2.3         1.2                2.0       -
4.0       15 Aug 2001       30 April 2005    1.3            1.2       2.2         1.1                1.1       -
3.5       31 Aug 2000       30 Nov 2003      1.2            1.2 *     2.1 & 2.2   0.91, 1.0 & 1.1    1.0       -

* not fully compliant. A dash means no date or value was listed in the source table.

IBM has shipped several versions and editions of WebSphere Application Server.

 

Basic Performance Tuning

Hardware (a combined collection sketch follows this list):
– date; uname -a; id; oslevel -s; lparstat -i
– check hardware (prtconf | more; lsdev | grep Avail; lscfg | grep +)
– consider whether the data workload can be reduced or moved elsewhere
– check near-static structures (LVM, paging space, settings)
– check historical data (events, errors)
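
A minimal ksh sketch (my own wrapper, nothing beyond the standard AIX commands above) that gathers the checks into one timestamped baseline file, so there is something to compare against after changes:

  #!/usr/bin/ksh
  # collect a quick hardware/configuration baseline
  OUT=/tmp/baseline.$(date +%Y%m%d_%H%M%S)
  {
    date; uname -a; id; oslevel -s; lparstat -i
    prtconf                          # full hardware/firmware summary
    lsdev -C | grep Avail            # devices in Available state
    lscfg | grep '+'                 # devices flagged with +
    lsvg -o                          # active volume groups
    lsps -a                          # paging space layout
    errpt | head -30                 # most recent error log entries
  } > $OUT 2>&1
  echo "baseline written to $OUT"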

Memory – VMM (a delta-checking sketch follows this list):
– shared memory segments: ipcs -bm (most of the time not all of the memory is allocated)
  the values shown are the maximum sizes the segments can grow to (a major component of computational memory)
– uptime; vmstat -s (the counters accumulate from boot)
– uptime; vmstat -v (I/O goes through fsbufs and then pbufs; either of them can be exhausted)
– vmo -L; ioo -L
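
Because the vmstat -v counters accumulate from boot, only their growth matters. A small sketch (the 60-second interval is arbitrary, and the "blocked" wording is what current AIX levels print) to see whether the fsbuf/pbuf counters are still climbing:

  #!/usr/bin/ksh
  # snapshot the "blocked with no ...buf" counters twice and compare
  INTERVAL=${1:-60}
  vmstat -v | grep -i blocked > /tmp/vmv.1
  sleep $INTERVAL
  vmstat -v | grep -i blocked > /tmp/vmv.2
  echo "counters at start:";             cat /tmp/vmv.1
  echo "counters after $INTERVAL sec:";  cat /tmp/vmv.2
  # growing numbers suggest fsbufs/pbufs are being exhausted (see ioo/lvmo tunables)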

I/O – LVM (a per-VG pbuf sketch follows this list):
– df -k (how much content is governed by one inode)
– tech-stack map: RAID set -> LUN -> LVM (VG: LV: FS, with its options) -> logical content
– iostat -a; iostat -D
– lvmo -a -v <vgname> (pbufs can be checked and increased if needed)
– iostat -AQ 2 (asynchronous I/O stats)
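
Since lvmo reports pbuf usage per volume group, a short loop over the active VGs saves typing (a sketch; the exact parameter names can vary slightly between AIX levels):

  #!/usr/bin/ksh
  # show pbuf counters for every active volume group
  for vg in $(lsvg -o); do
    echo "=== $vg ==="
    lvmo -a -v $vg | egrep "pbuf|blocked"   # pv_pbuf_count, total_vg_pbufs, *_blocked_io_count
  done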

Processes (a CPU-time comparison sketch follows this list):
– uptime; ps -ekf | egrep "syncd|lrud|nfsd|biod|wait"
  compare the accumulated CPU time of lrud with syncd: if lrud is higher it should grab your attention (if lower it is fine)
  if lrud is high it is scanning and freeing and scanning and freeing…
  lrud runs at high priority, so while it is running not much other work can be done (reduce lrud activity to let other processing run)
– ps -kelmo THREAD (shows the threaded world)
– ps guww (sorted by descending %CPU; RSS: size in real memory, SZ: size in virtual memory, STIME: start time, TIME: accumulated system time)
– ps gvww (sorted by ascending PID; PGIN: how many pages have been paged in)
– ps -ef | grep -v "Oct 20" (the day of boot has been grepped out, so you see which processes have started since then)
– ps -ef | grep LOCAL=NO (for Oracle client sessions)
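
A quick way to make the lrud vs. syncd comparison above (a sketch; it only works for these single-word kernel commands, where TIME is the next-to-last ps -ekf column):

  #!/usr/bin/ksh
  # print accumulated CPU time next to each housekeeping process
  # lrud noticeably ahead of syncd hints at heavy page scanning/freeing
  ps -ekf | egrep "syncd|lrud|nfsd|biod|wait" | grep -v egrep | \
    awk '{ printf "%-8s %s\n", $NF, $(NF-1) }'
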
Network (a counter-delta sketch follows this list):
– netstat -ss (shows only non-zero statistics; check the error counters)
– netstat -v (adapter statistics; look for queue overflows)
– nfsstat
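
These counters also accumulate from boot, so two snapshots taken a little apart show whether errors are still occurring (a sketch; the 60-second interval and the "overflow" pattern are just reasonable defaults):

  #!/usr/bin/ksh
  # find network counters that are still increasing
  netstat -ss > /tmp/net.1 ; netstat -v | grep -i overflow > /tmp/netv.1
  sleep 60
  netstat -ss > /tmp/net.2 ; netstat -v | grep -i overflow > /tmp/netv.2
  diff /tmp/net.1 /tmp/net.2      # changed lines = counters that moved in the last minute
  diff /tmp/netv.1 /tmp/netv.2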

6 in 1 tool (typical drill-down invocations follow this list):
– vmstat -Iwt 2
if CPU bound -> tprof is used to spot the processes that are using the CPU
if memory bound -> svmon is used to help find what is using the most memory
if I/O bound -> filemon will help to find what is causing all of the disk activity
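
Hedged starting points for the three drill-down tools named above (flags differ slightly between AIX levels, so check the man pages before relying on them):

  # CPU bound: profile the system for 60 seconds; the report is written to <program>.prof
  tprof -ske -x sleep 60

  # memory bound: top 15 processes by real memory use, largest first
  svmon -Pt 15

  # I/O bound: trace file/LV/PV activity for 60 seconds, then stop the trace
  filemon -o /tmp/fmon.out -O all ; sleep 60 ; trcstop
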
CPU wait is too high, how can I reduce it?

Time spent in Wait for I/O mode is not in itself a problem. The CPU is actually idle, but because it has noted that disk I/O is outstanding, the time is reported as Wait instead of Idle. Many workloads that get through data faster than it can be read from disk will therefore show high Wait. While in Wait for I/O mode the CPU is fully available to run more application code.

In benchmarks, Wait for I/O is seen positively as an opportunity: we can throw in more work to boost throughput.

Any workload in which the CPU does little work compared to the volume of disk I/O is going to give you high Wait for I/O.

If this high Wait for I/O is a sudden change from the normal pattern then it needs investigating and you should make sure as many disks as possible are involved in the disk I/O.

In fact, faster CPUs would mean even higher Wait values.
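
One practical follow-up to the "spread the I/O across as many disks as possible" advice is to check whether a single disk is doing all the work (a sketch):

  iostat -d 5 3    # per-disk view; one hdisk near 100% tm_act while the others sit idle means poor spread
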
Which process consumes the most memory?

Run topas -P; you can tab to the page space column to sort on it. It is called the "page space" column because it shows memory usage that is backed by that amount of paging space (effectively the size of the process in memory).
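
If you want a one-shot, scriptable answer instead of an interactive topas session, the ps gvww output mentioned earlier can be sorted by its RSS column (a sketch; RSS is field 7 of the default output, reported in 1 KB units):

  # ten biggest processes by resident set size
  ps gvww | sort -rn -k7,7 | head -10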

Which process has used the disk I/O most frequently?

Start nmon -> press t for top processes -> press 5 to list them in I/O order, then look at the Char I/O column.
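
For an unattended view of the same data, nmon can record to a file instead of running interactively (a sketch; the interval and count are arbitrary, and -t adds the top-process section to the recording):

  # record 2 hours of data, one sample per minute, including top processes
  nmon -f -t -s 60 -c 120
  # the resulting <hostname>_<date>_<time>.nmon file can then be fed to the nmon analyser
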
Free memory is near zero, how do I free more memory?

This is just how AIX works and is perfectly normal. After a reasonable length of time all memory will be soaked up with copies of filesystem blocks and free memory will be near zero. If your filesystem cache is a large percentage of memory then you are avoiding disk I/O, which is a good thing. You should NOT try to reduce it; that could damage performance.

AIX will then use the lrud process to keep the free list at a reasonable level. If you see the lrud process taking more than 30% of a CPU then you need to investigate and make memory parameter changes.
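
To see how much of memory is actually file cache versus the VMM targets, the relevant values can be pulled out like this (a sketch; tunable names vary a little between AIX 5.3 and 6.1/7.1):

  # current file-cache percentages versus the min/max targets
  vmstat -v | egrep "numperm|numclient|minperm|maxperm|maxclient"
  # the corresponding VMM tunables
  vmo -a | egrep "minperm|maxperm|maxclient|lru_file_repage"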

20% paging space usage, how does it affect performance?

20% of the paging space can be allocated while no actual paging I/O is taking place. You need to look at the paging statistics to determine whether paging I/O is actually happening. Merely having paging space allocated does not in itself have a performance impact.
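
A quick check along those lines (a sketch): lsps shows how much paging space is allocated, while the vmstat counters show whether pages are really moving to and from paging space.

  lsps -a                                  # paging space devices and %Used
  vmstat -s | grep "paging space page"     # cumulative paging-space page ins/outs since boot
  vmstat 5 3                               # live view: non-zero pi/po columns mean real paging I/O now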