1. Important Goal Mode Notice
2. Cheryl Watson’s TUNING Letter 2002, No. 2 Summary
3. Cheryl Watson’s TUNING Letter 2002, No. 3 Summary
4. Kudos for These Issues
1. Important Goal Mode Notice
The first half of this item was included in our fifty-six page 2002, No. 3 TUNING Letter, but we decided it was so important that we wanted as many people as possible to be aware of it. Please feel free to distribute this.
NEWSLETTER READERS – PLEASE SEE THE ADDITIONAL MATERIAL, DENOTED BY ASTERISKS.
Steve Schwaller of Key Bank reported the following situation:
“We just began to rollout our upgrade from OS/390 2.10 to z/OS 1.2 and saw our execution velocities drop and PIs go up. The first upgrade was on a development LPAR and after making sure there weren’t any significant workload changes that coincided with the upgrade, I took a look at the WLM using samples and saw a significant drop in the ‘CPU Using’. We saw a reduction of around 65% of the CPU samples for the consistent workloads which includes the SYSTEM and SYSSTC workloads. This is a development LPAR and the workloads vary day to day so I concentrated on the more consistent and high priority work, but you can see the change across the board.
“We will definitely need to check and adjust our velocities because we have some service classes that run with a PI of .8-1.0 and the z/OS upgrade will push this to 1.1-1.4. These workloads are currently running well and meeting customer SLAs (Service Level Agreements), so we don’t want them taking more CPU cycles. The other side of this is that if we were at response time goals for CICS, this wouldn’t be much of an issue. We are currently collecting measurement data for conversion, but don’t expect to begin converting until maybe later this year.”
What you are seeing could definitely be the result of a change to the way CPU using is calculated starting with APAR OW47277. The APAR description is: “In LPAR mode, there can be ‘phantom’ CPU using samples for a service class when the logical CPU is dispatched but not getting access to a physical CPU. This could distort WLM’s decisions by making it look like work is getting better CPU access than it really is.” The resolution from the APAR is: “Make the CPU using samples more accurate, especially in an LPAR environment, by deriving CPU using samples from CPU service time rather than from direct sampling. This change may result in lower CPU using samples and therefore may affect achieved velocities for systems running in LPAR mode. Achieved velocities will be the same or lower, depending on if there is a change in CPU using samples. Customers should review velocity goals in their WLM policy and adjust downward if needed.”
This APAR was in the list of WLM APARs from Norman Hollander in our TUNING Letter, 2001, No. 3 (page 20), but I didn’t realize the significance of it until Steve mentioned it. The PTF (2/28/01) for APAR OW47277, “WLM and SRM Function Test Corrections for LPAR CPU Management,” is usually found on z/OS 1.1, and is incorporated into z/OS releases starting with 1.2.
This is very important information, and the APAR seems to be the only place it’s documented. The result of this change in code is that if you move from any OS/390 release to any z/OS release in LPAR mode, you will need to reduce velocities! Many thanks for pointing this out, Steve!
*** ADDED AFTER THE NEWSLETTER WAS SENT OUT ***
I’ve seen two more installations run into this problem during the past week. They moved from OS/390 to z/OS in an LPAR configuration and they started missing their goals. The bad part of this is that any work that is missing its goal could be taking cycles away from less important work. So it’s extremely important that you reduce these velocities. I don’t know yet, but I would expect that the busiest LPARs would see less impact than the smaller LPARs. (That’s simply my theory, but I’d love to get some feedback on this.)
Graham Johnson of Workers Compensation Board of British Columbia looked at the variance in changing velocities and found that the z/OS velocities were as much as 30% lower than the OS/390 velocities. They vary a lot, so it’s impossible to just modify the velocities across the board. You have to do as Steve did and recalculate the velocity. For example, Steve found the following: “The STC workload looks like the CPU using samples have dropped around 60%, which is similar to the first LPAR. The batch, CICS, and DB2 workloads have seen about a 50% decline in CPU using samples. We’ve seen a drop of about 50% in TSO, but this hasn’t affected the PI since it uses response percentiles and the last period is discretionary.”
The calculation of velocity is: velocity = (using samples / (using samples + delay samples)) * 100. The ‘using samples’ are equal to the total of CPU using samples and (when I/O priority management is turned on) I/O using samples. If you look at the change in CPU using samples before and after moving to z/OS, you can determine the percent of change for each LPAR and service class as Steve did above. Then, you can recalculate the velocity. If you have MXG or Neu-MICS, you probably have the data in your database.
To show the calculation, I’ll use the SMF field names from the RMF type 72, subtype 3 record: R723CTOT is the total number of delay samples, R723CTOU is the total number of using samples, and R723CCUS is the number of CPU using samples (which is part of R723CTOU). Because the record doesn’t have a field for I/O using samples, you can obtain them by subtracting the CPU using samples from the total using samples. Let’s assume that you want to recalculate the velocity for a service class that saw a 60% drop in CPU samples. Use the data from the OS/390 system (because the data will not have been skewed due to the new velocities). First, calculate a ‘new-using-samples’ (which is the sum of the new adjusted CPU using samples and the I/O using samples), which is equal to: (.40 * R723CCUS) + (R723CTOU – R723CCUS). Now the new velocity will be equal to: (new-using-samples / (new-using-samples + R723CTOT)) * 100.
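The recalculation above can be sketched as a small function. This is only an illustrative sketch: the function name and the sample counts in the example are hypothetical, while the SMF field names and the formula itself come from the discussion above.

```python
def recalculated_velocity(r723ctot, r723ctou, r723ccus, cpu_sample_ratio):
    """Recalculate a velocity goal for the post-OW47277 world.

    r723ctot         - total delay samples (RMF 72.3 R723CTOT)
    r723ctou         - total using samples (RMF 72.3 R723CTOU)
    r723ccus         - CPU using samples   (RMF 72.3 R723CCUS)
    cpu_sample_ratio - fraction of CPU using samples expected to
                       survive the upgrade (0.40 for a 60% drop)
    """
    # I/O using samples aren't recorded directly; derive them.
    io_using = r723ctou - r723ccus
    # Adjust only the CPU portion of the using samples.
    new_using = cpu_sample_ratio * r723ccus + io_using
    return new_using / (new_using + r723ctot) * 100

# Hypothetical OS/390 interval: 800 CPU using, 200 I/O using,
# 1000 delay samples.  Old velocity = 1000/2000 * 100 = 50.
old = 1000 / (1000 + 1000) * 100
new = recalculated_velocity(1000, 1000, 800, 0.40)
print(round(old, 1), round(new, 1))  # the new velocity is about 34
```

With these (made-up) numbers, a service class that comfortably met a velocity goal of 50 on OS/390 would need its goal lowered to roughly the mid-30s after the move to z/OS, even though nothing about the workload itself changed.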
As Jerry Urbaniak of Acxiom noticed, this puts you between a rock and a hard place if you’re trying to set a single velocity in a WLM policy that is used in a parallel sysplex and shared by both OS/390 and z/OS releases. At this time, I don’t really have a recommended solution. As more people move to z/OS, perhaps we’ll find a solution or recommendation from IBM.
2. Cheryl Watson’s TUNING Letter 2002, No. 2
The forty-four page 2002, No. 2 TUNING Letter was emailed to electronic subscribers on June 6. Print subscribers should receive their issues the week of June 17. You can purchase a printed copy of the current TUNING Letter for $85 at http://www.watsonwalker.com. The following is from the first page:
This entire issue is devoted to tuning the IBM HTTP Web Server. We always like to have something for everyone in each issue, but it was important to keep all this Web server information in a single issue. So we’ve moved our other topics and regular sections to our 2002, No. 3 issue, which is being sent at the same time.
With that said, however, even if you aren’t looking at doing Web serving on your mainframe, this can still be an incredibly valuable issue. The reason for this is that to tune the HTTP server, you must first tune the supporting subsystems, such as Unix Systems Services (USS), RACF, LE, and TCP/IP. You’ll find plenty of tuning advice, measurement analysis, and pointers to performance articles on these subsystems in this issue.
If you are currently running any version of Web serving on your mainframe, you’ll find this an extremely useful issue, summarizing all of the tuning possibilities and measurement sources available for the HTTP server. These recommendations can be useful if you’re running HTTP alone, or using it with WebSphere Application Server (WAS) V3.02, V3.5, V4.0, or V4.0.1.
And if you are just starting to think about Web serving on your mainframe, then realize that you can try it out with no additional cost unless you want to run Java on the mainframe. The HTTP server comes free with the operating system.
In this issue, we provide an introduction, a comprehensive bibliography, a tuning checklist of all known tuning options (almost 100), an analysis of the measurements and reports, and finally a recommended, step-by-step plan for how to go about tuning your Web server.
3. Cheryl Watson’s TUNING Letter 2002, No. 3
The fifty-six page 2002, No. 3 TUNING Letter was emailed to electronic subscribers on June 6. Print subscribers should receive their issues the week of June 17. You can purchase a printed copy of the current TUNING Letter for $85 at http://www.watsonwalker.com. The following is extracted from the “Management Issues” section:
There are several dates mentioned in this issue that are extremely important. They have to do with software pricing (page 44) and OS/390 (page 4).
There have been some significant changes in software pricing and pricing options since our focus on pricing in our TUNING Letter, 2001, No. 4. See our descriptions and explanations of these starting on page 45, along with further details on z800 pricing. These items could save you a considerable amount of money.
I think this is one of our best issues ever, because it is full of our readers’ experiences and important APARs that can help you avoid problems in your own installation. These are things that you can’t find in manuals, but are necessary for a smooth running shop. Our S/390 News on page 4 has dozens of important APARs, hints from IBM-Main, and several pages of notes on Workload Manager (including four ways to print policies, an explanation of transaction sampling, how to set Omegamon goals, report class reporting, and several important APARs). In the Focus article on User Experiences (page 26), I include a comprehensive review of the state of 64-bit, an update on moving to faster but fewer CPUs, how to configure your LPARs most efficiently, and how to resolve SMF synchronization differences. We also point out several documentation errors and omissions.
Elsewhere in This Issue
The rest of this issue contains important WSC Flashes and hints, page data set limitations, and notes on the popular CBT tape. In the User Experiences section on page 26, we also cover the number of catalogs per volume, ERV in goal mode, new NetView enclaves, jobs not staying swapped out (when they should!), a HIPER TCP/IP APAR, high CPU overhead from ENF processing, and more notes on zFS. Of course, our section on neat Web sites and new manuals is always important.
4. Kudos for These Issues
We are especially proud of these two issues, and are pleased to see that our readers already agree. We really appreciate your feedback!
Tom Conley of the Pinnacle Consulting Group sent us this kind comment: “After perusing Tuning Letters #2&3, I had to write you to say how stunned I was at the length and breadth of the information contained therein. Those two issues could keep me busy for the next two months easy. Absolutely outstanding!”
Jerry Urbaniak of Acxiom also made our day with: “Congratulations on your recent publishing of not one but two of the best Tuning Letters ever (2002 No. 2 + No. 3)! And given past history that is saying a lot. They are simply packed with valuable information from cover to cover. In addition to your own considerable expertise, it is obvious that these were even further enriched by extensive contributions from subscribers, IBM, and other vendors as well. In this case the whole is much greater than the sum of the parts. I hope these active contributions continue in the future as there is no substitute for actual experiences and developer insights. Your publication is indeed ‘A Practical Journal …’ that can be really used to great benefit. I do not really see how any OS/390 or z/OS installation could get by without the ‘Tuning Letter.’ The subscription is money well invested because for the value received the newsletter easily pays for itself. Thanks and all your efforts are sincerely appreciated!”