AS/400-iSeries Starter Kit
by Wayne Madden, iSeries NEWS Editor in Chief
Updated chapters by Gary Guthrie, iSeries NEWS Technical Editor
Table of Contents

Please note that Starter Kit for the AS/400, Second Edition is copyright 1994. Although much of its content is still valid, much is also out-of-date. The good news is that iSeries NEWS technical editor Gary Guthrie has been working on an updated edition: Starter Kit for the IBM iSeries and AS/400. We've posted sample chapters of the new book here in place of the old ones. (Updated chapters are clearly labeled as such in the Table of Contents.)

New Edition Now Available

The new Starter Kit for the IBM iSeries and AS/400, co-authored by Gary Guthrie and Wayne Madden, is now available from 29th Street Press (April 2001). Completely updated for the iSeries and expanded to cover new topics such as TCP/IP and Operations Navigator, the new book includes a CD containing all the sample code and utilities presented in the book. For more information or to order, visit the iSeries Network Store.
Acknowledgments
Introduction
SETUP

Chapter 1: Before the Power is On Before You Install Your System Develop an Installation Plan Plan Education Prepare Users for Visual and Operational Differences Develop a Migration Plan Develop a Security Plan System Security Level Password Format Rules Identifying System Users Develop a Backup and Recovery Plan Establish Naming Conventions What Next?
Chapter 2: That Important First Session
Signing On for the First Time Establishing Your Work Environment Now What?
Chapter 3: Access Made Easy What Is a User Profile? Creating User Profiles USRPRF (User Profile) PASSWORD (User Password) PWDEXP (Set Password to Expired) STATUS (Profile Status) USRCLS (User Class) and SPCAUT (Special Authority) Initial Sign-On Options System Value Overrides Group Profiles JOBD (Job Description) SPCENV (Special Environment) Message Handling Printed Output Handling Documenting User Profiles Maintaining User Profiles Flexibility: The CRTUSR Command Making User Profiles Work for You
Chapter 4: Public Authorities What Are Public Authorities? Creating Public Authority by Default Limiting Public Authority Public Authority by Design Object-Level Public Authority
Chapter 5: Installing a New Release Planning is Preventive Medicine The Planning Checklist Step 1: Is Your Order Complete? Step 2: Manual or Automatic? Step 3: Permanently Apply PTFs Step 4: Clean Up Your System Step 5: Is There Enough Room? Step 6: Document System Changes Step 7: Get the Latest Fixes Step 8: Save Your System Installation-Day Tasks Step 9: Resolve Pending Operations Step 10: Shut Down the INS Step 11: Verify System Integrity Step 12: Check System Values Ready, Set, Go! Final Advice
Chapter 6: Introduction to PTFs When Do You Need a PTF? How Do You Order a PTF? SNDPTFORD Basics Ordering PTFs on the Internet How Do You Install and Apply a PTF? Installing Licensed Internal Code PTFs Installing Licensed Program Product PTFs Verifying Your PTF Installation How Current Are You?
Developing a Proactive PTF Management Strategy Preventive Service Planning Preventive Service Corrective Service
AS/400 OPERATIONS

Chapter 7: Getting Your Message Across: User to User Sending Messages 101 I Break for Messages Casting Network Messages Sending Messages into History
Chapter 8: Secrets of a Message Shortstop by Bryan Meyers Return Reply Requested A Table of Matches Give Me a Break Message Take a Break It's Your Own Default
Chapter 9: Print Files and Job Logs How Do You Make It Print Like This? Where Have All the Job Logs Gone?
Chapter 10: Understanding Output Queues What Is an Output Queue? How To Create Output Queues Who Should Create Output Queues? How Spooled Files Get on the Queue How Spooled Files Are Printed from the Queue A Different View of Spooled Files How Output Queues Should Be Organized
Chapter 11: The V2R2 Output Queue Monitor The Old Solution A Better Solution The STRTFROUTQ Utility To Compile These Utilities A Data Queue Interface Facelift RCVDTAQE CLRDTAQ
Chapter 12: AS/400 Disk Storage Cleanup Automatic Cleanup Procedures Manual Cleanup Procedures Enhancing Your Manual Procedures
Chapter 13: All Aboard the OS/400 Job Scheduler! by Bryan Meyers Arriving on Time Running on a Strict Schedule Two Trains on the Same Track Derailment Dangers
Chapter 14: Keeping Up With the Past System Message Show and Tell History Log Housekeeping Inside Information
SYSTEM MANAGEMENT

Chapter 15: AS/400 Save and Restore Basics by Debbie Saugen
Designing and Implementing a Backup Strategy Implementing a Simple Backup Strategy Implementing a Medium Backup Strategy Implementing a Complex Backup Strategy An Alternative Backup Strategy The Inner Workings of Menu SAVE Entire System (Option 21) System Data Only (Option 22) All User Data (Option 23) Setting Save Option Defaults Printing System Information Saving Data Concurrently Using Multiple Tape Devices Concurrent Saves of Libraries and Objects Concurrent Saves of DLOs (Folders) Concurrent Saves of Objects in Directories Save-While-Active How Does Save-While-Active Work? Save Commands That Support the Save-While-Active Option Backing Up Spooled Files Recovering Your System Availability Options [sidebar] Preparing and Managing Your Backup Media [sidebar]
Chapter 16: Backup Without Downtime by Debbie Saugen An Introduction to BRMS Getting Started with BRMS Saving Data in Parallel with BRMS Online Backup of Lotus Notes Servers with BRMS Restricted-State Saves Using BRMS Backing Up Spooled Files with BRMS Including Spooled File Entries in a Backup List Restoring Spooled Files Saved Using BRMS The BRMS Operations Navigator Interface Terminology Differences Functional Differences Backup and Recovery with BRMS OpsNav Backup Policies Creating a BRMS Backup Policy Backing Up Individual Items Restoring Individual Items Scheduling Unattended Backup and Restore Operations System Recovery Report BRMS Security Functions Security Options for BRMS Functions, Components, and Items Media Management BRMS Housekeeping Check It Out
WORK MANAGEMENT

Chapter 17: Defining a Subsystem Getting Oriented Defining a Subsystem Main Storage and Subsystem Pool Definitions Starting a Subsystem
Chapter 18: Where Jobs Come From Types of Work Entries Conflicting Workstation Entries Job Queue Entries Communications Entries Prestart Job Entries
Autostart Job Entry Where Jobs Go
Chapter 19: Demystifying Routing Routing Data for Interactive Jobs Routing Data for Batch Jobs Routing Data for Autostart, Communications, and Prestart Jobs The Importance of Routing Data Runtime Attributes Is There More Than One Way to Get There? Do-It-Yourself Routing
FILE BASICS

Chapter 20: File Structures Structural Fundamentals Data Members: A Challenge Database Files Source Files Device Files DDM Files Save Files
Chapter 21: So You Think You Understand File Overrides Anatomy of Jobs Override Rules Scoping an Override Overriding the Same File Multiple Times The Order of Applying Overrides Protecting an Override Explicitly Removing an Override Miscellanea Important Additional Override Information Overriding the Scope of Open File Non-File Overrides Overrides and Multi-Threaded Jobs File Redirection Surprised?
Chapter 22: Logical Files Record Format Definition/Physical File Selection Key Fields Select/Omit Logic Multiple Logical File Members Keys to the AS/400 Database
Chapter 23: File Sharing Sharing Fundamentals Sharing Examples
BASIC CL PROGRAMMING

Chapter 24: CL Programming: You're Stylin' Now! CL Coding Suggestions
Chapter 25: CL Programming: The Classics Classic Program #1: Changing Ownership The Technique Classic Program #2: Delete Database Relationships The Technique Classic Program #3: List Program-File References The Technique
Chapter 26: CL Programs and Database Files Why Use CL to Process Database Files? I DCLare! Extracting Field Definitions Reading the Database File File Positioning What About Record Output? A Useful Example
Chapter 27: CL Programs and Display Files CL Display File Basics CL Display File Examples Considerations
Chapter 28: OPNQRYF Fundamentals The Command Start with a File and a Format Record Selection Key Fields Mapping Virtual Fields OPNQRYF Command Performance SQL Special Features OPNQRYF Special Features
Chapter 29: Teaching Programs to Talk Basic Training Putting the Command to Work Knowing When To Speak
Chapter 30: Just Between Us Programs Job Message Queues The SNDPGMMSG Command ILE-Induced Changes Message Types The Receiving End Program Message Uses Understanding Job Logs
Chapter 31: Hello, Any Messages? Receiving the Right Message Note on the V2R3 RCVMSG Command Parameter Changes Receiving the Right Values Monitoring for a Message Working with Examples What Else Can You Do with Messages? RCVMSG and the MSGTYPE and MSGKEY
OTHER CONCEPTS

Chapter 32: OS/400 Commands Commands: The Heart of the System Tips for Entering Commands Customizing Commands Modifying Default Values
Chapter 33: OS/400 Data Areas Creating a Data Area Local Data Areas Group Data Areas
Chapter 1 - Before the Power Is On
With the AS/400, IBM has tried to graft the S/36's ease of use onto the S/38's integrated database and productivity features. In many respects, Big Blue has succeeded -- the AS/400 provides extensive help text, highly developed menu functions, on-line education, and electronic customer support. But the machine's friendliness stops short of 'plug and go' installation, especially for shops converting from a non-IBM system or migrating from an IBM system other than a S/38. Even S/38 migration is not completely plug and go, although the AS/400 has inherited many S/38 characteristics: a complex structure of system objects used to support security, work environment, performance tuning, backup, recovery, and other functions. These objects let you configure a finely tuned and productive machine, but they do not readily lend themselves to education on the fly. As a result, the AS/400 requires thought, foresight, planning, and preparation for a successful installation. Believe me, I know. I have experienced the AS/400 planning and installation process as both a customer and a vendor, and I'd like to share what I've learned by suggesting a step-by-step approach for planning, installing, and configuring your AS/400. First I discuss the steps you can and should take before your system arrives. In subsequent chapters, I take you through your first session on the machine, address how to establish your work environment, and show you how to customize your system. I have outlined the installation process in the AS/400 setup checklist in Figure 1.1. You might want to use this checklist as the cover page to a notebook you could put together to keep track of your AS/400 installation.
Before You Install Your System The first step in implementing anything complex -- especially a computer system -- is thorough planning. A successful AS/400 installation begins long before your system rolls in the door. The first section of the setup checklist in Figure 1.1 lists tasks you should complete before you install your system -- preferably even before it arrives. These items may seem like a great deal of work before you ever see your system, but this work will save you and your company time and trouble when you finally begin installing, configuring, securing, and using your new system. Let's look at each item in this section of the checklist individually.
Develop an Installation Plan A good installation plan serves as a road map. It guides you and your staff and keeps you focused on the work ahead. Figure 1.2 shows a sample installation plan that lists installation details and lets you track the schedule and identify the responsible person for each task. [Although the installation plan includes important considerations about the physical installation -- e.g., electrical, space, and cooling requirements -- these requirements are well documented in IBM manuals, and I do not discuss them here. For details about physical installation, refer to the AS/400 Physical Planning Guide -- Version 2 (GA41-0001), the AS/400 Physical Planning Guide and Reference -- Version 2 (GA41-9571), the AS/400 Migrating from S/36 Planning Guide -- Version 2 (GC41-9623), or the AS/400 Migrating from S/38 Planning Guide -- Version 2 (GC41-9624).] An overall installation plan helps you put the necessary steps for a successful AS/400 setup into writing and tailor them to your organization's specific needs. The plan also helps you identify and involve the right people and gives you a schedule to work with. Identifying and involving the right people is critical to creating an atmosphere that assures a smooth transition to your new system. Management must commit itself to the installation process and must understand and agree to the project's priority. Other pending MIS projects should be examined and assigned a priority based on staff availability in light of the AS/400's installation schedule. Management and the departments you serve must understand and agree on these scheduling changes. On the MIS side, your staff must commit to learning about the AS/400 in preparation for installation and migration. Your staff must also commit itself to completing all assigned tasks, many of which (e.g., time spent verifying the migration or conversion of programs and data) may require extra hours. The time frame outlined in your installation plan will probably change as the delivery date nears. But even as the schedule changes and is refined, it provides a frame of reference for the total time you need to install, configure, and migrate to the new system. You must also answer an important question as part of your plan: Can you run the old and new systems parallel for a period of time? If you can run parallel, you can greatly reduce the time needed for the installation process.
Running parallel also reduces the risk factor involved in your migration and conversion process.
Plan Education I can hear you now: 'We don't have time for classes! We're too busy to commit our people to any education.' I'm sure this will be your response to the suggestion that you plan for training now. I'm also sure that those statements are absolutely true. But education is a vital part of a successful AS/400 installation. Realistically, then, you must schedule key personnel for education. What key groups of personnel need training? The end users, for one. Their education should focus on PC Support and on the AS/400 Office products they will work with. But you and your operations and programming staff will also need some training. If you move to an AS/400 from a S/36, you will see the familiar sign-on screen, the friendly menu format, and the extensive help text associated with the S/36. But the AS/400 also has some unfamiliar territory: You must learn new security concepts, how to modify your work environment to improve performance, and how to control printer output. Training in relational database design and implementation will improve the applications you migrate or write, and learning something about the AS/400's fast-path commands will help you feel more at home and productive in the native environment. If you are moving from a S/38, you will recognize the fast-path commands (with some minor changes), the command entry display (once you find it), the relational database, the work environment objects, and the security concepts. However, you will need additional knowledge about how to implement new security options, the 'current library,' the Programming Development Manager (PDM), available menus, and other new concepts. You'll also have to learn about the new program products and operations on the AS/400. If all this sounds complicated, then you're getting the point: You need system-specific education for a smooth transition to the AS/400. Where can you get such education? Begin by asking your vendor for educational offerings. If you buy from a third party, training support will vary from vendor to vendor. You can also arrange to attend courses at an IBM Guided Learning Center. Another place to get AS/400 education is on the AS/400 itself. To supplement vendor training support, each AS/400 comes with Tutorial System Support (TSS) installed. This on-line tutorial help provides self-paced lessons for programmers, clerical workers, executives, systems analysts, and others (Figure 1.3 lists the various audience paths available by using TSS lessons). You may be able to begin TSS training before your AS/400 arrives by working through your hardware or software vendor.
You can also find a variety of educational offerings in seminars, automated courses, study guides, one-on-one training sessions, and classroom training courses. The key to successful education is matching education to the user. Matching ensures productive use of the time employees spend away from their daily duties.
Prepare Users for Visual and Operational Differences It would be nice if you could assure all your users that they will not find anything different when they sign on to the AS/400 for the first time, but you probably can't. You would be wise to give some thought to the visual and operational differences and explain them to your users in advance. For example, S/38 users used to a single-level sign-on (just entering a password) may be surprised (and unhappy) to find they must sign on to the AS/400 with both a user profile name and a password. Consequently, you could find yourself waist deep in phone calls and complaints on your first day of operation unless you tell your users what to expect. A communication describing the user profile and password and their roles on the system would go a long way toward smoothing the transition for such users. You may encounter another potential problem in the panel interface differences between your former system and the SAA-compliant AS/400. Command key differences, print-control screen differences, help screen differences, and others may cause some initial concern and confusion among your users. The Operational Assistant (OA) interface provided for end-user interaction with the AS/400 is friendly, but telling your users about these
differences before installation will prepare them, head off many complaints, and protect your position.
Develop a Migration Plan The next step in pre-installation planning is to develop a migration or conversion plan. Converted applications almost always make better use of system resources than migrated applications, but you can successfully operate in the AS/400's S/36 or S/38 environment for as long as you need to. Although your goal ultimately should be to 'go native,' most shops choose to migrate first. Migration eases the transition considerably, particularly for S/36 shops, and allows conversion to proceed at a more leisurely pace. For this reason, I recommend most shops migrate first and then convert as time permits. Even if you buy software written for the AS/400 and use your software vendor's expertise to migrate the data, you must still migrate user profiles, your system configuration, and any custom software or utilities on your system. A migration plan organizes this process and, as you carry out the plan, helps you become familiar with the AS/400 and the new features it offers. Figure 1.4 shows a sample migration plan. The key to a successful S/36 migration is knowing what will migrate and what won't. The S/36 Migration Aid software identifies objects that will not migrate to the S/36 environment and keeps audit trails of what has and has not been migrated. The sooner you know what will not migrate, the sooner you can start developing AS/400 solutions for those objects.
One common problem in S/36 migration is expecting all applications to run better in the AS/400's S/36 environment. Unfortunately, the AS/400 cannot cure bad software. Badly written software that runs poorly on your S/36 will still run poorly in the AS/400's S/36 environment. In fact, the AS/400 may accentuate poor performance. IBM has made a commitment to maintain the S/36 environment on the AS/400. Nevertheless, you can -- and should -- gradually convert from the S/36 environment as you find applications that conversion will improve. Successful S/38 migration also begins with the Migration Aid software. As with the S/36, the Migration Aid identifies the objects and products that will not migrate and helps keep track of the migration process. The key to understanding the S/38 migration process is knowing that all S/38 objects are 'object compatible' with the AS/400. Migration is thus a relatively simple process in which you save the objects from the S/38 and restore them onto the AS/400. When a S/38 object is restored onto the AS/400, the system attaches the suffix '38' to the object attribute, as shown in Figure 1.5. The AS/400 uses the suffix to identify the proper environment for the object. For example, when the AS/400 executes a CL program (e.g., SAMPLECL in Figure 1.5), the system uses S/38 environment commands in response to the suffix on the object attribute. If you were to remove the suffix and attempt to recompile the CL program, you would get errors on any S/38 commands that do not exist in the same form on the AS/400 (e.g., DSPOUTQ, DSPACTJOB).
Whether you migrate from a S/36 or a S/38, running parallel for a while greatly reduces the risk involved. You can migrate your applications in stages, testing and verifying each program as you go. If you can't run parallel, you must complete your migration process on the first try, a much trickier proposition. In this case, I recommend that you seek an experienced outside source for assistance in the migration and conversion process. If you decide to begin conversion immediately, be sure you know what you're getting into. Depending on your current system, conversion could involve one week to six months of work for your staff. With S/36 conversions, for example, your staff must work through a complete education plan before even beginning to tackle the conversion process. Again, a good outside consultant, used in a way that provides educational benefits for your staff, could be an immense help. True, you could simply pay a consultant to convert your database and programs for you, but that approach doesn't educate your staff about the new system.
Also, let me offer you a warning: If you plan to replace your existing system and completely remove it before installing your AS/400, you are absolutely asking for trouble! If you find yourself forced into such a scenario, get help. Hire a consultant who has successfully migrated systems to the AS/400.
Develop a Security Plan With your migration plan in writing, you are ready to tackle a security plan. Imagine for a moment that you have your AS/400 fully installed and smoothly running -- and that you haven't altered the security settings yet. In this case, the system is at security level 10, and anyone who turns on a workstation, receives a sign-on screen, and presses Enter has full access to all system objects and functions. Obviously, you need a security plan, and you need to implement it as soon as possible after your system is installed.
System Security Level

Figure 1.6 shows a basic security plan. The first and most significant step in planning your security is deciding what level you need. The AS/400 provides five levels of security: 10, 20, 30, 40, and 50.

Security Level 10 -- As I implied, system security level 10 might more aptly be called security level zero, or 'physical security only': At level 10, the physical security measures you take, such as locking the door to the computer room, are all you have. If a user has access to a workstation with a sign-on screen, (s)he can simply press Enter, and the system will create a user profile for the session and allow the user to proceed. The profile the system creates in this case has *ALLOBJ (all object) special authority, which is sufficient for the user to modify or delete any object on the system. Although user profiles are not required at level 10, you could still create and assign them and ask each user to type in her assigned user profile at sign-on. You could then tailor the user profiles to have the appropriate special authorities -- you could even grant or revoke authorities to objects. But there is no way to enforce the use of those assigned profiles, and thus no way to enforce restricted special authorities or actual resource security. Level 10 provides no security.

Security Level 20 -- Security level 20 adds password security. At level 20, a user must have a user profile and a valid password to gain access to the system. Level 20 institutes minimum security by requiring that users know a user profile and password, thus deterring unauthorized access. However, as with level 10, the default special authorities for each user class include *ALLOBJ special authority, and therefore resource security is, by default, bypassed. Although you can tailor the user profile, the inherent weakness of level 20 remains: the fact that, by default, resource security is not implemented. The *ALLOBJ special authority assigned by default to every user profile bypasses any form of resource security. To implement resource security at level 20, you must remove the *ALLOBJ special authority from any profiles that do not absolutely require it (only the security officer and security administrator need *ALLOBJ special authority). You must then remember to remove this special authority every time you create a new user profile. This method of systematically removing *ALLOBJ authority is pointless since, by default, level 30 security does this for you. On a production system, you must be able to explicitly authorize or deny user authority to specific objects. Therefore, level 20 security is inadequate in the initial configuration, requiring you to make significant changes to mimic what level 30 provides automatically.

Security Level 30 -- Level 30 by default supports resource security (users do not receive *ALLOBJ authority by default). Resource security allows objects to be accessed only by users who have authority to them. The authority to work with, create, modify, or delete objects must be either specifically granted or received as a result of existing default public authority. All production systems should be set at security level 30 or higher (levels 40 or 50). Production machines require resource security to effectively safeguard corporate data, programs, and other production objects and to prevent unintentional data loss or modification.

Security Level 40 -- The need for level 40 security centers on a security gap on the S/38 that the AS/400 inherited.
This gap allowed languages that could manipulate Machine Interface (MI) objects (i.e., MI itself, C/400, and Pascal) to access objects to which the user was not authorized by stealing an authorized pointer from an unsecured object. In other words, an MI program could access an unsecured object and use its authorized pointer as a passkey to an unauthorized object.
To level 30's resource security, level 40 adds operating system integrity security. System integrity security strengthens level 30 security in four ways:
• By providing program states and object domains
• By preventing use of restricted MI instructions
• By validating job initiation authority
• By preventing restoration of invalid or modified programs
You might wonder what level 40 buys you. In truth, most systems today could run at level 30 and face no significant problems. But in the future, as you purchase more third-party software and as more systems participate in networks, operating-system integrity will become more important. Level 40 provides the security necessary to prevent a vendor or individual from creating or restoring programs on your system that might threaten system integrity at the MI level, thus ensuring an additional level of confidence when you work with products created by outside sources. Yet, if the need arises to create a program that infringes upon system integrity security, you can explicitly change the security level to 30. The advantage of using level 40 is that you control that decision. During installation, set your system level to 30, and monitor the security audit journal for violations that level 40 guards against. If you find none, go to level 40 security. If violations are logged, review them to determine their source. Some packaged software (e.g., some system tools) will require access to restricted MI instructions and will fail. In these cases, you can ask the vendor when his product will be compatible with level 40 and decide what to do based on his response.

Security Level 50 -- IBM introduced security level 50 in OS/400 Version 2, Release 3. The primary purpose of security level 50 is to enable OS/400 to comply with the Department of Defense C2 security requirements. IBM added specific features into OS/400 to comply with DOD C2 security as well as to further enhance the system integrity security introduced in level 40. In addition to all the security features/functions found at all prior OS/400 security levels (e.g., 30, 40), level 50 adds
• Restricting user domain object types (*USRSPC, *USRIDX, and *USRQ)
• Validating parameters
• Restricting message handling between user and system state programs
• Preventing modification of internal control blocks
• Making the QTEMP library a temporary object
If your shop requires DOD C2 compliance, you can get more information concerning security level 50 and other OS/400 security features (e.g., auditing capabilities) in two new AS/400 publications: Guide to Enabling C2 Security (SC41-0103) and A Complete Guide to AS/400 Security and Auditing: Including C2, Cryptography, Communications, and PC Implementation (GG24-4200).
Password Format Rules Your next task in security planning is to determine rules for passwords. In other words, what format restrictions should you have for passwords? Without format requirements, you are likely to end up with passwords such as 'joe,' 'sue,' 'xxx,' and '12345.' But are these passwords secret? Will they safeguard your system? You can strengthen your security plan's foundation by instituting some rules that encourage users to create passwords that are secret, hard to guess, and regularly changed. However, you also must remember that sometimes 'hard to guess' translates into 'hard to remember' -- and then users simply write down their passwords so they won't forget. The following password rules will help establish a good starting point for controlling password formats: Rule 1 is that passwords must be a minimum of seven characters and a maximum of 10 characters. This rule deters users who lack the energy to think past three characters when conjuring up that secret, unguessable password. Rule 2 builds on Rule 1: Passwords must have at least one digit. This rule makes passwords become more than just a familiar name, word, or place.
Rule 3 can deter those who think they can remember only one or two characters and thus make their password something like 'XXXXX6' or 'X1X.' Rule 3 simply states that passwords cannot use the same character more than once. On a similar note, Rule 4 states that passwords cannot use adjacent digits. This prevents users from creating passwords such as '1111,' '1234,' or even using their social security number. With these four rules in place, you can feel confident that only sound passwords will be used on the system. But you can enhance your password security still further with one additional rule. Rule 5 says that passwords should be assigned a time frame for expiration. You can set this time frame to allow a password to remain effective for from one to 366 days, thus ensuring that users change their passwords regularly. Passwords are a part of user profiles, which you will create to define the users to the system after the AS/400 is installed. Laying the groundwork for user profiles is the next concern of your security plan.
Identifying System Users Before you install the new machine, you should identify the people who will use the system. Obtain each user's full name and department and the basic applications the user will require on the system. Some users, such as operators and programmers, will need to control jobs and execute save/restore functions on the system. Other users, such as accounts receivable personnel, only need to manipulate spooled files and execute applications from menus. Once you identify the users and determine which system functions they need access to, you can assign each user to one of the following classes (the authorities discussed with each class are granted when the system security level is set to 30 or 40):
• SECOFR (security officer) grants the user all authorities: all object, security administrator, save system, job control, service, spool control, and audit authorities (each of these special authorities is explained below).
• SECADM (security administrator) grants security administrator, save system, and job control authorities.
• PGMR (programmer) grants save system and job control authorities.
• SYSOPR (system operator) grants save system and job control authorities.
• USER (user) grants no special authorities.
Your MIS staff members normally will have either the SYSOPR or the PGMR user class. Your end users should all reside in the USER user class. The USER class carries no special authorities, which is appropriate for most users. They can work within their own job and work with their own spooled files. One rule of thumb when assigning classes is that you should never set up your system such that a user performs regular work with SECOFR authority. The AS/400 has a special QSECOFR profile; when the security officer must perform a duty, the person responsible should sign on using the QSECOFR profile to perform the needed task. Using security officer authority to perform normal work is like playing with a loaded gun. As you plan user profiles, you also need to consider the special authorities you want to grant to the user profiles and user classes. Special authorities allow users to perform certain system functions; without special authority, the functions are unavailable to the user. The AS/400 provides seven special authorities:
• ALLOBJ (all object authority) lets users access any system object. This authority alone, however, does not allow the users to create, modify, or delete user profiles.
• SECADM (security administrator authority) allows users to create and change user profiles.
• SAVSYS (save system authority) lets users save, restore, and free storage for all objects.
• JOBCTL (job control authority) allows users to change, display, hold, release, cancel, and clear all jobs on the system. The user can also control spooled files in output queues where OPRCTL(*YES) is specified.
• SERVICE (service authority) means users can perform functions from the System Service Tools, a group of executable programs used for various service functions (e.g., line traces and run diagnostics).
• SPLCTL (spool control authority) allows users to delete, display, hold, and release their own spooled files and spooled files owned by other users.
• AUDIT (audit authority) allows users to start and stop security auditing as well as control security auditing characteristics.
When you use security level 30, 40, or 50, the AS/400 automatically assigns special authorities based on user class as shown in Figure 1.7. When you create user profiles, you can use the special authorities parameter to
override the authorities granted by the user class, allowing you to tailor authorities as appropriate for specific users. For instance, a user profile might have a user class of SYSOPR, which grants the user special authorities for job control and save/restore functions. By entering only *SAVSYS for the special authorities parameter, you can instruct the system to grant only this special authority, ignoring the normal defaults for the *SYSOPR user class.
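As a rough illustration of this override (the profile name and descriptive text below are hypothetical), a command along these lines would create an operator profile whose only special authority is *SAVSYS, rather than the full *SYSOPR defaults:

CRTUSRPRF USRPRF(GAOPR01) USRCLS(*SYSOPR) SPCAUT(*SAVSYS) TEXT('Georgia branch operator - save/restore only')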
You must also plan specific authorities, which control the objects a user can work with (e.g., job descriptions, data files, programs, menus). Going through the remainder of the pre-installation security planning process -- checking your applications for security provisions -- will also help you decide which users need which specific authorities and help you finish laying the groundwork for user profiles on your new system.
Develop a Backup and Recovery Plan Although it may seem premature to plan for backup and recovery on your as-yet-undelivered AS/400, I assure you it is not. First, you should not assume that the backup and recovery plan for your existing system will still work with the AS/400. Second, the AS/400 has a variety of powerful backup and recovery options that you may not be familiar with. Some of these options are difficult and time-consuming to install if you wait until you've migrated your applications and data to the new system. Checksum is a case in point. The AS/400's single-level storage minimizes disk head contention and eliminates the need to track and manage the Volume Table of Contents. But single-level storage can also create recovery problems. Because single-level storage fragments objects randomly among all the system's disks, the loss of any one disk can result in damage to every object on the system. After complete backup, checksum is your best protection against this weakness in single-level storage. With checksum, you configure disk units (i.e., one disk actuator arm and its associated storage) into checksum sets, with no more than one unit from each disk device in a single checksum set. Then, if a disk fails, the system can compare the data in the failed unit in each checksum set with the data in the other (intact) units and can reconstruct the data on the failed unit. This description of how checksum works is (obviously) not complete, but should give you an idea of how valuable it can be. Because checksum installation on an installed system requires that you save your entire system and reload everything, don't pass up this opportunity to consider installing checksum when you install your new AS/400. An auxiliary storage pool (ASP) is another of those features that are much easier to implement when you install your system rather than later. An ASP is a group of disk units. Your AS/400 will be delivered with only the system ASP (ASP 1) installed. Figure 1.8a shows auxiliary storage configured only as the system ASP. The system ASP holds all system programs and most user data. You can customize your disk storage configuration by partitioning some auxiliary storage into one or more user ASPs (Figure 1.8b). Like checksum, user ASPs provide protection from disk failures, because you can segregate specific user data or backup data onto user ASPs. Thus, if you lose a disk unit in the system ASP, your restore time is reduced to a minimum time of restoring the operating system and the objects in the system ASP, while data residing in the user ASPs will be available without any restore. If you lose a disk unit in a user ASP, your restore time will include only the time it takes to restore the user data in that user ASP. You can use user ASPs for journaling and to hold save files. Journaling automatically creates a separate copy of file changes as they occur, thus letting you recover every change made to journaled files up to the instant of the failure. If you have on-line data entry -- such as orders taken over the phone -- that lacks backup files for the data entered, you should strongly consider journaling as a part of your backup and recovery plan. Although you do not need user ASPs to implement journaling, they do make recovery (which is difficult under the best of circumstances) easier. If you do not journal to a user ASP, you should save your journal receivers (i.e., the objects that hold all file changes recorded by journaling) to media regularly and frequently. 
User ASPs also protect save files from disk failures. A save file is a special type of physical file to which you can target your backup operation. Save files have two major advantages over backing up to media. The first is that you can back up unattended, since you don't have to change diskette magazines or tapes. The second advantage is
that backing up to disk is much faster than backing up to tape or diskette. The major (and probably obvious) disadvantage is that save files require additional disk storage. Nevertheless, save files are worthwhile in many cases; and when they are, isolating save files in a user ASP provides that extra measure of protection. User ASPs are required as part of the disk-mirroring feature the AS/400 offers. User data is placed on various user ASPs. Each ASP uses a set of mirrored disk drives. The mirroring protects the user data in the ASP, and the fact that ASPs are used protects the larger system from a complete loss due to any one single disk failure. While disk mirroring has a substantial initial investment for the additional disk drives, the protection offered is significant for companies that rely on providing 24-hour service. One last option to consider is RAID protection. IBM and other AS/400 DASD vendors currently offer either RAID 1 or RAID 5 disk protection. RAID 1 is similar to OS/400's system mirroring option, except that the disk subsystem handles all the necessary read/write operations instead of OS/400. You duplicate each disk drive to protect against a single disk drive failure. If one disk fails, the system still has access to the mirrored disk. RAID 5 protection is similar to OS/400's checksum; however, the disk subsystem handles all the read/write operations. RAID 5 stores parity information on additional disk space and uses that parity information to reconstruct the data in the event that one of the disks in a RAID 5 set fails. The point of this discussion is that you need to plan ahead and decide which type of disk protection you will employ so you can be ready to implement your plan when the system is first delivered, when the disk drives are not yet full of information you would have to save before making any storage configuration changes. For more information about save/restore, and an introduction to a working save/restore plan, see Chapters 15 and 16, 'AS/400 Save and Restore Basics,' and 'Backup Without Downtime.'
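To make these options concrete, here is a minimal sketch of the commands involved, assuming hypothetical library, file, and journal names; it creates a save file, backs up a library to it, and starts journaling a critical physical file:

CRTSAVF FILE(BACKUP/DAILYSAVF) TEXT('Nightly library backup')
SAVLIB LIB(ORDERS) DEV(*SAVF) SAVF(BACKUP/DAILYSAVF)

CRTJRNRCV JRNRCV(ORDERS/ORDRCV0001)
CRTJRN JRN(ORDERS/ORDJRN) JRNRCV(ORDERS/ORDRCV0001)
STRJRNPF FILE(ORDERS/ORDERDTL) JRN(ORDERS/ORDJRN)

If you have set aside a user ASP for journaling, the journal receiver can be created in that ASP (CRTJRNRCV accepts an auxiliary storage pool parameter), keeping the receiver off the disks that hold the files being journaled.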
Establish Naming Conventions Naming conventions vary greatly from one MIS department to the next. The conventions you choose should result in names that are syntactically correct and consistent, yet easily remembered and understood by end users and programmers alike. A good standard does more than simply help you name files, programs, and other objects; it also helps you efficiently locate and identify objects and devices on your system. If your naming conventions are in place before you install your system, they will help installation and migration go smoothly and quickly. The naming convention you choose should be meaningful and should allow for growth of your enterprise. Let's look at an example:
• You have three locations for order entry: Orlando, Florida; Atlanta, Georgia; and Montgomery, Alabama.
• You have five order entry clerks at each location.
• You have one printer at each location.
You could let the AS/400 automatically configure all your workstations and printers, which would result in names such as W1, W2, and P1, or DSP02, DSP03, and PRT02. But, by configuring the devices yourself and assigning meaningful names, your devices can have names such as GADSP01, GADSP02, ALDSP01, ALDSP02, FLPRT01. Because these names contain a two-letter abbreviation for the state, they are more meaningful and useful than the names the AS/400 would assign automatically. But this convention would pose a problem if you had two offices in the same state. So instead, to allow for growth of the enterprise, you might incorporate the branch office number into the names, resulting in names such as C01DSP01 to identify a control unit for branch office 01, display station 01. Such a naming convention would help your operations personnel locate and control devices in multiple locations. You will also need a standard for naming user profiles. There are those who believe that a user profile name should be as similar as possible to the name of the person to whom it belongs (e.g., WMADDEN, MJONES, MARYM, JOHNZ). This method can work well when there are only a few end users. Under such a strategy, only one profile is needed per user, which simplifies design and administration of the security system and lets operations personnel identify employees by their user profiles. The drawback to this method is that it results in profiles that are easily guessed and thus provides a door for unauthorized sign-ons, leaving only the password to guess. A friend of mine was bragging about his new LAN one evening and wanted to show me how it worked, but he did not know his user profile or password. We were sitting at his secretary's desk, so I asked him what her name was. Within one minute we were signed on using her first name as the profile and her initials as the password. Good guess? No. Bad profile and password.
Another opinion holds that user-profile names should be completely meaningless (e.g., SYS23431, 2LR50M3ZT4) and should be maintained in some type of user information file. The use of meaningless names makes profiles difficult to guess and does not link the name to a department or location that might change as the employee moves in the company. The user information file documents security-related information such as the individual to whom the profile belongs and the department in which the user works. This method is the most secure, but it often meets with resistance from the users, who find their profiles difficult to remember.

A third approach is to use a naming standard that aids system administration. Under this strategy, each user profile name identifies the user's location and perhaps function in order to sharpen the ability to audit the system security plan. For instance, if you monitor the history log or use the security journal for auditing, this approach enables you to quickly identify users and the jobs they're doing. To implement this strategy, your naming convention should incorporate the user's location or department and a unique identifier for the user's name. For example, if John Smith works as one of the order entry clerks at the Georgia location, you might assign one of the following profiles:

GAJSMITH -- In this profile, the first two letters represent the location (GA for Georgia), and the remainder consists of the first letter of the user's first name followed by as much of the last name as will fit in the remaining seven characters.

GAOEJES -- This example is similar, but the branch is followed by the department (OE) and the user's initials. This method provides more departmental information while reducing the unique name identifier to initials.

B12OEJES -- This example is identical to the second, but the Georgia branch is numbered (B12).

When profile names provide this type of information, programs in your system that supply user menus or functions can resolve them at run time based on location, department, or group. As a result, both your security plan and your initial program drivers can be dynamic, flexible, and easily maintained. In addition, auditing is more effective because you can easily spot departmental trends; and user profile organization and maintenance are enhanced by having a naming standard to follow. However, such profiles are less secure than meaningless profiles because they are easy to guess once someone understands the naming scheme. This leaves only the password to guess, thus rendering the system less secure.

As you will discover in Chapter 3, I also believe in maintaining user profiles in a user information file. Such a file makes it easy to maintain up-to-date user-profile information such as initial menus, initial values for programs (e.g., initial branch number, department number), and the user's full name formatted for use in outgoing invoices or order confirmations. When a user transfers to another location or moves to a new department, you should deactivate the old profile and assign a new one to maintain a security history. A user information file helps you keep what amounts to a user profile audit trail. Furthermore, your applications can retrieve information from the file and use it to establish the work environment, library list, and initial menu for a user.

A final consideration in choosing a naming convention for user profiles is whether or not your users will have access to multiple systems.
If they will, you can simplify Display Station Passthrough functions by using the same name for each user's profile on all systems. To do this, you must consider any limitations the other systems in the network place on user profile names and apply those limitations in creating the user profiles for your system. For instance, another platform in your network may limit the number of characters allowed for user profile names. To allow your user profiles to be valid across the network, you will have to abide by that limitation. You need to determine what user profile naming convention will work best for your environment. For the most secure environment, a 'meaningless' profile name is best. User profiles that consist of the end user's name are the least secure and are often used in small shops where everyone knows (and is on good terms with) everyone else. A convention that incorporates the user's location and function is a compromise between security and system management and implementation that suits many shops.
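As a small, hypothetical sketch of the branch/department convention described above (the menu name and descriptive text are invented for illustration), the Georgia order entry clerk's profile might be created like this:

CRTUSRPRF USRPRF(GAOEJES) USRCLS(*USER) INLMNU(ORDENTRY) TEXT('John Smith - Order Entry, Georgia branch')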
What Next? Okay, you have made it this far. You have planned and prepared, and then planned some more. You have planned education, scheduled classes, and started to prepare your users for the differences they will encounter with the AS/400. You have planned for migration, security, and backup and recovery, and you know how you will name the objects on your system. You feel ready to begin the installation. But after your vendor helps you install
the hardware, how do you go about implementing all those carefully made plans? In the next chapter, I'll go into what happens once the power is on.
Chapter 2 - That Important First Session

Your shiny new AS/400 is out of the box. The microcode is all there, the operating system is installed, and all your program products are loaded on the system. The vendor has finished installation and is packing up the tools. Up to this point (if you have done your homework) you have committed, planned, and planned some more for your new AS/400. Planning is a significant portion of the total installation process, but it isn't nearly as much fun as that moment when you turn on the power, watch the little lights start blinking, hear the low hum of the disk drives, and bring the magical screen to life -- giving you access to your new toy (I mean business machine). That's the moment you live for as a midrange MIS professional!
Signing On for the First Time

Once the power is on, you might think your previous S/3X experience would let you just feel your way around the system menus and functions. But that's not the case. My experiences with AS/400 installation have taught me that you should take some immediate steps (Figure 2.1) to put your carefully made plans into action.

User ASPs and checksum configuration. First, examine your backup and recovery plan to see whether you have decided to use Auxiliary Storage Pools (ASPs) or checksum. If so, grab your vendor installation team before they leave because the preloaded software on your system is about to be destroyed! As I discussed in Chapter 1, the AS/400 has a S/38-like single-level storage architecture that spreads objects (i.e., programs and data) in auxiliary storage equally over the disk to increase performance during retrieval. When you create a user ASP, you remove a segment of a disk or one or more disk units from the single-level storage area. Therefore, you lose a portion of your objects, and the system must re-initialize the system ASP and start from scratch. This same situation exists when you reserve storage on your disk unit for checksum operations. Thus, after creating a user ASP or checksum, you must reload the microcode, the operating system, and each program product. Work with the installation team to create user ASP(s), to implement checksum, and to reload everything afterward. (Make sure you have all the software product tapes you need. With the advent of preloaded software, the software media may not have been shipped to you with the system.) Reconfiguring your storage and reloading your software may be a pain, but it is much easier during installation than when your machine is working in its production environment. And if ASPs or checksum are part of your backup plans, you can begin breathing easier knowing you are already prepared for disasters.

Verify software installation and PTF levels. Next, verify that the program products you ordered are installed on the system. The vendor should assist you in loading these program products if they are not already preloaded on the system. (If you don't have your program products and manuals, make sure you follow up on their delivery.) Then determine whether or not the latest available cumulative Program Temporary Fix (PTF) release is installed on your system. The vendor should know which is the latest PTF level available and can help you determine whether or not that level exists on your system. If you don't have the latest release, order the tape now so you can apply the PTFs before you move your AS/400 into the production phase of installation. For more information about PTFs and installing PTFs, see Chapter 6, 'Introduction to PTFs.'

Signing on. With ASPs and checksum configured and the latest PTFs installed, you are now ready to sign on to your AS/400. Use the user profile QSECOFR to sign on, and enter QSECOFR -- the preset password for that profile. But don't start playing with your new system yet! You have some important chores to do during your first session.

Set the security level. Your AS/400 is shipped with the security level set at 10. With level 10, anyone who turns on a workstation, receives a sign-on screen, and presses the Enter key has full access to all system objects and functions. Obviously, you need to reset the security level as the first step in implementing your security plan.
In the previous chapter, I strongly suggested that you operate your machine at a minimum of security level 30. Don't wait until you move into a production environment; by then, switching levels will be too much trouble for you and a pain for your users. Change the security level now by keying in the command
CHGSYSVAL SYSVAL(QSECURITY) VALUE(XX)

where XX is either 30, 40, or 50. The change will take effect when you IPL the system. Because you must perform IPLs to implement a number of settings on your AS/400, you might as well practice one now to put level 30 into action. Make sure the key is in the AUTO position and then power down the system with an automatic restart by keying in
PWRDWNSYS OPTION(*IMMED) RESTART(*YES)

When the system is re-IPLed, you can feel confident your AS/400 will operate in a secure environment.

Enforce password format rules. The next important step in implementing your security plan is setting the system values that control password generation. You should already have decided on the password rules, and changing the system values to enforce those rules is relatively easy. In Chapter 1, I recommended five rules to guarantee the use of secure passwords on your system. To implement Rule 1 (passwords must be a minimum of seven characters and a maximum of 10 characters), enter the commands
CHGSYSVAL SYSVAL(QPWDMINLEN) VALUE(7)
CHGSYSVAL SYSVAL(QPWDMAXLEN) VALUE(10)

The system value QPWDMINLEN (Password Minimum Length) sets the minimum length of passwords used on the system, and system value QPWDMAXLEN (Password Maximum Length) specifies the maximum length of passwords used on the system. To implement Rule 2 (passwords must have at least one digit), enter
CHGSYSVAL SYSVAL(QPWDRQDDGT) VALUE('1')

Setting the system value QPWDRQDDGT to 1 requires all passwords to include at least one digit. For Rule 3 (passwords cannot use the same character more than once), enter
CHGSYSVAL SYSVAL(QPWDLMTREP) VALUE('1') Setting the system value QPWDLMTREP (Limit Character Repetition) to 1 prevents characters from being repeated in immediate succession within a password. For Rule 4, passwords cannot use adjacent digits, enter
CHGSYSVAL SYSVAL(QPWDLMTAJC) VALUE('1') This prevents users from creating passwords with adjacent numbers, such as their social security number or phone number. Implement Rule 5, passwords should be assigned a time frame for expiration, by entering the command
CHGSYSVAL SYSVAL(QPWDEXPITV) VALUE(60) System value QPWDEXPITV (Password Expiration Interval) specifies the length of time in days that a user's password remains valid before the system instructs the user to change passwords. The value can range from 1 to 366. The password expiration interval can also be set individually for user profiles using the PWDEXPITV parameter of the user profile. This is helpful because there are certain profiles, such as the QSECOFR profile, that are particularly sensitive and should require a password change more often for additional security. Change system-supplied passwords. OS/400 provides several user profiles that serve various system functions. Some of these profiles do not have passwords, which means you cannot sign on as that user profile. For example, the default-owner user profile QDFTOWN doesn't have a password because the profile receives ownership of
objects when no other owner is available. However, every AS/400 is shipped with passwords for the system-supplied profiles listed below, and these passwords are preset to the profile name (e.g., the preset password for the QSECOFR profile is QSECOFR). Therefore, you must change the passwords for these profiles:
• QSECOFR (security officer)
• QPGMR (programmer)
• QUSER (user)
• QSYSOPR (system operator)
• QSRVBAS (basic service representative)
• QSRV (service representative)
To enter new passwords, sign on as the QSECOFR profile and execute the following command for each of the above user profiles:
CHGUSRPRF USRPRF(user_profile) PASSWORD(new_password) This can also be accomplished using the SETUP menu provided in OS/400. Type GO SETUP and then select the 'Change Passwords for IBM-supplied Users' option (option 11) to work with the panel shown in Figure 2.2. You can assign a password of *NONE (you cannot change QSECOFR password to *NONE), or you can assign new passwords that conform to the password rules you have just implemented. After changing the passwords for the system-supplied profiles, it would be wise to write the new passwords down and store them in a safe place for future reference. Set auto-configuration control. After you have taken steps to secure your system, the next important action concerns the system value QAUTOCFG, which controls device auto-configuration and helps you establish your naming convention. When your system is delivered, the system value QAUTOCFG is preset to 1, which allows the system to configure devices (e.g., terminals) automatically when the power is turned on. The system identifies the device type, creates a description for that device, and assigns a name to the device. Having QAUTOCFG set to 1 is necessary because the AS/400 then configures itself for your initial sign-on session. When the QAUTOCFG system value is set at its default value of 1, auto-configured devices are named according to the standard specified in the system value QDEVNAMING. The possible values for QDEVNAMING are *STD or *S36. If the system value is left at the default value of *STD, the AS/400 assigns device names according to its own standard (e.g., DSP01 and DSP02 for workstations; PRT01 and PRT02 for printers). If the option *S36 is specified, the AS/400 automatically names devices according to S/36 naming conventions (e.g., W1 and W2 for workstations; P1 and P2 for printers). Although automatic configuration gives you an easy way to configure new devices (you can plug in a new terminal, attach the cable, and -- 'Poof!' -- the system configures it), it can frustrate your efforts to establish a helpful naming convention for your new machine. Therefore, after the system has been IPLed and the initial configuration is complete, you should reset the value of QAUTOCFG to 0, which instructs the system not to auto-configure devices. You can reset auto-configuration by executing the command
CHGSYSVAL SYSVAL(QAUTOCFG) VALUE('0') This change takes effect when you re-IPL the system. (If you haven't done so already, you should re-IPL the system now to put into effect the changes you have made for security level, password rules, and autoconfiguration.) You must now configure devices yourself when needed. Admittedly, configuring devices is much more of a pain than letting the system configure for you. But I recommend this approach because it usually requires more planning, better logic, better structure, a better naming convention, and better documentation. Configuring devices is beyond the scope of this chapter, but the subject is well documented in IBM's AS/400 Device Configuration Guide (SC41-8106). Setting general system values. Several times now, you have set AS/400 system values. A system value is an object type found in library QSYS, and the AS/400 has many of these useful objects to control basic system functions. To further familiarize you with your new system, let's take a look at a few of the most significant system
values. (You can use the WRKSYSVAL (Work with System Values) command to examine and modify system values.) QABNORMSW is not a value that you modify; the system itself maintains the proper value. When your system IPLs, this system value contains a 0 if the previous end of system was NORMAL (meaning you powered the system down and there was no error). However, if the previous end of system was ABNORMAL (meaning there was a power outage that caused system failure, some hardware error that stopped the system, or any other abnormal termination of the system), this system value will be 1. The benefit of this system value is that during IPL, your initial start-up program can check this value. If the value is 1, meaning the previous end of system was ABNORMAL, you might want to handle the IPL and the start-up of the user subsystems differently. QCMNRCYLMT controls the limits for automatic communications recovery. This system value is composed of two numbers. The first number controls how many attempts will be made at error recovery. The second number indicates how many seconds will expire between attempts at recovery. The initial values are '0' '0'. This instructs the system to perform no error recovery when a communications line or control unit fails. If left in this mode, the operator will be prompted with a system message asking whether error recovery should be attempted. The values '5' '5' would instruct the system to attempt recovery five times and wait five seconds between those attempts. Only at the end of those attempts would the operator be prompted with a system message if recovery has not been established. A word about the use of QCMNRCYLMT: If you decide to use the system error recovery by setting this system value, you will add some work overhead to the system, because error recovery has a high priority on the AS/400. In other words, if a communications line or control unit fails and error recovery kicks in, you will see a spike in your response time. If you experience severe communications difficulties, reset this system value to the initial value of '0' '0' and respond manually to the failure messages. QMAXSIGN specifies the number of invalid sign-on attempts to allow before varying that device off. The initial value is 15, but I recommend a value of 3 for tighter security. Setting QMAXSIGN to 3 means that after three unsuccessful attempts at signing onto the system (because of using an invalid user profile or password), the system will disable either the device or user profile being used (the action performed depends upon the value of the QMAXSGNACN system value). You will have to enable the device or user profile again to make it available. QPRTDEV specifies which printer device is the default system printer. When a user profile is created, the output will default to this printer (unless a particular output queue or printer device is specified). The initial value is PRT01. If you have a printer device named SYSPRINT, you can change the value of QPRTDEV to SYSPRINT. These are just a few of the system values available on the AS/400. For a list of system values and their initial values, consult IBM's AS/400 Programming: System Reference Summary (SX41-0028), or its AS/400 Programming: Work Management Guide (SC41-8078). It is worth your time to read about each of these values and determine which ones need to be modified for your particular installation.
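To make the QABNORMSW discussion concrete, here is a minimal sketch of the kind of check an initial start-up program might perform. This fragment is not from the book's figures; the program structure and the message text are assumptions.
PGM
  DCL        VAR(&ABNORM) TYPE(*CHAR) LEN(1)
  RTVSYSVAL  SYSVAL(QABNORMSW) RTNVAR(&ABNORM)  /* '1' = previous end of system was abnormal */
  IF         COND(&ABNORM *EQ '1') THEN(DO)
     /* Previous end was abnormal -- alert the operator, delay or alter */
     /* subsystem start-up, run integrity checks, and so on             */
     SNDMSG     MSG('Previous end of system was abnormal') TOUSR(*SYSOPR)
  ENDDO
ENDPGM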
Establishing Your Work Environment Okay, you have covered a lot of ground so far. You've made the system secure, reset the auto-configuration value, and looked at some general system values. But it's not time for fun and games yet. Now you should establish your work environment. When the system is shipped, your work environment is simple. Memory is divided into the machine pool, subsystem QBASE, and subsystem QSPL. The system uses the machine pool to interface with the hardware. Subsystem QBASE is a memory pool used to execute all the interactive, batch, and communications jobs. QSPL is the spooling subsystem that provides the operating environment (memory and processing priorities and parameters) for programs that read jobs onto job queues to wait for processing and write files from an output queue to an output device. While this simple arrangement is functional, it may not be effective or efficient. For example, if the system value setting the machine pool size is too low, performance is slow; if the value is too high, you waste memory. Thus, you need to customize your work environment for your organization. Let's look at the most important work management objects. QMCHPOOL is the system value that specifies the amount of memory allocated to the machine pool. Examine this value and compare it with the calculated value you arrive at based on the configuration you are operating. Figure 2.3 shows the formula for calculating the machine pool size, and Figure 2.4 shows a sample calculation that
assumes you have an AS/400 with a main storage size of 32 MB, an estimated 150 active jobs, four SDLC communications lines, two controllers on each line, save/restore operations, and one Token-Ring adapter. The resulting machine pool size is 4,918 KB, which you might round off to 5,000 KB. Fudging a little on the calculations won't hurt if you monitor the performance of this pool under normal work loads and adjust either way. (For basic pool performance tuning information, see Chapter 13 of the Work Management Guide).
QBASPOOL specifies the minimum size of the base storage pool. Memory not allocated to any other storage pool stays in the base storage pool. This pool supports system jobs (e.g., SCPF, QSYSARB, QSYSWRK, QSPLMAINT, and subsystem monitors) and system transients (such as file OPEN/CLOSE operations). Enter the WRKSYSSTS (Work System Status) command to see the amount of storage the machine has reserved for these functions (the reserved value will appear on the display as RESERVED). You can use this value as a minimum value for QBASPOOL, but I recommend being a little more generous. For example, if the reserved size is 1,600 KB, you should set the QBASPOOL value higher (a good rule of thumb is to add 400 KB for each activity level QBASPOOL supports) because many more system jobs will be active under normal working conditions. As with QMCHPOOL, monitor the value of QBASPOOL to make sure it remains adequate. QBASACTLVL sets the maximum activity level of the base storage pool. The initial value for QBASACTLVL depends on your AS/400 model. This default value should be adequate; however, if you elect to run batch jobs in this pool (instead of creating a separate private pool for batch processing), you should make sure that you adjust this value to allow for one activity level for each batch job that you will allow to process simultaneously. Monitor the performance of the base pool to determine whether additional memory or another activity level is required. QMAXACTLVL sets the maximum activity level of the system by specifying the number of jobs that can compete at the same time for main storage and processor resources. By examining each subsystem, you can establish the total number of activity levels; this value must at least equal that number or be set higher. I suggest you set the QMAXACTLVL value to five above the total number of activity levels allowed in all subsystems, which will let you increase activity levels for individual subsystems for tuning purposes without having to increase QMAXACTLVL. However, if the number of subsystem activity levels exceeds the value in QMAXACTLVL, the system executes only the number of levels specified in QMAXACTLVL, resulting in unnecessary waiting for your users. Therefore, you must increase QMAXACTLVL if you increase the total number of activity levels in your subsystems or if you add subsystems. QACTJOB is the system value that specifies the initial number of active jobs for which the system should allocate storage during IPL. The amount of storage allocated for each active job is approximately 110 KB (this is in addition to the auxiliary storage allocated due to the QTOTJOB system value, discussed below.) I suggest you set this number to approximately 10 percent above the average number of active jobs (i.e., any user or system job that has started executing but has not ended) that you expect to have on the system. For example, if you have an average of 50 active jobs, set the QACTJOB value at 55. Setting QACTJOB and QTOTJOB to values that closely match your requirements helps the AS/400 correctly allocate resources for your users at the system start-up time instead of continually having to allocate more work space (e.g., for jobs or workstations) and provides more efficient performance. QTOTJOB specifies the initial number of jobs for which the system should allocate auxiliary storage during IPL. 
The number of jobs is the total possible jobs on the system at any one time (e.g., jobs in the job queue, active jobs, and jobs having spooled output in an output queue). QADLACTJ specifies the additional number of active jobs for which the system should allocate storage when the number of active jobs in the QACTJOB system value is exceeded. Setting this value too low may result in delays if your system needs additional jobs, and setting it too high increases the time needed to add the additional jobs.
QADLTOTJ specifies the additional number of jobs for which the system should allocate auxiliary storage when the initial value in QTOTJOB is exceeded. As with QADLACTJ, setting this value too low may result in delays and interruptions when your system needs additional jobs, and setting it too high slows the system when new jobs are added. You will need to document changes to these objects. I suggest you record any commands that change the work management system values (or any other IBM-supplied objects) by keying the same commands into a CL program that can be run each time a new release of the operating system is loaded. This ensures that your system's configuration remains consistent. Establishing your subsystems. Selecting your controlling subsystem is the next task in establishing your work environment. When your system is shipped, the controlling subsystem for operations is QBASE. It supports interactive, batch, and communications jobs in the same memory storage pool. When the system IPLs, QBASE is started and an autostart job also starts the spool subsystem QSPL. This default configuration is simple to manage because these two subsystems are used apart from the machine pool and the base pool. However, I recommend implementing separate subsystems for each type of job to provide separate memory pools for each activity. One memory pool can support all activities. But when long-running batch jobs run with interactive workstations that compete for the same memory, system performance is poor and the fight for activity levels and priority becomes hard to manage. My experience with AS/400s has taught me that establishing separate subsystems for batch, interactive, and communications jobs gives you much more control. Using QCTL as the controlling subsystem establishes separate subsystems for batch, interactive, and communications jobs and can be the basis for various customized subsystems. Use the following command to change the controlling subsystem from QBASE to QCTL:
CHGSYSVAL SYSVAL(QCTLSBSD) VALUE('QCTL QGPL') or you can use the WRKSYSVAL command to modify the system value. (The above CHGSYSVAL command changes the value of QCTLSBSD, which is the system value that specifies what the controlling subsystem will be.) This will be effective after the next IPL. Although the QCTL subsystem only supports sign-on at the console, QCTL also begins an auto-start job at IPL. The auto-start job then starts four system-supplied subsystems: QINTER, QBATCH, QCMN, and QSPL (the descriptions for these subsystems are in the QGPL library). The QINTER subsystem supports interactive jobs, QBATCH supports batch jobs, QCMN supports communication jobs, and QSPL still supports its normal functions as the spooling subsystem. You can thus allocate memory to each subsystem based on the need for each type of job and set appropriate activity levels for each subsystem. No system values control the memory pools and activity levels for individual subsystems, but the subsystem description contains the parameters that control these functions. For example, when you create a subsystem description with the CRTSBSD (Create Subsystem Description) command, you must specify the memory allocation and the number of activity levels. You can find more information about subsystem descriptions in Chapters 17, 18, and 19, and in the Work Management Guide, and more information about the CRTSBSD command in the Control Language Reference (SC41-0030). Making QCTL the controlling subsystem will also help if you decide to create your own subsystems. For instance, if your system supports large numbers of remote and local users, you may want to further divide the QINTER subsystem into one subsystem for remote interactive jobs and another for local interactive jobs. Thus, you can establish appropriate execution priorities, time slices, and memory allocations for each type of job and greatly improve performance consistency. Retrieving and modifying the start-up program QSTRUP. When you IPL your system, the controlling subsystem QBASE or QCTL, whichever you decide to use, submits an auto-start job that runs the program specified in the system value QSTRUPPGM. The initial value for that system value is QSTRUP QSYS. This program starts the appropriate subsystems and starts the print writers on your system. However, you may want to modify QSTRUP to perform custom functions. For instance, you may have created additional subsystems that need to be started at IPL, or you may want to run a job that checks the QABNORMSW system value each time the system is started. Retrieve the CL source code for QSTRUP (Figure 2.5) by executing the command
RTVCLSRC PGM(QSYS/QSTRUP) SRCFIL(QGPL/QCLSRC)
After retrieving the source, use the SEU editor to change QSTRUP to perform other start-up functions for you. Figure 2.6 shows a sample user-modified start-up program that uses QCTL as the controlling subsystem for the additional subsystems of QPGMR, QREMOTE, and QLOCAL. The sample program also checks the status of the QABNORMSW system value. Once you have modified QSTRUP, recompile the program into library QSYS under a different name or to a different library. (I suggest you leave the program in library QSYS, just in case someone deletes the library that contains your new start-up program.) Then change QSTRUPPGM to use your new program. Make sure you test your new start-up program before replacing the original program.
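Because Figure 2.6 is not reproduced here, the following is only a sketch of what a user-modified start-up program along those lines might look like. The subsystems QPGMR, QREMOTE, and QLOCAL (assumed to live in QGPL) come from the example in the text; everything else is an assumption, not IBM's shipped QSTRUP.
PGM
  DCL        VAR(&ABNORM) TYPE(*CHAR) LEN(1)
  MONMSG     MSGID(CPF0000)              /* ignore errors so start-up continues     */
  STRSBS     SBSD(QSPL)                  /* spooling subsystem                      */
  STRSBS     SBSD(QINTER)                /* interactive jobs                        */
  STRSBS     SBSD(QBATCH)                /* batch jobs                              */
  STRSBS     SBSD(QCMN)                  /* communications jobs                     */
  STRSBS     SBSD(QGPL/QPGMR)            /* user-created subsystems from the text's */
  STRSBS     SBSD(QGPL/QREMOTE)          /* example                                 */
  STRSBS     SBSD(QGPL/QLOCAL)
  STRPRTWTR  DEV(*ALL)                   /* start the print writers                 */
  RTVSYSVAL  SYSVAL(QABNORMSW) RTNVAR(&ABNORM)
  IF         COND(&ABNORM *EQ '1') THEN( +
                SNDMSG MSG('Previous end of system was abnormal') TOUSR(*SYSOPR))
ENDPGM
As the text advises, compile the modified program under a different name (leaving it in QSYS is safest), point QSTRUPPGM at it only after testing, and keep the original QSTRUP untouched.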
Now What?
Chapter 3 - Access Made Easy If you have followed my recommendations about AS/400 setup to this point, you've carefully planned for installation, education, migration, security, backup, and recovery before you ever received your system. You've established consistent and meaningful naming conventions for system objects and have established your work environment. Now that you have powered on the AS/400, it's time to start thinking about putting it to work. The next step is to set up user profiles. IBM supplies a few user profiles with which to maintain the AS/400, such as QSECOFR (Security Officer), QDFTOWN (Default Owner), and QSRV (Service Profile used by the Customer Engineer). In addition to these profiles, you need profiles for your users so they can sign on to the system and access their programs and data. For this aspect of setting up your AS/400, you first need to understand user profiles and their attributes. With that knowledge you can, if you wish, turn over to a program the job of creating profiles for your users.
What Is a User Profile? To the AS/400, a user profile is an object. While the object's name (e.g., WDAVIS or PGMR0234) is what you normally think of as the user profile, a user profile is much more than a name. The attributes of a user profile object define the user to the system, enabling it to establish a custom initial session (i.e., job) for that user at signon. To make the best use of user profiles, you must understand those attributes and how they can help you control access to your system. You create a user profile using the CRTUSRPRF (Create User Profile) command. Only the security officer profile (QSECOFR) or another profile that has *SECADM (security administrator) special authority can create, change, or delete user profiles. You should restrict authority to the CRTUSRPRF (Create User Profile), CHGUSRPRF (Change User Profile), and DLTUSRPRF (Delete User Profile) commands to those responsible for the creation and maintenance of user profiles on your system. The CRTUSRPRF and CHGUSRPRF commands have a parameter for each user profile attribute. If you prompt the CRTUSRPRF command and then press F10, the system will display the command's parameters (Figure 3.1). But before you create any user profiles, you should first decide how to name them. In Chapter 1, I stressed the importance of developing a strategic naming convention for user profiles. Once you have performed this task, you are ready to create a user profile for each person who needs access to your system.
Creating User Profiles Figure 3.1 represents all the available parameters for creating a user profile. Except for the user profile name (USRPRF) parameter, each parameter has a default value that will be accepted unless you supply a specific value to override that default. Following are the key user profile parameters that you will frequently change to customize a user profile. USRPRF (User Profile) The first parameter is USRPRF, which contains the user profile name you decided on. This is a required parameter and you will enter the name of the user profile you are creating. PASSWORD (User Password)
As I mentioned in Chapter 1, passwords should be secret, hard to guess, and regularly changed. You cannot ensure that users keep their passwords secret, but you can help make them hard to guess by controlling password format, and you can make sure passwords are changed regularly. This discussion assumes you allow users to select and maintain their own passwords. No one in MIS needs to know user passwords. The AS/400 does not allow even the security officer to view existing passwords. This would violate the first rule of passwords -- that they be secret! The PASSWORD parameter lets you specify a value of *NONE, a value of *USRPRF, or the password itself. *NONE, which means that the user profile cannot sign on to the system, is recommended for group profiles, profiles of users who are on vacation and do not need access for a period of time, users who have been terminated but cannot be deleted at the time of termination, and for other situations in which you want to ensure that a profile is not used. The default value, *USRPRF, dictates that the password be the same as the user profile name. You should not use PASSWORD(*USRPRF); otherwise, you will forfeit the layer of security provided by having a password that differs from the user profile name. You can control the format of passwords by using one or more of the password-related system values discussed in Chapter 2 or by creating your own password validation program (see the discussion of the QPWDVLDPGM system value in IBM's Security Reference manual, SC41-8083). The format you impose should encourage users to create hard-to-guess passwords but should not result in passwords that are so cryptic users can't remember them without writing them down within arm's reach of the keyboard. As I said in Chapter 1, I suggest the following guidelines:
• Enforce a minimum length of at least seven characters (use the QPWDMINLEN system value).
• Require at least one digit (use the QPWDRQDDGT system value).
• Do not allow adjacent numbers in a password (use the QPWDLMTAJC system value).
• Do not allow the same character to be repeated in a password (use the QPWDLMTREP system value).
To ensure that users change their passwords regularly, use system value QPWDEXPITV to specify the maximum number of days a password will remain valid before requiring a change. A good value for QPWDEXPITV is 60 or 90 days, which would require all users system-wide to change passwords every two or three months. You can specify a different password expiration interval for selected individual profiles using CRTUSRPRF's PWDEXPITV parameter, which I'll discuss later in this chapter. PWDEXP (Set Password to Expired) The PWDEXP parameter lets you set the password for a specific user profile to the expired state. When you create new user profiles, you may want to specify PWDEXP(*YES) to prompt new users to choose a secret password the first time they sign on. The same is true when you reset passwords for a user who has forgotten theirs. STATUS (Profile Status) This parameter specifies whether a user profile is enabled or disabled for sign-on. When the value of STATUS is *ENABLED, the system allows the user to sign on to the system. If the value is *DISABLED, the system does not allow the user to sign on until an authorized user re-enables it (changes the value to *ENABLED). The primary use of this parameter is in conjunction with the QMAXSGNACN system value. If QMAXSGNACN is set to 2 or 3, the system will disable a profile that exceeds the maximum number of invalid sign-on attempts (the QMAXSIGN system value determines the maximum number of sign-on attempts allowed). When a user profile is disabled, the system changes the value of STATUS to *DISABLED. An authorized user must reset the value to *ENABLED before the user profile can be used again. USRCLS (User Class) and SPCAUT (Special Authority) These two parameters work together to specify the special authorities granted to the user. Special authorities allow users to perform certain system functions, such as save/restore functions, job manipulation, spool file manipulation, and user profile administration (see the discussion of user classes and special authorities in Chapter 1). The USRCLS parameter lets you classify users by type. Figure 3.2 shows the five classes of user recognized on the AS/400: *SECOFR (security officer), *SECADM (security administrator), *PGMR (programmer), *SYSOPR
(system operator), and *USER (user). These classes represent the groups of users that are typical for an installation. By specifying a user class for each user profile, you can classify users based upon their role on the system. When you assign user profiles to classes, the profiles inherit the special authorities associated with their class. Figure 3.2 also shows the default special authorities associated with each user class under security levels 30, 40, and 50. While you can override these special authorities using the SPCAUT (Special Authority) parameter, often the default authorities are sufficient. The default for the SPCAUT parameter is *USRCLS, which instructs the system to refer to the user class parameter and assign the predetermined set of special authorities that appear in Figure 3.2. You can override this default by typing from one to five individual special authorities you want to assign to the user profile. After sending a message that the special authorities assigned do not match the user class, the system will create the user profile as you requested. Here are two examples:
CRTUSRPRF USRPRF(B12ICJES) PASSWORD(password) USRCLS(*PGMR)
User profile B12ICJES will have *SAVSYS and *JOBCTL special authorities.
CRTUSRPRF USRPRF(B12ICJES) PASSWORD(password) USRCLS(*PGMR) SPCAUT(*NONE)
In this case, user profile B12ICJES will be in the *PGMR class but will have no special authorities. Figure 3.3 lists the values allowed for the SPCAUT parameter and what each means. Special authorities should be given to only a limited number of user profiles because some of the functions provided are powerful and exceed normal object authority. For instance, *ALLOBJ special authority gives the user unlimited access to and control over any object on the system -- a user with *ALLOBJ special authority can perform any function on any object on your system. The danger in letting that power get into the wrong hands is clear. Generally speaking, no profile other than QSECOFR should have *ALLOBJ authority. This is why the security level of any development or production machine should be at least 30, where resource security and *ALLOBJ special authority can be controlled with confidence. Your security implementation should be designed so it does not require *ALLOBJ authority to administer most functions. Reserve this special authority for QSECOFR, and use that profile to make any changes that require that level of authority. The *SECADM special authority is helpful in designing a security system that gives users no more authority than they need to do their job. *SECADM special authority enables the user profile to create and maintain the system user profiles and to perform various administrative functions in OfficeVision/400. Using *SECADM, you can assign an individual to perform these functions without having to assign that profile to the *SECOFR user class. The *SAVSYS special authority lets a user profile perform save/restore operations on any object on the system without having the authority to access or manipulate those objects. *SAVSYS shows clearly how the AS/400 lets you grant only the authority a user needs to do a job. What would it do to your system security if your operations staff needed *ALLOBJ special authority to perform save/restore operations? If that were the case, system operators could access such sensitive information as payroll and master files. *SAVSYS avoids that authorization problem while providing operators with the functional authority to perform save/restore operations. *SERVICE is another special authority that should be guarded. Having *SERVICE special authority enables a user profile to use the System Service Tools. These tools provide the capability to trace data on communications lines and actually view user profiles and passwords being transferred down the line when someone signs on to the system. These tools also provide the capability to display or alter any object on your system. So be stingy with *SERVICE special authority. The QSRV, QSRVBAS, and QSECOFR profiles provided with OS/400 have *SERVICE authority. You should check whether or not your systems still have the default passwords for the system profile QSRV or QSRVBAS. If
they do, change the passwords to *NONE, and assign a password only when a Customer Engineer needs to use one of these profiles.
Initial Sign-On Options CURLIB (Current Library) INLPGM (Initial Program) INLMNU (Initial Menu) LMTCPB (Limit Capabilities) Three user profile parameters work together to determine the user's initial sign-on options. The CURLIB, INLPGM, and INLMNU parameters determine the user profile's current library, initial program, and initial menu, respectively. Why are these parameters significant to security? They establish how the user interacts with the system initially, and the menu or program executed at sign-on determines the menus and programs available to that user. Let's look at a couple of examples: Example 1 Consider the user profile USER, which has the following values:
Current library . . . . . . . . . CURLIB    ICLIB
Initial program to call . . . . . INLPGM    *NONE
  Library . . . . . . . . . . . .
Initial menu  . . . . . . . . . . INLMNU    ICMENU
  Library . . . . . . . . . . . .           ICLIB
When USER signs on to the system, the current library is set to ICLIB and the user receives menu ICMENU in library ICLIB. Any other menus or programs that can be accessed through ICMENU and to which USER is authorized are also available. Example 2
Current library . . . . . . . . . CURLIB    ICLIB
Initial program to call . . . . . INLPGM    ICUSERON
  Library . . . . . . . . . . . .           SYSLIB
Initial menu  . . . . . . . . . . INLMNU    *SIGNOFF
  Library . . . . . . . . . . . .
When USER signs on to the system, ICLIB is the current library in the library list, and program ICUSERON in library SYSLIB is executed. Again, any other menus or programs accessible through ICUSERON and to which the user is authorized are also available. The value of *SIGNOFF for the INLMNU parameter is worth some discussion. When a user signs on, OS/400 executes the program, if any, specified in the INLPGM parameter. If the user or user program has not actually executed the SIGNOFF command when the initial program ends, the system executes the menu, if any, specified in parameter INLMNU. Thus, if the default value MAIN were given for INLMNU and program SYSLIB/ICUSERON were to end without signing the user off, the system would present the main menu. When *SIGNOFF is the value for INLMNU, OS/400 signs the user off the system. The CURLIB, INLPGM, and INLMNU parameters are significant to security because users can modify their value at sign-on. Users can also execute OS/400 commands from the command line provided on AS/400 menus. Obviously, allowing all users these capabilities is not a good idea from a security point of view, and this is where the LMTCPB parameter enters the picture. LMTCPB controls the user's ability to
• define (using the CHGUSRPRF command) or change (at sign-on) his own initial program,
• define (using the CHGUSRPRF command) or change (at sign-on) his own initial menu,
• define (using the CHGUSRPRF command) or change (at sign-on) his own current library,
• define (using the CHGUSRPRF command) or change (at sign-on) his own attention key program,
• execute OS/400 or user-defined commands from the command line on AS/400 native menus.
Figure 3.4 shows the effect of the possible values for the LMTCPB parameter. You will notice that LMTCPB(*YES) prevents changing any of these values or executing any commands.
Production systems usually enforce LMTCPB(*YES) for most user profiles. The profiles that typically need LMTCPB(*NO) are MIS personnel who frequently use the command line from OS/400 menus. These user profiles can still be secured from sensitive data using resource security. Although you could specify LMTCPB(*PARTIAL) for those MIS personnel and thus ensure that they cannot change their initial program, they could still change their initial menu, which would be executed at the conclusion of the initial program.
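As a quick illustration of these settings (the profile names here are hypothetical), a typical end user might be fully limited while an MIS profile is only partially limited:
CHGUSRPRF  USRPRF(B01JPD)  LMTCPB(*YES)      /* end user: no command line; cannot change initial program, menu, or current library */
CHGUSRPRF  USRPRF(C01WGM)  LMTCPB(*PARTIAL)  /* MIS user: initial program and current library locked, but the initial menu and command line remain available */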
System Value Overrides DSPSGNINF (Display Sign-on Information) PWDEXPITV (Password Expiration Interval) LMTDEVSSN (Limit Device Sessions) The system values QDSPSGNINF, QPWDEXPITV, and QLMTDEVSSN can be overridden through user profile parameters that control these functions. You'll notice that each of these parameters has a default value of *SYSVAL. The default lets the system value control these functions. To override the system values, specify the desired values in the user profile parameters. The available choices are the same as those for the system values themselves.
Group Profiles GRPPRF (Group Profile) OWNER (Owner) GRPAUT (Group Authority) All the parameters discussed to this point are used to define profiles for individual users. The GRPPRF, OWNER, and GRPAUT parameters let you associate an individual with a group of user profiles via a group profile. When you authorize a group profile to objects on the system, the authorization applies to all profiles in the group. How is this accomplished? You create a user profile for the group. The group profile should specify PASSWORD(*NONE) to prevent it from actually being used to sign on to the system -- all members of the group should sign on using their own individual profiles. For instance, you might create a profile called DEVPGMR to be the group profile for your programming staff. Then for each user profile belonging to a member of the staff, use the CHGUSRPRF command and the GRPPRF, OWNER, and GRPAUT parameters to place them in the DEVPGMR group. The GRPPRF parameter names the group profile with which this user profile will be associated. If you create the group profile DEVPGMR, you would specify DEVPGMR as the GRPPRF value for the user profiles you put into that group. The OWNER parameter specifies who owns the objects created by the group profile. The parameter value determines whether the user profile or the group profile will own the objects created by profiles that belong to the group. There is an advantage to having the group profile own all objects created by its constituent user profiles. When the group profile owns the objects, then every member of the group has *ALL authority to the objects. This
is helpful, for instance, in a programming environment where more than one programmer works on the same projects. However, there is a way to provide authority to group members without giving them *ALL authority. If you specify OWNER(*USRPRF), individual user profiles own the objects they create. If a user profile owns an object, the group profile and other members in the group have only the authority specified in the GRPAUT parameter to the object. The GRPAUT parameter specifies the authority to be granted to the group profile and to members of the group when *USRPRF is specified as the owner of the objects created. Valid values are *ALL, *CHANGE, *USE, *EXCLUDE, and *NONE. The first four of these values are authority classes, each of which represents a set of specific object and data authorities that will be granted; these values are discussed in detail in Chapter 4 as part of the discussion of specific authorities. If you specify one of the authority class values for the GRPAUT parameter, the individual user profile that creates an object owns it, and the other members of the group, including the group profile, have the specified set of authorities to the object. *NONE is the value required when *GRPPRF is specified as the owner of objects created by the user. Because the group profile automatically owns the object, all members of the group will share that authority. JOBD (Job Description) The JOBD parameter on the CRTUSRPRF command determines the job description associated with the user profile. The job description specifies a set of attributes that determine how the system will process the job. Not only is the job description you specify used when the user profile submits a batch job to the system, but values in the job description determine the attributes of the user profile's workstation session. For instance, the initial library list that you specify for the job description becomes the user portion of the library list for the workstation session. If you don't specify a particular job description for the user profile on the JOBD parameter, the system defaults to JOBD(QDFTJOBD), an IBM-supplied job description that uses the QUSRLIBL system value to determine the user portion of the library list. The JOBD parameter does not affect any other portion of the library list. After the user profile signs on, the initial program can manipulate the library list. One way to manage the user portion of the library list is to use QUSRLIBL to establish all user libraries. Then when someone signs on to the system, QUSRLIBL supplies all possible libraries, and users can always find the programs and data they need. However, this approach disregards security because it lets all users access all libraries, even those they don't need. Another approach to setting up user libraries is to create a job description for each user type on the system. Then when you create the user profile, you can specify the appropriate job description for the JOBD parameter, and that job description's library list becomes the user library list when that profile signs on to the system. The approach I recommend is to specify only general-purpose user libraries in QUSRLIBL. These libraries should contain only general utility programs (e.g., date routines, extended math functions, a random number generator). Each profile's initial program should then add only the application libraries needed by that particular user profile. You can use department name or some other trigger kept in a database file to determine library need. 
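Here is a minimal sketch of the kind of initial program that approach implies; the department codes, library names, and the idea of reading the department from a file are assumptions for illustration:
PGM
  DCL        VAR(&DEPT) TYPE(*CHAR) LEN(2)
  MONMSG     MSGID(CPF2103)     /* ignore 'library already in library list' */
  /* ... retrieve &DEPT for the current user, for example by reading a      */
  /*     department field from a user-information database file ...         */
  IF         COND(&DEPT *EQ 'OE') THEN(ADDLIBLE LIB(OELIB))   /* order entry       */
  IF         COND(&DEPT *EQ 'IC') THEN(ADDLIBLE LIB(ICLIB))   /* inventory control */
ENDPGM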
SPCENV (Special Environment) CRTUSRPRF's SPCENV parameter determines which operating environment the user profile is in after signing on. The values for SPCENV are *SYSVAL, *S36, or *NONE. The value *SYSVAL indicates that the system value QSPCENV will be referenced to retrieve the operating environment. If you specify *S36, the user profile will enter the S/36 environment at sign-on. If you specify *NONE, the user profile will be in the native environment at sign-on and the user will have to enter either a STRS36E or CALL QCL command to enter the S/36 or S/38 environment.
Message Handling MSGQ (Message queue) DLVRY (Delivery) SEV (Severity code filter) When you create a user profile, the system automatically creates a message queue by the same name in library QUSRSYS. The user receives job completion messages, system messages, and messages from other system
users via this message queue. Three CRTUSRPRF parameters relate to handling user messages. The MSGQ parameter specifies the message queue for the user. Except in very unusual circumstances, you should use the default value (*USRPRF) for this parameter. If you keep the message queue name the same as the user profile name, system operators and other users can more easily remember the message queue name when sending messages. The DLVRY parameter specifies how the system should deliver messages to the user. The value *BREAK specifies that the message will interrupt the user's job upon arrival. This interruption may annoy users, but it does help to ensure that they notice messages. The value *HOLD causes the queue to hold messages until a user or program requests them. The value *NOTIFY specifies that the system will notify the job of a message by sounding the alarm and displaying the message-waiting light. Users can then view messages at their convenience. The value *DFT specifies that the system will answer with the default reply any message that requires a response; information messages are ignored. The last parameter of the message group, SEV, specifies the lowest severity code of a message that the system will deliver when the message queue is in *BREAK or *NOTIFY mode. Messages of lower severity are delivered to the user profile's message queue but do not sound the alarm or turn on the message-waiting light. The default severity code is 00, meaning that the user will receive all messages. You should usually leave the SEV value at 00. But if you do not want certain users, because of their operational responsibilities, for instance, to be bothered by a lot of low-severity messages, you can assign another value (up to 99).
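For example (the profile name is hypothetical), an operator who should notice important messages without being interrupted by routine ones might be set up like this:
CHGUSRPRF  USRPRF(C01OPR)  DLVRY(*NOTIFY)  SEV(40)   /* alarm and message-waiting light only for severity 40 and above; lower-severity messages simply wait on the queue */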
Printed Output Handling PRTDEV (Print Device) OUTQ (Output Queue) The PRTDEV and OUTQ parameters are important to a basic understanding of directing printed output on the AS/400. If the user does not specifically direct a particular spooled output file to an output queue or device via an override statement (i.e., with an OVRPRTF (Override with Printer File) command; the S/36 environment procedure statements PRINT or SET; the OS/400 CHGJOB (Change Job) command; or by naming a specific output queue in a job description or print file), the system directs printed output according to the values of these two parameters. PRTDEV specifies the name of the printer to which output is directed. This might be an actual printer name or the default value of *WRKSTN that instructs the system to get the name of the printer from the workstation device description. Although PRTDEV refers to a specific device, an output queue with the same name as the device specified for PRTDEV must exist on the system. If the device specified does not exist (and thus no output queue exists for that device), and if no output queue is specified in the OUTQ parameter of the user profile, then spooled output is sent to the default system printer specified in the system value QPRTDEV. If the value of PRTDEV is *SYSVAL, output also goes to the default system printer. The OUTQ parameter specifies the qualified name of the output queue the profile will use. Here again the default value of *WRKSTN instructs the system to get the name of the output queue from the workstation device description. The OUTQ parameter takes precedence over the PRTDEV parameter. In other words, if the OUTQ parameter contains the name of a valid output queue (or *WRKSTN refers to an actual output queue), the system ignores the parameter PRTDEV for this user profile and places into the specified output queue any printed output not specifically directed (via an OVRPRTF or CHGJOB command during job execution) to another output queue or printer. When the OUTQ parameter has the value *DEV, the printed output file is placed on the output queue named in the DEV attribute of the current printer file (this attribute is determined by the DEV parameter of the CRTPRTF (Create Printer File), CHGPRTF (Change Printer File), or OVRPRTF (Override Printer File) command). I follow two basic rules to determine who is on the system and to direct printed output. First, I use the user profile to determine who is on the system and the resources (e.g., libraries, menus, programs, authority) that user needs. Regardless of where users sign on to the system, they need to see their own menus, work with their usual objects, and have the same authority they always do. Those resources relate directly to the user's function. Second, I don't direct spooled output by user profile, but by the workstation being used. If a user signs on to a terminal in another department because his or her workstation is broken, spooled output should print according to the user's location. These two rules are good standards for setting up your system, yet they give you the flexibility to handle special cases, such as sending output to a printer that can handle a special form.
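For the special-forms case just mentioned, a job-level override is the usual tool; the printer file and output queue names below are hypothetical:
OVRPRTF    FILE(INVOICE)  OUTQ(FORMSQ)    /* for this job only, send the INVOICE printer file to the special-forms output queue */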
Documenting User Profiles TEXT (Text Description) The last parameter we will look at on the CRTUSRPRF command is TEXT. TEXT gives you 50 characters in which to meaningfully describe the user profile. The information you include and its format should be consistent for each user profile to ensure readability and usability. You can retrieve, print, or display this text to identify who requests a report or uses a program. Before you actually create any user profiles, consider each parameter and develop a plan to best use it. Once you determine your company's needs, devise standards for creating your user profiles. Figure 3.5 creates a sample user profile for an order-entry clerk at branch location 01. Notice that I specified an output queue for the user profile in spite of my rule that the user's location at sign-on should control spooled output. I specified the output queue in this example so the directory will know where to send output when the user is using directory functions such as E-mail, network spooled files, or network messages. With minor changes in the user profile name, output queue, and text, I could use the same code to create user profiles for all order-entry clerks. Before you create your user profiles, it helps to chart the various profile types and the parameter values you will use to create them. Figure 3.6 is a sample table that lists values you could use if your company had order entry, inventory control, accounting, purchasing, MIS operations, and MIS programming departments. A table such as this serves as part of your security strategy and as a reference document for creating user profiles.
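Figure 3.5 itself is not reproduced here, but a profile for an order-entry clerk at branch 01 might look something like the following sketch. The profile name, password, menu, library, group profile, and output queue are assumptions that merely follow the naming ideas in the text:
CRTUSRPRF  USRPRF(B01ABC) PASSWORD(WELCOM1) PWDEXP(*YES)        +
           USRCLS(*USER) LMTCPB(*YES)                           +
           CURLIB(OELIB) INLMNU(OEMENU)                         +
           GRPPRF(USERS) OUTQ(QUSRSYS/B01OEOUTQ)                +
           TEXT('Branch 01 Order Entry - A. B. Carson')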
Maintaining User Profiles After you set up your user profiles, you will need to maintain them as users come and go or as their responsibilities change. You can change a user profile with the CHGUSRPRF (Change User Profile) command. As with CRTUSRPRF, you must have *SECADM special authority to use CHGUSRPRF. The CHGUSRPRF command is the same as the CRTUSRPRF command, except that the CHGUSRPRF command does not have an AUT (authority) parameter, and the parameter default values for CHGUSRPRF are the parameter values you assigned when you executed the CRTUSRPRF command. Typically, you might employ CHGUSRPRF when a user forgets a password. Because the system won't display a password, you would need to use CHGUSRPRF to change the forgetful user's password temporarily and then require the user to choose a new password at the next sign-on. To accomplish this, execute the command
CHGUSRPRF USRPRF(profile_name) PASSWORD(password) PWDEXP(*YES)
This command resets the password to a known value and sets the password expiration to *YES, so that the system prompts the user to choose a new secret password at the next sign-on. It is not uncommon to delete a user profile. When an employee leaves, the security administrator should promptly remove the employee's user profile from the system, or at least set the password to *NONE. To delete a user profile, use the DLTUSRPRF (Delete User Profile) command. This command has been much improved since its introduction in S/38 CPF. Many S/38 shops share a common problem when a user leaves, especially an MIS staff member: The user profile cannot be deleted if it owns any objects. If there are no automated methods for deleting or transferring objects owned by the former user profile, this cleanup process can take several hours. The OS/400 version of the DLTUSRPRF command has a parameter, OWNOBJOPT, that tells the system how to handle any objects owned by the user profile you asked to delete. The system will not delete a profile that owns objects if you specify the default *NODLT for OWNOBJOPT. However, you can specify *DLT to delete those objects. Avoid the option *DLT unless you have used the DSPUSRPRF (Display User Profile) command to identify the owned objects and are sure you want to delete them. Remember: A backup of these objects is an easy way to cover yourself in case of error. The remaining option for OWNOBJOPT is *CHGOWN, which instructs the system to transfer ownership of any objects owned by the profile you want to delete. You must specify the new owner of these objects in the second part of this parameter. For instance, if a programmer owns some objects privately and you want to delete that programmer's profile, you might specify
DLTUSRPRF USRPRF(profile_name) OWNOBJOPT(*CHGOWN MIS)
to transfer ownership of the objects to your MIS group profile. If you write a program to help you maintain user profiles, you may find the RTVUSRPRF (Retrieve User Profile) command helpful. You can use RTVUSRPRF to retrieve into a CL variable one or more of the parameter values associated with a user profile. (See IBM's AS/400 Programming: Control Language Reference (SC41-0030) for details about this command's parameters. You can also prompt this command on your screen and then use the help text to learn more about each variable you can retrieve.) Figure 3.7 shows the prompt screen for RTVUSRPRF. The prompt lists the length of each variable next to the parameter whose value is retrieved in that variable. This command is valid only within a CL program because the parameters actually return variables to the program, and return variables cannot be accepted when you enter a command from an interactive command line. You might use this command to retrieve the user's actual user profile name for testing. For example, the code segment in Figure 3.8 retrieves the current user profile name into the variable &USRPRF and tests the first character to see whether or not it is the letter B. When this condition is met, the code might display a certain menu. Or you could use the test to determine what application libraries to put in the user's library list, based on user location or department.
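Figure 3.8 is not shown here, but a fragment along the lines just described might look like this sketch (the menu name is an assumption):
DCL        VAR(&USRPRF) TYPE(*CHAR) LEN(10)
RTVUSRPRF  USRPRF(*CURRENT) RTNUSRPRF(&USRPRF)        /* current user's profile name */
IF         COND(%SST(&USRPRF 1 1) *EQ 'B') THEN(DO)   /* branch-level profile?       */
   GO         MENU(BRANCHMNU)                          /* e.g., show the branch menu  */
ENDDO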
Flexibility: The CRTUSR Command One option for retaining certain information about past and current user profiles is to use a database file for that purpose. By modifying this database file, you can also use it to automate user profile creation and establish a session when a user signs on to the system. Figure 3.9 shows sample Data Description Specifications for file USRINF. You can use the information in this file not only for audit purposes, but to track authorized users and to establish initial values for programs (such as providing the correct branch location number in an inquiry program) or to identify the user requesting printed output (the name can then be placed on the report). The USRPRF and AUTEDT fields together serve as the primary key. As a result, you can maintain one or more records for every system user profile. Figure 3.10 shows the source for a user-written CRTUSR command. The command processing program (CPP), CRTUSRCL, actually creates the user profile on the AS/400 and calls RPG program CRTUSRR to write a record to file USRINF. The CPP (Figure 3.11) begins by deriving the user's initials from the user's name and putting them into variable &INITS. Then the CPP uses variable &INITS and the variables &LEVEL (user's company level) and &LOCATION to create the user profile name, which is stored in variable &USRPRF. For example, if Jane P. Doe is a branch employee at the Kalamazoo branch office (office number 12), her user profile name becomes B12JPD. For regional employee Jack J. Jones at the Sacramento (20) office, the user profile name is R20JJJ.
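The name-building step just described might look something like this fragment; the variable names mirror those in the text, and the exact lengths are assumptions:
DCL     VAR(&LEVEL)    TYPE(*CHAR) LEN(1)    /* B=branch, R=regional, C=corporate */
DCL     VAR(&LOCATION) TYPE(*CHAR) LEN(2)    /* office number, e.g. '12'          */
DCL     VAR(&INITS)    TYPE(*CHAR) LEN(3)    /* user's initials, e.g. 'JPD'       */
DCL     VAR(&USRPRF)   TYPE(*CHAR) LEN(10)
CHGVAR  VAR(&USRPRF) VALUE(&LEVEL *CAT &LOCATION *CAT &INITS)   /* e.g., 'B12JPD' */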
After concatenating the user profile name, the program concatenates the user's first name, middle initial, and last name and stores the value in variable &NAME, which will be used to create the TEXT parameter for the CRTUSRPRF command. The CPP sets up the TEXT parameter by combining the values from variables &LEVEL (branch, regional, or corporate), &LOCATION, and &NAME, thus providing consistent text for every user profile and making it easy to identify one particular user profile from a list. The next three variables -- &GRPPRF, &LMTCPB, and &USRCLS -- are all determined from the user's department. If the user works in the MIS department, the group profile becomes MIS and variable &LMTCPB is assigned the value *NO. The program further determines an MIS user's class by testing whether the user works in
operations (OP) or on the programming staff (PG) and then assigning the appropriate value (*SYSOPR or *PGMR) to variable &USRCLS. The CPP assigns non-MIS personnel to the USERS group profile and to the *USER user class. Next, if &DEPT is equal to OP or PG, the CPP checks whether or not a personal library and output queue already exist for the user profile being created. If these objects do not exist, the CPP creates them and transfers their ownership to the group profile. The program then creates the user profile by executing the CRTUSRPRF command, substituting the variables established in the program. The CPP requires the user to have *SECADM special authority and authority to the CRTUSRPRF command. If the user does not have these authorities, the program must be compiled with the attribute USRPRF(*OWNER) to adopt the authority of the owner, who does have the proper authorities. If an error occurs during the execution of the CRTUSRPRF command, the global message monitor passes control to label DIAG. When the command is successful, the program calls the RPG program CRTUSRR (Figure 3.12), which establishes the correct date for variable AUTBDT (authorization beginning date) and writes the record to disk. If an error occurs on the WRITE statement, the program sets field OFLAG (output flag) to 1, and control returns to the CPP. The system sends the appropriate message to the requester based on the value of the field &FLAG. Then the diagnostic routine reads any messages from the program queue and sends them to the requester, and the program ends. The CRTUSR command ensures that you create each user profile similarly, according to shop standards. You can create your own CHGUSR and DLTUSR commands and programs to maintain the records in USRINF and to change or delete the user profile on the system. Keep in mind there will be exceptions you will have to handle individually. You should usually use these commands to create and maintain user profiles. Only in an exceptional case should you directly use the OS/400-supplied commands.
Making User Profiles Work for You Whether you create user profiles with CL commands or employ user-written commands, it is important to plan. Careful planning saves literally hundreds of hours during the system's lifetime. If you maintain a database file like USRINF with the appropriate user information, it provides essential historical data for auditing and a way to extract significant information about the user profile during a workstation session. You will have a consistent method for creating and maintaining user profiles, and you can easily train others to create and maintain user profiles for their departments. Moreover, you will be able to retrieve information from file USRINF via a high-level language program; and you can use that information in applications to establish the work environment, library list, and initial menu for a user profile. When you set up your AS/400, take the time to examine your current standards for establishing user profiles, and make your user profiles work for you!
Chapter 4 - The Facts About Public Authorities by Gary Guthrie and Wayne Madden High among the many strengths of the AS/400 and iSeries 400 is a robust resource security mechanism. Resource security defines users’ authority to objects. There are three categories of authority to an object:
• Object authority defines the operations that can be performed on an object. Figure 1A describes object authorities.
• Data authority defines the operations that can be performed on the object’s contents. Figure 1B describes data authorities.
• Field authority defines the operations that can be performed on data fields. Figure 1C describes field authorities.
Figure 1A – Object authorities
*ObjOpr (Object operational): Examine object description; use object as determined by data authorities
*ObjMgt (Object management): Specify security for object; move or rename object; all operations allowed by *ObjAlter and *ObjRef
*ObjExist (Object existence): Delete object; free storage for object; save and restore object; transfer object ownership
*ObjAlter (Object alter): Add, clear, initialize, and reorganize database file members; alter and add database file attributes; add and remove triggers; change SQL package attributes
*ObjRef (Object reference): Specify referential constraint parent
*AutLMgt (Authorization list management): Add and remove users and their authorities from authorization lists
Figure 1B – Data authorities
*Read (Read): Display object’s contents
*Add (Add): Add entries to object
*Upd (Update): Modify object’s entries
*Dlt (Delete): Remove object’s entries
*Execute (Execute): Run a program, service program, or SQL package; locate object in library or directory
Figure 1C – Field authorities
*Mgt (Management): Specify field’s security
*Alter (Alter): Change field’s attributes
*Ref (Reference): Specify field as part of parent key in referential constraint
*Read (Read): Access field’s contents
*Add (Add): Add entries to data
*Update (Update): Modify field’s existing entries
Because of the number of options available, resource security is reasonably complex. It’s important to examine the potential risks — as well as the benefits — of resource security’s default public authority to ensure you maintain a secure production environment.
What Are Public Authorities? Public authority to an object is that default authority given to users who have no specific, or private, authority to the object. That is, the users have no specific authority granted for their user profiles, are not on an authorization list that supplies specific authority, and are not part of a group profile with specific authority. When you create an object, either by restoring an object to the system or by using one of the many CrtXxx (Create) commands, public authorities are established. If an object is restored to the system, the public authorities
stored with that object are the ones granted to the object. If a CrtXxx command is used to create an object, the Aut (Authority) parameter of that command establishes the public authorities that will be granted to the object. Public authority is granted to users in one of several standard authority sets described by the special values *All, *Change, *Use, and *Exclude. Following is a description of each of these values:
• *All — The user can perform all operations on the object except those limited to the owner or controlled by authorization list management authority. The user can control the object’s existence, grant and revoke authorities for the object, change the object, and use the object. However, unless the user is also the owner of the object, he or she can’t transfer ownership of the object.
• *Change — The user can perform all operations on the object except those limited to the owner or controlled by object management authority, object existence authority, object alter authority, and object reference authority. The user can perform basic functions on the object; however, he or she cannot change the attributes of the object. Change authority provides object operational authority and all data authority when the object has associated data.
• *Use — The user can perform basic operations on the object (e.g., open a file, read the records, and execute a program). However, although the user can read and add associated data records or entries, he or she will be prevented from updating or deleting data records or entries. This authority provides object operational authority, read data authority, add data authority, and execute data authority.
• *Exclude — The user is specifically denied any access to the object.
Figure 2A shows the individual object authorities defined by the above authority sets. Figure 2B shows the individual data authorities.
Figure 2A – Individual object authorities
Authority set    Object authorities granted
*All             *ObjOpr, *ObjMgt, *ObjExist, *ObjAlter, *ObjRef
*Change          *ObjOpr
*Use             *ObjOpr
*Exclude         (none)
Figure 2B – Individual data authorities
Authority set    Data authorities granted
*All             *Read, *Add, *Upd, *Dlt, *Execute
*Change          *Read, *Add, *Upd, *Dlt, *Execute
*Use             *Read, *Add, *Execute
*Exclude         (none)
Creating Public Authority by Default When your system arrives, OS/400 offers a means of creating public authorities. This default implementation uses the QCrtAut (Create default public authority) system value, the CrtAut (Create authority) attribute of each library, and the Aut (Public authority) parameter on each of the CrtXxx commands that exist in OS/400. System value QCrtAut provides a vehicle for systemwide default public authority. It can have the value *All, *Change, *Use, or *Exclude. *Change is the default for system value QCrtAut when OS/400 is loaded onto your system. QCrtAut alone, though, doesn’t control the public authority of objects created on the system.
The library attribute CrtAut found on the CrtLib (Create Library) and ChgLib (Change Library) commands defines the default public authority for all objects created in that library. Although the possible values for CrtAut include *All, *Change, *Use, *Exclude, and an authorization list name, the default for CrtAut is *SysVal, which references the value specified in system value QCrtAut. Therefore, when you create a library and don’t specify a value for parameter CrtAut, the system uses the default value *SysVal. The value found in system value QCrtAut is then used to set the default public authority for objects created in the library. You should note, however, that the CrtAut value of the library isn’t used when you create a duplicate object or move or restore an object in the library. Instead, the public authority of the existing object is used. The Aut parameter of the CrtXxx commands accepts the values *All, *Change, *Use, *Exclude, and an authorization list name, as well as the special value *LibCrtAut, which is the default value for most of the CrtXxx commands. *LibCrtAut instructs OS/400 to use the default public authority defined by the CrtAut attribute of the library in which the object will exist. In turn, the CrtAut attribute might have a specific value defined at the library level, or it might simply reference system value QCrtAut to get the value. Figure 3 shows the effect of the new default values provided for the CrtAut library attribute and the Aut object attribute. The lines and arrows on the right show how each object’s Aut attribute references, by default, the CrtAut attribute of the library in which the object exists. The lines and arrows on the left show how each CrtAut attribute references, by default, the QCrtAut system value. The values specified in Figure 3 for the QCrtAut system value, the CrtAut library attribute, and the Aut parameter are the shipped default values. Unless you change those defaults, every object you create on the system with the default value of Aut(*LibCrtAut) will automatically grant *Change authority to the public. (If you use the Replace(*Yes) parameter on the CrtXxx command, the authority of the existing object is used rather than the CrtAut value of the library.) If you look closely at Figure 3, you’ll see that although this method may seem to make object authority easier to manage, it’s a little tricky to grasp. First of all, consider that all libraries are defined by a library description that resides in library QSys (even the description of library QSys itself must reside in library QSys). Therefore, the QSys definition of the CrtAut attribute controls the default public authority for every library on the system (not the objects in the libraries, just the library objects themselves) as long as each library uses the default value Aut(*LibCrtAut). Executing the command
DspLibD QSys

displays the library description of QSys, which reveals that *SysVal is the value for CrtAut. Therefore, if you create a new library using the CrtLib command and specify Aut(*LibCrtAut), users will have the default public authority defined originally in the QCrtAut system value. Remember, at this point the Aut parameter on the CrtLib command is defining only the public authority to the library object. As you can see in Figure 3, for each new object created in a library, the Aut(*LibCrtAut) value tells the system to use the default public authority defined by the CrtAut attribute of the library in which the object will exist. When implementing default public authorities, consider these facts:
• You can use the CrtAut library attribute to determine the default public authority for any object created in a given library, provided the object being created specifies *LibCrtAut as the value for the Aut parameter of the CrtXxx command.
• You can elect to override the *LibCrtAut value on the CrtXxx command and still define the public authority using *All, *Change, *Use, *Exclude, or an authorization list name.
• The default value for the CrtAut library attribute for new libraries will be *SysVal, instructing the system to use the value found in system value QCrtAut (in effect, controlling new object default public authority at the system level).
• You can choose to replace the default value *SysVal with a specific default public authority value for that library (i.e., *All, *Change, *Use, *Exclude, or an authorization list name).
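To see how these defaults chain together, consider the following CL sketch. The library and file names (APPUTIL, DATES) are hypothetical; the commands simply exercise the CrtAut and Aut defaults described above.

   CrtLib     Lib(APPUTIL) Text('Application utilities') CrtAut(*Use) Aut(*Use)
   /* An object created with Aut(*LibCrtAut) picks up *Use from the library's CrtAut */
   CrtPF      File(APPUTIL/DATES) RcdLen(80) Aut(*LibCrtAut)
   /* Confirm the public authority actually granted to the new object                */
   DspObjAut  Obj(APPUTIL/DATES) ObjType(*File)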
Limiting Public Authority
The fact that public authority can be created by certain default values brings us to an interesting point. The existence of default values indicates that they are the “suggested” or “normal” values for parameters. In terms of security, you may want to look at default values differently. Default values that define the public authority for objects created on your system are effective only if planned as part of your overall security implementation. Your first inclination may be to change QCrtAut to *Use or even *Exclude to reduce the amount of public authority given to new libraries and objects. However, let us warn you that doing so could cause problems with some IBM-supplied functions. Another tendency might be to change this system value to *All, hoping that every system object can then be “easily” accessed. Unfortunately, this would be like opening Pandora’s box! Here are a few suggestions for effectively planning and implementing object security for your libraries and the objects in those libraries.
Public Authority by Design The most significant threat of OS/400’s default public authority implementation is the possible misuse of the QCrtAut system value. There is no doubt that changing this system value to *All would simplify security, but doing so would simply eliminate security for new libraries and objects — an unacceptable situation for any production machine. Therefore, leave this system value as *Change.

The first step in effectively implementing public authorities is to examine your user-defined libraries and determine whether the current public authorities are appropriate for the libraries and the objects within those libraries. Then, modify the CrtAut attribute of your libraries to reflect the default public authority that should be used for objects created in each library. By doing so, you’re providing the public authority at the library level instead of using the CrtAut(*SysVal) default, which references the QCrtAut system value. As a general rule, use the level of public authority given to the library object (the Aut library attribute) as the default value for the CrtAut library attribute. This is a good starting point for that library.

Consider this example. Perhaps a library contains only utility program objects that are used by various applications on your system (e.g., date-conversion programs, a binary-to-decimal conversion program, a check object or check authority program). Because all the programs should be available for execution, it’s logical that the CrtAut attribute of this library be set to *Use so that any new objects created in the library will have *Use default public authority.

Suppose the library you’re working with contains all the payroll and employee data files. You probably want to restrict access to this library and secure it by user profile, group profile, or an authorization list. Any new objects created in this library should probably also have *Exclude public authority unless the program or person creating the object specifically selects a public authority by using the object’s Aut attribute. In this case, you would change the CrtAut attribute to *Exclude.

The point is this: Public authority at the library level and public authority for objects created in that library must be specifically planned and implemented — not just implemented by default via the QCrtAut system value.
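For instance, to implement the two scenarios just described, you might set the CrtAut attribute of each library as in this sketch (the library names are hypothetical):

   ChgLib     Lib(APPUTIL) CrtAut(*Use)       /* utility library: new objects usable by everyone   */
   ChgLib     Lib(PAYLIB)  CrtAut(*Exclude)   /* payroll library: new objects excluded by default   */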
Object-Level Public Authority If you follow the suggestions above concerning the QCrtAut system value and the CrtAut library attribute, Aut(*LibCrtAut) will work well as the default for each object you create. In many cases, the level of public authority at the object level coincides with the public authorities established at the library level. However, it’s important to plan this rather than simply use the default value to save time. We hope you now recognize the significance of public authorities and understand the process of establishing them. If you’ve already installed OS/400, examine your user-defined libraries and objects to determine which, if any, changes to public authority are needed.
Chapter 5 - Installing a New Release One task you’ll perform at some time on your AS/400 is installing a new release of OS/400 and your IBM licensed program products. The good news is that this process is “a piece of cake” today compared with the effort it
required back when IBM first announced and delivered the AS/400 product family. No longer must you IPL the system more than a dozen times to complete the installation. When you load a new operating-system release today, you can have the system perform an automatic installation or you can perform a manual installation — and either method normally requires only one machine IPL. To prepare you for today’s approach, here’s a step-by-step guide to planning for and installing a new release of OS/400 and new IBM licensed program products. I cover the essential planning tasks you should accomplish before the installation, as well as the installation process itself.
Planning Is Preventive Medicine Just as planning is important when you install your AS/400 system the first time, planning for the installation of a new release offers the benefits of any preventive medicine — and it’s painless! You’ll no doubt be on a tight upgrade schedule, with little time for unexpected problems. By planning ahead and following the suggestions in this chapter, you can avoid having to tell your manager that the AS/400 will be down longer than expected while you recover the operating system because something was missing or damaged and prevented completing the installation. Before I describe the specific steps that will ensure a successful system upgrade, there’s one other important preventive measure to note: Unless it’s impossible, you should avoid mixing a hardware upgrade and a software upgrade — don’t perform both tasks at the same time. If a new AS/400 model requires a particular release of OS/400 and that release is compatible with your older hardware, first install the new release on your older hardware, and then upgrade your hardware at another time to avoid compounding any problems you might encounter.
The Planning Checklist Every good plan needs a checklist, and the list of steps in Figure 1, below, is your guide in this case. You can find a similar list in IBM’s AS/400 Software Installation (SC41-5120).
Figure 1 Installation planning checklist

Pre-installation-day tasks
Step 1: When you receive the new release, verify your order (make sure you have the correct release, the right products on the media, and software keys for any locked licensed programs), and review the appropriate installation documents shipped with the release. If these documents weren't shipped with the release, you should order them; they may contain additional items you'll need to order before the installation.
Step 2: Determine whether you'll perform the automatic or manual installation.
Step 3: Permanently apply any temporarily applied PTFs.
Step 4: A few days before installing the new release, remove unused objects from the system.
Step 5: Verify disk storage requirements.
Step 6: A few days before installation, document or save changes to IBM-supplied objects.
Step 7: A few days before installation, order the latest cumulative PTF package if you don't have the latest. You should also order the latest appropriate group packages, particularly the HIPER PTF group package.
Step 8: A day before or on the same day as the installation, save the system.

Installation-day tasks
Step 9: If your system participates in a network, resolve any pending database resynchronizations. If your system uses a 3995 optical library, check for and resolve any held optical files.
Step 10: If your system has an active Integrated Netfinity Server for AS/400, deactivate the server.
Step 11: Verify the integrity of system objects (user profiles QSECOFR and QLPINSTALL, as well as the database cross-reference files).
Step 12: Verify and set appropriate system values.
Because IBM makes minor changes and improvements to the installation process for each release of the operating system, each new release means a new edition of the Software Installation manual. To ensure you have the latest information about installing a new release, you should read this chapter along with the manual. Read the chapter entirely to get a complete overview of the process before performing the items on the checklist. Note: If IBM’s instructions conflict with those given here, follow IBM’s instructions.
Step 1: Is Your Order Complete? One of the first things you’ll do is check the materials IBM shipped to you to make sure you have all the pieces you need for the installation. As of this writing, you should receive these items:
• distribution media (normally CD-ROM)
• Media Distribution Report
• Read This First
• Memo to Users for OS/400
• AS/400 PTF Shipping Information Letter
• individual product documentation
• AS/400 Software Installation
Don’t underestimate the importance of each of these items. Examine the CD-ROMs to make sure they’re not physically damaged, and then use the Media Distribution Report to determine whether all listed volumes are actually present. For each item on the CD-ROMs, the Media Distribution Report identifies the version, release, and modification level; licensed program name; feature number (e.g., 5769SS1, 5769RG1); and language feature code. For V4R5, you’ll find the version number listed as V4 (Version 4) in the product name; the release number and modification level are represented as R05M00 (Release 5, Modification Level 0) on the report. Note that the Media Distribution Report lists only priced features. Some features, such as licensed internal code and base OS/400, are shipped at no additional charge. The report contains no entries for these items, nor does it contain entries for locked products.

The Read This First document is just what it sounds like: a document IBM wants you to read before you install the release, and preferably as soon as possible. This document contains any last-minute information that may not have been available for publication in the Memo to Users for OS/400 or in any manual. The Memo to Users for OS/400 describes any significant changes in the new release that could affect your programs or system operations. You can use this memo to prepare for changes in the release. You’ll find a specific section pertaining to licensed programs that you have installed or plan to install on your system.

You’ll want to read the AS/400 PTF Shipping Information Letter for instructions on applying the cumulative program temporary fix (PTF) package. You also may receive additional documentation for some individual products; you should review any such documents because they may contain information unique to a product that could affect its installation.

In addition to reviewing the deliverables listed above, you may want to review pertinent information found in the AS/400 Preventive Service Planning Information document. This document lists additional preventive service planning documents you may want to order. To obtain it, order PTF SF98vrm, where v = version, r = release, and m = modification level for the new release; a sample order appears after the next paragraph. (For information about PTF ordering options, see Chapter 6, “Introduction to PTFs.”)

After reviewing this information, you should verify not only that you can read the CD-ROMs but also that they contain all necessary features. An automated procedure, Prepare for Install (available through an option on the Work with Licensed Programs panel), greatly simplifies this verification process compared with earlier releases, which involved considerable manual effort. The panel in Figure 2, below, shows the installation-preparation procedures supported by Prepare for Install. One of the panel’s options compares the programs installed on your system with those on the CD-ROMs, generating a
list of preselected programs that will be replaced during installation. You can inspect this list to determine whether you have all the necessary features.
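For example, on a V4R5M0 system the PSP order would look something like the following. The identifier SF98450 simply follows the SF98vrm convention described above; verify the correct identifier for your target release before ordering.

   SNDPTFORD  PTFID((SF98450))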
Figure 2 Prepare for Install screen

                             Prepare for Install
                                                             System:   AS400
 Type option, press Enter.
   1=Select

 Opt  Description
  _   Work with user profiles
  _   Work with licensed programs for target release
  _   Display licensed programs for target release
  _   Work with licensed programs to delete
  _   List licensed programs not found on media
  _   Verify system objects
  _   Estimated storage requirements for system ASP

                                                                      Bottom
 F3=Exit   F9=Command line   F10=Display job log   F12=Cancel
To perform this verification, take these steps:
1. Arrange the CD-ROMs in the proper order. Chapter 3 of AS/400 Software Installation contains a table specifying the correct order. You should refer to this table not only for sequencing information but also for any potential special instructions.

2. From the command line, execute the following CHGMSGQ (Change Message Queue) command to put your message queue in break mode:

   CHGMSGQ QSYSOPR *BREAK SEV(95)

3. From the command line, execute

   GO LICPGM

4. You’ll see the Work with Licensed Programs panel. Select option 5 (Prepare for install), and press Enter.

5. Select the option “Work with licensed programs for target release,” and press Enter.

6. You’ll see the Work with Licensed Programs for Target Release panel. You should
   a. load the first CD-ROM
   b. specify 1 (Distribution media) for the Generate list from prompt
   c. specify the appropriate value for the Optical device prompt
   d. specify the appropriate value for the Target release prompt
   e. press Enter
   When the system has read the CD-ROM, you’ll receive a message asking you to load the next volume. If you have more CD-ROMs, load the next volume and reply G to the message to continue processing; otherwise, reply X to indicate that all CD-ROMs have been processed.

7. Once you’ve processed all the CD-ROMs, the Work with Licensed Programs for Target Release panel will display a list of the licensed programs that are on the distribution media and installed on your system. Preselected licensed programs (those with a 1 in the option column) indicate that a product on the distribution media can replace a product installed on your system. You can use F11 to display alternate views that provide more detail and use option 5 (Display release-to-release mapping) to see what installed products can be replaced.

8. Press Enter until the Prepare for Install panel appears.

9. Select the option “List licensed programs not found on media,” and press Enter.

10. You’ll see the Licensed Programs Not Found on Media panel. If no products appear in the panel’s list, you have all the media necessary to replace your existing products. If products do appear in the list, you must determine whether they’re necessary. If they’re not, you can delete them (I describe this procedure later when I talk about cleaning up your system). If the products are necessary, you must obtain them before installation. Make sure you didn’t omit any CD-ROMs during the verification process. If you didn’t omit any CD-ROMs, compare your media labels with the product tables in AS/400 Software Installation and check the Media Distribution Report to determine whether the products were shipped (or should have been shipped) with your order.

11. Exit the procedure.
Step 2: Manual or Automatic? Before installing the new release, you need to determine whether you’ll perform an automated or a manual installation. The automatic installation process is the recommended method and the one that minimizes the time required for installation. However, if you’re performing any of the tasks listed below, you should use the manual installation process instead.
• adding a disk device using device parity protection, mirrored protection, or user auxiliary storage pools (ASPs)
• changing the primary language that the operating system and programs support (e.g., changing from English to French)
• creating logical partitions during the installation
• using tapes created with the SAVSYS (Save System) command
• changing the environment (AS/400 or System/36), system values, or configuration values. These changes differ from the others listed here because you can make them either during or after the new-release installation. To simplify the installation, it’s best to automatically install the release and then manually make these changes.
The automatic installation will install the new release of the operating system and any currently installed licensed program products.
Step 3: Permanently Apply PTFs One step that will save you time later is to permanently apply any PTFs that remain temporarily applied on your system. Doing so cleans up the disk space occupied by the temporarily applied PTFs. That disk space may not be much, but now is an opportune time to perform cleanup tasks. For more specific information about applying PTFs, see Chapter 6.
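If you prefer the command line to the GO PTF menu, a hedged sketch of this step follows; run it once per licensed program and adjust the product IDs to match what is installed on your system. Delayed PTFs become permanently applied at the next IPL.

   APYPTF     LICPGM(5769SS1) SELECT(*ALL) APY(*PERM)   /* permanently apply all OS/400 PTFs */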
Step 4: Clean Up Your System In addition to permanently applying PTFs, you should complete several other cleanup procedures. These tasks not only promote overall tidiness but also help ensure you have enough disk space for the installation. Consider these tasks:
• Delete PTF save files and cover letters. To delete these items, you’ll use command DLTPTF. Typically, you’ll issue this command only for products 5769999 (licensed internal code) and 5769SS1 (OS/400). (A command-line sketch follows this list.)

• Delete unnecessary spooled files, and reclaim associated storage. Check all output queues for unnecessary spooled files. A prime candidate for housing unnecessary spooled files is output queue QEZJOBLOG. After deleting these spooled files, reclaim spool storage using command RCLSPLSTG.

• Have each user delete any unnecessary objects he or she owns. You’d be surprised just how much storage some users can unnecessarily consume. If at all possible, have users perform a bit of personal housekeeping by deleting spooled files and owned objects they no longer need.

• Delete unnecessary licensed programs or optional parts. Some licensed programs may be unnecessary for reasons such as lack of support at the target release. To review candidates for deletion, you can use the Prepare for Install panel’s “Work with licensed programs to delete” option. To reach this option, display menu LICPGM (GO LICPGM) and select option 5 (Prepare for install). The “Work with licensed programs to delete” option preselects licensed programs to delete. You can use F11 (Display reasons) to determine why licensed programs are selected for deletion. I rarely see a system that doesn’t contain unused licensed programs or licensed program parts. For instance, it’s not uncommon to see systems with many unused language dictionaries or unnecessary double-byte character set options. Prepare for Install’s “Work with licensed programs to delete” option won’t preselect such unnecessary options because they are valid options. If for any reason you’re unable to use this procedure to delete licensed programs, you can use option 12 (Delete licensed programs) from menu LICPGM.

• Delete unnecessary user profiles. It’s rarely necessary to delete user profiles as part of installation cleanup, but if this action is appropriate in your environment, consider taking care of it now. The Prepare for install option on menu LICPGM also offers procedures for cleaning up user profiles.

• Use the automatic cleanup options in Operational Assistant. These options provide a general method for tidying your system on a periodic basis.
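As a rough sketch of the first two cleanup items, the commands might look like this. The parameter values shown are assumptions; check the command prompts (F4) on your own system before running them.

   DLTPTF     PTF(*ALL) LICPGM(5769SS1)   /* delete PTF save files and cover letters for OS/400 */
   RCLSPLSTG  DAYS(*NONE)                 /* reclaim unused spool storage immediately           */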
Step 5: Is There Enough Room? Once you’ve cleaned up your system, you should verify that you have enough storage to complete the installation. Like most installation-related tasks today, this one is much easier than in earlier releases. To determine whether you have adequate storage, perform these steps:

1. From the command line, execute

   GO LICPGM

2. You’ll see the Work with Licensed Programs panel. Select option 5 (Prepare for install), and press Enter.

3. Select the option “Estimated storage requirements for system ASP,” and press Enter.

4. You’ll see the Estimated Storage Requirements for System ASP panel. At the Additional storage required prompt, enter storage requirements for any additional software (e.g., third-party vendor software) that you’ll be installing. Include storage requirements only for software that will be stored in the system ASP. Press Enter to continue.

5. You’ll see the second Estimated Storage Requirements for System ASP panel. This panel displays information you can use to determine whether enough storage is available. Compare the value shown for “Storage required to install target release” with the value shown for “Current supported system capacity.” If the value for “Current supported system capacity” is greater than the value for “Storage required to install target release,” you can continue with the installation. Otherwise, you must make additional storage available by removing items from your system or by adding DASD to your system.

6. Exit the procedure.
If you make changes to your system that affect the available storage, you should repeat these steps.
Step 6: Document System Changes When you load a new release of the operating system, all IBM-supplied objects are replaced on the system. The installation procedure saves any changes you’ve made in libraries QUSRSYS (e.g., message queues, output
queues) and QGPL (e.g., subsystem descriptions, job queue descriptions, other work management–related objects). However, any changes you make to objects in library QSYS are lost because all those objects are replaced. To minimize the possible loss of modified system objects, you should document any changes you make to these objects so that you can reimplement them after installing the new release. I strongly suggest maintaining a CL program that contains code to reinstate customized changes, such as command defaults; you can then execute this program with each release update. When possible, implement these customizations in a user-created library rather than in QSYS. Although the installation won’t replace the user-created library’s contents, you should regenerate the custom objects it contains to avoid potential problems. Such problems might occur, for example, if IBM adds a parameter to a command. Unless you duplicate the new command and then apply your customization, you’ll be operating with an outdated command structure. In some cases, this difference could be critical. The CL program that customizes IBM-shipped objects should therefore first duplicate each object (when appropriate) and then change the newly created copy.
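A skeleton of such a customization program might look like the following. The library name CUSTLIB, the choice of the SAVLIB command, and the new default value are hypothetical examples, not values from the book.

   PGM
   /* Re-create a customized copy of an IBM command in a user library          */
   DLTCMD     CMD(CUSTLIB/SAVLIB)
   MONMSG     MSGID(CPF0000)              /* ignore "not found" on first run   */
   CRTDUPOBJ  OBJ(SAVLIB) FROMLIB(QSYS) OBJTYPE(*CMD) TOLIB(CUSTLIB)
   /* Reapply the shop's command-default customization to the fresh copy       */
   CHGCMDDFT  CMD(CUSTLIB/SAVLIB) NEWDFT('ENDOPT(*UNLOAD)')
   ENDPGM

Because the program changes a duplicate in CUSTLIB rather than the QSYS original, rerunning it after each release update picks up any new parameters IBM adds to the command.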
Step 7: Get the Latest Fixes Normally, some time passes between the time you order and receive a new release and the date when you actually install it. During this elapsed time, PTFs to the operating system and licensed program products usually become available. To ensure you have the latest of these PTFs during installation, order PTFs for the new release the week before you install the release. Obtain the latest cumulative PTF package and appropriate group packages. Of the group packages, you should at least order the HIPER group package. (IBM releases HIPER, or High-Impact PERvasive, PTFs regularly — often daily — as necessary to correct high-risk problems.) For more information about ordering PTFs, see Chapter 6.
Step 8: Save Your System Just before installing the new release (either on installation day or the day before), you should save your system. To be safe, I recommend performing a complete system save (option 21 from the SAVE menu), but this isn’t a requirement. I advise performing at least these two types of saves:
• SAVSYS — saves OS/400 and configuration and security information
• SAVLIB LIB(*IBM) — saves all IBM product libraries
It’s also wise to schedule the installation so that it immediately follows your normally scheduled backup of data and programs. This approach guarantees that you have a current copy of all your most critical information in case any problems with the new installation require you to reinstall the old data and programs.
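As a sketch, the minimum save might look like this; TAP01 is a hypothetical device name, and SAVSYS requires the system to be in a restricted state.

   ENDSBS     SBS(*ALL) OPTION(*IMMED)    /* put the system in restricted state   */
   SAVSYS     DEV(TAP01)                  /* OS/400, configuration, and security  */
   SAVLIB     LIB(*IBM) DEV(TAP01)        /* IBM product libraries                */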
Installation-Day Tasks Once you’ve completed step 8, you’re nearly ready to start installing the new AS/400 release. The remaining steps (9 through 12) are best performed on the day of the installation (if they apply in your environment). They, together with the installation process itself, are the focus of the remainder of this chapter. (If you’ll be using a tape drive on installation day, see “Installing from Tape?” (below) for some additional tips.)
Step 9: Resolve Pending Operations First, if your system participates in a network and runs applications that use two-phase commit support, you should resolve any pending database resynchronizations before starting the installation. Two-phase commit support, used when an application updates database files on more than one system, ensures that the databases remain synchronized. To determine whether your system uses two-phase commit support, issue the following WRKCMTDFN (Work with Commitment Definitions) command:
WRKCMTDFN JOB(*ALL) STATUS(*RESYNC)
If the system responds with a message indicating that no commitment definitions are active, you need do nothing further. Because the typical AS/400 environment isn’t concerned with two-phase commit support, I don’t provide details about database resynchronization here. For this information, refer to AS/400 Software Installation (SC41-5120).

Next, if your system has a 3995 optical library, check for and resolve any held optical files — that is, files that haven’t yet been successfully written to media. Use the WRKHLDOPTF (Work with Held Optical Files) command to check for such files and either save or release the files.
Step 10: Shut Down the INS If your system has an active Integrated Netfinity Server for AS/400 (INS), the installation may fail. You should therefore deactivate this server before starting the installation. To do so, access the Network Server Administration menu (GO NWSADM) and select option 3.
Step 11: Verify System Integrity You should also verify the integrity of system objects required by the installation process. Among the requirements for the installation process are
• System distribution directory entries must exist for user profiles QSECOFR and QLPINSTALL.
• Database cross-reference files can’t be in error.
• User profile QSECOFR can’t contain secondary language libraries or alternate initial menus.
To verify the integrity of these objects, you can use the Prepare for install option on menu LICPGM. This option adds user profiles QSECOFR and QLPINSTALL to the system distribution directory if necessary and checks for errors in the database cross-reference files. To use the option, follow these steps:

1. From the command line, execute command GO LICPGM.

2. The Work with Licensed Programs panel will appear. Select option 5 (Prepare for install), and press Enter.

3. From the resulting panel (Figure 3, below), select the Verify system objects option, and press Enter.

4. If errors exist in the database cross-reference files, the system will issue message “CPI3DA3 Database cross-reference files are in error.” Follow the instructions provided by this message to resolve the errors before continuing.

5. Exit the procedure.
Figure 3 Prepare for Install screen

                             Prepare for Install
                                                             System:   AS400
 Type option, press Enter.
   1=Select

 Opt  Description
  _   Work with user profiles
  _   Work with licensed programs for target release
  _   Display licensed programs for target release
  _   Work with licensed programs to delete
  _   List licensed programs not found on media
  _   Verify system objects
  _   Estimated storage requirements for system ASP

                                                                      Bottom
 F3=Exit   F9=Command line   F10=Display job log   F12=Cancel
A couple of items remain to check before you’re finished with this step. If you’re operating in the System/36 environment, check to see whether user profile QSECOFR has a menu or program specified. If so, you must remove the menu or program from the user profile before installing licensed programs. Also, user profile QSECOFR can’t have a secondary language library (named QSYS29xx) at a previous release in its library list when you install a new release. If QSECOFR has an initial program, ensure that the program doesn’t add a secondary language library to the system library list.
Step 12: Check System Values Your next step is to check and set certain system values. Remove from system values QSYSLIBL (System Library List) and QUSRLIBL (User Library List) any licensed program libraries and any secondary language libraries (QSYS29xx). Do not remove library QSYS, QUSRSYS, QGPL, or QTEMP from either of these system values. In addition, set system value QALWOBJRST (Allow Object Restore) to *ALL. Once the installation is complete, reset the QALWOBJRST value to ensure system security.
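A command-line sketch of this step follows; review the displayed values before changing anything, and record the original QALWOBJRST setting so you can restore it after the installation.

   DSPSYSVAL  SYSVAL(QSYSLIBL)            /* check for licensed program or QSYS29xx libraries */
   DSPSYSVAL  SYSVAL(QUSRLIBL)
   DSPSYSVAL  SYSVAL(QALWOBJRST)          /* note the current value for later                 */
   CHGSYSVAL  SYSVAL(QALWOBJRST) VALUE('*ALL')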
Ready, Set, Go! With the planning behind you, you’re ready to install your new release! The rest of this chapter provides basic instructions for the automatic installation procedure, which is the recommended method. If you must use the manual method (based on the criteria stated in planning step 2), see AS/400 Software Installation for detailed instructions about this process. When you perform an automatic installation of a new release of the operating system and licensed program products, the process retains the current operating environment (AS/400 or System/36), system values, and configuration while replacing these items:
• IBM licensed internal code
• OS/400 operating system
• licensed programs and optional parts of licensed programs currently installed on your system
• language feature code on the distribution media that’s installed as the primary language on the system
If, during the installation process, the System Attention light on the control panel comes on, you should refer to Chapter 5 of AS/400 Software Installation for a list of system reference codes (SRCs) and instructions about how to continue. The only exception is if the attention light comes on and the SRC begins with A6. The A6 codes indicate that the system is waiting for you to do something, such as reply to a message or make a device ready.

To install the new release, take the following steps.

Step 1. Arrange the CD-ROMs in the order you’ll use them.

Step 2. Load the CD-ROM that contains the licensed internal code. Wait for the CD-ROM In-Use indicator to go out.

Step 3. At the control panel, set the mode to Normal.

Step 4. Execute the following PWRDWNSYS (Power Down System) command:
PWRDWNSYS *IMMED RESTART(*YES) IPLSRC(D)
This command will start an IPL process. Note that SRC codes will continue to appear in the display area of the control panel.

Step 5. You’ll see the Licensed Internal Code – Status panel. Upon 100 percent completion of the install, the display may be blank for approximately five minutes and the IPL in Progress panel may appear. You needn’t respond to any of these panels.

Step 6. Load the next volume when prompted to do so. You’ll receive this prompt several times during the installation process. After loading the volume, you must respond to the prompt to continue processing. The response value you specify depends on whether you have more volumes to process: A response of G instructs the installation process to continue with the next volume, and a response of X indicates that no more volumes exist.

Step 7. Next, the installation process loads the operating system followed by licensed programs. During this process, you may see panels with status information. One of these panels, Licensed Internal Code IPL in Progress, lists several IPL steps, some of which can take a long time (two hours or more). The amount of time needed depends on the amount of recovery your system requires. As the installation process proceeds, you needn’t respond to the status information panels you see. Once all your CD-ROMs have been read, be prepared to wait for quite some time while the installation process continues. The process is hands-free until the Sign On panel appears.

Step 8. When installation is complete, you’ll see the Sign On panel. If you receive the message “Automatic installation not complete,” you should sign on using the QSECOFR user profile and refer to Appendix A, “Recovery Procedures,” in AS/400 Software Installation for instructions about how to proceed. If the automatic installation process was completed normally, sign on using user profile QSECOFR and continue by verifying the installation, loading additional products, loading PTFs, and updating software license keys.

Verify the installation. To verify the installation, execute the GO LICPGM command. On the Work with Licensed Programs display, choose option 50 (Display log for messages). The Display Install History panel (Figure 4, below) will appear. Press Enter on this panel, and scan the messages found on the History Log Contents display. If any messages indicate a failure or a partially installed product, refer to “Recovery Procedures” in AS/400 Software Installation.
Figure 4 Display Install History screen

                           Display Install History

 Type choices, press Enter.

   Start date . . . . . .   07/17/00     MM/DD/YY
   Start time . . . . . .   09:32:35     HH:MM:SS
   Output . . . . . . . .   *______      *, *PRINT

 F3=Exit   F12=Cancel
 (C) COPYRIGHT IBM CORP. 1980, 1998.
Next, verify the status and check the compatibility of the installed licensed programs. To do so, use option 10 (Display licensed programs) from menu LICPGM to display the release and installed status values of the licensed programs. A status of *COMPATIBLE indicates a licensed program is ready to use. If you see a different status value for any licensed program, refer to the “Installed Status Values” section of Appendix E in AS/400 Software Installation.
Load additional products. You’re now ready to load any additional licensed programs and secondary languages. Return to the Work with Licensed Programs menu, and select option 11 (Install licensed programs). You’ll see the Install Licensed Programs display that appears in Figure 5, below. The installation steps for loading additional products are similar to the steps you’ve already taken. Select a licensed program to install, and continue. If you don’t see a desired product in the list, follow the specific instructions delivered with the distribution media containing the new product.
Figure 5 Install Licensed Programs screen

                          Install Licensed Programs
                                                             System:   AS400
 Type options, press Enter.
   1=Install

          Licensed  Installed
 Option   Program   Status       Description
   _      5769SS1   *COMPATIBLE  OS/400 - Library QGPL
   _      5769SS1   *COMPATIBLE  OS/400 - Library QUSRSYS
   _      5769SS1   *COMPATIBLE  OS/400 - Extended Base Support
   _      5769SS1   *COMPATIBLE  OS/400 - Online Information
   _      5769SS1   *COMPATIBLE  OS/400 - Extended Base Directory Support
   _      5769SS1                OS/400 - S/36 and S/38 Migration
   _      5769SS1                OS/400 - System/36 Environment
   _      5769SS1                OS/400 - System/38 Environment
   _      5769SS1                OS/400 - Example Tools Library
   _      5769SS1                OS/400 - AFP Compatibility Fonts
   _      5769SS1                OS/400 - *PRV CL Compiler Support
   _      5769SS1   *COMPATIBLE  OS/400 - S/36 Migration Assistant
   _      5769SS1                OS/400 - Host Servers
                                                                      More...
 F3=Exit   F11=Display release   F12=Cancel   F19=Display trademarks
 (C) COPYRIGHT IBM CORP. 1980, 1998.

Load PTFs. Next, install the cumulative PTF package (either the one that arrived with the new release or a new one you ordered, as suggested in the planning steps above). The shipping letter that accompanies the PTF tape will have specific instructions about how to install the PTF package. Note: To complete the installation process, you must install a cumulative PTF package or perform an IPL. An IPL is required to start the Initialize System (INZSYS) process (the INZSYS process can take two hours or more on some systems, but for most systems it’s completed in a few minutes). In addition to installing a cumulative PTF package, you should install any group PTFs you have — particularly the HIPER, or High-Impact PERvasive, PTF group package. (For information about installing PTFs, see Chapter 6.)

After the IPL is completed, sign on as QSECOFR and check the install history (using option 50 on menu LICPGM) for status messages relating to the INZSYS process. You should look for a message indicating that INZSYS has started or a message indicating its completion. If you see neither message, wait a few minutes and try option 50 again. Continue checking the install history until you see the message indicating INZSYS completion. If the message doesn’t appear in a reasonable amount of time, refer to the “INZSYS Recovery Information” section of Appendix A in AS/400 Software Installation.

Update software license keys. To install software license keys, use the WRKLICINF (Work with License Information) command. For each product, update the license key and the usage limit to match the usage limit you ordered. The license information is part of the upgrade media. You must install license keys within 70 days of your release installation.
Step 9. The installation of your new release is now complete! The only thing left to do before restarting production activities is to perform another SAVSYS to save the new release and the new IBM program products. Just think how much trouble it would be if you had a disk crash soon after loading the new release and, with no current SAVSYS, were forced to restore the old release and repeat the installation process. To make sure you don’t suffer this fate, perform the SAVSYS and the SAVLIB LIB(*IBM) operations now. Before starting the save, determine whether system jobs that decompress objects are running. You should start your save only if these jobs are in an inactive state. To make this determination, use the WRKACTJOB (Work with Active Jobs) command and check the status of QDCPOBJx jobs (more than one may exist). You can ensure these jobs are inactive by placing the system in restricted state. Don’t worry — the QDCPOBJx jobs will become active again when the system is no longer in restricted state.
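From the command line, checking for the decompression jobs might look like the following sketch; the generic job name is an assumption based on the QDCPOBJx naming described above.

   WRKACTJOB  JOB(QDCPOBJ*)               /* look for active QDCPOBJx jobs            */
   ENDSBS     SBS(*ALL) OPTION(*CNTRLD)   /* optional: restricted state quiets them   */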
Final Advice The only risk you take when installing a new release is not being prepared for failure. It’s rare that a new-release installation must be aborted midway through, but it does happen. If you take the precautions mentioned in the planning suggestions and turn to “Recovery Procedures” in AS/400 Software Installation in the event of trouble, you won’t find yourself losing anything but time should you encounter an unrecoverable error. For the most part, installing new releases is only an inconvenience in time.
Chapter 6 - Introduction to PTFs Updated June 2000 Here's a step-by-step guide to ordering and installing PTFs — and to knowing when you need them Much as we'd like to think the AS/400 is invincible, from time to time even the best of systems needs a little repair. IBM provides such assistance for the AS/400 in the form of PTFs. A PTF, or program temporary fix, is one or more objects (most often program code) that IBM creates to correct a problem in the IBM licensed internal code, in the OS/400 operating system, or in an IBM licensed program product. In addition to issuing PTFs to correct problems, IBM uses PTFs to add function or enhance existing function in these products. The fixes are called 'temporary' because a PTF fixes a problem or adds an enhancement only until the next release of that code or product becomes available; at that time, the fix becomes part of the base product itself, or 'permanent.' Hardware and software service providers distribute PTFs. Your hardware maintenance vendor is typically responsible for providing microcode PTFs, while your software service provider furnishes system software PTFs. Because IBM is both the hardware and the software provider for most shops, the focus here is on IBM distribution of PTFs. In this introduction to PTFs, you'll learn the necessary information to determine when PTFs are required on your system, what PTFs you need, how to order PTFs, and how to install and apply those PTFs.
When Do You Need a PTF? Perhaps the most difficult hurdle to get over in understanding PTFs is knowing when you need one. Basically, there are three ways to determine when you need one or more PTFs. The first way is simple: You should regularly order and install the latest cumulative PTF package, group PTFs, Client Access service pack, and necessary individual HIPER PTFs. A cumulative PTF package is an ever-growing collection of significant PTFs. You might wonder what criteria IBM uses to determine whether a PTF is significant. In general, a PTF is deemed significant, and therefore included in a cumulative package, when it has a large audience or is critical to operations. IBM releases cumulative packages on a regular basis, and you should stay up-to-date with them, loading each package fairly soon after it becomes available. You should also load the latest cumulative package any time you load a new release of OS/400. To order the latest cumulative PTF package, you use the special PTF identifier SF99vrm, where v = OS/400 version, r = release, and m = modification. A group PTF is a logical grouping of PTFs related to a specific function, such as database or Java. Each group
has a single PTF identifier assigned to it so that you can download all PTFs for the group by specifying only one identifier. Client Access service packs are important if you access your system using Client Access. Like a group PTF, a service pack is a logical grouping of multiple PTFs available under a single PTF identifier for easy download. HIPER, or High-Impact PERvasive, PTFs are released regularly (often daily) as necessary to correct high-risk problems. Ignore these important PTFs, and you chance catastrophic consequences, such as data loss or a system outage. A second way you may discover you need a PTF is by encountering a problem. To identify and analyze the problem, you might use the ANZPRB (Analyze Problem) command, or you might investigate error messages issued by the system. If you report a system problem to IBM based on your analysis, you may receive a PTF immediately if someone else has already reported the problem and IBM has issued a PTF to resolve it. The third way to discover you might need particular PTFs is by regularly examining the latest Preventive Service Planning (PSP) information. You can download PSP information by ordering special PTFs. (To learn more about PSP documents and for helpful guidelines for managing PTFs, see the section 'Developing a Proactive PTF Management Strategy' near the end of this article.)
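For example, on a V4R5M0 system, ordering the cumulative package electronically with the SNDPTFORD command (covered in the next section) would look something like this. SF99450 simply follows the SF99vrm convention described above; confirm the identifier for your own release.

   SNDPTFORD  PTFID((SF99450))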
How Do You Order a PTF? You can order individual PTFs, a set of PTFs (e.g., a cumulative PTF package, a group PTF), and PSP information from IBM by mail, telephone, fax, or electronic communications. Each PTF you receive has two parts: a cover letter that describes both the PTF and any prerequisites for loading the PTF, and the actual fix. You have two choices when ordering PTFs electronically. You can use Electronic Customer Support (ECS) and the CL SNDPTFORD (Send PTF Order) command, or you can order PTFs on the Internet. Electronically ordered PTFs are delivered electronically only when they're small enough that they can be transmitted within a reasonable connect time. When electronic means are not practical, IBM sends the PTFs via mail on selected media, as it does for PTFs ordered by non-electronic means.
SNDPTFORD Basics The SNDPTFORD command is a simple command to use; however, a brief introduction here may point out a couple of the command's finer points to simplify its use. Figure 1 shows the prompted SNDPTFORD command. For parameter PTFID, you enter one, or up to 20, PTF identifiers (e.g., SF98440, MF98440). The parameter actually has three elements or parts. First is the actual PTF identifier, a required entry. The second element is the Product identifier, which determines whether the PTF order is for a specific product or for all products installed on your system. The default value you see in Figure 1, *ONLYPRD, indicates that the order is for all products installed or supported on your system. Instead of this value, you can enter a specific product ID (e.g., 5769RG1, 5769PW1) to limit your order to PTFs specific to that product. The third PTFID element, Release, determines whether the PTF order is for the current release levels of products on your system or a specific release level, which may or may not be the current release level installed for your products. For example, you might order a different release-level PTF for products you support on remote AS/400s. A Release value of *ONLYRLS indicates that the order is for the release levels of the products installed or supported on your system. If you prefer, you can enter a specific release identifier (e.g., V4R4M0, V4R3M0) to limit the PTF order to that release. Two restrictions apply to the Product and Release elements of the PTFID parameter. First, if you specify a particular product, you also must specify a particular release level. Second, if you specify *ONLYPRD for the product element, you also must specify *ONLYRLS for the release element. From time to time, you may want to download only a cover letter to determine whether a particular PTF is necessary for your system. The next SNDPTFORD parameter, PTFPART (PTF parts), makes this possible. Use
value *ALL to request both PTF(s) and cover letter(s) or value *CVRLTR to request cover letter(s) only.

The next two parameters, RMTCPNAME (Remote control point) and RMTNETID (Remote network identifier), identify the remote service provider and the remote service provider network. You should change parameter RMTCPNAME (default value *IBMSRV) only if you are using a service provider other than IBM or are temporarily accessing another service provider to obtain application-specific PTFs. Parameter RMTNETID must correctly identify the remote service provider network. The value *NETATR causes the system to refer to the network attributes to retrieve the local network identifier (you can view the network attributes using the DSPNETA, or Display Network Attributes, command). If you change the local network identifier in the network attributes, you may then have to override this default value when you order PTFs. Your network provider can give you the correct RMTNETID if the default does not work.

SNDPTFORD's DELIVERY parameter determines how PTFs are delivered to you. A value of *LINKONLY tells ECS to deliver PTFs only via the electronic link. The value *ANY specifies that the PTFs should be delivered by any available method. Most PTFs ordered using SNDPTFORD are downloaded immediately using ECS; however, PTFs that are too large are instead shipped via mail.

The next parameter, ORDER, specifies whether only the PTFs ordered are sent or also any requisite PTFs that you must apply before, or along with, applying the PTFs you're ordering. Value *REQUIRED requests the PTFs you're ordering as well as any other required PTFs that accompany the ordered PTFs. Value *PTFID specifies that only those PTFs you are ordering are to be sent.

The last parameter, REORDER, specifies whether you want to reorder a PTF that is currently installed or currently ordered. Valid values are *NO and *YES. Note that REORDER(*YES) is necessary if you've previously sent for the cover letter only and now want to order the PTF itself. If you permit REORDER to default to *NO, OS/400 won't order the PTF because it thinks it has already ordered it when, in fact, you've received only the cover letter.
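Two quick sketches of the finer points just described (SF12345 is a made-up PTF identifier): first order only the cover letter, then, if you decide you need the fix, reorder the PTF itself.

   SNDPTFORD  PTFID((SF12345)) PTFPART(*CVRLTR)   /* cover letter only                      */
   SNDPTFORD  PTFID((SF12345)) REORDER(*YES)      /* now order the fix previously "ordered" */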
Ordering PTFs on the Internet IBM provides a detailed overview of the Internet PTF download process, along with detailed instructions, at the IBM Service Web site, http://www.as400service.ibm.com. The service is free and available to all AS/400 owners. When you visit the site, expand the 'Fixes Downloads and Updates' branch and then select 'Internet PTF Downloads' to reach the AS/400 Internet PTF Downloads (iPTF) page. Then complete the following few steps, and you're ready to download PTFs:
1. Register for the service.
2. Configure your AS/400, and start the appropriate services.
3. Test your PC's Internet browser to ensure it supports the JavaScript programs used in the download process.
4. Log on, identify the PTFs you want to download, and begin the download.
5. After you've downloaded the PTFs, continue with normal PTF application procedures.
For a more detailed description of the Internet PTF process, see 'Working with the AS/400 iPTF Function'.
How Do You Install and Apply a PTF? Installing a PTF includes two basic steps: loading the PTF and applying the PTF. The process outlined here performs both the loading and the application of the PTF. Note one caution concerning the process of loading and applying PTFs: You must not interrupt any step in this process. Interrupting a step can cause problems significant enough to require reloading the current version of the licensed internal code or the operating system. Make sure, for example, that your electrical power is protected with a UPS. Also note that for systems with logical partitions, the PTF process differs in some critical ways; if you have such a system, be sure to read
'PTFs and Logical Partitioning (LPAR)' (below) for more information. First, we'll look at loading and applying PTFs for the IBM licensed internal code. Then we'll examine the process for loading and applying PTFs for licensed program products.
Installing Licensed Internal Code PTFs Step 1. Print and review any cover letters that accompany the PTFs. Look especially for any specific preinstallation instructions. You can do this by entering the DSPPTF (Display Program Temporary Fix) command and specifying the parameters COVERONLY(*YES) and either OUTPUT(*) or OUTPUT(*PRINT), depending on whether you want to view the cover letter on your workstation or print it. For example, to print the cover letter for PTF MF12345, you would enter the following DSPPTF command:
DSPPTF LICPGM(5769999) SELECT(MF12345) COVERONLY(*YES) OUTPUT(*PRINT)
Note: You can also access cover letters at the IBM Service Web site by selecting from the Tech Info & Databases branch. Step 2. Determine which storage area your machine is currently using. The system maintains two copies of all the IBM licensed internal code on your system. This lets your system maintain one permanent copy while you temporarily apply changes (PTFs) to the other area. Only when you're certain you want to keep the changes are those changes permanently applied to the control copy of the licensed internal code. The permanent copy is stored in system storage area A, and the copy considered temporary is stored in system storage area B. When the system is running, it uses the copy you selected on the control panel before the last IPL. Except for rare circumstances, such as when serious operating system problems occur, the system should always run using storage area B. If you currently see a B in the Data portion of the control-panel display, this means that the next system IPL will use storage area B for the licensed internal code. To apply PTFs to the B storage area, the system must actually IPL from the A storage area and then IPL again on the B storage area to begin using those applied PTFs. On older releases of OS/400, you had to manually IPL to the A side, apply PTFs, and then manually IPL to the B side again. The system now handles this IPL process automatically during the PTF install and apply process. To determine which storage area you're currently using, execute the command
DSPPTF 5769999
and check the IPL source field to determine which storage area is current. You will see either ##MACH#A or ##MACH#B, which tells you whether you are running on storage area A or B, respectively. If you are not running on the B storage area, execute the following PWRDWNSYS (Power Down System) command before continuing with your PTF installation:
PWRDWNSYS OPTION(*IMMED) RESTART(*YES) IPLSRC(B)
Step 3. Enter GO PTF and press Enter to reach the Program Temporary Fix (PTF) panel. Select the 'Install program temporary fix package' option.
Step 4. Supply the correct value for the Device parameter, depending on whether you received the PTF(s) on media or electronically. If you received the PTF(s) on media, enter the name of the device you're using. If you received the PTF(s) electronically, enter the value *SERVICE. Then press Enter.
Step 5. The system then performs the necessary steps to temporarily apply the PTFs and re-IPL to the B storage area. Once the IPL is complete, verify the PTF installation (see the section 'Verifying Your PTF Installation').
Installing Licensed Program Product PTFs Installing PTFs for licensed program products is almost identical to installing licensed internal code PTFs except that you don't have to determine the storage area on which you're currently running. The separate storage areas apply only to licensed internal code. The abbreviated process for licensed program products is as follows:
Step 1. Review any cover letters that accompany the PTFs. Look especially for any specific pre-installation instructions.
Step 2. Enter GO PTF and press Enter to reach the Program Temporary Fix (PTF) panel. Select the 'Install program temporary fix package' option.
Step 3. Supply the correct value for the Device parameter, depending on whether you received the PTF(s) on media or electronically. If you received the PTF(s) on media, enter the name of the device you're using. If you received the PTF(s) electronically, enter the value *SERVICE. Then press Enter.
Step 4. After the IPL is complete, verify the PTF installation (see 'Verifying Your PTF Installation').
Verifying Your PTF Installation After installing one or more PTFs, you should verify the installation process before resuming either normal system operations or use of the affected product. Use the system-supplied history log to verify PTF installations by executing the DSPLOG (Display Log) command, specifying the time and date you want to start with in the log:
DSPLOG LOG(QHST) PERIOD((start_time start_date))
Be sure to specify a starting time early enough to include your PTF installation information. On the Display Log panel, look for any messages regarding PTF installation. If you have messages that describe problems, see AS/400 Basic System Operation, Administration, and Problem Handling (SC41-5206) for more information about what to do when your PTF installation fails. When installing a cumulative PTF package, you can also use option 50, 'Display log for messages,' on the Work with Licensed Programs panel (to reach this panel, issue the command GO LICPGM). The message log will display messages that indicate whether the install was successful.
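As a hedged illustration, if you installed PTFs on the evening of April 15, 2001, and your job uses an MDY date format with no separators, the command might look like the following (adjust the time and date values to your own installation window and to your system's date and time formats):
DSPLOG LOG(QHST) PERIOD((220000 041501))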
How Current Are You? One last thing that will help you stay current with your PTFs is knowing what cumulative PTF package you currently have installed. To determine your current cumulative PTF package level, execute the command
DSPPTF LICPGM(5769SS1)
The ensuing display panel shows the identifiers for PTFs on your system. The panel lists PTFs in decreasing sequence, showing cumulative package information first, before individual PTFs. Cumulative packages start with the letters TC or TA and end with five digits that represent the Julian date (in yyddd format) for the particular package. PTF identifiers that start with TC indicate that the entire cumulative package has been applied; those starting with TA indicate that HIPER PTFs and HIPER licensed internal code fixes have been applied. To determine the level of licensed internal code fixes on your system, execute the command
DSPPTF LICPGM(5769999)
Identifiers beginning with the letters TL and ending with the five-digit Julian date indicate the cumulative level. Typically, you want the levels for TC, TA, and TL packages to match. This circumstance indicates that you've applied the cumulative package to licensed program products as well as to licensed internal code.
Developing a Proactive PTF Management Strategy The importance of developing sound PTF management processes cannot be overstated. A proactive PTF management strategy lessens the impact to your organization that can result from program failures by avoiding those failures, ensuring optimal performance, and maximizing availability. Because environments vary, no single strategy applies to all scenarios. However, you should be aware of certain guidelines when evaluating your environment and establishing scheduled maintenance procedures. Your PTF maintenance strategy should include provisions for preventive service planning, preventive service, and corrective service.
Preventive Service Planning Planning your preventive measures is the first step to effective PTF management. To help you with planning, IBM publishes several Preventive Service Planning documents in the form of informational PTFs. (The easiest and fastest way to obtain these documents is from the IBM Service Web site.) Following are some minimum recommendations for PSP review. You should start with the software and hardware PSP information documents by ordering SF98vrm (Current Cumulative PTF Package) and MF98vrm (Hardware Licensed Internal Code Information), respectively. These documents contain service recommendations concerning critical PTFs or PTFs that are most likely to affect your system, as well as a list of the other PSP documents from which you can choose. You should order and review SF98vrm and MF98vrm at least monthly. Between releases of cumulative PTF packages, you may need to order individual PTFs critical to sound operations. If you review no other additional PSP documents, review the information for HIPER PTFs and Defective PTFs. These documents contain information about critical PTFs. At a minimum, review this information weekly. In years past, PSP documents contained enough detail to let you determine the nature of the problems that PTFs fixed. Unfortunately, that's no longer the case. With problem descriptions such as 'Data Integrity' and 'Usability of a Product's MAJOR Function,' you often must do a little more work to determine the nature of problems described in the PSP documents by referring to PTF cover letters. In addition to reviewing PSP documents, consider subscribing to IBM's AS/400 Alert offering. This service notifies you weekly about HIPER problems, defective PTFs, and the latest cumulative PTF package. You can receive this information by fax or mail. To learn more about this service, go to http://www.ibm.com/services.
Preventive Service Preventive measures are instrumental to your system's health. Remember the old adage 'An ounce of prevention ...'? Suffice it to say I've seen situations where PTFs would have saved tens of thousands of dollars. Avoid problems, and you avoid their associated high costs. Preventive maintenance includes regular application of cumulative and group PTF packages and Client Access service packs. Because all of these are collections of PTFs, your work is actually quite easy. There's no need to wade through thousands of PTFs to determine those you need. Instead, simply order and apply the packages. Cumulative PTF packages are your primary preventive maintenance aid. Released on a periodic basis, they should be applied soon after they become available -- usually every three to four months. This rule of thumb is especially true if you're using the latest hardware or software releases or making significant changes to your environment. In conjunction with cumulative PTF packages, you should stay current with any group PTF packages applicable to your environment, as well as with Client Access service packs if appropriate. You can find Client Access
service pack information and download service packs by following the links at http://www.as400.ibm.com/clientaccess.
Corrective Service Even the most robust and aggressive scheduled maintenance efforts can't thwart all possible problems. When you experience problems, you need to find the corrective PTFs. Ferreting out PSP information about individual problems and fixes is without a doubt the most detailed of the tasks in managing PTFs. However, if you take the time to learn your way around PSP information and PTF cover letters, you'll be able to resolve your problems in a timely manner. Your goal should be to minimize the corrective measures required; doing so makes your operational environment dramatically more stable. With robust preventive service planning and preventive service measures, your corrective service issues will be minimal. This article is excerpted from a new edition of Wayne Madden's Starter Kit for the AS/400, to be published in the spring of 2001 by NEWS/400 Books. Gary Guthrie is a technical editor for NEWS/400.
PTFs and Logical Partitioning (LPAR) Although the basic steps of installing PTFs are the same for a system with logical partitions, some important differences exist. Fail to account for these differences when you apply PTFs, and you could find yourself with an inoperable system requiring lengthy recovery procedures. For systems with logical partitions, heed the following warnings: When you load PTFs to a primary partition, shut down all secondary partitions before installing the PTFs. When using the GO PTF command on the primary partition, change the automatic IPL parameter from its default value of *YES to *NO unless the secondary partitions are powered down. These warnings, however, are only the beginning with respect to the differences imposed by logical partitioning. There are also partition-sensitive PTFs that apply specifically to the lowest-level code that controls logical partitions. These PTFs have special instructions that you must follow exactly. These instructions include the following steps:
1. Permanently apply any PTFs superseded by the new PTFs.
2. Perform an IPL of all partitions from the A side.
3. Load the PTFs on all logical partitions using the LODPTF (Load PTF) command; do not use the GO PTF command (see the command sketch following this list).
4. Apply the PTFs temporarily on all logical partitions using the APYPTF (Apply PTF) command.
5. Power down all secondary partitions.
6. Perform a power down and IPL of the primary partition from side B in normal mode.
7. Perform normal-mode IPLs of all secondary partitions from side B.
8. Apply all the PTFs permanently using the APYPTF command.
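A hedged sketch of the load-and-apply commands used in steps 3, 4, and 8 might look like the following for a hypothetical licensed internal code PTF MF24680 delivered on an optical device named OPT01; the identifiers and device name are illustrative, and the PTF's own special instructions always take precedence:
Step 3 (on each partition): LODPTF LICPGM(5769999) DEV(OPT01) SELECT(MF24680)
Step 4 (on each partition): APYPTF LICPGM(5769999) SELECT(MF24680) APY(*TEMP)
Step 8 (after the IPLs): APYPTF LICPGM(5769999) SELECT(MF24680) APY(*PERM)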
When you receive partition-sensitive PTFs, always refer to any accompanying special instructions before loading the PTFs onto your system.
— G.G.
Chapter 7 - Getting Your Message Across: User to User Sooner or later, you will want to use messages on the AS/400. For instance, you might need to have a program communicate with a user or workstation to request input, report a problem, or simply update the user or system operator on the status of the program (e.g., 'Processing today's invoices'). At other times, your application might need to communicate with another program. Program-to-program messages can include informational, notification, status, diagnostic, escape, and request messages, each of which plays a role in program function, problem determination, or application auditing. 'File YOURLIB/YOUROBJ not found' is an example of a diagnostic program-to-program message. You or your users can also send messages to one or more users or workstations on the spur of the moment. Sometimes called impromptu messages, these user-to-user messages are not predefined in a message file. They might simply convey information, or they might require a response (e.g., 'Joe, aliens have just landed and taken the programming manager hostage. What should we do???'). User-to-user messages can serve as a good introduction to AS/400 messaging.
Sending Messages 101 To send user-to-user messages, you use one of three commands: SNDMSG (Send Message), SNDBRKMSG (Send Break Message), or SNDNETMSG (Send Network Message). SNDMSG is the most commonly used (you can use it even if LMTCPB(*YES) is specified on your user profile) and the easiest to learn. The SNDMSG prompt screen is shown in Figure 7.1. To access the SNDMSG command, you can
• key SNDMSG on a command line,
• select option 5 on the System Request menu,
• select option 3 on the User Task menu, or
• select option 4 on the Operational Assistant menu. (This option may be best for end users because Operational Assistant provides the most user-friendly interface to the SNDMSG command.)
The message string you enter in the MSG parameter can be up to 512 characters long. To specify the message destination, you can enter a user profile name in the TOUSR parameter. TOUSR can have any of the following values:
• *SYSOPR -- to request that the message be sent to the system operator's message queue (QSYS/QSYSOPR).
• *REQUESTER -- to request that the message be sent to the interactive user's external message queue or to the system operator's message queue when the command is executed from within a program.
• *ALLACT -- to request that the message be sent to the message queue of every user currently signed on to the system. (*ALLACT is not valid when MSGTYPE(*INQ) is also specified.)
• User_profile_name -- to request that the message be sent to the user's message queue (which may or may not have the same name as the user profile).
For example, if you simply want to inform John, a co-worker, of a meeting, you could enter
SNDMSG MSG('John - Our meeting today will be at 4:00. Jim') TOUSR(JSMITH)
Another way to specify the message destination is to enter up to 50 message queue names in the TOMSGQ parameter. The specified message queue can be any external message queue on your system, including the workstation, user profile, or system history log (QHST) message queue (for more about sending messages to QHST, see 'Sending Messages into History'). Specifying more than one message queue is valid only for informational messages. The MSGTYPE parameter lets you specify whether the message you are sending is an *INFO (informational, the default) or *INQ (inquiry) message. Like the informational message, an inquiry message appears on the destination message queue as text. However, an inquiry message supplies a response line and waits for a reply. If you want to schedule a meeting with John and be sure he receives your message, you could enter
SNDMSG MSG('John - Will 4:00 be a good time for our meeting today? Jim') TOUSR(JSMITH) MSGTYPE(*INQ)
The RPYMSGQ parameter on the SNDMSG command specifies which message queue should receive the response to the inquiry message. Because the default for RPYMSGQ is *WRKSTN, John's reply will return to your (the sender's) workstation message queue. As you can see, the SNDMSG command provides a simple way to send a message or inquiry to someone else on the local system. However, it has one quirk. Although SNDMSG can send a message to a message queue, it is the message queue attributes that define how that message will be received. If the message queue delivery mode is *BREAK and no break-handling program is specified, the message is presented as soon as the message queue receives it. A delivery mode of *NOTIFY causes a workstation alarm to sound and illuminates the 'message wait' block on the screen. A delivery mode of *HOLD does not notify the user or workstation about a message received.
I Break for Messages The SNDBRKMSG command offers a solution for messages that must get through regardless of the message queue's delivery mode or break-handling program or the message's severity. Although SNDBRKMSG provides the same function as the SNDMSG command, the message queue receiving a SNDBRKMSG message handles it in break mode, regardless of the queue's delivery mode. Figure 7.2 shows the SNDBRKMSG prompt screen.
There are two other differences between the SNDBRKMSG command and the SNDMSG command. First, the SNDBRKMSG command has only the TOMSGQ parameter on which to specify a destination (i.e., only workstation message queues can be named as destinations). Second, the SNDBRKMSG command lets you specify the value *ALLWS (all workstations) in the TOMSGQ parameter to send a message to all workstation message queues.
The following is a sample message intended for all workstations on the system:
SNDBRKMSG MSG('Please sign off the system immediately. The system will be unavailable for the next 30 minutes.') TOMSGQ(*ALLWS)
This message will go immediately to all workstation message queues and be displayed on all active workstations. If a workstation is not active, the message simply will be added to the queue and displayed when the workstation becomes active and the message queue is allocated.
Casting Network Messages The third command you can use to send a message to another user is SNDNETMSG (Figure 7.3). As with SNDMSG and SNDBRKMSG, you can type an impromptu message up to 512 characters long in the MSG parameter. The distinguishing feature of the SNDNETMSG command is the destination parameter, TOUSRID. The value you specify must be either a valid network user ID or a valid distribution list name (i.e., a list of network user IDs). If necessary, you can add network user IDs to the system network directory using the WRKDIRE (Work with Directory Entries) command. Each network user ID is associated with a user profile on a local or remote system in the network.
There are two situations for which the SNDNETMSG command is more appropriate than SNDMSG or SNDBRKMSG. First, you might need this command if your system is in a network because SNDMSG and SNDBRKMSG can't send messages to a remote system. Second, you can use SNDNETMSG to send messages to groups of users on a network -- including users on your local system -- using a distribution list. You can create a distribution list using the CRTDSTL (Create Distribution List) command and add the appropriate network user IDs to the list using the ADDDSTLE (Add Distribution List Entry) command. When you specify a distribution list as the message destination, the message is distributed to the message queue of each network user on the list. For example, if distribution list PGMRS consists of network user IDs for Bob, Sue, Jim, and Linda, you could send the same message to each of them (and give them reason to remember you on Bosses' Day) by executing the following command:
SNDNETMSG MSG('Thanks for your hard work on the order entry project. Go home early today and enjoy a little time off.') TOUSRID(PGMRS)
The only requirements for this method are that user profiles have valid network user IDs on the network directory and that System Network Architecture Distribution Services (SNADS) be active. (You can start SNADS by starting the QSNADS subsystem.)
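If the PGMRS distribution list does not already exist, a hedged sketch of creating it and adding entries might look like the following; the two-part list identifier and the user IDs shown are illustrative and must correspond to entries already in your system distribution directory:
CRTDSTL LSTID(PGMRS SYSNAME)
ADDDSTLE LSTID(PGMRS SYSNAME) USRID(BOB SYSNAME)
ADDDSTLE LSTID(PGMRS SYSNAME) USRID(SUE SYSNAME)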
As you can see, you have more than one option when sending user-to-user messages on the AS/400. Now you're ready to move on to program-to-user and program-to-program messages, but these are topics for another day. This introduction to messages should get you started and whet your appetite for learning more.
Chapter 8 - Secrets of a Message Shortstop What makes the OS/400 operating system tick? You could argue that messages are really at the heart of the AS/400. The system uses messages to communicate between processes. It sends messages noting the completion of jobs or updating the status of ongoing jobs. Messages tell when a job needs some attention or intervention. The computer dispatches messages to a problem log so the operator can analyze any problems the system may be experiencing. You send requests in the form of messages to the command processor when you execute AS/400 commands. OfficeVision uses a message to sound an alarm when a calendar event is imminent. You can design screens and reports that use messages instead of constants, thus enabling multilingual support. And, of course, users can send impromptu messages to and receive them from other workstation users on the system. With hundreds of messages flying around your computer at any given moment, it's important to have some means of catching those that relate to you -- and that might require some action. IBM provides several facilities to organize and handle messages, and you can create programs to further define how to process messages. In this chapter, I'll explore three methods of message processing: the system reply list, break handling programs, and default replies. The system reply list lets you specify that the operating system is to respond automatically to certain predefined inquiry messages without requiring that the user reply to them. A break handling program lets you receive messages and process them according to their content. The reply list and the break handling program have similar functions and can, under some conditions, accomplish the same result. The reply list tends to be easier to implement, while a break handling program can be much more flexible in the way it handles different kinds of messages. The third message handling technique, the default reply, lets you predefine an action that the computer will take when it encounters a specific message; the reply becomes a built-in part of the message description.
Return Reply Requested The general concept of the system reply list is quite simple. The reply list primarily consists of message identifiers and reply values for each message. There is only one reply list on the system (hence the official name: system reply list). When a job using the reply list encounters a predefined inquiry message, OS/400 searches the reply list for an entry that matches the message ID (and the comparison data, which we'll cover later). When a matching entry exists, the system sends the listed reply without intervention from the user or the system operator. When the system finds no match, it sends the message to the user (for interactive jobs) or to the system operator (for batch jobs). A job does not automatically use the system reply list -- you must specify that the reply list will handle inquiry messages. To do this, indicate INQMSGRPY(*SYSRPYL) within any of the following CL commands:
• BCHJOB (Batch Job)
• SBMJOB (Submit Job)
• CHGJOB (Change Job)
• CRTJOBD (Create Job Description)
• CHGJOBD (Change Job Description)
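For example, a hedged sketch of submitting a hypothetical nightly job that lets the reply list answer its inquiry messages -- or of switching your current job over to the reply list -- might look like this (the library, program, and job names are placeholders):
SBMJOB CMD(CALL PGM(MYLIB/NIGHTRUN)) JOB(NIGHTRUN) INQMSGRPY(*SYSRPYL)
CHGJOB INQMSGRPY(*SYSRPYL)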
IBM ships the AS/400 with the system reply list already defined as illustrated in Figure 8.1. This predefined reply list issues a 'D' (job dump) reply for inquiry messages that indicate a program failure. Note that the reply list uses the same convention as the MONMSG (Monitor Message) CL command for indicating generic ranges of messages; for example, 'RPG0000' matches all messages that begin with the letters 'RPG,' from RPG0001 through RPG9999. You can modify the supplied reply list by adding your own entries using the following CL commands:
• WRKRPYLE (Work with Reply List Entries)
• ADDRPYLE (Add Reply List Entry)
• CHGRPYLE (Change Reply List Entry)
• RMVRPYLE (Remove Reply List Entry)
Figure 8.2 lists some possibilities to consider for your own reply list. Each entry consists of a unique sequence number (SEQNBR), a message identifier (MSGID), optional comparison data (CMPDTA) and starting position (START), a reply value (RPY), and a dump attribute (DUMP). Let's look at each component individually.
A Table of Matches The system searches the reply list in ascending sequence number order. Therefore, if you have two list entries that would satisfy a match condition, the system uses the one with the lowest sequence number. The message identifier can indicate a specific message (e.g., RPG1241) or a range of messages (e.g., RPG1200 for any RPG messages from RPG1201 through RPG1299), or you can use *ANY as the message identifier for an entry that will match any inquiry message, regardless of its identifier. The reply list message identifiers are independent of the message files. If you have two message files with a message ID USR9876, for example (usually not a good idea), the system reply list treats both messages the same.
Use the *ANY message identifier with great care. It is a catch-all entry that ensures the system reply list handles all messages, regardless of their message identifier. If you use it, it should be at the end of your reply list, with sequence number 9999. You should also be confident that the reply in the entry will be appropriate for any error condition that might occur. If the system reply list gets control of any message other than the listed ones, it performs a dump and then replies to the message with the default reply from the message description. If you don't use *ANY, the system sends unmonitored messages to the operator.
The comparison data is an optional component of the reply list. You use comparison values when you want to send different replies for the same message, according to the contents of the message data. The format of the message data is defined when you or IBM creates the message. To look at the format, use the DSPMSGD (Display Message Descriptions) command. When a reply list entry contains comparison values, the system compares the values with the message data from the inquiry message. If you indicate a starting position in the system reply list, the comparison begins at that position in the message data. If the message data comparison value matches the list entry comparison value, the system uses the list entry to reply to the message; otherwise, it continues to search the list. For example, Figure 8.2 shows three list entries for the CPA4002 (Align forms) message. When the system encounters this message, it checks the message data for the name of the printer device. If the device name matches either the 'PRT3816' or 'PRTHPLASER' comparison data, the system automatically replies with the 'I' (Ignore) response; otherwise, it requires the user or the system operator to respond to the message.
You use the reply value portion of the list entry to indicate how the system should handle the message in this entry. Your three choices are:
• Indicate a specific reply (up to 32 characters) that the system automatically sends back to the job in response to the message (e.g., I, R, D, and G in Figure 8.2).
• Use *DFT (Default) to have the system send the message default reply from the message description.
• Use *RQD (Required) to require the user or system operator to respond to the message, just as if the job were not using the reply list.
The dump attribute in the system reply list tells the system whether or not to perform a job dump when it encounters the message matching this entry. Specify DUMP(*YES) or DUMP(*NO) for the list entry. You may request a job dump no matter what you specified for a reply value. The system dumps the job before it replies to the message and returns control to the program that originated the message. The dump then serves as a snapshot of the conditions that caused a particular inquiry message to appear. Although the reply list is a system-wide entity, you can use it with a narrower focus. Figure 8.3 shows portions of a CL program that temporarily changes the system reply list and then uses the changed list for message handling, checking for certain inquiry messages, and issuing replies appropriate to the program. At the end, the program returns the system reply list to its original condition. You should probably limit this approach to programs run on a dedicated or at least a fairly quiet system. Since the program temporarily changes the system reply list, any other jobs that use the reply list may use the changed reply list while this program is active. However, this technique does work well for such tasks as software installation and nighttime unattended operations.
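For instance, a hedged sketch of adding reply list entries like the CPA4002 examples discussed above might look like this. The sequence numbers are arbitrary, and the starting position of 1 for the comparison data is an assumption -- use DSPMSGD CPA4002 to confirm where the device name actually appears in the message data on your release:
ADDRPYLE SEQNBR(110) MSGID(CPA4002) CMPDTA('PRT3816' 1) RPY(I)
ADDRPYLE SEQNBR(120) MSGID(CPA4002) CMPDTA('PRTHPLASER' 1) RPY(I)
ADDRPYLE SEQNBR(130) MSGID(CPA4002) RPY(*RQD)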
Give Me a Break Message Another means of processing messages is to use a break handling program, which processes messages arriving at a message queue in *BREAK mode. IBM supplies a default break handling program; it's the same command processing program used by the DSPMSG (Display Messages) command. But you can write your own break handling program if you want break messages to do more than just interrupt your normal work with the Display Messages screen. Both the system reply list and a break handling program customize your shop's method of handling messages that arrive on a message queue, but there are several differences. The system reply list handles only inquiry messages, while a break handler can process any type of message, such as a completion message or an informational message. The system reply list has a specific purpose: to send a reply back to a job in response to a specific message. The break handler's function, on the other hand, is limited only by your programming ability. It can send customized replies for inquiry messages, it can convert messages to status messages, it can process command request messages, it can initiate a conversational mode of messaging between workstations, it can redirect messages to another message queue -- it can perform any number of functions. Unlike the system reply list, the break handler interrupts the job in which the message occurs and processes the message; it then returns control to the job. The interruption can, however, be transparent to the user. Like the reply list, a break handler does not take control of break messages unless you first tell it to do so. To turn control over to a break handling program, use the following CL command:
CHGMSGQ MSGQ(library/msgq_name) DLVRY(*BREAK) PGM(program_name) SEV(severity_code)
OS/400 calls the break handler if a message of high enough severity reaches the message queue. If you use a break handler in a job that is already using the system reply list, the reply list will get control of the messages first, and it will pass to the break handler only those messages it cannot process.
Take a Break Figure 8.4 shows a sample break handling program. To make the break handler work, OS/400 passes it three arguments:
• the name of the message queue
• the library containing the message queue
• the reference key of the received message
The only requirement of the break handler is that it must receive the referenced message with the RCVMSG (Receive Message) command. You can then do nearly anything you want with the message before you end the break handler and let the original program resume. The example in Figure 8.4 displays any notify or inquiry messages, allowing you to send a reply, if appropriate. It also checks for any calendar alarms sent by OfficeVision and displays them. In addition, it monitors for and displays messages that could indicate potentially severe conditions, such as running out of DASD space. For any other messages, it simply resends the message as a status message, which appears quietly at the bottom of the user's display without interrupting work (unless display of status messages is suppressed in the user profile, the job, or the system value QSTSMSG). Figure 8.5 shows a portion of an initial program that puts a break handler into action. The initial program first displays all messages that exist in a user's message queue, and then it clears all but unanswered messages from the queue and activates the break handling program. Note that the initial program also checks whether the user is the system operator; if so, it activates the break handler for the system operator message queue.
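Figure 8.4 itself isn't reproduced here, but a minimal hedged sketch of a break handler -- one that simply receives the message and resends its text as a status message -- might look something like the following. It assumes the common technique of using IBM message CPF9898 in QCPFMSG to carry impromptu status text; a production version would add the inquiry, alarm, and severity handling described above:
PGM PARM(&MSGQ &MSGLIB &MSGKEY)
DCL VAR(&MSGQ) TYPE(*CHAR) LEN(10)
DCL VAR(&MSGLIB) TYPE(*CHAR) LEN(10)
DCL VAR(&MSGKEY) TYPE(*CHAR) LEN(4)
DCL VAR(&MSGTXT) TYPE(*CHAR) LEN(512)
/* Receive the break message that triggered this program */
RCVMSG MSGQ(&MSGLIB/&MSGQ) MSGKEY(&MSGKEY) RMV(*NO) MSG(&MSGTXT)
/* Resend the text quietly as a status message at the bottom of the display */
SNDPGMMSG MSGID(CPF9898) MSGF(QCPFMSG) MSGDTA(&MSGTXT) TOPGMQ(*EXT) MSGTYPE(*STATUS)
ENDPGM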
It's Your Own Default One of the easiest methods of processing message replies automatically is also one of the most often overlooked. The message descriptions for inquiry or notify messages can contain default replies, which you can tell the system to use when the message occurs. The default reply must be among the valid replies for the message. You specify the message's default reply using either the ADDMSGD (Add Message Description) or CHGMSGD (Change Message Description) command. You can display a message's default reply using the DSPMSGD command. You can also use WRKMSGD (Work with Message Descriptions) to manage message descriptions. The default reply is used under the following circumstances:
• when you use the system reply list and the list entry's reply for the message is *DFT
• when you have changed the delivery mode of the receiving message queue to *DFT, using the CHGMSGQ (Change Message Queue) command
No messages are put in a message queue when the queue is in *DFT delivery mode; informational messages are ignored. Messages will be logged, however, in the system history log (QHST). You can easily set up an unattended environment for your computer to use every night by having your system operator execute the following command daily when signing off:
CHGMSGQ MSGQ(QSYSOPR) DLVRY(*DFT)
Your system will then use default replies instead of sending messages to an absent system operator. This technique may prevent your overnight batch processing from hanging up because of an unexpected error condition. You should be careful, however, to ensure the suitability of the default replies for any messages that might be sent to the queue. You might also consider including the CHGMSGQ command within key CL programs, such as unattended backup procedures or program installation procedures, for which default replies may be appropriate. Another good use for default replies is to have one message queue handle all printer messages. By defining default replies to these messages and placing that queue in *DFT delivery mode, you can have the system automatically respond to forms loading and alignment messages.
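As a hedged sketch of that last idea, you could create a dedicated message queue for printer messages, place it in *DFT delivery mode, and direct a printer writer's messages to it. The queue and device names here are placeholders, and the technique assumes suitable default replies exist for the forms messages involved:
CRTMSGQ MSGQ(QUSRSYS/PRTMSGS) TEXT('Printer messages answered by default replies')
CHGMSGQ MSGQ(QUSRSYS/PRTMSGS) DLVRY(*DFT)
STRPRTWTR DEV(PRT01) MSGQ(QUSRSYS/PRTMSGS)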
Chapter 9 - Print Files and Job Logs There is certainly nothing mysterious about printing on your AS/400; however, you must understand a few basic concepts about print files to make printing operations run more smoothly. In this chapter, I cover two items concerning print files: modifying attributes of print files and handling a specific type of print file -- the system-generated job log. This basic understanding of how to define print files and job logs and of the functions they provide will increase your power to customize your system by controlling output. These tips are especially helpful if you have migrated from the S/36 or from equipment other than the S/38.
How Do You Make It Print Like This? The AS/400 does support direct printing (i.e., output directly to the printer, which ties up a workstation or job while the printer device completes the task); however, almost 100 percent of the time you will use OS/400 print files to format and direct output. IBM ships the system with many print files, such as QSYSPRT, which the system uses when you compile a CL program; QSUPPRT, which the system uses when you print a listing from the source file; and QQRYPRT, which the system uses when you run a query. These print files have predefined attributes that control such features as lines per inch (LPI), characters per inch (CPI), form size, overflow line number, and output queue. In addition to the print files IBM provides, you can create two types of print files within your applications. The first type uses the CRTPRTF (Create Print File) command to define a print file that has no external definition (i.e., the print file has a set of defined attributes from the CRTPRTF command but only one record format). Any program using this type of print file must contain output specifications that describe the fields, positions, and edit codes used for printing. The second type of print file is externally described: When you use the CRTPRTF command, you specify a source member that describes the various record formats your program will use for printing. (For specifications you can
make in DDS, refer to IBM's Data Management Guide (SC41-9658).) Whether you create an externally described print file or a print file that must be used with programs that internally describe the printing, you define certain print file attributes (e.g., those controlling LPI, CPI, and form size) as part of the print file object definition.
Let's examine a problem that often occurs when an AS/400 installation is complete. All the IBM-supplied print files are predefined for use with paper that is 11 inches long. If you have been using paper that is shorter (e.g., the 14 1/2-by-8 1/2-inch size) and generate output (using DSPLIB OUTPUT(*PRINT) or a QUERY/400 report) with a system-supplied print file, the system will print the report through the page perforations. On your previous system, the overflow worked just right, but you weren't around when someone set the system up. So how do you instruct the AS/400 to print correctly on the short, wide paper? First, you need to find out what the default values for printing are. To do so, you type in the DSPFD (Display File Description) command for the print file QSYSPRT:
DSPFD QSYSPRT
When you execute that command, you see the display represented in Figure 9.1. Notice the page size parameter, PAGESIZE(66 132); the LPI parameter, LPI(6); and the overflow parameter, OVRFLW(60). These default parameters combine to determine the number of inches (i.e., 11) the system considers to be a single page on the system-supplied objects. But in this example, your paper is only 8 1/2 inches long, so you need to modify the form size and overflow of each print file (including all system-supplied print files and those you create yourself) that generates reports on this short-stock paper. You can accomplish this task by identifying each print file that needs to be modified and executing the following command for each:
CHGPRTF FILE(library_name/file_name) PAGESIZE(51 132) OVRFLW(45)
If you need to change all print files on the system, you can execute the same command, but place the value *ALL in the parameter FILE:
CHGPRTF FILE(*ALL/*ALL) PAGESIZE(51 132) OVRFLW(45)
Another approach is to change the LPI parameter to match a valid number of lines per inch for the configured printer and then calculate the new form size and overflow parameters based on the new LPI you specified. The page size can vary from one form type to the next, but you can easily compensate for differences by modifying the appropriate print files. Remember that changing the LPI, the page length, and the overflow line number does not require programming changes for programs that let the system check for overflow status (i.e., you do not need to have program logic count lines to control page breaks). Such programs use the new attributes of the print file at the next execution.
Once you have set up the page size you want and determined how a given job will print, you can start thinking about controlling when that job will print. The two parameters you can use to ensure that spooled data is printed at the time you designate are SCHEDULE and HOLD. The SCHEDULE parameter specifies when to make the spooled output file available to a writer for printing. If the system finds the *IMMED value for SCHEDULE, the file is available for a writer to begin printing the data as soon as the records arrive in the spooled file. This approach can be advantageous for short print items, such as invoices, receipts, or other output that is printed quickly. However, when you generate long reports, allocating the writer as soon as data is available can tie up a single writer for a long time. Entering a *FILEEND value for SCHEDULE specifies that the spooled output file is available to the writer as soon as the print file is closed in the program. Selecting this value can be useful for long reports you want available for printing only after the entire report is generated. The *JOBEND value for SCHEDULE makes the spooled output file available only after the entire job (not just a program) is completed. One benefit of selecting this value is that you can ensure that all reports one job generates will be available at the same time and therefore will be printed in succession (unless the operator intervenes).
The HOLD parameter works the way the name sounds. Selecting a value of *YES specifies that when the system generates spooled output for a print file, the output file stays on the output queue with a *HLD status until an operator releases the file to a writer. Selecting the *NO value for HOLD specifies that the system should not hold the spooled print file on the output queue and should make the output available to a writer at the time the SCHEDULE parameter indicates. For example, when a program generates a spooled file with the attributes of SCHEDULE(*FILEEND) and HOLD(*NO), the spooled file is available to the writer as soon as the file is closed. As with the PAGESIZE and OVRFLW parameters, you can modify the SCHEDULE and HOLD parameters for print files by using the CHGPRTF command. Remember that you can also override these parameters at execution time using the CL OVRPRTF (Override with Print File) command. You can also change some print file attributes at print time using the CHGSPLFA (Change Spool File Attributes) command or option 2 on the Work with Output Queue display. You should examine the various attributes associated with the CRTPRTF (Create Print File), CHGPRTF, and OVRPRTF commands to see whether or not you need to make other changes to customize your printed output needs. For further reading on these parameters, see the discussion of the CRTPRTF command in IBM's Programming: Control Language Reference (SC41-0030).
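For example, a hedged sketch of overriding QSYSPRT for the duration of one job -- using the short-paper page size from the earlier example and holding the output on a hypothetical output queue -- might look like this:
OVRPRTF FILE(QSYSPRT) PAGESIZE(51 132) OVRFLW(45) SCHEDULE(*FILEEND) HOLD(*YES) OUTQ(MYLIB/RPTOUTQ)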
Where Have All the Job Logs Gone? After you have your print files under control, the next step in customizing your system can prick a nasty thorn in the flesh of AS/400 newcomers: learning how to manage all those job logs the system generates as jobs are completed. A job log is a record of the job execution and contains informational, completion, diagnostic, and other messages. The reason these potentially useful job logs can be a pain is that the AS/400 generates a job log for each completed job on the system. But fortunately, you can manage job logs. The three methods for job-log management are controlling where the printed output for the job log is directed, deciding whether to generate a printed job log for jobs that are completed normally or only for jobs that are completed abnormally, and determining how much information to include in the job logs.
Controlling where the printed output is directed. When your system is shipped, it is set up so that every job (interactive sessions as well as batch) generates a job log that records the job's activities and that can vary in content according to the particular job description. You can use the DSPJOB (Display Job) or the DSPJOBLOG (Display Job Log) command to view a job log as the system creates it during the job's execution. When a job is completed, the system spools the job log to the system printer unless you change print file QUSRSYS/QPJOBLOG (the print file the system uses to generate job logs) to redirect spool files to another output queue where they can stay for review or printing. You can elect to redirect this job log print file in one of two ways.
The most popular method is to utilize the OS/400-supplied Operational Assistant, which will not only redirect your job logs to a single output queue, but also perform automatic cleanup of old job logs based on a number of retention days, which you supply. You can access the system cleanup option panel from the Operational Assistant main menu (type 'GO ASSIST'), from the SETUP menu (type 'GO SETUP'), or directly by typing 'GO CLEANUP,' which will present you with the Cleanup Menu panel that you see in Figure 9.2. Before starting cleanup, you need to define the appropriate cleanup options by selecting option 1, 'Change cleanup options.' Figure 9.3 presents the Change Cleanup Options panel, where you can enter the retention parameters for several automated cleanup functions as well as determine at what time you want the system to perform cleanup each day. You can find a complete discussion of this panel and automated cleanup in Chapter 12, 'AS/400 Disk Storage Cleanup.' For now, my only point is this: The first time you activate the automated cleanup function by typing a 'Y' in the 'Allow automatic cleanup' option on this panel (see Figure 9.3), OS/400 changes the job log print file so that all job logs are directed to the system-supplied output queue QEZJOBLOG. Even if you do not start the actual cleanup process, or if you elect to stop the cleanup function at a later date, the job logs will continue to accumulate in output queue QEZJOBLOG.
The second method for redirecting job logs is to manually create an output queue called QJOBLOGS, JOBLOGS, or QPJOBLOG using the CRTOUTQ (Create Output Queue) command. After creating an output queue to hold the job logs, you can use the CHGPRTF command (with OUTQ identifying the output queue you created for this purpose) by typing
CHGPRTF FILE(QPJOBLOG) OUTQ(QUSRSYS/output_queue_name)
Now the job logs will be redirected to the specified output queue. You might also want to specify HOLD(*YES) to place the spool files on hold in your new output queue. However, if no printer is assigned to that queue, those spool files will not be printed. The job logs can now remain in that queue until you print or delete them. When you think about managing job logs, you should remember that if you let job logs accumulate, they can reduce the system's performance efficiency because of the overhead for each job on the system. If a job log exists, the system is maintaining information concerning that job. Therefore it is important either to utilize the automated cleanup options available in OS/400's Operational Assistant or to manually use the CLROUTQ (Clear Output Queue) command regularly to clear all the job logs from an output queue.
Deciding whether or not to generate a printed job log for jobs that are completed normally. Another concern related to the overhead involved with job logs is how to control their content (size) and reduce the number of them the system generates. The job description you use for job initiation is the object that controls the creation and contents of the job log. This job description has a parameter with the keyword LOG, which has three elements -- the message level and the message severity, both of which control the number of messages the system writes to a job log; and the message text level, which controls the level (i.e., amount) of message text written to the job log when the first two values create an error message.
Before discussing all three parameters, I should define the term 'message severity.' Every message generated on the AS/400 has an associated 'severity,' which you can think of as its priority. Messages that are absolutely essential to the system's operation (e.g., inquiry messages that must be answered) have a severity of 99. Messages that are informational (e.g., messages that tell you a function is in progress) have a severity of 00. (For a detailed description of severity codes, you can refer to IBM's Programming: Control Language Reference, Volume 1, Appendix A, 'Expanded Parameter Descriptions.')
The first parameter, message level, specifies one of the following five logging levels (note that a high-level message is one sent to the program message queue of the program that received the request or commands being logged from a CL program):
0 -- No data is logged.
1 -- The only information logged is any message sent to the job's external message queue with a severity greater than or equal to the message severity specified in this LOG parameter.
2 -- In addition to the information logged at level 1 above, the following is logged:
• Any requests or commands logged from a CL program that cause the system to issue a message with a severity level that exceeds or is equal to that specified in the LOG parameter.
• All messages associated with a request or commands being logged from a CL program and that result in a high-level message with a severity greater than or equal to the message severity specified in the LOG parameter.
3 -- The same as level 2, with the additional logging of any requests or commands being logged from a CL program:
• All requests or commands being logged from a CL program.
• All messages associated with a request or commands being logged from a CL program and that result in a high-level message with a severity greater than or equal to the message severity specified.
4 -- The following information is logged:
• All requests or commands logged from a CL program and all messages with a severity greater than or equal to the severity specified, including trace messages.
The second element of the LOG parameter, message severity, determines which messages will be logged and which will be ignored. Messages with a severity greater than or equal to the one specified in this parameter will be logged in the job log according to the logging level specified in the previous parameter.
With the third element of the LOG parameter, the message text level, a value of *MSG specifies that the system write only first-level message text to the job log. A value of *SECLVL specifies that the system write both the message and help text of the error message to the job log. By setting the message text level value to *NOLIST, you ensure that any job initiated using that value in the job description does not generate a job log if the job is completed normally. Jobs that are completed abnormally will generate a job log with both message and help text present. Eliminating job logs for jobs that are completed normally can greatly reduce the number of job logs written into the output queue.
Determining how much information to include in the job logs. You can cause any interactive or batch job initiated with QDFTJOBD to withhold spooling of a job log if the job terminates normally. You simply create your user profiles with the default -- i.e., QDFTJOBD (Default Job Description) -- for the parameter JOBD (Job Description) and enter the command
CHGJOBD JOBD(QDFTJOBD) LOG(*SAME *SAME *NOLIST)
Is this approach wise? Interactive jobs almost always end normally. Therefore, changing the job description for such interactive sessions is effective. Do you need the information in those job logs? If you understand how your workstation sessions run (e.g., which menus are used and which programs are called), you probably do not need the information from sessions that end normally. You might need the information when errors occur, but you can generally re-create the errors at a workstation. You can rest assured with this approach that jobs ending abnormally will still generate a job log and provide helpful diagnostic information. Note that for interactive jobs, the LOG parameter on the SIGNOFF command overrides the value you specify on the job description. For instance, if on the job description you enter the value of *NOLIST in the LOG parameter and use the SIGNOFF LOG(*LIST) command to sign off from the interactive job, the system will generate a job log.
For batch jobs, the question of eliminating job logs is more complex than it is for interactive jobs. It is often helpful to have job logs from batch jobs that end normally as well as those that end abnormally, so someone can re-create events chronologically. When many types of batch jobs (e.g., nightly routines) run unattended, job log information can be useful. Remember, the job description controls job log generation, so you can use particular job descriptions when you want the system to generate a job log regardless of how the job ends. The job description includes the parameter LOGCLPGM (Log CL Program Commands). This parameter affects the job log in that a value of *YES instructs the system to write to the job log any loggable CL commands (which can happen only if you specify LOG(*JOB) or LOG(*YES) as an attribute of the CL program being executed). A value of *NO specifies that commands in a CL program are not logged to the job log.
A basic understanding of AS/400 print files will help you effectively and efficiently operate your system. Handling job logs is a simple, but essential, part of managing system resources. When you neglect to control the number of job logs on the system, the system is forced to maintain information for an excessive number of jobs, which can negatively affect system performance. And job logs are a valuable information source when a job fails to perform. Customize your system to handle job logs and other print files to optimize your operations.
Chapter 10 - Understanding Output Queues Printing. It's one of the most common things any computer does, and it's relatively easy with the AS/400. What complicates this basic task is that the AS/400 provides many functions you can tailor for your printing needs. For example, you can use multiple printers to handle various types of forms. You can use printers that exist anywhere in your configuration -- whether the printers are attached to local or remote machines or even to PCs on a LAN. You can let users view, hold, release, or cancel their own output; or you can design your system so their output simply prints on a printer in their area without any operator intervention except to change and align the forms. The cornerstone for all this capability is the AS/400 output queue. Understanding how to create and use output queues can help you master AS/400 print operations.
What Is an Output Queue?

An output queue is an object containing a list of spooled files that you can display on a workstation or write to a printer device. (You can also use output queues to write spooled output to a diskette device, but this chapter does not cover that function.) The AS/400 object type identifier for the output queue is *OUTQ. Figure 10.1a shows the AS/400 display you get on a workstation when you enter the WRKOUTQ (Work with Output Queue) command for the output queue QPRINT:

WRKOUTQ QPRINT

As the figure shows, the Work with Output Queue display lists each spooled file that exists on the queue you specify. For each spooled file, the display also shows the spooled file name, the user of the job that created the spooled file, the user data identifier, the status of that spooled file on the queue, the number of pages in the spooled file, the number of copies requested, the form type, and that spooled file's output priority (which is defined in the job that generates the spooled file). You can use function key F11=View 2 to view additional information (e.g., job name and number) about each spooled file entry. The status of a spooled file can be any of the following:
OPN The spooled file is being written and cannot be printed at this time (i.e., the SCHEDULE parameter of the print file is *FILEEND or *JOBEND).
CLO The file is spooled but unavailable for printing (i.e., the SCHEDULE parameter's value for the print file is *JOBEND).
HLD The file is spooled and on hold in the output queue. You can use option 6 to release the spooled file for printing.
RDY The file is spooled and waiting to be printed when the writer is available. You can use option 3 to hold the spooled file.
SAV The spooled file has been printed and is now saved in the output queue. (The spooled file attribute SAVE has a value of *YES. In contrast, a spooled file with SAVE(*NO) will be removed from the queue after printing.)
WTR The spooled file is being printed. You can still use option 3 to hold the spooled file and stop the printing, and the spooled file will appear on the display as HLD.

I have mentioned two options for spooled files -- option 3, which holds spooled files, and option 6, which releases them. The panel in Figure 10.1a shows all available options. Figure 10.1b explains each option.
How To Create Output Queues

Now that we've seen that output queues contain spooled files and let you perform actions on those spooled files, we can focus on creating output queues. The most common way output queues are created is through a printer device description. Yes, you read correctly! When you create a printer device description using the CRTDEVPRT (Create Device Description (Printer)) command or through autoconfiguration, the system automatically creates an output queue in library QUSRSYS with the same name as that assigned to the printer. This output queue is the default for that printer. In fact, the system places 'Default output queue for PRINTER_NAME' in the output queue's TEXT attribute.
An alternative method is to use the CRTOUTQ (Create Output Queue) command. The parameter values for this command determine attributes for the output queue. When you use the CRTOUTQ command, after entering the name of the output queue and of the library in which you want that queue to exist, you are presented with two categories of parameters -- the procedural ones (i.e., SEQ, JOBSEP, and TEXT) and those with security implications (i.e., DSPDTA, OPRCTL, AUTCHK, and AUT). For a look at some of the parameters you can use, see the CRTOUTQ panel in Figure 10.2.
The first of the procedural parameters, SEQ, controls the order of the spooled files on the output queue. You can choose values of either *FIFO (first in, first out) or *JOBNBR. If you select *FIFO, the system places new spooled files on the queue following all other entries already on the queue that have the same output priority as the new spooled files (the job description you use during job execution determines the output priority). Using *FIFO can be tricky because the following changes to an output queue entry cause the system to reshuffle the queue's contents and place the spooled file behind all others of equal priority:
• A change of output priority when you use the CHGJOB (Change Job) or CHGSPLFA (Change Spooled File Attributes) command
• A change in status from HLD, CLO, or OPN to RDY
• A change in status from RDY back to HLD, CLO, or OPN
The other possible value for the SEQ parameter -- *JOBNBR -- specifies that the system sort queue entries according to their priorities, using the date and time the job that created the spooled file entered the system. I recommend using *JOBNBR instead of *FIFO, because with *JOBNBR you don't have to worry about changes to an output queue entry affecting the order of the queue's contents.

The next procedural parameter is JOBSEP (job separator). You can specify a value from 0 through 9 to indicate the number of job separators (i.e., pages) the system should place at the beginning of each job's output. The job separator contains the job name, the job user's name, the job number, and the date and time the job is run. This information can help in identifying jobs. If you'd rather not use a lot of paper, you can eliminate the job separator by selecting a value of 0. Or you can enter *MSG for this value, and each time the end of a print job is reached, the system will send a message to the message queue for the writer.

Don't confuse the JOBSEP parameter with the FILESEP (file separator) parameter, which is an attribute of print files. When creating or changing print files, you can specify a value for the FILESEP parameter to control the number of file separators at the beginning of each spooled file. The information on the file separators is similar to that printed on the job separator but includes information about the particular spooled file.

When do you need the file separator, the job separator, or both? You need file separators to help operators separate the various printed reports within a single job. You need job separators to help separate the printed output of various jobs and to quickly identify the end of one report and the beginning of the next. However, if you program a header page for all your reports, job separators are probably wasteful. Another concern is that for output queues that handle only a specific type of form, such as invoices, a separator wastes an expensive form. In reality, a person looking for a printed report usually pays no attention to separator pages but looks at the first page of the report to identify the contents and destination of the report. And as you can imagine, a combination of file separators and job separators could quickly launch a major paper recycling campaign. Understand, I am not saying these separators have no function. I am saying you should think about how helpful the separators are and explicitly choose the number you need.
The security-related CRTOUTQ command parameters help control user access to particular output queues and particular spooled data. To appreciate the importance of controlling access, remember that you can use output queues not only for printing spooled files but also for displaying them. What good is it to prevent people from watching as payroll checks are printed, if they can simply display the spooled file in the output queue?

The DSPDTA (display data) parameter specifies what kind of access to the output queue is allowed for users who have *READ authority. A value of *YES says that any user with *READ access to the output queue can display, copy, or send the data of any file on the queue. A value of *NO specifies that users with *READ authority to the output queue can display, copy, or send the output data only of their own spooled files unless they have some other special authority. (Special authorities that provide additional function are *SPLCTL and *JOBCTL.)

The OPRCTL (operator control) parameter specifies whether or not a user who has *JOBCTL special authority can manage or control the files on an output queue. The values are *YES, which allows control of the queue and provides the ability to change queue entries, or *NO, which blocks this control for users with the *JOBCTL special authority. One problem you might face relating to security is how to allow users to start, change, and end writers without having to grant them *JOBCTL special authority, which also grants a user additional job-related authorities that might not be desirable (e.g., the ability to control any job on the system). An alternative is to write a program to perform such writer functions. You can specify that the program adopt the authority of its owner, and you would make sure that the owner has *JOBCTL special authority. During program execution, the current user adopts the special and object-specific authorities of the owner. When the program ends, the user has not adopted *JOBCTL authority and thus cannot take advantage of a security hole. If the user does not have *JOBCTL special authority or does not adopt this special authority, (s)he must have a minimum of *CHANGE authority to the output queue and *USE authority to the printer device.

The AUTCHK (authority check) parameter specifies whether the commands that check the requester's authority to the output queue should check for ownership authority (*OWNER) or for just data authority (*DTAAUT). When the value is *OWNER, the requester must have ownership authority to the output queue to pass the output queue authorization test. When the value is *DTAAUT, the requester must have *READ, *ADD, and *DELETE authority to the output queue. Finally, the AUT parameter specifies the initial level of authority allowed for *PUBLIC users. You can modify this level of authority by using the EDTOBJAUT (Edit Object Authority), GRTOBJAUT (Grant Object Authority), or RVKOBJAUT (Revoke Object Authority) command.

As you can see, creating output queues requires more than just selecting a name and pressing Enter. Given some appropriate attention, output queues can provide a proper level of procedural (e.g., finding print files and establishing the order of print files) and security (e.g., who can see what data) support.
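Pulling those parameters together, here is a minimal sketch (not from the book) of creating a secured output queue; the library, queue, and user names are placeholders:

CRTOUTQ OUTQ(PAYLIB/PAYCHECKS) SEQ(*JOBNBR) JOBSEP(0) +
        DSPDTA(*NO) OPRCTL(*NO) AUTCHK(*OWNER) AUT(*EXCLUDE) +
        TEXT('Payroll output queue -- restricted')
GRTOBJAUT OBJ(PAYLIB/PAYCHECKS) OBJTYPE(*OUTQ) USER(PAYCLERK) AUT(*CHANGE)

With DSPDTA(*NO), OPRCTL(*NO), and AUT(*EXCLUDE), casual users -- and even operators with *JOBCTL special authority -- are kept away from the spooled data; the queue's owner controls access by granting authorities explicitly, such as the *CHANGE granted to the hypothetical PAYCLERK profile here.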
Who Should Create Output Queues?

Who should create output queues? Although this seems like a simple question, it is important for two reasons: First, the owner can modify the output queue attributes as well as grant/revoke authorities to the output queue, which means the owner controls who can view or work with spooled files on that queue. Second, the AUTCHK parameter checks the ownership of the output queue as part of the authorization test when the output queue is accessed. So ownership is a key to your ability to secure output queues.

Here are a few suggestions. The system operator should be responsible for creating and controlling output queues that hold data considered public or nonsecure. With this ownership and the various authority parameters on the CRTOUTQ command, you can create an environment that lets users control their own print files and print on various printers in their area of work. For secure data (e.g., payroll, human resources, financial statements), the department supervisor profile (or a similar one) should own the output queue. The person who owns the output queue is responsible for maintaining the security of the output queue and can even explicitly deny access to DP personnel.
How Spooled Files Get on the Queue

It is very important to understand that all spooled output generated on the AS/400 uses a print file. Whether you enter the DSPLIB (Display Library) command using the OUTPUT(*PRINT) parameter to direct your output to a
report, create and execute an AS/400 query, or write a report-generating program, you are going to use a print file to generate that output. A print file is the means to spool output to a file that can be stored on a queue and printed as needed. Also, a print file determines the attributes printed output will have. This means you can create a variety of print files on the system to accommodate various form requirements.

Another essential fact to understand about spooling on the AS/400 is that normally all printed output is placed on an output queue to be printed. As mentioned in the previous chapter, the AS/400 is capable of bypassing the spool process to perform direct printing, but direct printing is normally avoided because of the performance and work management problems it creates.

With that said, we can examine the spooling process more closely. When a job generates a spooled file, that file is placed on an output queue. The output queue is determined by one of two methods: if the print file specifies an output queue, or is overridden to a specific output queue, the output from that print file is placed on that queue; if the print file does not specifically direct the spooled file, the file is placed on the output queue currently defined for that particular job. Figure 10.3 illustrates how one job can place spooled files on different output queues. The job first spools the nightly corporate A/R report to an output queue at the corporate office. Then the program creates a separate A/R report for each branch office and places the report on the appropriate output queue.
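For example, a minimal sketch (names are placeholders, not from the book) of routing one report to a branch-office queue by overriding the print file before the program runs:

OVRPRTF FILE(QSYSPRT) OUTQ(BRANCH01)
CALL PGM(ARBRANCH)
DLTOVR FILE(QSYSPRT)

Any output the program writes through print file QSYSPRT while the override is in effect is spooled to output queue BRANCH01 instead of the job's current output queue.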
How Spooled Files Are Printed from the Queue

So how do the spooled files get printed from the queue? The answer is no secret. You must start (assign) a writer to an output queue. You make spooled files available to the writer by releasing the spooled file, using option 6. You then use the STRPRTWTR (Start Printer Writer) command. The OUTQ parameter on that command determines the output queue to be read by that printer. When the writer is started to a specific output queue and you use the WRKOUTQ command for that specific output queue, the letters WTR appear in the Status field at the top of the Work with Output Queue display to indicate that a writer is assigned to print available entries in that queue. You can start a writer for any output queue (only one writer per output queue and only one output queue per writer). You don't have to worry about the name of the writer matching the name of the queue. For instance, to start printing the spooled files in output queue QPRINT, you can execute the STRPRTWTR command
STRPRTWTR WRITER(writer_name) OUTQ(QPRINT)

(Messages for file control are sent to the message queue defined in the printer's device description unless you also specify the MSGQ parameter.)

When you IPL your system, the program QSTRUP controls whether or not the writers on the system are started. When QSTRUP starts the writers, each printer's device description determines both its output queue and message queue. You can modify QSTRUP to start all writers, to start specific writers, or to control the output queues by using the STRPRTWTR command. After a writer is started, you can redirect the writer to another output queue by using the CHGWTR (Change Writer) command or by ending the writer and restarting it for a different output queue.

To list the writers on your system and the output queues they are started to, type the WRKOUTQ command and press Enter. You will see a display similar to the one in Figure 10.4. You can also use the WRKWTR (Work with Writer) command by typing WRKWTR and pressing Enter to get a display like the one in Figure 10.5.

It is important to understand that the output queue and the printer are independent objects, so output queues can exist with no printer assigned and can still have entries. The Operational Assistant (OA) product illustrates some implications of this fact. OA lets you create two output queues (i.e., QUSRSYS/QEZJOBLOG and QUSRSYS/QEZDEBUG) to store job logs and problem-related output, respectively. These output queues are not default queues for any printers. Entries are stored in these queues, and the people who manage the system can decide to print, view, move, or delete them.
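As a rough illustration of the QSTRUP approach mentioned above, the statements you add to your copy of the startup program might look something like this (a sketch only; the writer, library, and queue names are placeholders):

             STRPRTWTR  WRITER(PRT01)
             MONMSG     MSGID(CPF0000)
             STRPRTWTR  WRITER(INVPRT) OUTQ(ACCTLIB/INVOICES)
             MONMSG     MSGID(CPF0000)

The MONMSG commands keep the startup program from failing if a printer is varied off or a writer is already active when the system IPLs.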
A Different View of Spooled Files

The WRKOUTQ command allows you to work with all spooled files on a particular output queue. Another helpful command is the WRKSPLF (Work with Spooled Files) command. This command allows you to work with all spooled files generated by your job, even if those spooled files are on multiple output queues. Figure 10.6 represents the WRKSPLF command output for someone who works at the 'basic' OS/400 assistance level (one's assistance level is determined first at the user profile level by the ASTLVL parameter, then at the command level based on the last use of the command or what the user enters when prompting the ASTLVL parameter on the command). Notice that one spooled file is assigned to the printer 'CONTES3' while the other spooled files are 'unassigned.' They are definitely on an output queue; but since no printer is currently started for any of those output queues, the files are listed as 'unassigned.' This basic assistance level hides some of the technical details of spooled files and output queues unless you request more information by selecting option 8 'attributes' to display the spooled file detail information.
Figure 10.7 represents the WRKSPLF command output for someone who works at the 'intermediate' level (there is no 'advanced' assistance level for this command, so those at the advanced assistance level will also see this same panel). Now you can clearly see which output queue each spooled file is assigned to, the number of pages, the status, and the user who created the spooled file. You cannot see other spooled files on those same output queues since this WRKSPLF command works only with the current user's spooled files. You thus have two methods for working with spooled files. You will find that you use both in your daily operations, but the WRKOUTQ command is the more useful of the two for system operations, since you can see more than one job's spooled files.
How Output Queues Should Be Organized

The organization of your output queues should be as simple as possible. To start, you can let the system create the default output queues for each printer you create. Of course, you may want to modify ownership and some output queue attributes. At this point, you can send output to an output queue and there will be a printer assigned to print from that queue.

How can you use output queues effectively? Each installation must discover its own answer, but I can give you a few ideas. If your installation generates relatively few reports, having one output queue per available printer is the most efficient way to use output queues. Installations that generate large volumes of printed output need to control when and where these reports might be printed. For example, a staff of programmers might share a single printer. If you spool all compiled programs to the same queue and make them available to the writer, things could jam up fast; and important reports might get delayed behind compile listings being printed just because they were spooled to a queue with a writer. A better solution is to create an output queue for each programmer. Each programmer can then use a job description to route printed output to his or her own queue. When a programmer decides to print a spooled file, he or she moves that file to the output queue with the shared writer active. This means that the only reports printed are those specifically wanted. Also, you can better schedule printing of a large number of reports.
What about the operations department? Is it wise to have one output queue (e.g., QPRINT or PRT01) to hold all the spooled files that nightly, daily, and monthly jobs generate? You should probably spend a few minutes planning for a better implementation. I recommend you do not assign any specific output to PRT01. You should create specific output queues to hold specific types of spooled files. For instance, if you have a nightly job that generates sales, billing, and posting reports, you might consider having either one or three output queues to hold those specific files. When the operations staff is ready to print the spooled files in an output queue, they can use the CHGWTR command to make the writer available to that output queue. Another method is to move the spooled files into an output queue with a printer already available. This method lets you browse the queue to determine whether or not the reports were generated and lets you print these files at your convenience.

For some end users, you may want to make the output queue invisible. You can direct requested printed output to an output queue with an available writer in the work area of the end user who made the request. Long reports should be generated and printed only at night. The only things the user should have to do are change or add paper and answer a few messages.

What a mountain of information! And I've only discussed a few concepts for managing output queues. But this information should be enough to get you started and on your way to mastering output queues.
Chapter 11 - The V2R2 Output Queue Monitor

In applications that must handle spooled files, you may need a way to determine when spooled files arrive on an output queue. For instance, your application may need to automatically transfer any spooled file that arrives on a particular local output queue to a user on a remote system. Or perhaps you want to automatically distribute copies of a particular spooled file to users in the network directory. You may even want to provide a simple function that transfers all spooled files from one output queue to another while one of your printers is being repaired. In any case, you must find a way to monitor an output queue for new entries.
The Old Solution

If you are running pre-V2R2 OS/400, you can write a program that uses the following tried-and-true approach:
• Wake up periodically and perform a WRKOUTQ (Work with Output Queue) command specifying OUTPUT(*PRINT)
• Copy the output to a database file
• Read the database file and look for spooled file entries
• Determine whether an entry is new on the queue (you must be creative here)
• Perform the appropriate action for any new spooled files
Another option is to use the CVTOUTQ tool from the QUSRTOOL library or the version offered in Chapter 24, 'CL: You're Stylin' Now!' Both of these utilities convert the entries on an output queue to a database file, which you can then read and search for new spooled file entries. If you simply want to take a snapshot of all the entries on an output queue at any given time, you can do so easily with the approach outlined above or with either of the CVTOUTQ tools. Such a capability is useful when you want to perform a function against some or all of the spooled files on a queue and then delete those spooled files before taking the next snapshot. However, all these methods lack one fundamental ability that some applications require: the ability to easily identify new spooled file entries as they arrive on the output queue.
A Better Solution

With V2R2, you can easily determine when a new spooled file arrives on an output queue. The V2R2 versions of the CRTOUTQ (Create Output Queue) and CHGOUTQ (Change Output Queue) commands let you associate a data queue with an output queue. When you do, and a spooled file becomes ready (a 'RDY' status) on the output queue, OS/400 will send an entry to the associated data queue. The entry identifies the new spooled file, so your program can monitor the data queue and take appropriate action whenever a new spooled file appears.
A spooled file is always in one of several statuses on an output queue (e.g., RDY = ready, HLD = held). We are interested in the 'RDY' or 'ready' status. The 'ready' status signifies that a spooled file is ready to print. When a spooled file arrives on the output queue and is in the 'RDY' status, OS/400 sends an entry to the attached data queue (if one is attached). If you then hold that spooled file entry and again release the entry, another data queue entry is sent to the data queue. Each time a spooled file becomes ready to print on the output queue, an entry is sent to the data queue. Figure 11.1 shows the prompt screen for the CHGOUTQ (Change Output Queue) command. For the DTAQ keyword, a value of *NONE indicates that no data queue is associated with the output queue. If you enter the name of a data queue, OS/400 will send an entry to that data queue when a spooled file arrives on the associated output queue. The only requirement for entering a data queue name is that the data queue exist. The value *SAME for the DTAQ parameter indicates no change to the existing parameter value.
Figure 11.2 shows the prompt screen for the CRTDTAQ (Create Data Queue) command. A data queue associated with an output queue must have a MAXLEN value of at least 128. You can specify a longer MAXLEN, but the data queue entry that describes the spooled file will occupy only the first 128 positions. After you create the data queue and use the CHGOUTQ command to associate the data queue with an output queue, OS/400 will create a data queue entry for every spooled file that arrives on that output queue until you again execute the CHGOUTQ command and specify DTAQ(*NONE) to stop the function. Figure 11.3 represents the field layout of the spooled file data queue entry as documented in the Guide to Programming and Printing (SC41-8194). You can use a data queue defined longer than 128 bytes, but not shorter, since the entry uses 128 bytes for each entry.
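In command form, setting up the association might look like this (a minimal sketch; the data queue and output queue names are placeholders):

CRTDTAQ DTAQ(QGPL/PRT01MON) MAXLEN(128)
CHGOUTQ OUTQ(QUSRSYS/PRT01) DTAQ(QGPL/PRT01MON)

From that point on, every spooled file that reaches 'RDY' status on PRT01 produces a 128-byte entry on data queue PRT01MON. To stop the function, run CHGOUTQ OUTQ(QUSRSYS/PRT01) DTAQ(*NONE).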
The STRTFROUTQ Utility

One way you could use this new feature is to automatically transfer spooled files arriving on one output queue to another output queue. Such a utility is useful when a printer breaks and you want to reroute the broken printer's output to another printer. Because a printer can have only one output queue, you can't simply have another printer print the broken printer's output queue as well as its own. Formerly, an operator would have had to monitor the output queue and manually transfer the spooled files to another output queue. Users waiting for reports (especially if they have to walk to a remote printer to get them) don't like having to wait for the operator to transfer the spooled files or having to transfer the files themselves.

Figure 11.4 shows the source for the STRTFROUTQ command, a utility that incorporates the V2R2 data queue feature to automatically transfer spooled files from one output queue to another. Besides transferring spooled files, this simple, useful utility illustrates the use of the data queue capability.
To use STRTFROUTQ, you enter both a source and a target output queue. The source output queue is the one the program will monitor for new arrivals. The target output queue is the one to which the spooled files will be transferred.
Figure 11.5 is the source for command processing program (CPP) STRTFROTQC. This program is the workhorse that actually identifies and transfers the spooled files. STRTFROTQC first checks that both the source and target output queues exist. If either does not, the program sends a message to the program message queue and then ends, which causes the error message to be sent to the calling program. (Because you would normally be running this job in batch, the message would then be forwarded to the external queue -- the system operator.)

When both the source and target output queues exist, the program associates a data queue with the source output queue. If a data queue with the same name as the source output queue already exists in library QGPL, the program uses it. If such a data queue does not exist, the program creates one. I chose to put the data queue in library QGPL because all AS/400s have a library named QGPL, but you can use any other available library instead. After making sure the data queue exists, the program uses the CHGOUTQ command to associate the data queue with the source output queue.

At this point, the program enters 'polling' mode. At B in Figure 11.5, the program executes the RCVDTAQE command, a front end I wrote for the QRCVDTAQ API (for the code for my front ends to the data queue APIs, see 'A Data Queue Interface Facelift'). There is no equivalent OS/400 command. The four parameters listed at B are required; five optional parameters also exist for RCVDTAQE, but we don't need them here. The required parameters are
• DTAQ, the qualified data queue name (20 alphanumeric)
• DTALEN, the length of the data queue entry (5,0 decimal)
• DATA, the data queue entry (i.e., the data) (n alphanumeric; length as defined in the previous parameter)
• WAIT, how long the program should wait for an entry to arrive on the data queue (1,0 decimal; negative for a never-ending wait, n for the number of seconds to wait, or 0 for no wait at all)
After receiving the data for an entry, STRTFROTQC extracts the needed fields. The first field it extracts is the &end_flag field. This field, which is used later to end the program, is not part of the OS/400-supplied spooled file data queue entry. I'll explain the use and significance of this field in a moment. The values for &job, &user, &jobnbr, and &splf are all extracted from the data queue contents and transferred to character variables using the CHGVAR (Change Variable) command. Because the spooled file number is stored in binary, the CHGVAR command that extracts it uses the V2R2 %BIN or %BINARY function (D) to extract the value into a decimal field. Once the field values are extracted, the program executes the CHGSPLFA (Change Spooled File Attributes) command to move the spooled file identified in the data queue entry to the target output queue.

Now back to that &end_flag field. After you execute the STRTFROUTQ command, the job will wait indefinitely for new data queue entries because STRTFROTQC assigned variable &wait a negative value (A). You could use the ENDJOB (End Job) command to end the job, but this solution is messy: It doesn't clean up the data queue or the associated output queue. When you use data queues, you must be a careful housekeeper. Data queue storage accumulates constantly and is not freed until you delete the data queue.

A more elegant ending to such an elegant solution is the ENDTFROUTQ command and its CPP, ENDTFROTQC, shown in Figure 11.6 and Figure 11.7, respectively. When you are ready to end the STRTFROUTQ job, just enter the ENDTFROUTQ command and specify the name of the source output queue for the SOUTQ parameter. The CPP then sends a special data queue entry to the associated data queue; this entry has the value *TFREND in the first seven positions. Program STRTFROTQC checks each received data queue entry for the value *TFREND (C in Figure 11.5). When it detects this value, the program ends gracefully after deleting the data queue and disassociating the output queue so that no more data queue entries are created (E).
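If you don't want to use my RCVDTAQE front end, the same polling loop can be written against the QRCVDTAQ API directly. The following is a stripped-down sketch, not the actual STRTFROTQC source; the queue names are placeholders, and the substring offsets reflect my reading of the entry layout in Figure 11.3, so verify them before relying on this code:

             PGM
             DCL        VAR(&DTAQ)    TYPE(*CHAR) LEN(10) VALUE('PRT01MON')
             DCL        VAR(&DTAQLIB) TYPE(*CHAR) LEN(10) VALUE('QGPL')
             DCL        VAR(&DTALEN)  TYPE(*DEC)  LEN(5 0)
             DCL        VAR(&DATA)    TYPE(*CHAR) LEN(128)
             DCL        VAR(&WAIT)    TYPE(*DEC)  LEN(5 0) VALUE(-1)
             DCL        VAR(&JOB)     TYPE(*CHAR) LEN(10)
             DCL        VAR(&USER)    TYPE(*CHAR) LEN(10)
             DCL        VAR(&JOBNBR)  TYPE(*CHAR) LEN(6)
             DCL        VAR(&SPLF)    TYPE(*CHAR) LEN(10)
             DCL        VAR(&SPLFNBR) TYPE(*DEC)  LEN(5 0)
 LOOP:       CALL       PGM(QRCVDTAQ) PARM(&DTAQ &DTAQLIB &DTALEN &DATA &WAIT)
             /* Extract the spooled file identification from the entry */
             CHGVAR     VAR(&JOB)     VALUE(%SST(&DATA 13 10))
             CHGVAR     VAR(&USER)    VALUE(%SST(&DATA 23 10))
             CHGVAR     VAR(&JOBNBR)  VALUE(%SST(&DATA 33 6))
             CHGVAR     VAR(&SPLF)    VALUE(%SST(&DATA 39 10))
             CHGVAR     VAR(&SPLFNBR) VALUE(%BIN(&DATA 49 4))
             /* Move the new spooled file to the target output queue */
             CHGSPLFA   FILE(&SPLF) JOB(&JOBNBR/&USER/&JOB) +
                          SPLNBR(&SPLFNBR) OUTQ(QUSRSYS/BACKUPPRT)
             GOTO       CMDLBL(LOOP)
             ENDPGM

Unlike STRTFROTQC, this sketch has no error handling and no end flag; it simply loops forever, which is exactly why the *TFREND ending technique described above is worth having.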
If you collect utility programs, you will want to have the STRTFROUTQ and ENDTFROUTQ utilities in your toolkit. By taking advantage of a little-known OS/400 function, these commands make spooled file management a little easier and more efficient.
Chapter 12 - AS/400 Disk Storage Cleanup

OS/400 is a sophisticated operating system that tracks almost everything that happens on the system. This tracking is good, but it results in a messy by-product of system-supplied database files, journal receivers, and message queues. Users add to the clutter with old messages, unused documents, out-of-date records, and unprinted spool files. If you do nothing about this disorder, it will eventually strangle your system. But you can implement a few simple automated and manual procedures to keep your disk storage free of unwanted debris.
Automatic Cleanup Procedures

In August 1990, IBM introduced Operational Assistant (OA) as part of the operating system. Today's OA functions include automatic cleanup of some of the daily messes the AS/400 makes. OA's automatic cleanup is a good place to start when you're trying to clean up your AS/400's act. To access the OA Cleanup Tasks menu (Figure 12.1), you can type GO CLEANUP or select option 11 ('Customize your system, users, and devices') and then option 2 ('Cleanup tasks'), both from the OA main menu. You can use this menu to start and stop automatic cleanup and to change cleanup parameters. Option 1, 'Change cleanup options,' gives you the Change Cleanup Options display (Figure 12.2). (To bypass these menus, just prompt and execute the CHGCLNUP (Change Cleanup) command.) Note that you must have *ALLOBJ, *SECADM, and *JOBCTL authorities to change cleanup options. If option 1 does not appear on the Cleanup Tasks menu, you do not have the proper authorities.
Using the Change Cleanup Options screen, you can enable the automatic cleanup function and specify that cleanup should be run either at a specific time each day or as part of any scheduled system power-off. Specify *YES for the ALWCLNUP parameter to tell the system that you want to enable automatic cleanup. For STRTIME, you can enter a specific time (e.g., 23:00) for the cleanup to start, or you can enter *SCDPWROFF to tell the system to run cleanup during a system power-off that you've scheduled using OA's power scheduling function (the cleanup will not be run if you power off using the PWRDWNSYS (Power Down System) command or force a power-off using the control panel). Returning to the Cleanup Tasks menu, execute option 2, 'Start cleanup at scheduled time,' and your AS/400 will execute the cleanup each day at the specified time. Although it is ideal to run cleanup procedures when the system is relatively free of other tasks, it is not a requirement; and OA's cleanup will not conflict with application programs other than competing for CPU cycles.
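In command form, the same choices might look like this (a minimal sketch using only the values discussed above; STRCLNUP corresponds to menu option 2):

CHGCLNUP ALWCLNUP(*YES) STRTIME(*SCDPWROFF)
STRCLNUP

The first command enables cleanup and ties it to your scheduled power-off; the second starts the cleanup function so it runs as scheduled.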
The other parameters on the Change Cleanup Options screen let you control which objects the procedure will attempt to clean up. Each parameter allows a value of either *KEEP, which tells the system not to clean up the specified objects, or a number from 1 to 366 that indicates the number of days the objects or entries are allowed to stay on the system before the cleanup procedure removes them. The table in Figure 12.3 lists the cleanup options and the objects that they automatically clean up.

Look closely at the list of objects cleaned up by the 'Job logs and other system output' option. When you activate this option, the system places all job logs into output queue QUSRSYS/QEZJOBLOG and all dumps (e.g., system and program dumps) into output queue QUSRSYS/QEZDEBUG. The cleanup procedure removes from these output queues any spool files that remain on the system beyond the maximum number of days.

OS/400 uses a variety of database files and journals to manage operating system functions (e.g., job accounting, performance adjustment, SNADS, the problem log). Regular, hands-off cleanup of these journals and logs is the single most beneficial function of the automatic cleanup procedures; without this automatic cleanup, you have to locate the files and journals and write your own procedures to clean them up. This, along with the possibility that IBM could change or add to these objects in a future release of OS/400, makes this cleanup option the most helpful.

For OfficeVision/400 users, the 'OfficeVision/400 calendar items' option is an effective way to manage the size of several OfficeVision production objects. This option cleans up old calendar items and reorganizes key database files to help maintain peak performance.

If you ever want to stop the automatic daily cleanup, just select option 4, 'End cleanup,' to stop all automatic cleanup until you restart it using option 2.
Manual Cleanup Procedures

OA's automatic cleanup won't do everything for you. Figure 12.4 lists cleanup tasks you must handle manually. By 'manually,' I mean you must manually execute commands that clear entries or reorganize files, or you must write a set of automated cleanup tools that you can run periodically or along with OA's daily cleanup operations.
Save security audit journal receivers. If you activate the security audit journaling process, the receiver associated with QAUDJRN (the security audit journal) will grow continuously as long as it's attached to QAUDJRN. In fact, if you select all possible auditing values, this receiver will grow rapidly. As with all journal receivers, you are responsible for receiver maintenance. Here are my recommendations. First, do not place audit journal receivers into library QSYS (QAUDJRN itself must be in QSYS, but receivers can be in any library and in any auxiliary storage pool). Place them in a library (e.g., one called AUDLIB) that you can save and maintain separately. Each week, use the CHGJRN (Change Journal) command to detach the old receiver from QAUDJRN and attach a new one. Make sure your regular backup procedure saves the security journal receivers (only detached receivers are fully saved). If you specify 'System journals and system logs,' OA's automated cleanup operation deletes old security audit journal receivers that are no longer attached to the journal. Your backup strategy should include provisions for retaining several months of security journal receivers in case you need to track down a security problem.

Do an IPL regularly. Perform an IPL regularly (e.g., weekly or bimonthly). An IPL causes the system to delete temporary libraries, compress work control blocks, and free up unused addresses. The result is that more disk storage becomes available, and performance improves. During an IPL, the system also closes job logs and opens new ones. This housekeeping especially benefits system-supplied jobs (e.g., QSYSWRK, QSYSARB, QSPLMAINT), whose job logs can grow quite large between IPLs. After an IPL, system jobs require less time to write to the end of the job log, giving performance a boost. The more active your system, the more frequently you need to IPL -- on very active systems, you should IPL at least once a week.

Reclaim spool file storage. Like the S/38, the AS/400 has an operating-system-managed database file that contains a member for every spool file (e.g., job log, user report, Print key output) on the system. When you or the system creates a spool file, OS/400 uses an empty member of the spool file database (which is maintained in library QSPL) if one is available; otherwise, OS/400 creates a member. Whenever a spool file is deleted or printed, the operating system clears that file's database member, readying it for reuse. But even empty database members occupy a significant amount of space. If you create many spool files, this database can grow like Jack's beanstalk (I have seen QSPL grow to 150 MB). Again like the S/38, the AS/400 checks all empty QSPL database members at every IPL and deletes those that have been on the system for seven or more IPLs. But since V1R3 of OS/400, the AS/400 has provided two additional methods of cleaning up these empty database members. The first method is to use system value QRCLSPLSTG, which lets you limit the number of days an empty member remains on the system. Valid values include whole numbers from 1 to 366; the default is 8 days. When an empty member reaches the specified limit, the system deletes the member. *NONE is also a valid value, but it is impractical because it causes the system to generate a new database member for each spool file you create, thus overburdening the system and hurting performance. A value of *NOMAX tells the system to ignore automatic spool storage cleanup. The second housecleaning method for spool files is to execute the RCLSPLSTG (Reclaim Spool Storage) command. If you want to control spool file cleanup yourself rather than have the system do it, you can enter a value of *NOMAX for system value QRCLSPLSTG and then execute the RCLSPLSTG command whenever necessary.

Reclaim storage. You should use the RCLSTG (Reclaim Storage) command periodically to find damaged or lost objects and to ensure that all auxiliary storage is either used properly or available for use. Unexpected power failures, device failures, or other abnormal job endings can create unusual conditions in storage, such as damaged objects, objects with no owners, or even objects that exist in no library (i.e., the library name is absent). During a reclaim of storage, the system puts any damaged and lost objects it encounters into the recovery library, QRCL. After storage is reclaimed, you should look in QRCL, move any objects you want to keep to another library, and delete any remaining objects. Also, normal operations use a portion of auxiliary storage for permanent and temporary addresses. The RCLSTG command recovers and recycles addresses that the system used but no longer needs. You should run RCLSTG every six months or whenever you encounter messages about damaged objects or authority problems with objects. You also should monitor the permanent and temporary addresses the system uses by executing the WRKSYSSTS (Work with System Status) command. When WRKSYSSTS shows that permanent and temporary addresses exceed 20 percent of the available addresses, execute the RCLSTG command. Keep in mind that you can execute RCLSTG only when the AS/400 is in restricted state (i.e., all subsystems must be ended, leaving only the console active).
You can also use OA's disk analysis reports, which list the space taken up by damaged objects, objects without owners, and objects without libraries, to determine when you need to do a RCLSTG. For more information about the OS/400 RCLSTG function, see the Basic Backup and Recovery Guide (SC41-0036-01).

Remove unused licensed software. Another way to reclaim disk storage is to remove unused licensed program products (e.g., product demos, old third-party products you no longer use, and IBM products such as the OS/400 migration aids, once you're done with them). After saving libraries and objects you no longer need, delete the products you no longer need (you can use the GO LICPGM command to access the IBM licensed products menu).

Clean up user output queues. What about user-created spooled output? OA's cleanup addresses job logs and certain service and program dump output. But when users create spool files, these files also stay on the system until the user prints or deletes them. You need to either monitor user-created output queues or have users monitor their own. One tool some AS/400 customers find helpful is DLTOLDSPLF, a utility in library QUSRTOOL that finds and moves or deletes all spool files older than a specified number of days.
Reset message queue sizes. User-created messages can also add to the clutter on the AS/400. As messages accumulate, message queues grow to accommodate them; but queues don't become smaller as messages are removed. Although OA's automatic cleanup clears old messages from user and workstation message queues, it doesn't reset the message queue size. To reset the queue size, you must use the CLRMSGQ (Clear Message Queue) command to completely clear the message queue. Again, you can perform this task manually for specific message queues, or you can automate the process by writing a program.

Clear save files. If you frequently use save files for ad hoc or regular backups, you may want to define a manual or automated procedure to periodically clear those save files and reclaim that storage. After you save a save file's data to tape or diskette, clear the file by executing the CLRSAVF (Clear Save File) command.

Manage journal receivers. If you use journaling on your system, you need to manage the journals you create. As with the security audit journal receivers, detach and save receivers as part of your normal backup and recovery strategy. Then you can delete receivers you no longer need. For more information about journaling and managing journals and receivers, refer to the Programming: Backup and Recovery Guide (SC41-8079).

Delete old and unused objects. Old and unused objects of various kinds can accumulate on your system, unnecessarily using up storage and degrading performance. You should evaluate objects that are not used regularly to determine whether or not they should remain on the system. Remember to check development and test libraries as well as production libraries. Since V1R3, the description of each object on the system includes a 'last used' date and time stamp, as well as a 'last used' days counter. The object description also contains the 'last changed' date and time as well as the 'last saved' date and time.

Beginning with V2R2, you can use the Disk Space Tasks menu (Figure 12.5) to collect information about and analyze disk space utilization. You can call this menu directly by typing GO DISKTASKS, or you can access it through the main OA menu. As you can see, the menu options let you collect and print disk space information as well as actually work with libraries, folders, and objects. When you select option 1 to collect disk space information, you'll see the prompt in Figure 12.6. You can collect disk space information at a specified date and time by selecting option 1. Selecting 2 or 3 tells the system to collect information at the specified interval. Whichever option you choose, the system collects information about objects (e.g., database files, folders (including shared folders), programs, commands) and stores it in file QUSRSYS/QAEZDISK. You can then select option 2 on the Disk Space Tasks menu to print reports that analyze disk space usage by library, folder, owner, or specific object. Or you can print a disk information system summary report. Because the data is collected in a database file, you can also perform ad hoc interactive SQL queries, use Query/400, or write high-level language programs to get the information you need.

Purge and reorganize physical files. An active database environment can contribute to the AS/400's sloppy habits. One problem is files in which records accumulate forever. You should examine your database to determine whether any files fit this description and then design a procedure to handle the 'death' of active records.
In some situations, you can simply delete records that are no longer needed. In other situations, you might want to archive records before you delete them. In either case, you certainly won't want to delete or move records manually; instead, look for a public-domain or vendor-supplied file edit utility or tool. Deleting records does not free disk space by itself, however. Deleted records continue to occupy disk space until you execute a RGZPFM (Reorganize Physical File Member) command. You could write a custom report to search for files with a high percentage of deleted records and then manually reorganize those files. Or you could go one step further and write a custom utility that would search for those files and automatically reorganize them using the RGZPFM OS/400 command.

Clean up OfficeVision/400 objects. OfficeVision/400 can devour disk space unless you clean up after it religiously. Encourage OfficeVision/400 users to police their own documents and mail items and to delete items they no longer need. You can use the QRYDOCLIB (Query Document Library) command as a reporting tool to monitor document and folder maintenance. You might also want to limit the auxiliary storage available to each user by using the MAXSTG parameter on each user profile.
Figure 12.7 lists the OfficeVision/400 database files you should reorganize regularly (every little bit helps with the OfficeVision performance hog!). You will probably want to write a CL program that reorganizes these files and run that program when OfficeVision is not in use.
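A minimal sketch of such a reorganization program (the file names are placeholders; substitute the OfficeVision/400 files listed in Figure 12.7 or your own high-delete-activity files):

             PGM
             RGZPFM     FILE(QUSRSYS/OFCFILE1)
             MONMSG     MSGID(CPF0000)
             RGZPFM     FILE(QUSRSYS/OFCFILE2)
             MONMSG     MSGID(CPF0000)
             ENDPGM

The MONMSG after each RGZPFM lets the program continue with the remaining files if one of them is locked or in use.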
Enhancing Your Manual Procedures

You can handle many of the manual tasks I've mentioned by using the QEZUSRCLNP job to incorporate your own cleanup programs and commands into OA's automatic cleanup function. QEZUSRCLNP is essentially an empty template that gives you a place to add your own cleanup code. Every time OA's automatic cleanup function is run, it calls QEZUSRCLNP and executes your code.

To add your enhancements to QEZUSRCLNP, first use the RTVCLSRC (Retrieve CL Source) command to retrieve the source statements for QEZUSRCLNP (Figure 12.8) from library QSYS. Then insert your cleanup commands or calls to your cleanup programs into the QEZUSRCLNP source. Be sure to add your statements after the SNDPGMMSG (Send Program Message) command for message CPI1E91 to ensure that, after your cleanup job has ended, the system sends a completion message to the system operator message queue. Finally, compile your copy of QEZUSRCLNP into a library that appears before QSYS on the system library list. (You can modify the system library list by editing the QSYSLIBL system value.) I caution you against replacing the system-supplied version of the program by compiling your copy of QEZUSRCLNP into QSYS. By using a different library, you can preserve the original program and avoid losing your modified program the next time you load a new release of the operating system.

In OA's automated cleanup function, the AS/400 gives you the services of a maid to solve some simple cleanup issues. Use the function. But your cleanup shouldn't stop there. You also need to develop and implement procedures to maintain system-supplied and user-defined objects, such as spool and save files.
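To recap the QEZUSRCLNP steps in command form (a sketch only; MYLIB is a placeholder for whatever library you choose, and that library must appear ahead of QSYS on the system library list):

RTVCLSRC PGM(QSYS/QEZUSRCLNP) SRCFILE(MYLIB/QCLSRC) SRCMBR(QEZUSRCLNP)
/* Edit member QEZUSRCLNP, adding your commands after the SNDPGMMSG for CPI1E91 */
CRTCLPGM PGM(MYLIB/QEZUSRCLNP) SRCFILE(MYLIB/QCLSRC) SRCMBR(QEZUSRCLNP)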
Chapter 13 - All Aboard the OS/400 Job Scheduler!

The job scheduling function, new with V2R2, lets you schedule jobs to run at dates and times you choose without performing any add-on programming. There are two V2R2 additions that let you control job scheduling:

• new parameters on the SBMJOB command
• the new job schedule object
The job schedule function was made possible by enhancing the operating system with QJOBSCD, a new system job that is started automatically when you IPL the system. This job monitors scheduled job requirements, then submits and releases scheduled jobs at the appropriate date and time.
Arriving on Time

The SBMJOB command, of course, places a job on a job queue for batch processing, apart from an interactive workstation session. Starting with V2R2, the new SCDDATE and SCDTIME parameters let you specify a date and time for the job to be run. This scheduling method is a one-time shot; you use it for a job that you want to run only once, at a later date and/or time. If you want a job to run more than once, you'll have to remember to submit it each time (or use the job schedule object, as I discuss later).

When you use the new parameters to indicate a schedule date and/or time, the SBMJOB command places the job on a job queue in a scheduled state (SCD) until the date and time you specified; then the system releases the job on the job queue and processes it just like any other submitted job. If you specify HOLD(*YES) on the SBMJOB command, at the appointed time the job's status on the queue will change from scheduled/held (SCD HLD) to held (HLD). You can then release the job when you choose.

The default value for the SCDDATE and SCDTIME parameters is *CURRENT, which indicates that you want to submit the job immediately; so if you don't specify a value for these parameters, the SBMJOB command works just as it always has. Otherwise, you'll usually specify an exact date (in the same format as the job's date) and time for the job to run. There are, however, other possible special values that you may find useful for the SCDDATE parameter.
If you indicate SCDDATE(*MONTHSTR), the job will run at the scheduled time on the first day of the month. SCDDATE(*MONTHEND) will run the job on the last day of the month. (No more '30 days hath September...' or counting on your fingers!) Or you can specify SCDDATE(*MON) or *TUE, *WED, *THU, *FRI, *SAT, or *SUN to run the job on the specified day of the week. During which month, on which Monday, and so on, will your job be run? That depends. For example, if today is the first day of the month and you specify SCDDATE(*MONTHSTR) and the current time is earlier than the time in the SCDTIME parameter, the job will run today. Otherwise, it'll wait until next month. Similar logic applies for other SCDDATE and SCDTIME possibilities.

If you remove a scheduled job from a job queue, the job will not run, even when the scheduled time and date occur. You can remove a job from the queue either by using the CLRJOBQ (Clear Job Queue) command or by using the WRKJOBQ (Work with Job Queue) command and ending the job. Holding a job queue that includes a scheduled job can delay execution of the job, but it will not prevent the job from running when you release the job queue, even if the scheduled time has passed.
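For example, a minimal sketch (the job, library, and program names are placeholders) of a one-time month-end submission:

SBMJOB CMD(CALL PGM(APPLIB/MTHEND)) JOB(MONTHEND) +
       SCDDATE(*MONTHEND) SCDTIME(233000)

The job sits on its job queue in SCD status until 11:30 p.m. on the last day of the month, then is released and runs like any other batch job.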
Running on a Strict Schedule

In addition to enhancing the SBMJOB command, V2R2 introduces a new type of AS/400 object, the job schedule, with a system identifier of *JOBSCD. (Sorry, Canadians and Brits, IBM didn't pick *JOBSHD.) The job schedule is a timetable that contains descriptive entries for jobs to be executed at a specific date, time, and/or frequency. It is most useful for jobs that you want to run repeatedly according to a set schedule. If a job is on the job schedule, you need not remember to submit it for every execution; the operating system takes care of that chore. The job schedule function is documented in the Work Management Guide (SC41-8078).

One job schedule exists on the system: object QDFTJOBSCD in library QUSRSYS. Although its name indicates that this object is the default job schedule, it is the only one. The operating system offers no commands to create, change, or delete your own customized job schedules... yet. You can manipulate the entries in the job schedule using the following new commands:

• ADDJOBSCDE (Add Job Schedule Entry)
• CHGJOBSCDE (Change Job Schedule Entry)
• HLDJOBSCDE (Hold Job Schedule Entry)
• RLSJOBSCDE (Release Job Schedule Entry)
• RMVJOBSCDE (Remove Job Schedule Entry)
• WRKJOBSCDE (Work with Job Schedule Entries)
Figure 13.1 shows a sample list display that appears when you run the WRKJOBSCDE command. When you select option 5 (Display details) for an entry, you get a display such as that in Figure 13.2. This example shows the details of a job my system runs every weekday morning at 3:30.
Each job schedule entry is made up of many components that define the job to be run and describe the environment in which it will run. Figure 13.3 describes those components and lists the parameter keywords the job-scheduling CL commands use. With V2R3, you can print a list of your job schedule entries by entering the
WRKJOBSCDE command, followed by a space and OUTPUT(*PRINT). For detailed information on each job schedule entry on the list, follow the WRKJOBSCDE command with PRTFMT(*FULL).

OS/400 gives each job schedule entry a sequence number to identify it uniquely. You usually refer to an entry by its job name, but if there are multiple entries with the same job name, you also have to specify the sequence number to correctly refer to the entry. For example, in Figure 13.1, there are three entries named VKEMBOSS. Displaying the details for each, however, would show that they each have a unique sequence number.

The frequency component (FRQ) of a schedule entry may seem confusing at first. It's obvious that you can schedule a job to run *ONCE, *WEEKLY, or *MONTHLY; but what if you want to schedule a daily job? In that case, you need to use an additional schedule entry element, the scheduled day (SCDDAY). To run a job every day, specify FRQ(*WEEKLY) and SCDDAY(*ALL). You can also run the job only on weekdays, using FRQ(*WEEKLY) and SCDDAY(*MON *TUE *WED *THU *FRI). Just Thursdays? That's easy: FRQ(*WEEKLY) and SCDDAY(*THU).

The scheduled date component (SCDDATE) of a schedule entry tells the system a specific date to run the job. If you use the SCDDAY parameter, you cannot use the SCDDATE parameter; the two don't make sense together. The combination of FRQ(*MONTHLY) and SCDDATE(*MONTHEND) will run a job on the last day of each month, regardless of how many days each month has. The relative day of the month parameter (RELDAYMON) gives the job schedule even more flexibility. For instance, if you want to run a job only on the first Tuesday of each month, you indicate values for three parameters: FRQ(*MONTHLY) SCDDAY(*TUE) RELDAYMON(1).

Sometimes your computer can't run a job at the scheduled time; for example, your AS/400 may be powered off or in the restricted state at the time the job is to be submitted. In the recovery action component (RCYACN) of the schedule entry, you can tell the computer to take one of three actions. RCYACN(*SBMRLS) submits the job to be run as soon as possible. RCYACN(*SBMHLD) submits the job, but holds it until you explicitly release it for processing. RCYACN(*NOSBM) is the 'Snooze, you lose' option; the job scheduler will not attempt to submit the job after the scheduled time passes. Notice that this feature applies only to jobs scheduled from the job schedule, not to those you submit with SBMJOB.
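Putting those components together, here are two sketches (not from the book; the job, library, and program names are placeholders):

ADDJOBSCDE JOB(NIGHTLY) CMD(CALL PGM(APPLIB/NIGHTLY)) +
           FRQ(*WEEKLY) SCDDAY(*MON *TUE *WED *THU *FRI) +
           SCDTIME(033000) RCYACN(*SBMRLS)

ADDJOBSCDE JOB(BOARDRPT) CMD(CALL PGM(APPLIB/BRDRPT)) +
           FRQ(*MONTHLY) SCDDAY(*TUE) RELDAYMON(1) SCDTIME(060000)

The first entry runs a nightly job at 3:30 a.m. every weekday and resubmits it as soon as possible if the system was down at the scheduled time; the second runs a report at 6:00 a.m. on the first Tuesday of each month.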
Two Trains on the Same Track When I was setting up job schedule entries for my system, I discovered that many of the entries I made were similar. I found myself wanting to copy a job schedule entry to save myself from the drudgery of retyping long, error-prone command strings. Because the job schedule commands don't offer such a function, I decided to write a command that does. My command, CRTDUPSCDE (Create a Duplicate Job Schedule Entry), is easy to use. You simply supply the command with the job name of the existing job schedule entry you want to copy from and a name you want to give the copy: CRTDUPSCDE FROMJOB(job-name) NEWNAME(new-name) The NEWNAME parameter defaults to *FROMJOB, indicating that the new entry should have the same name as the original; the system will give the entry a unique sequence number.
Figure 13.4 provides the code for the CRTDUPSCDE command. Figure 13.5 is the command processing program (CPP). CRTDUPSCDE uses the IBM-supplied program QWCLSCDE, a new API that lists job schedule entries in a user space. (See A in Figure 13.5.) After retrieving the 'from' job schedule entry (which you specified in the FROMJOB parameter), the program breaks the output from the API down into the parameter values that describe the entry; then it uses the same values in the ADDJOBSCDE command to create a new entry based on the existing one. After that, it's a simple matter to use the CHGJOBSCDE command to make any minor changes the new entry needs. (You can find documentation for QWCLSCDE and the user space layouts used in CRTDUPSCDE in the System Programmer's Interface Reference (SC21-8223).) The command also uses two user space APIs: QUSCRTUS and QUSRTVUS.
In addition to being easy to use, CRTDUPSCDE is very basic. To conserve space, I didn't include some features that you might want to add. For example, I used very basic error trapping instead of error-message-handling routines. Also, the command retrieves only the first instance of a schedule entry with the name you choose, even though the job schedule could contain multiple entries of the same name. If you have multiple same-name entries and you want to retrieve one other than the first, you'll need to add the code to loop through the data structure that returns the name. Finally, my command doesn't duplicate the OMITDATE values from the original. Doing so would require adding array-handling techniques to the CL program, which isn't worth the effort to me because I hardly ever use this parameter. I encourage you to experiment with enhancing this command to suit your own needs.
Derailment Dangers A few cautionary comments are in order before we finish our exploration of the new OS/400 job schedule object. There are a few situations I ran into while implementing the function that the Work Management Guide doesn't adequately cover.

It is important to know that a job submitted by the job schedule will not retain the contents of the local data area (LDA) from the job that originally added it to the job schedule. In my tests of the new function, I was never able to run the scheduled job with anything other than a blank LDA. When you submit a job with the SBMJOB command, however, the system passes a copy of the LDA to the submitted job. It's a common practice to store variable processing values in the LDA as a handy means of communicating between jobs or between programs within a job. If your application depends upon specific values in the LDA, you may want to schedule jobs using the SBMJOB command instead of creating a job schedule entry.

I've discovered an alternate technique, however, that still lets me take advantage of a job schedule entry for recurring jobs that need the LDA. When I add the job schedule entry, I also create a unique data area that contains the proper values in the proper locations, according to the specifications in the submitted program. It's then a simple matter to make a minor change to the submitted program so that the program either uses the new data area instead of the LDA or retrieves the new data area and copies it to the LDA using the RTVDTAARA (Retrieve Data Area) and/or CHGDTAARA (Change Data Area) commands. This data area should remain a permanent object on the system as long as the dependent job schedule entry exists.

SBMJOB has another benefit that a job schedule entry does not offer. When you use SBMJOB to schedule a job, the system defaults to using an initial library list that is identical to the library list currently in use by the submitting job. The job schedule entry, on the other hand, depends upon the library list in its JOBD component. If you've gotten out of the old S/38 habit of creating unique job descriptions primarily to handle unique library lists, you'll need to resurrect this technique to describe the library list for job schedule entries.

It's also noteworthy that, just like the railroad, the job scheduling function may not always run on time, no matter whether you use SBMJOB or the job schedule object. Although you can schedule a job to the second, the load on your system determines when the job actually runs. The system submits a job schedule entry to a job queue or releases a scheduled job already on a job queue approximately on time -- usually within a few seconds. But if there are many jobs waiting on the job queue ahead of the scheduled job, it will simply have to wait its turn. If it's critical that a job run at a specific time, you can help by ensuring that the job's priority (parameter JOBPTY) puts it ahead of other jobs on the queue; but the job may still have to wait for an available activity slot before it can begin. And as I mentioned earlier, if your system is down or in a restricted state at the appointed time, the job schedule may not submit the job at all.

Changing your system's date or time can also affect your scheduled jobs. If you move the date or time system values backward, the effect is fairly straightforward: The system will not reschedule any job schedule entries that were run within the repeated time. For example, if at three o'clock you change your system's time back to one o'clock, the job you had scheduled to run at two o'clock won't repeat itself. The system stores a 'next submission' date and time for each entry, which it updates each time the job schedule submits a job.

Changing the system's date or time forward, however, can be tricky. If the change causes the system to skip over a time when you had a job scheduled, the job schedule's action depends upon whether the system is in restricted state when you make the change. If the system is not restricted, any missed job schedule entries are submitted immediately (only one occurrence of each missed entry is submitted even if, for example, you've scheduled a job to run daily and moved the system date ahead two days). If the system is in restricted state when you change the date or time system values, the system refers to the RCYACN attributes of the missed job schedule entries to determine whether to submit the jobs when you bring the system out of its restricted state.

The job scheduling function in V2R2 does not offer job completion dependencies, regardless of which method you use. For example, if you use the job schedule to run a daily transaction posting, then a daily closing, you cannot condition the closing job to be run only if the posting job completes successfully. Some third-party scheduling products offer this capability. Without a third-party product, if you need to schedule jobs with such a completion requirement, your best bet is probably to incorporate the entire procedure into a single CL program with appropriate escape routes defined in case one or more functions fail.
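Here's a minimal sketch of the data-area workaround I described above for scheduled jobs that need LDA values. The data area name, library, layout, and program are hypothetical, so treat this as a pattern rather than finished code:

/* One-time setup: create a permanent data area that mirrors the */
/* LDA layout the scheduled program expects (positions 1-30 are  */
/* assumed to hold a company code and a run option)              */
CRTDTAARA DTAARA(MYLIB/POSTPARMS) TYPE(*CHAR) LEN(30) +
          VALUE('COMPANY01 *REPRINT')

/* In the scheduled CL program: copy the permanent data area into */
/* the LDA before the existing LDA-dependent logic runs           */
PGM
  DCL       VAR(&PARMS) TYPE(*CHAR) LEN(30)
  RTVDTAARA DTAARA(MYLIB/POSTPARMS) RTNVAR(&PARMS)
  CHGDTAARA DTAARA(*LDA (1 30)) VALUE(&PARMS)
  /* ...existing processing that reads the LDA goes here... */
ENDPGM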
Chapter 14 - Keeping Up With the Past For many of you, AS/400 job processing is new, or at least different. There can be multiple subsystems, job queues, output queues, and messages flying all over the place at once. You can sign on to the system and submit several batch jobs for processing immediately, or you can submit jobs to be run at night. At the same time, the system operator can run jobs and monitor their progress, and users at various remote sites can sign on to the system. With so much going on, you might wonder how you can possibly manage and audit such activity.

One valuable AS/400 tool at your fingertips is the history log, which contains information about the operation of the system and system status. The history log tracks high-level activities such as the start and completion of jobs, device status changes, system operator messages and replies, attempted security violations, and other security-related events. It records this information in the form of messages, which are stored in files created by the system.

You can learn a lot from history -- even your system's history. By maintaining an accurate history log, you can monitor specific system activities and reconstruct events to aid problem determination and debugging efforts. Please note that history logs are different from job logs. Whereas job logs record the sequential events of a job, the history log records certain operational and status messages pertaining to all the jobs on a system. You can review the history log to find a particular point of interest and then reference a job log to investigate further.
System Message Show and Tell You can display the contents of the history log on the AS/400 by executing the DSPLOG (Display Log) command:
DSPLOG LOG(QHST)
The resulting display resembles the screen in Figure 14.1. The DSPLOG command lets you look at the contents of the history log as you would messages in a message queue. Because system events such as job completions, invalid sign-on attempts, and line failures are listed as messages in file QHST, you can place the cursor on a particular message and press the Help key to display second-level help text for the message.
The DSPLOG command has several parameters that provide flexibility when inquiring into the history log. To prompt for parameters, type in DSPLOG and press F4. The system displays the screen shown in Figure 14.2. The parameters for the DSPLOG command are as follows:
LOG The system refers to the history log as 'QHST.' QHST provides many of the functions the QSRV and QCHG logs provide on the S/36.
PERIOD You can enter a specific time period or take the defaults for the beginning and ending period. Notice that the default for 'Beginning time' is the earliest available time and the default for 'Beginning date' is the current date. To look at previous days, you must supply a value. Enter values as six-digit numbers (i.e., time as hhmmss and date as mmddyy).
OUTPUT You are probably familiar with this parameter. The value * results in output to the screen, and *PRINT results in a printed spooled file.
JOB You use the JOB parameter to search for a specific job or set of jobs. You can enter just the job name, in which case the system might find several jobs with the same name that ran during a given period of time. Or you can enter the specific job name, user name, and job number to retrieve the history information for a particular job.
MSGID Like the JOB parameter, this parameter helps narrow your search. You can specify one message or multiple messages. By specifying '00' as the last two digits of the message ID, you can retrieve related sets of messages. For example, if you enter the message ID CPF2200, the system retrieves all messages from CPF2200 to CPF2299 (these are all security-related messages).
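To illustrate how these parameters work together, here's a hedged example of narrowing a QHST inquiry; the dates, job qualifier, and message range are made-up values:

/* Print all security-related messages (CPF2200-CPF2299) logged  */
/* between 6:00 a.m. and 6:00 p.m. on a given date               */
DSPLOG LOG(QHST)                               +
       PERIOD((060000 051594) (180000 051594)) +
       MSGID(CPF2200)                          +
       OUTPUT(*PRINT)

/* Display history entries for one specific job */
DSPLOG LOG(QHST) JOB(123456/QPGMR/DAILYPOST)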
History Log Housekeeping The history log consists of a message queue and system files that store history messages. The files belong to library QSYS and are named QHSTyydddn, where yyddd is the Julian date on which the log file was created and n is a sequence character (0 through 9 or A through Z) appended to the date. The text description maintained by the system contains the beginning and ending date and time for the messages contained in that file, which is helpful for tracking activities that occurred during a particular time period. You can use the DSPOBJD (Display Object Description) command to display a list of history files. The command
DSPOBJD OBJ(QSYS/QHST*) OBJTYPE(*FILE)
results in a display similar to the one shown in Figure 14.3. The system creates a new file each time the existing file reaches its maximum size limit, which the system value QHSTLOGSIZ controls. Because the system itself does not automatically delete files, it is important to develop a strategy for deleting the log files (to save disk space) and for using the data before you delete the files.
You should maintain enough recent history on disk to be able to easily inquire into the log to resolve problems. The best way to manage history logs on your system is to take advantage of the automatic cleanup capabilities of Operational Assistant (OA). The OA category 'System Journals and System Logs' lets you specify the number of days of information to keep in the history log. OA then deletes log files older than the specified number of days. (For more information about Operational Assistant, see IBM's AS/400 System Operations: Operational Assistant Administrator's Guide (SC41-8082).) Keep in mind that OA does not provide a strategy for archiving the history logs to media from which you can easily retrieve them. If you activate OA cleanup procedures, make sure that once each month you make a save copy of the QHST files. If you are remiss in performing this save, OA will still delete the log files. If you elect not to use the automatic cleanup that OA offers, you can do the following:
• On the first day of each month, save all QHST files in library QSYS to tape. It's probably wise to use the same set of tapes and save to the next sequence number. For quick reference, record on the tape label the names of the beginning and ending log files. (A minimal sketch of this step follows the list.)
• You can use the DLTQHST utility (from the QUSRTOOL library) to delete old history files. View the existing log files on the system and delete any that are more than 30 days old. (Hint: Remember that the text description contains the beginning and ending date and time to help you determine the age of the file.)
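Here's a minimal sketch of that monthly housekeeping step. The tape device, retention period, and file name are assumptions; DLTQHST from QUSRTOOL can replace the manual DLTF shown here:

/* Save all QHST log files in QSYS to tape */
SAVOBJ OBJ(QHST*) LIB(QSYS) OBJTYPE(*FILE) DEV(TAP01)

/* After verifying the save, delete log files more than 30 days */
/* old -- for example, a file created on day 032 of 1994        */
DLTF   FILE(QSYS/QHST94032A)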
To determine how much history log information to keep, you should consider the disk space required to store the information and schedule your file saves accordingly. In most cases, it is a good idea to keep 30 days of on-line history, although large installations with heavy history log activity may need to save and delete objects every 15 days.
Inside Information Careful review of history logs can alert you to unusual system activity. If, for example, the message 'Password from device DSP23 not correct for user QSECOFR' appears frequently in the log, you might be prompted to find out who uses DSP23 and why (s)he is trying to sign on with the system security officer profile. Or you might notice the message 'Receiver ACG0239 in JRNLIB never fully saved (I C).' The second-level help text would tell you which program was attempting to delete the journal receiver. If these events are brought to your attention, you might be able to prevent the loss of important information.

Maintaining a history log lets you reconstruct events that have taken place on the system. In reviewing its history log, one company discovered that a programmer had planted a system virus. A history log can also alert you to less serious occurrences (e.g., a specific sequence of jobs was not performed exactly as planned). Or you can use it to review all completion messages to find out how many jobs are executed on your system each day or which job ended abnormally. As you monitor the history log (preferably every day), you will soon start to recognize the messages that are most beneficial to you.

The history log is a management tool that lets you quickly analyze system activities. It provides a certain amount of security auditing and lets you determine whether and when specific jobs were executed and how they terminated. Using and maintaining a history log is not difficult and could prove to be time well spent.
Note: The security journaling capabilities that OS/400 offers using the audit journal QAUDJRN provide additional event-monitoring capabilities specifically related to security. This new journal is capable of monitoring for the security-related events recorded in the QHST as well as additional events that QHST does not record. For more information concerning QAUDJRN, see the AS/400 Security Reference (SC41-8083).
Chapter 15 - Backup Basics The most valuable component of any computer system isn’t the hardware or software that runs the computer but, rather, the data that resides on the system. If a system failure or disaster occurs, you can replace the computer hardware and software that runs your business. Your company’s data, however, is irreplaceable. For this reason, it’s critical to have a good backup and recovery strategy. Companies go out of business when their data can’t be recovered.

What should you be backing up? The simple answer to this question is that you should back up everything. A basic rule of backup and recovery is that if you don’t save it, it doesn’t get restored. However, you may have some noncritical data (e.g., test data) on your system that doesn’t need to be restored and can be omitted from your backup.

When and how often do you need to back up? Ideally, saving your entire system every night is the simplest and safest backup strategy. This approach also gives you the simplest and safest strategy for recovery. Realistically, though, when and how you run your backup, as well as what you back up, depend on the size of your backup window — the amount of time your system can be unavailable to users while you perform a backup. To simplify recovery, you need to back up when your system is at a known point and your data isn’t changing.

When you design a backup strategy, you need to balance the time it takes to save your data with the value of the data you might lose and the amount of time it may take to recover. Always keep your recovery strategy in mind as you design your backup strategy. If your system is so critical to your business that you don’t have a manageable backup window, you probably can’t afford an unscheduled outage either. If this is your situation, you should seriously evaluate the availability options of the iSeries, including dual systems. For more information about these options, see “Availability Options.”
Designing and Implementing a Backup Strategy You should design your backup strategy based on the size of your backup window. At the same time you design your backup strategy, you should also design your recovery strategy to ensure that your backup strategy meets your system recovery needs. The final step in designing a backup strategy is to test a full system recovery. This is the only way to verify that you’ve designed a good backup strategy that will meet your system recovery needs. Your business may depend on your ability to recover your system. You should test your recovery strategy at your recovery services provider’s location. When designing your backup and recovery strategy, think of it as a puzzle: The fewer pieces you have in the puzzle, the more quickly you can put the pieces of the puzzle together. The fewer pieces needed in your backup strategy, the more quickly you can recover the pieces. Your backup strategy will typically be one of three types:
• Simple — You have a large backup window, such as an 8- to 12-hour block of time available daily with no system activity.
• Medium — You have a medium backup window, such as a 4- to 6-hour block of time available daily with no system activity.
• Complex — You have a short backup window, with little or no time of system inactivity.
A simple way to ensure you have a good backup of your system is to use the options provided on menu SAVE (Figure 15.1), which you can reach by typing Go Save on a command line. This command presents you with additional menus that make it easy either to back up your entire system or to split your entire system backup into two parts: system data and user data. In the following discussion of backup strategies, the menu options I refer to are from menu SAVE.
Implementing a Simple Backup Strategy The simplest backup strategy is to save everything daily whenever there is no system activity. You can use SAVE menu option 21 (Entire system) to completely back up your system (with the exception of queue entries such as spooled files). You should also consider using this option to back up the entire system after installing a new release, applying PTFs, or installing a new licensed program product. As an alternative, you can use SAVE menu option 22 (System data only) to save just the system data after applying PTFs or installing a new licensed program product.

Option 21 offers the significant advantage that you can schedule the backup to run unattended (with no operator intervention). Keep in mind that unattended save operations require you to have a tape device capable of holding all your data. (For more information about backup media, see “Preparing and Managing Your Backup Media.”) Even if you don’t have enough time or enough tape-device capability to perform an unattended save using option 21, you can still implement a simple backup strategy:

Daily backup: Back up only user data that changes frequently.
Weekly backup: Back up the entire system.

A simple backup strategy may also involve SAVE menu option 23 (All user data). This option saves user data that can change frequently. You can also schedule option 23 to run without operator intervention. If your system has a long period of inactivity on weekends, your backup strategy might look like this:

Friday night: Entire system (option 21)
Monday night: All user data (option 23)
Tuesday night: All user data (option 23)
Wednesday night: All user data (option 23)
Thursday night: All user data (option 23)
Friday night: Entire system (option 21)
Implementing a Medium Backup Strategy You may not have a large enough backup window to implement a simple backup strategy. For example, you may have large batch jobs that take a long time to run at night or a considerable amount of data that takes a long time to back up. If this is your situation, you’ll need to implement a backup and recovery strategy of medium complexity. When developing a medium backup strategy, keep in mind that the more often your data changes, the more often you need to back it up. You’ll therefore need to evaluate in detail how often your data changes. Several methods are available to you in developing a medium backup strategy:
• saving changed objects
• journaling objects and saving the journal receivers
• saving groups of user libraries, folders, or directories
You can use one or a combination of these methods. Saving changed objects. Several commands let you save only the data that has changed since your last save operation or since a particular date and time. You can use the SavChgObj (Save Changed Objects) command to save only those objects that have changed since a library or group of libraries was last saved or since a particular date and time. This approach can be useful if you have a system environment in which program objects and data files exist in the same library. Typically, data files change very frequently, while program objects change infrequently. Using the SavChgObj command, you can save just the data files that have changed. The SavDLO (Save Document Library Objects) command lets you save documents and folders that have changed since the last save or since a particular date and time. You can use SavDLO to save changed documents and folders in all your user auxiliary storage pools (ASPs) or in a specific user ASP.
You can use the Sav (Save) command to save only those objects in directories that have changed since the last save or since a particular date or time. You can also choose to save only your changed data, using a combination of the SavChgObj, SavDLO, and Sav commands, if the batch workload on your system is heavier on specific days of the week. For example:

Day/time          Batch workload   Save operation
Friday night      Light            Entire system (option 21)
Monday night      Heavy            Changed data only*
Tuesday night     Light            All user data (option 23)
Wednesday night   Heavy            Changed data only*
Thursday night    Heavy            Changed data only*
Friday night      Light            Entire system (option 21)
* Use a combination of the SavChgObj, SavDLO, and Sav commands.
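For example, a nightly "changed data only" job might look roughly like the following sketch. The library and device names are placeholders, and you may need to adjust the reference dates for your environment:

/* Save objects changed since the library was last saved with SavLib */
SavChgObj Obj(*All) Lib(PRODDATA) Dev(TAP01) RefDate(*SavLib)

/* Save documents and folders changed since the last save */
SavDLO    DLO(*Chg) Dev(TAP01)

/* Save changed objects in directories, omitting QSYS.LIB and QDLS, */
/* which the commands above already cover                           */
Sav       Dev('/QSYS.LIB/TAP01.DEVD')                      +
          Obj(('/*') ('/QSYS.LIB' *Omit) ('/QDLS' *Omit))  +
          ChgPeriod(*LastSave)                             +
          UpdHst(*Yes)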
Journaling objects and saving the journal receivers. If your save operations take too long because your files are large, saving changed objects may not help in your system environment. For instance, if you have a file member with 100,000 records and one record changes, the SavChgObj command saves the entire file member. In this environment, journaling your database files and saving the journal receivers regularly may be a better solution. However, keep in mind that this approach will make your recovery more complex. When you journal a database file, the system writes a copy of every changed record to a journal receiver. When you save a journal receiver, you’re saving only the changed records in the file, not the entire file. If you journal your database files and have a batch workload that varies, your backup strategy might look like this:

Day/time          Batch workload   Save operation
Friday night      Light            Entire system (option 21)
Monday night      Heavy            Journal receivers only
Tuesday night     Light            All user data (option 23)
Wednesday night   Heavy            Journal receivers only
Thursday night    Heavy            Journal receivers only
Friday night      Light            Entire system (option 21)
To take full advantage of journaling protection, you should detach and save the journal receivers regularly. The frequency with which you save the journal receivers depends on the number of journaled changes that occur on your system. Saving the journal receivers several times during the day may be appropriate for your system environment. The way in which you save journal receivers depends on whether they reside in a library with other objects. Depending on your environment, you’ll use either the SavLib (Save Library) command or the SavObj (Save Object) command. It’s best to keep your journal receivers isolated from other objects so that your save/restore functions are simpler.

Be aware that you must save a new member of a database file before you can apply journal entries to the file. If your applications regularly add new file members, you should consider using the SavChgObj strategy either by itself or in combination with journaling.

Saving groups of user libraries, folders, or directories. Many applications are set up with data files and program objects in different libraries. This design simplifies your backup and recovery procedures. Data files change frequently, and, on most systems, program objects change infrequently. If your system environment is set up like this, you may want to save only the libraries with data files on a daily basis. You can also save, on a daily basis, groups of folders and directories that change frequently.
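Here's a minimal sketch of the journaling approach just described. The library, journal, receiver, and file names are hypothetical:

/* One-time setup: journal a production file to a journal whose */
/* receivers live in their own library                           */
CrtJrnRcv JrnRcv(JRNLIB/APPRCV0001)
CrtJrn    Jrn(JRNLIB/APPJRN) JrnRcv(JRNLIB/APPRCV0001)
StrJrnPF  File(PRODDATA/CUSTOMER) Jrn(JRNLIB/APPJRN)

/* Nightly: detach the current receiver, then save the detached */
/* receivers                                                     */
ChgJrn    Jrn(JRNLIB/APPJRN) JrnRcv(*Gen)
SavObj    Obj(APPRCV*) Lib(JRNLIB) ObjType(*JrnRcv) Dev(TAP01)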
Implementing a Complex Backup Strategy If you have a very short backup window that requires a complex strategy for backup and for recovery, you can use some of the same techniques described for a medium backup strategy, but with a greater level of detail. For example, you may need to save specific critical files at specific times of the day or week.
Several other methods are available to you in developing a complex backup strategy. You can use one or a combination of these methods:
• save data concurrently using multiple tape devices
• save data in parallel using multiple tape devices
• use the save-while-active process
Before you use any of these methods, you must have a complete backup of your entire system.

Saving data concurrently using multiple tape devices. You can reduce the amount of time your system is unavailable by performing save operations on more than one tape device at a time. For example, you can save libraries to one tape device, folders to another tape device, and directories to a third tape device. Or you can save different sets of libraries, objects, folders, or directories to different tape devices. Later, I provide more information about saving data concurrently using multiple tape devices.

Saving data in parallel using multiple tape devices. Starting with V4R4, you can perform a parallel save using multiple tape devices. A parallel save is intended for very large objects or libraries. With this method, the system “spreads” the data in the object or library across multiple tape devices. (This function is implemented with IBM’s Backup, Recovery and Media Services product; for more information about it, see “Backup, Recovery and Media Services (BRMS) Overview” [Chapter 16].)

Save-While-Active. The save-while-active process can significantly reduce the amount of time your system is unavailable during a backup. If you choose to use save-while-active, make sure you understand the process and monitor for any synchronization checkpoints before making your objects available for use. I provide more details about save-while-active later.
An Alternative Backup Strategy Another option available to help implement your backup strategy is the Backup, Recovery and Media Services licensed program product. BRMS is IBM’s strategic OS/400 backup and recovery product on the iSeries and AS/400. BRMS is a comprehensive tool for managing the backup, archiving, and recovery environment for one or more servers in a site or across a network in which data exchange by tape is required. For more information about using BRMS to implement your backup strategy, see “Backup, Recovery and Media Services (BRMS) Overview.” [Chapter 16]
The Inner Workings of Menu SAVE Menu SAVE contains many options for saving your data, but four are primary:
• 20 — Define save system and user data defaults
• 21 — Entire system
• 22 — System data only
• 23 — All user data
You can use these menu options to back up your system. Or, if your installation requires a more complex backup strategy, you can use OS/400’s save commands in a CL program to customize your backup. To help you make your decision, as well as to provide skeleton code that you can use as a guideline for your own backup programs, this section provides a look at some of the inner workings of these primary save options. For detailed instructions and a checklist on using these options, refer to OS/400 Backup and Recovery (SC41-5304). Figure 15.2 illustrates the save commands and the SAVE menu options you can use to save the parts of the system and the entire system.
Entire System (Option 21) SAVE menu Option 21 lets you perform a complete backup of all the data on your system, with the exception of backing up spooled files (I cover spooled file backup later). This option puts the system into a restricted state. This
means no users can access your system while the backup is running. It’s best to run this option overnight for a small system or during the weekend for a larger system. Option 21 runs program QMNSave. The following CL program extract represents the significant processing that option 21 performs:
EndSbs Sbs(*All) Option(*Immed)
ChgMsgQ MsgQ(QSysOpr) Dlvry(*Break or *Notify)
SavSys
SavLib Lib(*NonSys) AccPth(*Yes)
SavDLO DLO(*All) Flr(*Any)
Sav Dev('/QSYS.LIB/TapeDeviceName.DEVD')                  +
    Obj(('/*') ('/QSYS.LIB' *Omit) ('/QDLS' *Omit))       +
    UpdHst(*Yes)
StrSbs SbsD(ControllingSubsystem)
Note: The Sav command omits the QSys.Lib file system because the SavSys (Save System) command and the SavLib Lib(*NonSys) command save QSys.Lib. The Sav command also omits the QDLS file system because the SavDLO command saves QDLS.
System Data Only (Option 22) Option 22 saves only your system data. It does not save any user data. You should run this option (or option 21) after applying PTFs or installing a new licensed program product. Like option 21, option 22 puts the system into a restricted state. Option 22 runs program QSRSavI. The following program extract represents the significant processing that option 22 performs:
EndSbs Sbs(*All) Option(*Immed)
ChgMsgQ MsgQ(QSysOpr) Dlvry(*Break or *Notify)
SavSys
SavLib Lib(*IBM) AccPth(*Yes)
Sav Dev('/QSYS.LIB/TapeDeviceName.DEVD')                  +
    Obj(('/QIBM/ProdData') ('/QOpenSys/QIBM/ProdData'))   +
    UpdHst(*Yes)
StrSbs SbsD(ControllingSubsystem)
All User Data (Option 23) Option 23 saves all user data, including files, user-written programs, and all other user data on the system. This option also saves user profiles, security data, and configuration data. Like options 21 and 22, option 23 places the system in restricted state. Option 23 runs program QSRSavU. The following program extract represents the significant processing that option 23 performs:
EndSbs Sbs(*All) Option(*Immed)
ChgMsgQ MsgQ(QSysOpr) Dlvry(*Break or *Notify)
SavSecDta
SavCfg
SavLib Lib(*AllUsr) AccPth(*Yes)
SavDLO DLO(*All) Flr(*Any)
Sav Dev('/QSYS.LIB/TapeDeviceName.DEVD')                  +
    Obj(('/*') ('/QSYS.LIB' *Omit) ('/QDLS' *Omit)        +
        ('/QIBM/ProdData' *Omit)                          +
        ('/QOpenSys/QIBM/ProdData' *Omit))                +
    UpdHst(*Yes)
StrSbs SbsD(ControllingSubsystem)
Note: The Sav command omits the QSys.Lib file system because the SavSys command, the SavSecDta (Save Security Data) command, and the SavCfg (Save Configuration) command save QSys.Lib. The Sav command also omits the QDLS file system because the SavDLO command saves QDLS. In addition, the Sav command executed by option 23 omits the /QIBM and /QOpenSys/QIBM directories because these directories contain IBM-supplied objects.
Setting Save Option Defaults When you save information using option 21, 22, or 23, you can specify default values for some of the commands used by the save process. Figure 15.3 shows the Specify Command Defaults panel values used by these options. You can use SAVE menu option 20 (Define save system and user data defaults) to change the default values displayed on this panel for menu options 21, 22, and 23. Changing the defaults simplifies the task of setting up your backups. To change the defaults, you must have *Change authority to both library QUsrSys and the QSRDflts data area in QUsrSys.

When you select option 20, the system displays the default parameter values for options 21, 22, and 23. The first time you use option 20, the system displays the IBM-supplied default parameter values. You can change any or all of the parameter values to meet your needs. For example, you can specify additional tape devices or change the message queue delivery default. The system saves the new default values in data area QSRDflts in library QUsrSys for future use (the system creates QSRDflts only after you change the IBM-supplied default values). Once you’ve defined new default values, you no longer need to worry about which, if any, options to change on subsequent backups. You can simply review the new default options and then press Enter to start the backup using the new default parameters.

If you have multiple, distributed systems with the same save parameters on each system, option 20 offers an additional benefit: You can simply define your default parameters using option 20 on one system and then save data area QSRDflts in library QUsrSys, distribute the saved data area to the other systems, and restore it.
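If you take advantage of this distribution technique, the commands involved are straightforward; a minimal sketch, assuming a tape device named TAP01, might be:

/* On the system where you defined the defaults with option 20 */
SavObj Obj(QSRDFLTS) Lib(QUsrSys) ObjType(*DtaAra) Dev(TAP01)

/* On each of the other systems */
RstObj Obj(QSRDFLTS) SavLib(QUsrSys) ObjType(*DtaAra) Dev(TAP01)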
Printing System Information When you perform save operations using option 21, 22, or 23 from menu SAVE, you can optionally request a series of reports with system information that can be useful during system recovery. The Specify Command Defaults panel presented by these options provides a prompt for printing system information. You can also use command PrtSysInf (Print System Information) to print the system information. This information is especially useful if you can’t use your SavSys media to recover and must use your distribution media. Printing the system information requires *AllObj, *IOSysCfg, and *JobCtl authority and produces many spooled file listings. You probably don’t need to print the information every time you perform a backup. However, you should print it whenever important information about your system changes. The following lists and reports are generated when you print the system information (the respective CL commands are noted in parentheses):
• a library backup list with information about each library in the system, including which backup schedules include the library and when the library was last backed up (DspBckupL *Lib)
• a folder backup list with the same information for all folders in the system (DspBckupL *Flr)
• a list of all system values (DspSysVal)
• a list of network attributes (DspNetA)
• a list of edit descriptions (DspEdtD)
• a list of PTF details (DspPTF)
• a list of reply list entries (WrkRpyLE)
• a report of access-path relationships (DspRcyAP)
• a list of service attributes (DspSrvA)
• a list of network server storage spaces (DspNwSStg)
• a report showing the power on/off schedule (DspPwrScd)
• a list of hardware features on your system (DspHdwRsc)
• a list of distribution queues (DspDstSrv)
• a list of all subsystems (DspSbsD)
• a list of the IBM software licenses installed on your machine (DspSfwRsc)
• a list of journal object descriptions for all journals (DspObjD)
• a report showing journal attributes for all journals (WrkJrnA)
• a report showing cleanup operations (ChgClnup)
• a list of all user profiles (DspUsrPrf)
• a report of all job descriptions (DspJobD)
Saving Data Concurrently Using Multiple Tape Devices As I mentioned earlier, one way to reduce the amount of time required for a complex backup strategy is to perform save operations to multiple tape devices at once. You can save data concurrently using multiple tape devices by saving libraries to one tape device, folders to another tape device, and directories to a third tape device. Or, you can save different sets of libraries, objects, folders, or directories to different tape devices.
Concurrent Saves of Libraries and Objects You can run multiple save commands concurrently against multiple libraries. When you run multiple save commands, the system processes the request in several stages that overlap, improving save performance. To perform concurrent save operations to different tape devices, you can use the OmitLib (Omit library) parameter with generic naming. For example:
SavLib Lib(*AllUsr)                  +
       Dev(FirstTapeDevice)          +
       OmitLib(A* B* $* #* @* ... L*)
SavLib Lib(*AllUsr)                  +
       Dev(SecondTapeDevice)         +
       OmitLib(M* N* ... Z*)

You can also save a single library concurrently to multiple tape devices by using the SavObj or SavChgObj command. This technique lets you issue multiple save operations using multiple tape devices to save objects from one large library. For example, you can save generic objects from one large library to one tape device and concurrently issue another SavObj command against the same library to save a different set of generic objects to another tape device. You can use generic naming on the Obj (Object) parameter while performing concurrent SavChgObj operations to multiple tape devices against a single library. For example:

SavChgObj Obj(A* B* C* $* #* ... L*) +
          Dev(FirstTapeDevice)       +
          Lib(LibraryName)
SavChgObj Obj(M* N* O* ... Z*)       +
          Dev(SecondTapeDevice)      +
          Lib(LibraryName)
Concurrent Saves of DLOs (Folders) You can run multiple SavDLO commands concurrently for DLO objects that reside in the same ASP. This technique allows concurrent saves of DLOs to multiple tape devices.
You can use the command’s Flr (Folder) parameter with generic naming to perform concurrent save operations to different tape devices. For example:
SavDLO DLO(*All)             +
       Flr(DEPT*)            +
       Dev(FirstTapeDevice)  +
       OmitFlr(DEPT2*)
SavDLO DLO(*All)             +
       Flr(DEPT2*)           +
       Dev(SecondTapeDevice)
In this example, the system saves to the first tape device all folders starting with DEPT except those that start with DEPT2. Folders that start with DEPT2 are saved to the second tape device. Note: Parameter OmitFlr is allowed only when you specify DLO(*All) or DLO(*Chg).
Concurrent Saves of Objects in Directories You can also run multiple Sav commands concurrently against objects in directories. This technique allows concurrent saves of objects in directories to multiple tape devices. You can use the Sav command’s Obj (Object) parameter with generic naming to perform concurrent save operations to different tape devices. For example:
Sav Dev('/QSYS.LIB/FirstTapeDevice.DEVD')   +
    Obj(('/DIRA*'))                         +
    UpdHst(*Yes)
Sav Dev('/QSYS.LIB/SecondTapeDevice.DEVD')  +
    Obj(('/DIRB*'))                         +
    UpdHst(*Yes)
Save-While-Active To either reduce or eliminate the amount of time your system is unavailable for use during a backup (your backup outage), you can use the save-while-active process on particular save operations along with your other backup and recovery procedures. Save-while-active lets you use the system during part or all of the backup process. In contrast, other save operations permit either no access or only read access to objects during the backup.
How Does Save-While-Active Work? OS/400 objects consist of units of storage called pages. When you use save-while-active to save an object, the system creates two images of the pages of the object. The first image contains the updates to the object with which normal system activity works. The second image is a “snapshot” of the object as it exists at a single point in time called a checkpoint. The save-while-active job uses this image — called the checkpoint image — to save the object. When an application makes changes to an object during a save-while-active job, the system uses one image of the object’s pages to make the changes and, at the same time, uses the other image to save the object to tape. The system locks objects as it obtains the checkpoint images, and you can’t change objects during the checkpoint processing. After the system has obtained the checkpoint images, applications can once again change the objects. The image that the system saves doesn’t include any changes made during the save-while-active job. The image on the tape is an image of the object as it existed when the system reached the checkpoint. Rather than maintain two complete images of the object being saved, the system maintains two images only for the pages of the objects that are being changed as the save is performed.
Synchronization. When you back up more than one object using the save-while-active process, you must choose when the objects will reach a checkpoint in relationship to each other — a concept called synchronization. There are three kinds of synchronization:
• With full synchronization, the checkpoints for all the objects occur at the same time, during a time period in which no changes can occur to the objects. It's strongly recommended that you use full synchronization, even when you're saving objects in only one library.
• With library synchronization, the checkpoints for all the objects in a library occur at the same time.
• With system-defined synchronization, the system decides when the checkpoints for the objects occur. The checkpoints may occur at different times, resulting in a more complex recovery procedure.
How you use save-while-active in your backup strategy depends on whether you choose to reduce or eliminate the time your system is unavailable during a backup. Reducing the backup outage is much simpler and more common than eliminating it. It’s also the recommended way to use save-while-active. When you use save-while-active to reduce your backup outage, your system recovery process is exactly the same as if you performed a standard backup operation. Also, using save-while-active this way doesn’t require you to implement journaling or commitment control.

To use save-while-active to reduce your backup outage, you can end any applications that change objects or end the subsystems in which these applications are run. After the system reaches a checkpoint for those objects, you can restart the applications. One save-while-active option lets you have the system send a message notification when it completes the checkpoint processing. Once you know checkpoint processing is completed, it’s safe to start your applications or subsystems again. Using save-while-active this way can significantly reduce your backup outage. Typically, when you choose to reduce your backup outage with save-while-active, the time during which your system is unavailable for use ranges anywhere from 10 minutes to 60 minutes. It’s highly recommended that you use save-while-active to reduce your backup outage unless you absolutely cannot have your system unavailable for this time frame.

You should use save-while-active to eliminate your backup outage only if you have absolutely no tolerance for any backup outage. You should use this approach only to back up objects that you’re protecting with journaling or commitment control. When you use save-while-active to eliminate your backup outage, you don’t end the applications that modify the objects or end the subsystems in which the applications are run. However, this method affects the performance and response time of your applications. Keep in mind that eliminating your backup outage with save-while-active requires much more complex recovery procedures. You’ll need to include these procedures in your disaster recovery plans.
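Here's a minimal sketch of the "reduce the outage" approach just described, assuming a single application subsystem and a tape device named TAP01 (both placeholders):

/* End the application subsystem so no changes occur during     */
/* checkpoint processing                                         */
EndSbs  Sbs(APPSBS) Option(*Immed)

/* Save all user libraries while active, with full (*SyncLib)   */
/* synchronization; QSysOpr gets a message when the checkpoint  */
/* is reached                                                    */
SavLib  Lib(*AllUsr) Dev(TAP01)  +
        SavAct(*SyncLib)         +
        SavActMsgQ(QSysOpr)

/* Once the checkpoint message arrives, it's safe to restart    */
/* the applications even though the save is still writing tape  */
StrSbs  SbsD(APPLIB/APPSBS)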
Save Commands That Support the Save-While-Active Option The following save commands support the save-while-active option:

Command     Function
SavLib      Save library
SavObj      Save object
SavChgObj   Save changed objects
SavDLO      Save document library objects
Sav         Save objects in directories

The following parameters are available on the save commands for the save-while-active process:

SavAct (Save-while-active): You must decide whether you're going to use full synchronization, library synchronization, or system-defined synchronization. It's highly recommended that you use full synchronization in most cases.
SavActWait (Save active wait time): You can specify the maximum number of seconds that the save-while-active operation will wait to allocate an object during checkpoint processing.
SavActMsgQ (Save active message queue): You can specify whether the system sends you a message when it reaches a checkpoint.
SavActOpt (Save-while-active options): This parameter has values that are specific to the Sav command.
For complete details about using the save-while-active process to either reduce or eliminate your backup outage, visit IBM’s iSeries Information Center at http://publib.boulder.ibm.com/pubs/html/as400/infocenter.htm.
Backing Up Spooled Files When you save an output queue, its description is saved but not its contents (the spooled files). With a combination of spooled file APIs, user space APIs, and list APIs, you can back up spooled files, including their associated advanced function attributes (if any). The spooled file APIs perform the real work of backing up spooled files. These APIs include
• QUSLSpl (List Spooled Files)
• QUSRSplA (Retrieve Spooled File Attributes)
• QSpOpnSp (Open Spooled File)
• QSpCrtSp (Create Spooled File)
• QSpGetSp (Get Spooled File Data)
• QSpPutSp (Put Spooled File Data)
• QSpCloSp (Close Spooled File)
These APIs let you copy spooled file information to a user space for save purposes and copy the information back from the user space to a spooled file. Once you’ve copied spooled file information to user spaces, you can save the user spaces. For more information about these APIs, see System API Reference (SC41-5801). One common misconception is that you can use the CpySplF (Copy Spooled File) command to back up spooled files. This command does let you copy information from a spooled file to a database file, but you shouldn’t rely on this method for spooled file backup. CpySplF copies only textual data and not advanced function attributes such as graphics and variable fonts. CpySplF also does nothing to preserve print attributes such as spacing. IBM does offer support for saving and restoring spooled files in its BRMS product. BRMS maintains all the advanced function attributes associated with the spooled files. For more information about BRMS, see “Backup, Recovery and Media Services (BRMS) Overview.” [Chapter 16]
Recovering Your System Although the iSeries is very stable and disasters are rare, there are times when some type of recovery may be necessary. The extent of recovery required and the processes you follow will vary greatly depending on the nature of your failure. The sheer number of possible failures precludes a one-size-fits-all answer to recovery. Instead, you must examine the details of your failure and recover accordingly. To help determine the best way to recover your system, you should refer to “Selecting the Right Recovery Strategy” in OS/400 Backup and Recovery, which categorizes failures and their associated recovery processes and provides checklists of recovery steps. Before beginning your recovery, be sure to do the following:
• If you have to back up and recover because of some system problem, make sure you understand how the problem occurred so you can choose the correct recovery procedures.
• Plan your recovery.
• Make a copy of the OS/400 Backup and Recovery checklist you’re using, and check off each step as you complete it. Keep the checklist for future reference. If you need help later, this record will be invaluable.
• If your problem requires hardware or software service, make sure you understand exactly what the service representative does. Don’t be afraid to ask questions.
Starting with V4R5, the OS/400 Backup and Recovery manual includes a new appendix called “Recovering your AS/400 system,” which provides step-by-step instructions for completely recovering your entire system to the same system (i.e., restoring to a system with the same serial number). You can use these steps only if you saved your entire system using either option 21 from menu SAVE or the equivalent SavSys, SavLib, SavDLO, and Sav commands. Continue to use the checklist titled “Recovering your entire system after a complete system loss (Checklist 17)” in Chapter 3 of OS/400 Backup and Recovery to completely recover your system in any of the following situations:
• Your system has logical partitions.
• Your system uses the Alternate Installation Device Setup feature that you can define through Dedicated Service Tools (DST) for a manual IPL from tape.
• Your system has mounted user-defined file systems before the save.
• You’re recovering to a different system (a system with a different serial number).
One piece of advice warrants repeating: Test as many of the procedures in your recovery plan as you possibly can before disaster strikes. If any surprises await you, it’s far better to uncover them in a test situation than during a disaster.

This article is excerpted from the book Starter Kit for the IBM iSeries and AS/400 by Gary Guthrie and Wayne Madden (29th Street Press, 2001). For more information about the book, see http://www.iseriesnetwork.com/str/books/uniquebook2.cfm?NextBook=187. Debbie Saugen is the technical owner of iSeries 400 and AS/400 Backup and Recovery in IBM’s Rochester, Minnesota, Development Lab. She is also a senior recovery specialist with IBM Business Continuity and Recovery Services. Debbie enjoys sharing her knowledge by speaking at COMMON, iSeries 400 and AS/400e Technical Conferences, and Business Continuity and Recovery Services Conferences and writing for various iSeries and AS/400e magazines and Web sites.

Availability Options
Availability options are a complement to a backup strategy, not a replacement. These options can significantly reduce the time it takes you to recover after a failure. In some cases, availability options can prevent the need for recovery. To justify the cost of using availability options, you need to understand the following:
• the value of the data on your system
• the cost of a scheduled or unscheduled outage
• your availability requirements
The following availability options can complement your backup strategy:
• journal management
• access-path protection
• auxiliary storage pools
• device parity protection
• mirrored protection
• dual systems
• clustered systems
You should compare these options and decide which are best suited to your business needs. For details about availability options, their benefits versus costs, and how to implement them, refer to IBM's iSeries Information Center at http://publib.boulder.ibm.com/pubs/html/as400/infocenter.htm. We'll look more closely at each availability option in a moment, but first, it's helpful to be acquainted with the following terms, which are often used in discussing system availability:
• An outage is a period of time during which the system is unavailable to users. During a scheduled outage, you deliberately make your system unavailable to users. You might use a scheduled outage to run batch work, back up your system, or apply PTFs. An unscheduled outage is usually caused by a failure of some type.
• High availability means that the system has no unscheduled outages.
• In continuous operations, the system has no scheduled outages.
• Continuous availability means that the system has neither scheduled nor unscheduled outages.
Journal Management for Backup and Recovery You can use journal management (often referred to as journaling a file or an access path) to recover the changes to database files (or other objects) that have occurred since your last complete backup. You use a journal to define which files and access paths you want to protect. A journal receiver contains the entries (called journal entries) that the system adds when events occur that are journaled, such as changes to database files, changes to other journaled objects, or security-related events. You can use the remote journal function to set up journals and journal receivers on a remote iSeries system. These journals and journal receivers are associated with journals and journal receivers on the source system. The remote journal function lets you replicate journal entries from the source system to the remote system.
Access-Path Protection An access path describes the order in which the records in a database file are processed. Because different programs may need to access the file’s records in different sequences, a file can have multiple access paths. Access paths in use at the time of a system failure are at risk of corruption. If access paths become corrupted, the system must rebuild them before you can use the files again. This can be a very time-consuming process. You should consider an access-path protection plan to limit the time required to recover corrupted access paths. The system offers two methods of access-path protection:
• system-managed access-path protection (SMAPP)
• explicit journaling of access paths
You can use these methods independently or together. By using journal management to record changes to access paths, you can greatly reduce the amount of time it takes to recover access paths should doing so become necessary. Using journal entries, the system can recover access paths without the need for a complete rebuild. This can result in considerable time savings. With SMAPP, you can let the system determine which access paths to protect. The system makes this determination based on access-path target recovery times that you specify. SMAPP provides a simple way to reduce recovery time after a system failure, managing the required environment for you. You can use explicit journaling, even when using SMAPP, to ensure that certain access paths critical to your business are protected. The system evaluates the protected and unprotected access paths to develop its strategy for meeting your access-path recovery targets.
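As a hedged illustration of explicit access-path journaling, the following command journals the access path of a critical file; the file and journal names are hypothetical, and the underlying physical file is assumed to be journaled already to the same journal (SMAPP recovery targets themselves are typically set interactively with the EdtRcyAP command):

/* Explicitly protect the access path of a critical logical file */
StrJrnAP  File(PRODDATA/CUSTBYNAME) Jrn(JRNLIB/APPJRN)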
Auxiliary Storage Pools Your system may have many disk units attached to it for auxiliary storage of your data that, to your system, look like a single unit of storage. When the system writes data to disk, it spreads the data across all of these units. You can divide your disk units into logical subsets known as auxiliary storage pools (ASPs), which don't necessarily correspond to the physical arrangement of disks. You can then assign objects to particular ASPs, isolating them on particular disk units. When the system now writes to these objects, it spreads
the information across only the units within the ASP. ASPs provide a recovery advantage if the system experiences a disk unit failure that results in data loss. In such a case, recovery is required only for the objects in the ASP containing the failed disk unit. System objects and user objects in other ASPs are protected from the disk failure. In addition to the protection that isolating objects to particular ASPs provides, the use of ASPs provides a certain level of flexibility. When you assign the disk units on your system to more than one ASP, each ASP can have different strategies for availability, backup and recovery, and performance.
Device Parity Protection Device parity protection is a hardware availability function that protects against data loss due to disk unit failure or damage to a disk. To protect data, the disk controller or input/output processor (IOP) calculates and saves a parity value for each bit of data. The disk controller or IOP computes the parity value from the data at the same location on each of the other disk units in the device parity set. When a disk failure occurs, the data can be reconstructed by using the parity value and the values of the bits in the same locations on the other disks. The system continues to run while the data is being reconstructed. The overall goal of device parity protection is to provide high availability and to protect data as inexpensively as possible. If possible, you should protect all the disk units on your system with either device parity protection or mirrored protection (covered next). In many cases, your system remains operational during repairs. Device parity protection is designed to prevent system failure and to speed the recovery process for certain types of failures; it isn't a substitute for a good backup and recovery strategy. Device parity protection doesn't protect you against a site disaster or user error. It also doesn't protect against system outages caused by failures in other disk-related hardware (e.g., disk controllers, disk I/O).
Mirrored Protection Mirrored protection is a software availability function that protects against data loss due to failure or damage to a disk-related component. The system protects your data by maintaining two copies of the data on two separate disk units. When a disk-related component fails, the system continues to operate without interruption, using the mirrored copy of the data until repairs are complete on the failed component. When you start mirrored protection or add disk units to an ASP that has mirrored protection, the system creates mirrored pairs using disk units that have identical capacities. The goal is to protect as many disk-related components as possible. To provide maximum hardware redundancy and protection, the system tries to pair disk units from different controllers, IOPs, and buses. Different levels of mirrored protection are possible, depending on the duplicated hardware. For instance, you can duplicate
• disk units
• disk controllers
• disk IOPs
• a bus
If a duplicate exists for the failing component and attached hardware components, the system remains available during the failure. Remote mirroring support lets you have one mirrored unit within a mirrored pair at the local site and the second mirrored unit at a remote site. For some systems, standard DASD mirroring will remain the best choice; for others, remote DASD mirroring provides important additional capabilities.
Dual Systems System installations with very high availability requirements use a dual-systems approach, in which two systems maintain some or all data. If the primary system fails, the secondary system can take over critical application programs. The most common way to maintain data on the secondary system is through journaling. The primary system transmits journal entries to the secondary system, where a user-written program uses them to update files and other journaled objects in order to replicate the application environments of the primary system. Users sometimes implement this by transmitting journal entries at the application layer. The remote journal function improves on this technique by transmitting journal entries to a duplicate journal receiver on the secondary system at the licensed internal code layer. Several software packages are available from independent software vendors to support dual systems.
Clustered Systems A cluster is a collection or group of one or more systems that work together as a single system. The cluster is identified by name and consists of one or more cluster nodes. Clustering lets you efficiently group your systems together to create an environment that approaches 100 percent availability.

Preparing and Managing Your Backup Media OS/400's save commands support different types of devices (including save file, tape, diskette, and optical). For a backup strategy, you should always back up to a tape device. Choose a tape device and tape media that have the performance capabilities and density capacity that will meet your backup window and any requirements you have for running an unattended backup. Preparing and managing your tape media is an important part of your backup operations. You need to be able to easily locate the correct media to perform a successful system recovery. You'll need to use sets of tapes and implement a rotation schedule. An important part of a good backup strategy is to have more than one set of backup media. When you perform a system recovery, you may need to go back to an older set of tape media if your most recent set is damaged or if you discover a programming error that has affected data on your most recent backup media. At a minimum, you should rotate three sets of media, as follows:

Backup      Media set
Backup 1    Set 1
Backup 2    Set 2
Backup 3    Set 3
Backup 4    Set 1
Backup 5    Set 2
Backup 6    Set 3
. . .       . . .
You may find that the easiest method is to have a different set of media for each day of the week. This strategy makes it easier for the operator to know which set to mount for backup.
Cleaning Your Tape Devices It's important to clean your tape devices regularly. The read-write heads can collect dust and other material that can cause errors when reading or writing to tape media. If you're using new tapes, it's especially important to clean the device because new tapes tend to collect more material on the read-write heads. For specific recommendations, refer to your tape drive's manual.
Preparing Your Tapes for Use To prepare tape media for use, you’ll need to use the InzTap (Initialize Tape) command. (Some tapes come pre-initialized.) When you initialize tapes, you’re required to give each tape a new-volume identifier (using the InzTap command’s NewVol parameter) and a density (Density parameter). The new-volume identifier identifies the tape as a standard-labeled tape that can be used by the system for backups. The density specifies the format in which to write the data on the tape based on the tape device you’re using. You can use the special value *DevType to easily specify that the format be based on the type of tape device being used. When initializing new tapes, you should also specify Check(*No); otherwise, the system tries to read labels from the volume on the specified tape device until the tape completely rewinds. Here’s a sample command to initialize a new tape volume:
InzTap Dev(Tap01) +
       NewVol(A23001) +
       Check(*No) +
       Density(*DevType)

Tip: It's important to initialize each tape only once in its lifetime and give each tape volume a different volume identifier so tape-volume error statistics can be tracked.
Naming and Labeling Your Tapes Initializing each tape volume with a volume identifier helps ensure that your operators load the correct tape for the backup. It's a good idea to choose volume-identifier names that help identify tape-volume contents and the volume set to which each tape belongs. The following table illustrates how you might initialize your tape volumes and label them externally in a simple backup strategy. Each label has a prefix that indicates the day of the week (A for Monday, B for Tuesday, and so on), the backup operation (option number from menu SAVE), and the media set with which the tape volume is associated.

Volume Naming — Part of a Simple Backup Strategy
Volume name    External label
B23001         Tuesday-Menu SAVE, option 23-Media set 1
B23002         Tuesday-Menu SAVE, option 23-Media set 2
B23003         Tuesday-Menu SAVE, option 23-Media set 3
E21001         Friday-Menu SAVE, option 21-Media set 1
E21002         Friday-Menu SAVE, option 21-Media set 2
E21003         Friday-Menu SAVE, option 21-Media set 3

Volume names and labels for a medium backup strategy might look like this:

Volume Naming — Part of a Medium Backup Strategy
Volume name    External label
E21001         Friday-Menu SAVE, option 21-Media set 1
E21002         Friday-Menu SAVE, option 21-Media set 2
AJR001         Monday-Save journal receivers-Media set 1
AJR002         Monday-Save journal receivers-Media set 2
ASC001         Monday-Save changed ...

Nothing is simpler, but not everyone can afford the outage that this type of save requires. BRMS is an effective solution in backing up only what's really required. BRMS also lets you easily schedule a backup that includes a SavSys (Save System) operation, which isn't so easy using just OS/400. In addition to these capabilities, BRMS offers step-by-step recovery information, printed after backups are complete. Recovery no longer consists of operators clenching the desk with white knuckles at 4:00 a.m., trying desperately to recover the system in time for the users who'll arrive at 8:00 a.m., without any idea what's going on or how long the process will take. With native OS/400 commands, the only feedback that recovery personnel get is the occasional change to the message line on line 25 of the screen as the recovery takes place. BRMS changes
this with full and detailed feedback during the recovery process — with an auto-refresh screen, updated as each library is restored. Following are some of the features that contribute to the robustness of BRMS:
• Data archive — Data archive is important for organizations that must keep large volumes of history data yet don't require rapid access to this information. BRMS can archive data from DASD to tape and track information about objects that have been archived. Locating data in the archives is easy, and the restore can be triggered from a work-with screen.
• Dynamic data retrieval — Dynamic retrieval for database files, document library objects, and stream files is possible with BRMS. Once archived with BRMS, these objects can be automatically restored upon access within user applications. No changes are required to user applications to initiate the restore.
• Media management — In a large single- or multisystem environment, control and management of tape media is critical. BRMS allows cataloging of an entire tape inventory and manages the media as they move from location to location. This comprehensive inventory-management system provides many reports that operators can use as instructions.
• Parallel save and restore — BRMS supports parallel save and restore, reducing the backup and recovery times of very large objects and libraries by 'spreading' data across multiple tape drives. This method is in contrast to concurrent save and restore, in which the user must manage the splitting of data. With parallel save and restore, operations end at approximately the same time for all tape drives.
• Lotus Notes Servers backup — BRMS supports backup of online Lotus Notes Servers, including Domino and Quickplace Lotus Notes Servers.
• Flexible backup options — You can define different backup scenarios and execute the ones appropriate for particular circumstances.
• Spooled file backup — Unlike OS/400 save and restore functions, BRMS provides support for backing up spooled files. Spooled file backup is important to a complete backup, and BRMS lets you tailor spooled file backup to meet your needs.
• Storage alternatives — You can save to a tape device, a Media Library device, a save file, or a Tivoli Storage Manager server (previously known as an ADSM server).
It is these features, and more, that make BRMS a popular solution for many installations. Later, we'll take a closer look at some of these capabilities.
Getting Started with BRMS BRMS brings with it a few new save/restore concepts as well as some new terminology. For instance, you'll find repeated references to the following terms when working with BRMS:
• media — a tape cartridge or save file that will hold the objects being backed up
• media identifier — a name given to a physical piece of media
• media class — a logical grouping of media with similar physical and/or logical characteristics (e.g., density)
• policy — a set of commonly used defaults (e.g., device, media class) that determine how BRMS performs its backup
• backup control group — a grouping of items (e.g., libraries, objects, stream files) to back up
You're probably thinking that 'media' and 'media identifier' aren't such new terms. True, but most people don't think of save files as media, and media identifier is typically thought to mean volume identifier. Policies and backup control groups are concepts central to BRMS in that they govern the backup process. IBM provides default values in several policies and control groups. You can use these defaults or define your own for use in your save/restore operations. Policies are templates for managing backups and media management operations. They act as a control point for defining operating characteristics. The standard BRMS package provides the following policies:
• System Policy — The System Policy is conceptually similar to system values. It contains general defaults for many BRMS operations.
• Backup Policy — The Backup Policy determines how the system performs backups. It contains defaults for backup operations.
• Recovery Policy — The Recovery Policy defines how the system typically performs recovery operations.
• Media Policies — Media Policies control media-related functionality. For instance, they determine where BRMS finds tapes needed for a backup.
• Move Policies — Move Policies define the way media moves through storage locations from creation time through expiration.
In pre-V5R1 releases of OS/400, BRMS is shipped with two default backup control groups, *SysGrp (system group) and *BkuGrp (backup group). The *SysGrp control group backs up all system data, and the *BkuGrp control group backs up all user data. You can back up your entire system using these two control groups, but doing so requires two backup commands, one for each group. To back up your entire system using a single control group, you can create a new backup control group that includes the following BRMS special values as backup items:

Seq    Backup items
10     *SavSys
20     *IBM
30     *AllUsr
40     *AllDLO
50     *Link
The time required to back up the system using this full backup control group is less than that required to use a combination of the *SysGrp and *BkuGrp backup control groups. The *SysGrp control group contains the special value *SavSys, which saves the licensed internal code, OS/400, user profiles, security data, and configuration data. The *BkuGrp control group contains the special values *SavSecDta and *SavCfg, which also save user profiles, security data, and configuration data. If you use the two control groups *SysGrp and *BkuGrp, you save the user profiles, security data, and configuration data twice. This redundancy in saved data contributes to the additional backup time when using control groups *SysGrp and *BkuGrp. Starting with V5R1, BRMS includes a new, full-system default backup control group, *System, that combines the function of groups *SysGrp and *BkuGrp. Note that none of the full backup control groups discussed so far saves spooled files. If spooled files are critical to your business, you'll need to create a backup list of your spooled files to be included in your full backup control group (more about how to do this later).
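Once you've created such a control group, the whole system backup is a single command. The control group name FullSys below is made up for illustration; because the group begins with *SavSys, you'd run it through the console monitor, as described later in this chapter:

StrBkuBRM CtlGrp(FullSys)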
Saving Data in Parallel with BRMS As I mentioned, BRMS supports parallel save/restore function. This support is intended for use with large objects and libraries. Its goal is to reduce backup and recovery times by evenly dividing data across multiple tape drives. You typically define parallel resources when you work with backup control groups. You specify both a maximum number of resources (devices) and a minimum number of resources to be used during the backup. For example, you could specify 32 for maximum resources and 15 for minimum resources. When the backup is submitted, the system checks for available tape resources. If it can't find 32 available tape devices, the backup will be run with the minimum of 15. It's not a requirement that the number of devices used for the backup be used on the restore. However, to reduce the number of tape mounts, it's best to use the same number of tape devices on the restore. Starting with V5R1, the special values *AllProd, *AllTest, *AllUsr, *ASP01-*ASP99, and *IBM are supported on BRMS parallel saves, with the objects being 'spread' at the library level. Restores for objects saved in parallel with these special values are still done in a serial mode.
Online Backup of Lotus Notes Servers with BRMS In today's working environment, users demand 24x7 access to their mail and other Lotus Notes databases, yet it's also critical that user data be backed up frequently and in a timely way. BRMS Online Lotus Notes Servers Backup support meets these critical needs. With this support, you can save Lotus Notes databases while they're in use, without requiring users to exit the system. Prior save-while-active support required ending applications to reach a checkpoint or the use of commitment control or journaling. Another alternative was to invest in an additional server, replicate the server
data, and perform the backup from the second server. Online Lotus Notes Servers Backup with BRMS avoids these requirements. Installation of BRMS automatically configures control groups and policies that help you perform online backup of your Lotus Notes Servers. The Online Lotus Notes Servers Backup process allows the collection of two backups into one entity. BRMS and Domino or Quickplace accomplish this using a BRMS concept called a package. The package is identified by the PkgID (Package identifier) parameter on the SavBRM (Save Object using BRM) command. Domino or Quickplace will back up the databases while they are online and in use. When the backup is completed, a secondary file is backed up and associated with the first backup using the package concept. The secondary file contains all the changes that occurred during the online backup, such as transaction logs or journaling information. When you need to recover a Lotus Notes Server database that was backed up using BRMS Online Backup, BRMS calls Domino or Quickplace through recovery exits that let Domino or Quickplace apply any changes from the secondary file backup to the database that was just restored. This recovery process maintains the integrity of the data.
Restricted-State Saves Using BRMS You can use the console monitor function of BRMS to schedule unattended restricted-state saves. This support is meaningful because with OS/400 save functions, restricted-state saves must be run interactively from a display in the controlling subsystem. BRMS's support means you can run an unattended SavSys operation to save the OS/400 licensed internal code and operating system (or other functions you want to run in a restricted state). You simply specify the special value *SavSys on the StrBkuBRM (Start Backup using BRM) command or within a BRMS control group to perform a SavSys. You can temporarily interrupt the console-monitoring function to enter OS/400 commands and then return the console to a monitored state. Console monitoring lets users submit the SavSys job to the job scheduler instead of running the save interactively. You can use the Submit to batch parameter on the StrBkuBRM command to enter *Console as a value, thereby performing your saves in batch mode. Thus, you don't have to be nearby when the system save is processed. However, you must issue this command from the system console because BRMS runs the job in subsystem QCtl. If you try to start the console monitor from your own workstation, BRMS sends a message indicating that you're not in a correct environment to start the console monitor.
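For example, to run the *SysGrp control group through the console monitor, you might enter the following from the system console. This is a sketch; SbmJob is the keyword I recall for the Submit to batch parameter, so verify it with the command prompt on your release:

StrBkuBRM CtlGrp(*SysGrp) SbmJob(*Console)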
Backing Up Spooled Files with BRMS With BRMS, you can create a backup list that specifies the output queues you want to save. You can then specify this backup list on your backup control groups. You create a spooled file backup list using command WrkLBRM (Work with Lists using BRM). You simply add a list, specifying
• *Bku for the Use field
• a value for the List name (e.g., SaveSplF)
• *Spl for the Type field
When you press Enter, the Add Spooled File List panel (Figure 16.1) is displayed. (The figure shows the panel after backup information has been entered.)
Including Spooled File Entries in a Backup List Now, you can update the backup list by adding the output queues you want to save. Within a spooled file list, you can save multiple output queues by selecting multiple sequence numbers. When you add an output queue to the list, you can filter the spooled files to save by specifying values for spooled file name, job name, user name, or user data. For example, if you want to save only spooled files that belong to user A, you can specify user A's name in the User field. Generic names are also allowed.
The sample setup in Figure 16.1 saves output queue Prt01 in library QUsrSys. If you leave the Outq field at its default value *All, BRMS saves all spooled files from all output queues in library QUsrSys. To exclude an output queue, you can use the *Exc value. Once you set up your backup list, you can add it to your daily, weekly, or monthly backup control group as a backup item with a list type of *Spl. Note that BRMS doesn't support incremental saves of spooled files. If you specify an incremental save for a list type of *Spl, all spooled files in the list are saved. BRMS doesn't automatically clear the output queues after the spooled files are successfully saved. After you've successfully saved your spooled files, you can use the WrkSplFBRM (Work with Spooled Files using BRM) command to display the status of your saves. The WrkSplFBRM panel displays your spooled files in the order in which they were created on the system.
Restoring Spooled Files Saved Using BRMS BRMS doesn't automatically restore spooled files when you restore your user data during a system recovery. To restore saved spooled files, use the WrkSplFBRM command and select option 7 (Restore spooled file) on the resulting screen. From the Select Recovery Items panel that appears, you can specify the spooled files you want to restore. By default, BRMS restores spooled file data in the output queue from which the data was saved. If necessary, you can change any of the BRMS recovery defaults by pressing F9 on the Select Recovery Items screen. During the save and restore operations, BRMS retains spooled file attributes, names, user names, user data fields, and, in most cases, job names. During the restore operation, OS/400 assigns new job numbers, system dates, and times; the original dates and times aren't restored. Be aware that BRMS saves spooled files as a single folder, with multiple documents (spooled members) within the folder. During the restore, BRMS searches the tape label for the folder and restores all the documents. If your spooled file save happens to span multiple tape volumes, you'll be prompted to load the first tape to read the label information before restoring the documents on the subsequent tapes. To help with recovery, consider saving your spooled files on a separate tape using the *Load exit in a control group, or split your spooled file saves so you use only one tape at a time.
The BRMS Operations Navigator Interface With V5R1, BRMS has an Operations Navigator (OpsNav) interface that makes setting up and managing your backup and recovery strategy even easier. Using wizards, you can simplify the common operations you need to perform, such as creating a backup policy, adding tape media to BRMS, preparing the tapes for use, adding items to a backup policy, and restoring backed-up items. If you're currently using BRMS, you may not find all the functionality in OpsNav that you have with the green-screen version. However, watch for additional features in future releases of BRMS Operations Navigator. You may still want to use the graphical interface to perform some of the basic operations. If so, you'll need to be aware of some differences between the green-screen and the OpsNav interfaces.
Terminology Differences The OpsNav version of BRMS uses some different terminology than the green-screen BRMS. Here are some key terms:

• Backup history — Information about each of the objects backed up using BRMS. The backup history includes any items backed up using a backup policy. In the green-screen interface, the equivalent term is media information.
• Backup policy — Defaults that control what data is backed up, how it is backed up, and where it is backed up. In the green-screen interface, a combination of a backup control group and a media policy would make up a backup policy. Also, there is no system policy in the OpsNav interface. All information needed to perform a backup is included in the backup policy.
• Media pool — A group of media with similar density and capacity characteristics. In the green-screen interface, this is known as a media class.
Functional Differences As of this writing, the current version of BRMS Operations Navigator lets you
• run policies shipped with BRMS
• view the backup history
• view the backup and recovery log
• create and run a backup policy
• back up individual items
• restore individual items
• schedule items to be backed up and restored
• print a system recovery report
• customize user access to BRMS functions and components
• run BRMS maintenance activities
• add, display, and manage tape media
Some functions unavailable in the current release of BRMS Operations Navigator but included in the green-screen interface include
• move policies
• tape library support
• backup to save files
• backup of spooled files
• parallel backup
• networked systems support
• advanced functions, such as hierarchical storage management (HSM)
• BRMS Application Client for Tivoli Storage Manager
Backup and Recovery with BRMS OpsNav BRMS Operations Navigator is actually a plug-in to OpsNav. A plug-in is a program that's created separately from OpsNav but, when installed, looks and behaves like the rest of the graphical user interface of OpsNav.
Backup Policies One ease-of-use advantage offered by BRMS OpsNav is that you can create backup policies to control your backups. A backup policy is a group of defaults that controls what data is backed up, how it is backed up, and where it is backed up. Once you've defined your backup policies, you can run your backup at any time or schedule your backup to run whenever it fits into your backup window. Three backup policies come with BRMS:
• *System — backs up the entire system
• *SysGrp — backs up all system data
• *BkuGrp — backs up all user data
If you have a simple backup strategy, you can implement your strategy using these three backup policies. If you have a medium or complex strategy, you create your own backup policies. When you back up your data using a BRMS backup policy, information about each backed-up item is stored in the backup history. This information includes the item name, the type of backup, the date of the backup, and the
volume on which the item is backed up. You can specify the level of detail you want to track for each item in the properties for the policy. You can then restore items by selecting them from the backup history. You also use the backup history information for system recoveries.
Creating a BRMS Backup Policy You can use the New Backup Policy wizard in OpsNav to create a new BRMS backup policy. To access the wizard:
1. Expand Backup, Recovery and Media Services.
2. Right-click Backup policies, and select New policy.
The wizard gives you the following options for creating your backup policies:

• Back up all system and user data — Enables you to do a full system backup of IBM-supplied data and all user data (spooled files are not included in this backup)
• Back up all user data — Enables you to back up the data that belongs to users on your system, such as user profiles, security data, configuration data, user libraries, folders, and objects in directories
• Back up Lotus server data online — Enables you to perform an online backup of Lotus Domino and Quickplace servers
• Back up a customized set of objects — Enables you to choose the items you want to back up
After creating a backup policy, you can choose to run the backup policy immediately or schedule it to run later. If you want to change the policy later, you can do so by editing the properties of the policy. Many customization options that aren't available in the New Backup Policy wizard are available in the properties of the policy. To access the policy properties, right-click the policy and select Properties.
Backing Up Individual Items In addition to using backup policies to back up your data, you can choose to back up individual files, libraries, or folders using the OpsNav hierarchy. You can also choose to back up just security or configuration data. Using OpsNav, simply right-click the item you want to back up and select Backup.
Restoring Individual Items If a file becomes corrupted or accidentally deleted, you may need to restore individual items on your system. If you use backup policies to back up items on your system, you can restore those items from the backup history. When you restore an item from the backup history, you can view details about the item, such as when it was backed up and how large it is. If there are several versions of the item in the backup history, you can select which version of the item you want to restore. You can also restore items that you backed up without using a backup policy. However, for these items, you don't have the benefit of using the backup history to make your selection. Fortunately, you can use the OpsNav Restore wizard to restore individual items on your system, whether they were backed up with a backup policy or not. To access the wizard in OpsNav, right-click Backup, Recovery and Media Services and select Restore.
Scheduling Unattended Backup and Restore Operations Earlier, you saw how to schedule unattended restricted-state saves using the console monitor and the StrBkuBRM command. Of course, you can also schedule non-restricted-state save and restore operations.
In addition, you can use OpsNav to schedule your backup. To do so, you simply use the OpsNav New Policy wizard to create and schedule a backup. If you need to schedule an existing backup policy, you can do so by right-clicking its entry under Backup Policies in OpsNav and selecting Schedule. If the save operation requires a restricted-state system, you need only follow the console monitor instructions presented by OpsNav when you schedule the backup. Tip: When you schedule a backup policy to be run, remember that only the items scheduled to be backed up on the day you run the policy will be backed up. For example, say you have a backup policy that includes the library MyLib. In the policy properties, you schedule MyLib for backup every Thursday. If you schedule the policy to run on Thursday, the system backs up MyLib. However, if you schedule the same policy to run on any other day, the system does not back up MyLib. You can also schedule restore operations in much the same manner as backup operations using OpsNav. Restore operations, however, are scheduled less often than backup operations.
System Recovery Report BRMS produces a complete system recovery report that guides you through an entire system recovery. The report lets you know exactly which tape volumes are needed to recover your system. When recovering your entire system, you should use the report in conjunction with OS/400 Backup and Recovery (SC41-5304). Keep the recovery report with your tape volumes in a secure and safe off-site location.
BRMS Security Functions BRMS provides security functions via the Functional Usage Model, which lets you customize access to selected BRMS functions and functional components by user. You must use the OpsNav interface to access the Functional Usage Model feature. You can let certain users use specific functions and components while letting others use and change specific functions and components. You can grant various types of functional usage to all users or to specified users only. Each BRMS function, functional component, and specific backup and media management item (e.g., policy, control group) has two levels of authority access:
• Access or No Access — At the first level of authority access using the Functional Usage Model, a user either has access to a BRMS function or component or has no access to it. If a user has access, he or she can use and view the function or component. With this basic level of access, a user can process a specific item (e.g., a library, a control group) in a backup operation but can't change the item.
• Specific Change or No Change — The second level of authority access lets a user change a specific function, component, or item. For example, to change a backup list, a user must have access to the specific backup list. Similarly, to change a media policy, a user must have access to the specific media policy.
The Functional Usage Model provides lists of existing items (e.g., control groups, backup lists, media and move policies) for which you can grant specific access. With the Functional Usage Model, you can give a user both types of access (so the user can both use and change a particular function, component, or item) or only one type of access (e.g., access to use but not to change a particular function, component, or item).
Security Options for BRMS Functions, Components, and Items In the backup area, the following usage levels are available:
• Basic Backup Activities — Users with Basic Backup Activities access can use and view the backup policy, control groups, and backup lists. With use access, these users can also process backups by using backup control groups (i.e., using the StrBkuBRM command) or by saving libraries, objects, or folders (SavLibBRM, SavObjBRM, or SavFlrLBRM). A user without Basic Backup Activities access can't see backup menu options or command parameter options.
• Backup Policy — Users with Backup Policy access can change the backup policy (in addition to using and viewing it). Users without access to the backup policy cannot change it.
• Backup Control Groups — Users with Backup Control Groups access can change specific backup control groups (in addition to using and viewing them). A user can find a list of his or her existing backup control groups under the backup control groups heading in OpsNav. You can grant a user access to any number of specific control groups. Users without access to the backup control groups cannot change them.
• Backup Lists — Users with Backup Lists access can change specific backup lists (in addition to using and viewing them). A user can find a list of his or her existing backup lists under the backup lists heading in OpsNav. You can grant a user access to any number of specific backup lists. Users without access to a backup list cannot change it.
In the recovery area, the following usage levels are available:
• Basic Recovery Activities — Users with Basic Recovery Activities access can use and view the recovery policy. They can also use the WrkMedIBRM (Work with Media Information using BRM) command to process basic recoveries, command RstObjBRM (Restore Object using BRM), and command RstLibBRM (Restore Library using BRM). Users without Basic Recovery Activities access can't see recovery menu options or command parameter options.
• Recovery Policy — Users with Recovery Policy access can change the recovery policy (in addition to using and viewing it). Users without access to the recovery policy can't change it.
In the area of media management, the following usage levels are available:
• Basic Media Activities — Users with Basic Media Activities access can perform basic media-related tasks, such as using and adding media to BRMS. Users with this access can also use and view (but not change) media policies and media classes. Users without Basic Media Activities access can't see related menu options or command parameter options.
• Advanced Media Activities — Users with Advanced Media Activities access can perform media-related tasks such as expiring, removing, and initializing media.
• Media Policies — Users with Media Policies access can change specific media policies (in addition to using and viewing them). A user can find a list of his or her existing media policies under the media policies heading in OpsNav. You can grant a user access to any number of media policies. Users without access to a media policy cannot change it.
• Media Classes — Users with Media Classes access can change specific media classes (in addition to using and viewing them). A user can find a list of his or her existing media classes under the media classes heading in OpsNav. You can grant a user access to any number of media classes. Users without access to a media class cannot change it.
• Media Information — Users with Media Information access can change media information with command WrkMedIBRM (Work with Media Information).
• Basic Movement Activities — Users with Basic Movement Activities access can manually process or display MovMedBRM (Move Media using BRM) commands, but they can't change them.
• Move Verification — Users with Move Verification access can perform move verification tasks.
• Move Policies — Users with Move Policies access can change specific move policies (in addition to using and viewing them). A user can find a list of his or her existing move policies under the move policies heading in OpsNav. You can grant a user access to any number of move policies. Users without access to a move policy cannot change it.
In the system area, the following usage options are available:
• Basic System-related Activities — Users with Basic System-related Activities access can use and view device panels and commands. They can also view and display auxiliary storage pool (ASP) panels and commands. Users with this access level can also use and view the system policy.
• Devices — Users with Devices access can change device-related information. Users without this access can't change device-related information.
• Auxiliary Storage Pools — Users with ASP access can change information about BRMS ASP management.
• Maintenance — Users with Maintenance access can schedule and run maintenance operations.
• System Policy — Users with System Policy access can change system policy parameters.
• Log — Users with Log access can remove log entries. Any user can display log information, but only those with Log access can remove log entries.
• Initialize — Users with Initialize access can use the InzBRM (Initialize BRM) command.
Media Management
BRMS makes media management simple by maintaining an inventory of your tape media. It keeps track of what data is backed up on which tape and which tapes have available space. When you run a backup, BRMS selects the tape to use from the available pool of tapes. BRMS prevents a user from accidentally writing over active files or using the wrong tape. Before you can use any tape media with BRMS, you need to add it to the BRMS inventory and initialize it. You can do this using OpsNav's Add media wizard (under Media, right-click Tape Volumes and select Add). You can also use the green-screen BRMS command AddMedBRM (Add Media to BRM). Once you've added tape media to the BRMS inventory, you can view the media based on criteria you specify, such as the volume name, status, media pool, or expiration date. This gives you the capability to manually expire a tape and make it available for use in the BRMS media inventory. To filter which media you see in the list, under Media, right-click Tape Volumes and select Include. To view information about a particular tape volume or perform an action on that volume, right-click the volume and select the action you want to perform from the menu.
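From the green-screen side, adding a volume to the BRMS inventory might look like this. The volume name follows the earlier naming examples, and the media class name CART1 is only a placeholder; the MedCls keyword is how I recall the parameter, so check the AddMedBRM prompt on your system:

AddMedBRM Vol(A23001) MedCls(CART1)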
BRMS Housekeeping You should perform a little BRMS housekeeping on a daily basis. The BRMS maintenance operation automatically performs BRMS cleanup on your system, updates backup information, and runs reports. BRMS maintenance performs these functions:
• expires media
• removes media information
• removes migration information (180 days old)
• removes log entries (from beginning entry to within 90 days of current date)
• runs cleanup
• retrieves volume statistics
• audits system media
• changes journal receivers
• prints expired media report
• prints version report
• prints media information
• prints recovery reports
You can run BRMS maintenance using OpsNav (right-click Backup, Recovery and Media Services and select Run Maintenance) or using BRMS command StrMntBRM (Start Maintenance for BRM).
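Because these maintenance tasks are meant to run regularly, many shops simply put StrMntBRM on the job scheduler. Here's a minimal sketch; the job name and time are arbitrary, and the scheduling parameters shown assume the standard AddJobScde command:

AddJobScde Job(BrmMaint) Cmd(StrMntBRM) +
           Frq(*Weekly) ScdDay(*All) ScdTime(230000)

This runs maintenance every day at 11:00 p.m.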
Check It Out As you can see, BRMS provides some powerful features for simplifying and managing many aspects of iSeries backup and recovery. Keep in mind that BRMS isn't a replacement for your backup and recovery strategy; rather, it's a tool that can help you implement and carry out such a strategy. There's a lot more to BRMS than what's been covered here. For the complete details, see Backup, Recovery and Media Services (SC41-5345), as well as the BRMS home page (http://www.as400.ibm.com/service/brms.htm) and IBM's iSeries Information Center (http://publib.boulder.ibm.com/pubs/html/as400/infocenter.htm).
Chapter 17 - Defining a Subsystem We've all found ourselves lost at some time or other. It's not that we're dumb. We've simply gone to an unfamiliar place without having the proper orientation. You may have experienced a similar feeling of discomfort the first few times you signed on to your AS/400. Perhaps you submitted a job and then wondered, 'How do I find that job?' or 'Where did it go?' Although I'm sure you have progressed beyond these initial stages of bewilderment, you may still need a good introduction to the concepts of work management on the AS/400.
Work management on the AS/400 refers to the set of objects that define jobs and how the system processes those jobs. With a good understanding of work management concepts, you can easily perform such tasks as finding a job on the system, solving problems, improving performance, or controlling job priorities. I can't imagine anyone operating an AS/400 in a production environment without having basic work management skills to facilitate problem solving and operations. Let me illustrate two situations in which work management could enhance system operations. Perhaps you are plagued with end users who complain that the system takes too long to complete short jobs. You investigate and discover that, indeed, the system is processing short jobs slowly because they spend too much time in the job queue behind long-running end-user batch jobs, operator-submitted batch jobs, and even program compiles. You could tell your operators not to submit jobs, or you could have your programmers compile interactively, but those approaches would be impractical and unnecessary. The answer lies in understanding the work management concepts of multiple subsystems and multiple job queues. Perhaps when your 'power users' and programmers share a subsystem, excessive peaks and valleys in performance occur due to the heavy interaction of these users. Perhaps you want to use separate storage pools (i.e., memory pools) based on user profiles so that you can place your power users in one pool, your programmers in another, and everyone else in a third pool, thereby creating consistent performance for each user group. You could do this if you knew the work management concepts of memory management. Learning work management skills means learning how to maximize system resources. My goal for this and the next two chapters is to teach you the basic skills you need to effectively and creatively manage all the work processed on your AS/400.
Getting Oriented Just as a road map gives you the information you need to find your way in an unfamiliar city, Figure 17.1 (401 KB - yes, 401! Might want to go have some coffee while you wait for it to download.) serves as a guide to understanding work management. It shows the basic work management objects and how they relate to one another. The objects designated by a 1 represent jobs that enter the system, the objects designated by a 2 represent parts of the subsystem description, and the objects designated by a 3 represent additional job environment attributes (e.g., class, job description, and user profile) that affect the way a job interacts with the system. You will notice that all the paths in Figure 17.1 lead to one destination -- the subsystem. In the Roman Empire all roads led to Rome. On the AS/400, all jobs must process in a subsystem. So what better place to start our study of work management than with the subsystem?
Defining a Subsystem A subsystem, defined by a subsystem description, is where the system brings together the resources needed to process work. As shown in Figure 17.2, the subsystem description contains seven parts that fall into three categories. Let me briefly introduce you to these components of the subsystem description.
• Subsystem attributes provide the general definition of the subsystem and control its main storage allocations. The general definition includes the subsystem name, description, and the maximum number of jobs allowed in the subsystem.
• Storage pool definitions are the most significant subsystem attributes. A subsystem's storage pool definition determines how the subsystem uses main storage for processing work. The storage pool definition lets a subsystem either share an existing pool of main storage (e.g., *BASE and *INTERACT) with other subsystems, establish a private pool of main storage, or both. The storage pool definition also lets you establish the activity level -- the maximum number of jobs allowed in the subsystem -- for a particular storage pool.
• Work entries define how jobs enter the subsystem and how the subsystem processes that work. They consist of autostart job entries, workstation entries, job queue entries, communications entries, and prestart job entries. Autostart job entries let you predefine any jobs you want the system to start automatically when it starts the subsystem. Workstation entries define which workstations the subsystem will use to receive work. You can use a workstation entry to initiate an interactive job when a user signs on to the system or when a user transfers an interactive job from another subsystem. You can create workstation entries for specific workstation names (e.g., DSP10 and OH0123), for generic names (e.g., DSP*, DP*, and OH*), or by the type of workstations (e.g., 5251, 3476, and 3477). Job queue entries define the specific job queues from which to receive work. A job queue, which submits jobs to the subsystem for processing, can only be allocated by one active subsystem. A single subsystem, however, can allocate multiple job queues, prioritize them, and specify for each a maximum number of active jobs. Communications entries define the communications device associated with a remote location name from which you can receive a communications evoke request. Prestart job entries define jobs that start on a local system before a remote system sends a communications request. When a communications evoke request requires the program running in the prestart job, the request attaches to that prestart job, thereby eliminating all overhead associated with initiating a job and program.
• Routing entries identify which programs to call to control routing steps that will execute in the subsystem for a given job. Routing entries also define in which storage pool the job will be processed and which basic execution attributes (defined in a job class object associated with a routing entry) the job will use for processing.
All these components of the subsystem description determine how the system uses resources to process jobs within a subsystem. I will expand upon my discussion of work entries in Chapter 18 and my discussion of routing entries in Chapter 19. Now that we've covered some basic terms, let's take a closer look at subsystem attributes and how subsystems can use main storage for processing work.
Main Storage and Subsystem Pool Definitions When the AS/400 is shipped, all of main storage resides in two system pools: the machine pool (*MACHINE) and the base pool (*BASE). You must define the machine pool to support your system hardware; the amount of main storage you allocate to the machine pool is hardware-dependent and varies with each AS/400. For more information about calculating the required machine pool size, see Chapter 2 and IBM's AS/400 Programming: Work Management Guide (SC41-8078). The base pool is the main storage that remains after you reserve the machine pool. You can designate *BASE as a shared pool for all subsystems to use to process work, or you can divide it into smaller pools of shared and private main storage. A shared pool is an allocation of main storage where multiple subsystems can process work. *MACHINE and *BASE are both examples of shared pools. Other shared storage pools that you can define include *INTERACT (for interactive jobs), *SPOOL (for printers), and *SHRPOOL1 to *SHRPOOL10 (for pools that you can define for your own purposes). You can control shared pool sizes by using the CHGSHRPOOL (Change Shared Storage Pool) or WRKSHRPOOL (Work with Shared Storage Pools) commands. Figure 17.3 shows a WRKSHRPOOL screen, on which you can modify the pool size or activity level simply by changing the entries. The AS/400's default controlling subsystem (QBASE) and the default spooling subsystem (QSPL) are configured to take advantage of shared pools. QBASE uses the *BASE pool and the *INTERACT pool, while QSPL uses *BASE and *SPOOL. To see what pools a subsystem is using, you use the DSPSBSD (Display Subsystem Description) command. For instance, when you execute the command
DSPSBSD QBASE OUTPUT(*PRINT)

you will find the following pool definitions for QBASE listed (if the defaults have not been changed):
QBASE  ((1 *BASE) (2 *INTERACT))
Parentheses group together two definitions, each of which can contain two distinct parts (the subsystem pool number and size). In this example of the QBASE pool definitions, the (1 *BASE) represents the subsystem pool number 1 and a size of *BASE, meaning that the system will use all of *BASE as a shared pool. A third part of the pool definition, the activity level, doesn't appear for *BASE because system value QBASACTLVL maintains the activity level. The second pool definition for QBASE is (2 *INTERACT). Because you can use the CHGSHRPOOL or WRKSHRPOOL commands to modify the activity level for shared pool *INTERACT, the activity level is not listed as part of the subsystem description, nor is it specified when you use the CRTSBSD or CHGSBSD commands. Be careful not to confuse subsystem pool numbering with system pool numbering. The AS/400's two predefined system pools, *MACHINE and *BASE, are defined as system pool number 1, and system pool number 2, respectively. Pool numbering within a subsystem is unique to that subsystem, and only the routing entries in that subsystem use it to determine which pool jobs will use, based on the routing data associated with each job. As subsystems define new storage pools (shared or private) in addition to the two predefined system pools, the system simply assigns the next available system pool number to use as a reference on the WRKSYSSTS display. For example, with the above pools for QBASE and the following pools for QSPL,
QSPL  ((1 *BASE) (2 *SPOOL))
the system pool numbering might correspond to the subsystem pool numbering as shown in Figure 17.4.
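To adjust a shared pool's size or activity level from the command line or a CL program instead of the WrkShrPool display, you can use ChgShrPool. A quick sketch, where the size (in kilobytes) and activity level are arbitrary values, not recommendations:

ChgShrPool Pool(*Interact) Size(8000) ActLvl(12)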
A private pool is a specific allocation of main storage reserved for one subsystem. It's common to use a private pool when the system uses the controlling subsystem QCTL instead of QBASE. If you change your controlling subsystem to QCTL, the system startup program starts several subsystems (i.e., QINTER, QBATCH, QCMN, and QSPL) at IPL that are designed to support specific types of work. Although using QBASE as the controlling subsystem lets you divide main storage into separate pools, using QCTL is inherently easier to manage and administer in terms of controlling the number of jobs and performance tuning. IBM ships the following pool definitions for the multiple subsystem approach:
QCTL    ((1 *BASE))
QINTER  ((1 *BASE) (2 *INTERACT))
QBATCH  ((1 *BASE))
QCMN    ((1 *BASE))
QSPL    ((1 *BASE) (2 *SPOOL))

As you can see, the initial configuration of these subsystems is like the initial configuration of subsystem QBASE, in that shared pools reserve areas of main storage for specific types of jobs. However, pool sharing does not provide optimum performance in a diverse operations environment where various types of work process simultaneously. In such cases, subsystems with private pools may be necessary to improve performance. Look at the pool definitions in Figure 17.5, in which two interactive subsystems (QINTER and QPGMR) provide private pools for both end users and programmers. Both QINTER and QPGMR define specific amounts of main storage to be allocated to the subsystem instead of sharing the *INTERACT pool. Also, both storage definitions require a specific activity level, whereas shared pool activity levels are maintained as part of the shared pool definitions (using the CHGSHRPOOL or WRKSHRPOOL commands). The private pool configuration in this example, with private main storage and private activity levels, prevents unwanted contention for resources between end users and programmers. Figure 17.5 also demonstrates how you can use multiple batch subsystems. Three batch subsystems (QBATCH, DAYQ, and QPGMRB, respectively) provide for daytime and nighttime processing of operator-submitted batch jobs, daytime end-user processing of short jobs, and program compiles. A separate communications subsystem, QCMN, is configured to handle any communications requests, and QSPL handles spooling.
The decision about whether to use shared pools or private pools should depend on the storage capacity of your system. On one hand, because shared pools ensure efficient use of main storage by letting more than one subsystem share a storage pool, it's wise to use shared pools if you have a system with limited main storage. On the other hand, private pools provide a reserved pool of main storage and activity levels that are constantly available to a subsystem without contention from any other subsystem. They are easy to manage when dealing with multiple subsystems. Therefore, private pools are a wise choice for a system with ample main storage.
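To make the private-pool idea concrete, here's a sketch of how a subsystem along the lines of the QPGMR example in Figure 17.5 might be created. The library name, pool size (in kilobytes), and activity level are illustrative values only:

CrtSbsd Sbsd(MyLib/QPgmr) +
        Pools((1 *Base) (2 3000 4)) +
        Text('Programmer interactive subsystem')

Pool 1 shares *BASE with other subsystems; pool 2 is a private pool of 3000 KB with an activity level of 4.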
Starting a Subsystem A subsystem definition is only that -- a definition. To start a subsystem, you use the STRSBS (Start Subsystem) command. Figure 17.6 outlines the steps your system takes to activate a subsystem after you execute a STRSBS command. First, it uses the storage pool definition to allocate main storage for job processing. Next, it uses the workstation entries to allocate workstation devices and present the workstation sign-on displays. If the system finds communications entries, it uses them to allocate the named devices. The system then allocates job queues so that when the subsystem completes the start-up process, the subsystem can receive jobs from the job queues. Next, it starts any defined prestart or autostart jobs. When the system has completed all these steps, the subsystem is finally ready to begin processing work. Now that I've introduced you to subsystems, look over IBM's AS/400 Programming: Work Management Guide and make a sketch of your system's main storage pool configuration to see how your subsystems work. Chapter 18 examines work entries and where jobs come from, and Chapter 19 discusses routing and where jobs go. When we're done with all that, you'll find yourself on Easy Street -- with the skills you need to implement a multiple subsystem work environment.
Chapter 18 - Where Jobs Come From One of OS/400's most elegant features is the concept of a 'job,' a unit of work with a tidy package of attributes that lets you easily identify and track a job throughout your system. The AS/400 defines this unit of work with a job name, a user profile associated with the job, and a computer-assigned job number; it is these three attributes that make a job unique. For example, when a user signs on to a workstation, the resulting job might be known to the system as

Job name . . . . :  DSP10     (Workstation ID)
User profile . . :  WMADDEN
Job number . . . :  003459

Any transaction OS/400 completes is associated with an active job executing on the system. But where do these jobs come from? A job can be initiated when you sign on to the system from a workstation, when you submit a batch job, when your system receives a communications evoke request from another system, when you submit a prestart job, or when you create autostart job entries that the system automatically executes when it starts the associated subsystem. Understanding how jobs get started on the system is crucial to grasping AS/400 work management concepts. So let's continue Chapter 17's look at the subsystem description by focusing on work entries, the part of the description that defines how jobs gain access to the subsystem for processing.
Types of Work Entries There are five types of work entries: workstation, job queue, communications, prestart job, and autostart job. The easiest to understand is the workstation entry, which describes how a user gains access to a particular subsystem (for interactive jobs) using a workstation. To define a workstation entry, you use the ADDWSE (Add Work Station Entry) command. A subsystem can have as many workstation entries as you need, all of which have the following attributes:
• WRKSTNTYPE (workstation type) or WRKSTN (workstation name)
• JOBD (job description name)
• MAXACT (maximum number of active workstations)
• AT (when to allocate workstation)
When defining a workstation entry, you can use either the WRKSTNTYPE or WRKSTN attribute to specify which workstations the system should allocate. For instance, if you want to allocate all workstations, you specify WRKSTNTYPE(*ALL) in the workstation entry. This entry tells the system to allocate all workstations, regardless of the type (e.g., 5250, 5291, 3476, or 3477). Or you can use the WRKSTNTYPE attribute in one or more workstation entries to tell the system to allocate a specific type of workstation (e.g., WRKSTNTYPE(3477)). You can also define workstation entries using the WRKSTN attribute to specify that the system allocate workstations by name. You can enter either a specific name or a generic name. For example, an entry defining WRKSTN(DSP01) tells the subsystem to allocate device DSP01. The generic entry, WRKSTN(OHIO*), tells the subsystem to let any workstation whose name begins with 'OHIO' establish an interactive job. You must specify a value for either the WRKSTNTYPE parameter or the WRKSTN parameter. In addition, you cannot mix WRKSTNTYPE and WRKSTN entries in the same subsystem. If you do, the subsystem recognizes only the entries that define workstations by the WRKSTN attribute and ignores any entries using the WRKSTNTYPE attribute.

The JOBD workstation entry attribute specifies the job description for the workstation entry. You can give this attribute a value of *USRPRF (the default) to tell the system to use the job description named in the user profile of the person who signs on to the workstation. Or you can specify a value of *SBSD to tell the system to use the job description of the subsystem. You can also use a qualified name of an existing job description. For security reasons, it's wise to use the default value *USRPRF for the JOBD attribute so that a user profile is required to sign on to the workstation. If you use the value *SBSD or a job description name and there is a valid user profile associated with the job description via the USER attribute, any user can simply press Enter and sign on to the subsystem. In such a situation, the user then assumes the user ID associated with the default job description named on the workstation entry. There may be times when you want to define a workstation entry so that one user profile is always used when someone accesses the system via a particular workstation (e.g., if you wanted to disseminate public information at a courthouse, mall, or school). In such cases, be sure to construct such configurations so that only certain workstation entries have a job description that provides this type of access.

The workstation entry's MAXACT attribute determines the maximum number of workstations allowed in the subsystem at one time. When this limit is reached, the subsystem must de-allocate one workstation before it can allocate another. The value that you should normally use for this attribute is the default, *NOMAX, because you typically control (i.e., you physically limit) the number of devices. In fact, supplying a number for this attribute could cause confusion if one day the limit is reached and some poor soul has to figure out why certain workstations aren't functioning. It could take days to find this seldom-used attribute and change the value.

The AT attribute tells the system when to allocate the workstation. The default value, AT(*SIGNON), tells the system to allocate the workstation (i.e., initiate a sign-on screen at the workstation) when the subsystem is started.
AT(*ENTER) tells the system to let jobs enter the subsystem only via the TFRJOB (Transfer Job) command. (To transfer a job into an interactive subsystem, a job queue and a subsystem description job queue entry must exist.) Now you're acquainted with the workstation entry attributes, but how can you use workstation entries? Let's say you want to process all your interactive jobs in subsystem QINTER. When you look at the default workstation entries for QINTER, you see the following:
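The figure showing these entries isn't reproduced here, but based on the description that follows, they would look roughly like this if you re-created them with ADDWSE commands (a sketch, not necessarily the exact shipped values):

ADDWSE SBSD(QINTER) WRKSTNTYPE(*ALL) AT(*SIGNON)
ADDWSE SBSD(QINTER) WRKSTNTYPE(*CONS) AT(*ENTER)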
The first set of values tells the system to allocate all workstations to subsystem QINTER when the subsystem is started. The second set of values tells the system to let the console transfer into the subsystem, but not to allocate the device. What about a multiple subsystem environment for interactive jobs? Let's say you want to configure three subsystems: one for programmers (PGMRS), one for local end-user workstations (LOCAL), and one for remote end-user workstations (REMOTE). How can you make sure the system allocates the workstations to the correct subsystem? Perhaps you're thinking you can create individual workstation entries for each device. You can, but such a method would be a nightmare to maintain, and it would require you to end the subsystem each time you added a new device. Likewise, it would be impractical to use the WRKSTNTYPE attribute, because defining types does not necessarily define specific locations for certain workstations. So you have only two good options for ensuring that the correct subsystem allocates the devices. One is to name your various workstations so you can use generic WRKSTN values in the workstation entry. For example, you can allocate programmers' workstations to the proper subsystem by first giving them names like PGMR01 or PGMR02 and then creating a workstation entry that specifies WRKSTN(PGMR*). You might preface all local end-user workstation names with ADMN and LOC and then create workstation entries in the local subsystem using WRKSTN(ADMN*) and WRKSTN(LOC*). For the remote subsystem, you could continue to create workstation entries using generic names like the ones described above, or simply specify WRKSTNTYPE(*ALL), which would cause the subsystem to allocate the remaining workstations. However, you will need to read on to learn how subsystems allocate workstations to ensure that those workstations in the programmer and local subsystems are allocated properly. Your second option for ensuring that the correct subsystem allocates the devices is to use routing entries to reroute workstation jobs to the correct subsystem (I will explain how to do this in the next chapter).
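As a sketch of the first option, the workstation entries for the three subsystems described above might be added like this (the subsystem names PGMRS, LOCAL, and REMOTE come from the example in the text; library QGPL and the generic device names are assumptions):

ADDWSE SBSD(QGPL/PGMRS) WRKSTN(PGMR*) AT(*SIGNON)
ADDWSE SBSD(QGPL/LOCAL) WRKSTN(ADMN*) AT(*SIGNON)
ADDWSE SBSD(QGPL/LOCAL) WRKSTN(LOC*) AT(*SIGNON)
ADDWSE SBSD(QGPL/REMOTE) WRKSTNTYPE(*ALL) AT(*SIGNON)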
Conflicting Workstation Entries Can workstation entries in different subsystems conflict with each other? You bet they can! Consider what happens when two different subsystems have workstation entries that allocate the same device. If AT(*SIGNON) is specified in the workstation entry, the first subsystem will allocate the device, and the device will show a sign-on display. When the system starts another subsystem with a workstation entry that applies to that same device (with AT(*SIGNON) specified), the subsystem will try to allocate it. If no user is signed on to the workstation, the second subsystem will allocate the device. This arrangement isn't all bad. In fact, you can make it work for you. Imagine that you want to establish an interactive environment for two subsystems: QINTER (for all end-user workstations) and QPGMR (for all programmer workstations). You supply WRKSTNTYPE(*ALL) for subsystem QINTER and WRKSTN(PGMR*) for subsystem QPGMR. To ensure that each workstation is allocated to the proper subsystem, you should start QINTER first. Consequently, the system will allocate all workstations to QINTER. After a brief delay, start QPGMR, which will then allocate (from QINTER) only the workstations whose names begin with 'PGMR'. Every workstation has its rightful place by simply using the system to do the work. What about you? Can you see how your configuration is set up to let interactive jobs process? Take a few minutes to examine the workstation entries in your system's subsystems. You can use the DSPSBSD (Display Subsystem Description) command to display the work entries that are part of the subsystem description.
Job Queue Entries Job queue entries control job initiation on your system and define how batch jobs enter the subsystem for processing. To submit jobs for processing, you must assign one or more job queues to a subsystem. A job queue entry associates a job queue with a subsystem. The attributes of a job queue entry are as follows:
• JOBQ (job queue name)
• MAXACT (maximum number of active jobs from this job queue)
• SEQNBR (sequence number used to determine order of selection among all job queues)
• MAXPTYn (maximum number of active jobs with this priority)
The JOBQ attribute, which is required, defines the name of the job queue you are attaching to the subsystem. The subsystem will search this job queue to receive jobs for processing. You can name only one job queue for a job queue entry, but you can define multiple job queue entries for a subsystem. The MAXACT attribute defines the maximum number of jobs that can be active in the subsystem from the job queue named in this entry. This attribute controls only the maximum number of jobs allowed into the subsystem from the job queue. The default for MAXACT is 1, which lets only one job at a time from this job queue process in the subsystem. The MAXACT (yes, same name) attribute of the subsystem description controls the maximum number of jobs in the subsystem from all entries (e.g., job queue and communications entries). You can use the SEQNBR attribute to sequence multiple job queue entries associated with the subsystem. The subsystem searches each job queue in the order specified by the SEQNBR attribute of each job queue entry. The default for this attribute is 10, which you can use to define only one subsystem job queue entry; however, when defining multiple job queue entries, you should determine the appropriate sequence numbers desired to prioritize the job queues. The MAXPTYn attribute is similar to the MAXACT attribute except that MAXPTYn controls the number of active jobs from a job queue that have the same priority (e.g., MAXPTY1 defines the maximum for jobs with priority 1, MAXPTY2 defines the maximum number for jobs with priority 2). The default for MAXPTY1 through MAXPTY9 is *NOMAX. To illustrate how job queue entries work together to create a proper batch environment, Figure 18.1 shows a scheme that includes three subsystems: DAYSBS, NIGHTSBS, and BATCHSBS. DAYSBS processes daytime, short-running end-user batch jobs. NIGHTSBS processes nighttime, long-running end-user batch jobs. BATCHSBS processes operator-submitted requests and program compiles. To create the batch work environment in Figure 18.1, you first create the subsystems using the following CRTSBSD (Create Subsystem Description) commands:
CRTSBSD SBSD(QGPL/DAYSBS) POOL((1 *BASE) (2 400 1)) MAXACT(1)
CRTSBSD SBSD(QGPL/NIGHTSBS) POOL((1 *BASE) (2 2000 2)) MAXACT(2)
CRTSBSD SBSD(QGPL/BATCHSBS) POOL((1 *BASE) (2 1500 3)) MAXACT(3)

Notice that each subsystem has an established maximum number of active jobs (MAXACT(n)). The maximum limit matches the activity level specified in the subsystem pool definition so that each active job is assigned an activity level without having to wait for one. The next step is to create the appropriate job queues with the following CRTJOBQ (Create Job Queue) commands:
CRTJOBQ JOBQ(QGPL/DAYQ)
CRTJOBQ JOBQ(QGPL/NIGHTQ)
CRTJOBQ JOBQ(QGPL/PGMQ)
CRTJOBQ JOBQ(QGPL/BATCHQ)
Then, add the job queue entries to associate the job queues with the subsystems:
ADDJOBQE SBSD(DAYSBS) JOBQ(DAYQ) MAXACT(*NOMAX) SEQNBR(10)
ADDJOBQE SBSD(NIGHTSBS) JOBQ(NIGHTQ) MAXACT(*NOMAX) SEQNBR(10)
ADDJOBQE SBSD(BATCHSBS) JOBQ(PGMQ) MAXACT(1) SEQNBR(10)
ADDJOBQE SBSD(BATCHSBS) JOBQ(BATCHQ) MAXACT(2) SEQNBR(20)
Now let's walk through this batch work environment. Subsystem DAYSBS is a simple configuration that lets one job queue feed jobs into the subsystem. Because the MAXACT attribute value of DAYSBS is 1, only one job filters into the subsystem at a time, despite the fact that you specified the attribute MAXACT(*NOMAX) for the DAYQ job queue entry. Later, you can change the subsystem pool size and activity level, along with the MAXACT subsystem attribute, to let more jobs from the job queue process without having to re-create the job queue entry to modify MAXACT. The configuration of NIGHTSBS is similar to the configuration of DAYSBS, except that it lets two jobs process at the same time. This subsystem is inactive during the day and starts at night via the STRSBS (Start Subsystem) command. When a subsystem is inactive, no job queues are allocated and no jobs are processed. Therefore, application programs can send batch jobs to the NIGHTQ job queue, where they wait to process at night. When NIGHTSBS starts, the system allocates job queue NIGHTQ and jobs can be processed. To show you how job queues can work together to feed into one subsystem, I configured the BATCHSBS subsystem with two job queue entries. Notice that BATCHSBS supports a maximum of three jobs (MAXACT(3)). Job queue entry PGMQ lets one job from that queue be active (MAXACT(1)), while job queue entry BATCHQ lets two jobs be active (MAXACT(2)). As with workstation entries, job queue entries can conflict if you define the same job queue as an entry for more than one subsystem. When a subsystem starts, the job queues defined in the job queue entries are allocated. And when a job queue is allocated to an active subsystem, that job queue cannot be allocated to another subsystem until the first subsystem ends. In other words, first come, first served... or first come, first queued!
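To place work on one of these queues, an application or operator simply names the queue on the SBMJOB command; for instance (the job and program names here are hypothetical):

SBMJOB JOB(NIGHTRPT) CMD(CALL PGM(MYLIB/NIGHTPGM)) JOBQ(QGPL/NIGHTQ)

The job then waits on NIGHTQ until NIGHTSBS is started and allocates the queue.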
Communications Entries After you establish a workstation and a physical connection between remote sites, you need a communications entry, which enables the subsystem to process the program start request. If there are no communications entries, the system rejects any program start request. There's no real pizazz to this entry; you simply need it to link the remote system with your subsystem. A communications entry has the following attributes:
• DEV (name or type of communications device)
• RMTLOCNAME (remote location name)
• JOBD (job description name)
• DFTUSR (default user profile name)
• MODE (mode description name)
• MAXACT (maximum number of jobs active with this entry)
The DEV attribute specifies the particular device (e.g., COMMDEV or REMSYS) or device type (e.g., *APPC) needed for communications. The RMTLOCNAME attribute specifies the remote location name you define when you use the CRTDEVxxxx command to create the communications device. There is no default for the DEV or the RMTLOCNAME attribute. As with the WRKSTNTYPE and WRKSTN attributes, you must specify one or the other, but not both. The next two attributes, JOBD and DFTUSR, are crucial. JOBD specifies the job description to associate with this entry. As you do with the workstation entry, you should use the default value *USRPRF to ensure that a user profile is used and that the system uses the job description associated with the user making the program start request. As with workstation entries, using a specific job description can cause a security problem if that job description names a default user. DFTUSR defines the default user for the communications entry. You should specify *NONE for this attribute to ensure that any program start request supplies a valid user profile and password. The MODE attribute defines specific communications boundaries and variables. For more information about the MODE attribute, see the CRTMODD (Create Mode Description) command description in IBM's AS/400 Programming: Control Language Reference (SC41-0030).
The MAXACT attribute defines the maximum number of program start requests that can be active at any time in the subsystem for this communications entry. You can add a communications entry by using the ADDCMNE (Add Communications Entry) command, as in the following example:
ADDCMNE SBSD(COMMSBS) RMTLOCNAME(NEWYORK) JOBD(*USRPRF) DFTUSR(*NONE) MODE(*ANY) MAXACT(*NOMAX)
If you are communicating already and you want to know what entries are configured, use the DSPSBSD (Display Subsystem Description) command to find out.
Prestart Job Entries The prestart job entry goes hand-in-hand with the communications entry, telling the subsystem which program to start when the subsystem itself is started. The program does not execute -- the system simply performs all the opens and initializes the job named in the prestart job entry and then waits for a program start request for that particular program. When the system receives a program start request, it starts a job by using the prestart program that is ready and waiting, thus saving valuable time in program initialization. The prestart job entry is the only work entry that defines an actual program and job class to be used. (Other jobs get their initial routing program from the routing data entries that are part of the subsystem description.) The two key attributes of the prestart job entry are PGM and JOBD. The PGM attribute specifies the program to use and the JOBD attribute specifies the job description to be used. To add a prestart job entry, use an ADDPJE (Add Prestart Job Entry) command similar to the following:
ADDPJE SBSD(COMMSBS) PGM(OEPGM) JOBD(OEJOBD)

Then, when the communications entry receives a program start request (an EVOKE) and processes the request, it will compare the program evoke to the prestart job program defined. In this case, if the program evoke is also OEPGM, the system has no need to start a job because the prestart job is already started.
Autostart Job Entry An autostart job entry specifies the job to be executed when the subsystem starts. For instance, if you want to print a particular history report each time the system is IPLed, you can add the following autostart job entry to the controlling subsystem description:
ADDAJE SBSD(sbs_name) JOB(HISTORY) JOBD(MYLIB/HISTJOBD)

The JOB and JOBD attributes are the only ones the autostart job entry defines, which means that the job description must use the request data or routing data to execute a command or a program. In the example above, HISTJOBD would have the correct RQSDTA (Request Data) attribute to call the program that generates the history report (e.g., RQSDTA('call histpgm')). The job HISTORY, defined in the autostart job entry, starts each time the associated subsystem starts, ensuring that the job runs whether or not anyone remembers to submit it. OS/400 uses an autostart job entry to assist the IPL process. When you examine either the QBASE or QCTL subsystem description (using the DSPSBSD command), you will find that an autostart job entry exists to submit the QSTRUPJD job using the job description QSYS/QSTRUPJD. This job description uses the request data to call a program used in the IPL process.
Where Jobs Go Now we've seen where jobs come from on the AS/400 -- but where do they go? I'll address that question in the next chapter when we look at how routing entries provide the final gateway to subsystem processing. One reminder. If you decide to create or modify the system-supplied work management objects such as subsystem descriptions and job queues, you should place the new objects in a user-defined library. When you are
ready to start using your new objects, you can change the system startup program QSYS/QSTRUP to use your new objects for establishing your work environment (to change the system startup program, you modify the CL source and recompile the program). By having your new objects in your own library, you can easily document any changes.
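One common way to do this (your shop's procedures may differ, and the library name MYLIB is just an example) is to retrieve the startup program's CL source, modify and recompile it into your own library, and then point the QSTRUPPGM system value at your copy:

RTVCLSRC PGM(QSYS/QSTRUP) SRCFILE(MYLIB/QCLSRC) SRCMBR(QSTRUP)
CRTCLPGM PGM(MYLIB/QSTRUP) SRCFILE(MYLIB/QCLSRC) SRCMBR(QSTRUP)
CHGSYSVAL SYSVAL(QSTRUPPGM) VALUE('QSTRUP    MYLIB')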
Chapter 19 - Demystifying Routing

So far, I have explained how jobs are defined and started on the AS/400. We've seen that jobs are processed in a subsystem, which is where the system combines all the resources needed to process work. And we've seen how work entries control how jobs gain access to the subsystem. Now we need to talk about routing, which determines how jobs are processed after they reach the subsystem. I am constantly surprised by the number of AS/400 programmers who have never fully examined routing. In fact, it's almost as though routing is some secret whose meaning is known by only a few. In this chapter, I concentrate on subsystem routing entries to prove to you, once and for all, that you have nothing to fear! The AS/400 uses routing to determine where jobs go. To understand routing, it might help to think of street signs, which control the flow of traffic from one place to another. The AS/400 uses the following routing concepts to process each and every job:
• Routing data -- A character string, up to 80 characters long, that determines the routing entry the subsystem will use to establish the routing step.
• Routing entry -- A subsystem description entry, which you create, that determines the program and job class the subsystem will use to establish a routing step.
• Routing step -- The processing that starts when the routing program executes.
To execute in a subsystem, AS/400 jobs must have routing data. Routing data determines which routing entry the subsystem will use. For most jobs, routing data is defined by either the RTGDTA (Routing Data) parameter of the job description associated with the job or by the RTGDTA parameter of the SBMJOB (Submit Job) command. Now let's look at each of these job types to see how routing data is defined for each.
Routing Data for Interactive Jobs Users gain access to a given subsystem for interactive jobs via workstations, defined by workstation entries. The key to determining routing data for an interactive job is the JOBD (Job Description) parameter of the workstation entry that the subsystem uses to allocate the workstation being used. If the value for the JOBD parameter is *USRPRF, the routing data defined on the job description associated with the user profile is used as the routing data for the interactive job. If the value of the JOBD parameter of the workstation entry is *SBSD (which instructs the system to use the job description that has the same name as the subsystem description) or an actual job description name, the routing data of the specified job description will be used as the routing data for the interactive job. Let me give you a couple examples. Let's say you create a user profile using the CRTUSRPRF (Create User Profile) command and do not enter a specific job description. The system uses QDFTJOBD (the default job description) for that user profile. Executing DSPJOBD QDFTJOBD reveals that the RTGDTA attribute has a value of QCMDI. When a user signs on to a workstation that uses a subsystem workstation entry where *USRPRF is defined as the JOBD attribute, the routing data for that interactive job would be the routing data defined on the job description associated with the user profile; in this case, the JOBD would be QDFTJOBD, and the routing data would be QCMDI.
Now look at Figure 19.1, in which the workstation entry defines SPJOBD as the job description. Instead of using the job description associated with the user profile, the subsystem uses the SPJOBD job description to establish job attributes, including the RTGDTA value of SPECIAL.
Routing Data for Batch Jobs Establishing routing data for a batch job is simple; you use the RTGDTA parameter of the SBMJOB (Submit Job) command. The RTGDTA parameter on this command has four possible values:
• *JOBD -- the routing data of the job description.
• *RQSDTA -- the value specified in the RQSDTA (Request Data) parameter on the SBMJOB command. (Because the request data represents the actual command or program to process, specifying *RQSDTA is practical only if specific routing entries have been established in a subsystem to start specific routing steps based on the command or program being executed by a job.)
• QCMDB -- the default routing data used by the IBM-supplied subsystems QBASE or QBATCH to route batch jobs to the CL processor QCMD (more on this later).
• routing-data -- up to 80 characters of user-defined routing data.
Keeping these values in mind, let's look at a SBMJOB command. To submit a batch job that sends the operator the message 'hi,' you would enter the command
SBMJOB JOB(MESSAGE) CMD(SNDMSG MSG('hi') TOMSGQ(QSYSOPR))

This batch job would use the routing data of QCMDB. How do I know that? Because, as I stated above, the value QCMDB is the default. If you submit a job using the SBMJOB command without modifying the default value for the RTGDTA parameter, the routing data is always QCMDB -- as long as this default has not been changed via the CHGCMDDFT (Change Command Default) command. Now examine the following SBMJOB command:
SBMJOB JOB(PRIORITY) CMD(CALL USERPGM) RTGDTA('high-priority')
In this example, a routing data character string ('high-priority') is defined. By now you are probably wondering just how modifying the routing data might change the way a job is processed. We'll get to that in a minute. Figure 19.2 provides an overview of how the routing data for a batch job is established. A user submits a job via the SBMJOB command. The RTGDTA parameter of the SBMJOB command determines the routing data, and the resulting job (012345/USER_XX/job_name) is submitted to process in a subsystem. We can pick any of the four possible values for the RTGDTA attribute on the SBMJOB command and follow the path to see how that value eventually determines the routing data for the submitted batch job. If you specify RTGDTA(*JOBD), the system examines the JOBD parameter of the SBMJOB command and then uses either the user profile's job description or the actual job description named in the parameter. If you define the RTGDTA parameter as *RQSDTA, the job uses the value specified in the RQSDTA (Request Data) parameter of the SBMJOB command as the routing data. Finally, if you define the RTGDTA parameter as QCMDB or any user-defined routing data, that value becomes the routing data for the job.
Routing Data for Autostart, Communications, and Prestart Jobs As you may recall from Chapter 18, an autostart job entry in the subsystem description consists of just two attributes: the job name and the specific job description to be used for processing. The routing data of a particular job description is the only source for the routing data of an autostart job. For communications jobs (communications evoke requests), the subsystem builds the routing data from the program start request, which always has the value PGMEVOKE starting in position 29, immediately followed by the desired program name. The routing data is not taken from a permanent object on the AS/400, but is instead derived from the program start request that the communications entry in the subsystem receives and processes. Prestart jobs use no routing data. The prestart job entry attribute, PGM, specifies the program to start in the subsystem. The processing of this program is the routing step for that job.
The Importance of Routing Data When a job enters a subsystem, the subsystem looks for routing data that matches the compare value in one or more routing entries of the subsystem description -- similar to the way you would check your written directions to see which highway exit to take. The subsystem seeks a match to determine which program to use to establish the routing step for that job. Routing entries, typically defined when you create a subsystem, are defined as part of the subsystem description via the ADDRTGE (Add Routing Entry) command. Before we take a closer look at the various attributes of a routing entry, let me explain how routing entries relate to routing data. Figure 19.3 shows how the subsystem uses routing data for an interactive job. When USER_XX signs on to workstation DSP01, the interactive job is started, and the routing data (QCMDI) is established. When the job enters the subsystem, the system compares the routing data in the job to the routing data of each routing entry until it finds a match. (The search is based on the starting position specified in the routing entry and the literal specified as the compare value.) In Figure 19.3, the compare value for the first routing entry (SEQNBR(10)) and the routing data for job 012345/USER_XX/DSP01 are the same. Because the system has found a match, it executes the program defined in the routing entry (QCMD in library QSYS) to establish the routing step for the job in the subsystem. In addition to establishing the routing step, the routing entry also provides the job with specific runtime attributes based on the job class specified. In this case, the specified class is QINTER. Jobs that require routing data (all but prestart jobs) follow this same procedure when being started in the subsystem.

Now that you have the feel of how this process works, let's talk about routing entries and associated job classes. In Chapter 18, I said that routing entries identify which programs to call, define which storage pool the job will be processed in, and specify the execution attributes the job will use for processing. As shown in Figure 19.3, a routing entry consists of a number of attributes: sequence number, compare value, starting position, program, class, maximum active, and pool ID. Each attribute is defined when you use the ADDRTGE command to add a routing entry to a subsystem description. It's important that you understand these attributes and how you can use them to create the routing entries you need for your subsystems.

The sequence number is simply a basic numbering device that determines the order in which routing entries will be compared against routing data to find a match. When assigning a sequence number, you need to remember two rules. First, always use the compare value *ANY with SEQNBR(9999) so it will be used only when no other match can be found. (Notice that routing entry SEQNBR(9999) in Figure 19.3 has a compare value of *ANY.) Second, when using similar compare values, use the sequence numbers to order the values from most to least specific. For example, you would arrange the values PGMR, PGMRS, and PGMRS1 this way:
Sequence Number     Compare Value
10                  'PGMRS1'
20                  'PGMRS'
30                  'PGMR'
Placing the least specific value (PGMR) first would cause a match to occur even when the intended value (e.g., PGMRS1) is more specific.
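For reference, routing entries like these would be added with ADDRTGE commands along the following lines (the subsystem name MYSBS is hypothetical; QSYS/QCMD and the QGPL/QINTER class are the usual IBM-supplied objects, and the pool IDs are assumptions):

ADDRTGE SBSD(QGPL/MYSBS) SEQNBR(10) CMPVAL('PGMRS1') PGM(QSYS/QCMD) CLS(QGPL/QINTER) POOLID(2)
ADDRTGE SBSD(QGPL/MYSBS) SEQNBR(9999) CMPVAL(*ANY) PGM(QSYS/QCMD) CLS(QGPL/QINTER) POOLID(1)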
The compare value and starting position attributes work together to search a job's routing data for a match. For example, if the value (ROUTE 5) is used, the system searches the job's routing data starting in position 5 for the value ROUTE. The compare value can be any characters you want (up to 80). The important thing is to use a compare value that matches some routing data that identifies a particular job or job type. Why go to this trouble? Because you can use this matching routing entry to determine a lot about the way a job is processed on the system (e.g., subsystem storage pool, run priority, and time slice). The PGM attribute determines what program is called to establish the routing step for the job being processed. Remember, a routing step simply starts the program named in the routing entry. Normally, this program is QCMD (the IBM CL processor), but it can be any program. When QCMD is the routing program, it waits for a request message to process. For an interactive job, the request message would be the initial program or menu request; for a batch job, it would be the request data (i.e., the command or program to execute). If the routing program is a user-defined program, the program simply executes. The routing entry program is the first program executed in the routing step. The routing entry can be used to make sure that a specific program is executed when certain routing data is found, regardless of the initial program or specific request data for a job. Later in this chapter, I explain how this might be beneficial to you.
Runtime Attributes The CLASS (job class) is an important performance-related object that defines the run priority of the job, as well as the time slice for a job. (The time slice is the length of time, in CPU milliseconds, a job will process before being bumped from the activity level to wait while another job executes a time slice.) A routing entry establishes a job's run priority and time slice much the way speed limit or yield signs control the flow of traffic. For more information on these performance-related attributes of the CLASS object, see IBM's AS/400 Programming: Work Management Guide (SC41-8078). In Figure 19.3, all the routing entries use class QINTER, which is defined to represent the run priority and time slice typical for an interactive job. Because you would not want to process a batch job using these same values, the system also has an IBM-supplied class, called QBATCH, that defines attributes more typical for batch job processing. If you look at the subsystem description for QBASE or QBATCH, you will find the following routing entry:
Sequence Number     Compare Value     Program         Class
10                  'QCMDB'           QSYS/QCMD       QBATCH
This entry uses program QCMD and directs the system to use class QBATCH to define the runtime attributes for jobs having routing data QCMDB. To route jobs with the correct routing program and job class, the system-supplied routing data for the default batch job description QBATCH is QCMDB. You can use different classes to create the right performance mix. MAXACT determines the maximum number of active jobs that can use a particular routing entry. You will rarely need to change this attribute's default (*NOMAX). The last routing entry attribute is the POOLID (subsystem storage pool ID). As I explained in Chapter 17, the subsystem definition includes the specific storage pools the subsystem will use. These storage pools are numbered in the subsystem, and these numbers are used only within that particular subsystem description; they do not match the numbering scheme of the system pools. The routing entry attribute POOLID tells the system which subsystem storage pool to use for processing this job. Look at the following pool definition and abbreviated routing entry:
Pool Definition: ((1 *BASE) (2 10000 20))

Sequence Number     Compare Value     Pool ID
10                  'QCMDI'           1
This routing entry tells the system to use subsystem pool number 1 (*BASE). Considering that 10,000 KB of storage is set aside in pool number 2, this routing entry is probably incorrectly specifying pool number 1. Beginners
commonly make the mistake of leaving the default value in the routing entry definition when creating their own subsystems and defining their own routing entries. Just remember to compare the pool definition with the routing entry definition to ensure that the correct subsystem pool is being used.
Is There More Than One Way to Get There? So far, we've discussed how routing data is created, how routing entries are established to search for that routing data, and how routing entries establish a routing step for a job and control specific runtime attributes of a job. Now for one more hurdle... A job can have more than one routing step. But why would you want it to? One reason might be to use a new class to change the runtime attributes of the job. After a job is started, you can reroute it using the RRTJOB (Reroute Job) command or transfer it to another subsystem using the TFRJOB (Transfer Job) command. Both commands have the RTGDTA parameter, which lets you modify the job's current routing data to establish a new routing step. Suppose you issue the following command during the execution of a job:
RRTJOB RTGDTA('FASTER') RQSDTA(*NONE)

Your job would be rerouted in the same subsystem but use the value FASTER as the value to be compared in the routing entries.
Do-It-Yourself Routing To reinforce your understanding of routing and tie together some of the facts you've learned about work management, consider the following example. Let's say you want to place programmers, OfficeVision/400 (OV/400) users, and general end users in certain subsystems based on their locations or functions. You need to do more than just separate the workstations; you need to separate the users, no matter what workstation they are using at the time. Figures 19.4a through 19.4f describe the objects and attributes needed to define such an environment. Figure 19.4a lists three job descriptions that have distinct routing data. User-defined INTERJOBD has QINTER as the routing data. OFFICEJOBD and PGMRJOBD have QOFFICE and QPGMR specified, respectively, as their routing data. (Note that the routing data need not match the job description name.) To enable users to work in separate subsystems, you first need to create or modify their user profiles and supply the appropriate job description based on the subsystem in which each user should work. In our example, general end users would have INTERJOBD, OV/400 users would have OFFICEJOBD, and programmers would have the job description PGMRJOBD.
Next, you must build subsystem descriptions that use the routing entries associated with the job descriptions. Figure 19.4b shows some sample subsystem definitions. All three subsystems use the WRKSTNTYPE (workstation type) entry with the value *ALL. However, only the workstation entry in QINTER uses the AT(*SIGNON) entry to tell the subsystem to allocate the workstations. This means that subsystem QINTER allocates all workstations and QOFFICE and QPGMR (both with AT(*ENTER)) only allocate workstations as jobs are transferred into those subsystems. Also, notice that each workstation entry defines JOBD(*USRPRF) so that the routing data from the job descriptions of the user profiles will be the routing data for the job. After a user signs on to a workstation in subsystem QINTER, the routing entries do all the work. The first routing entry looks for the compare value QOFFICE. When it finds QOFFICE, program QOFFICE in library SYSLIB is called to establish the routing step. In Figure 19.4c, program QOFFICE simply executes the TFRJOB command to transfer this particular job into subsystem QOFFICE. However, if you look carefully at Figure 19.4c, you will see that the TFRJOB command also modifies the routing data to become QCMDI, so that when the job enters subsystem QOFFICE, routing data QCMDI matches the corresponding routing entry and uses
program QCMD and class QOFFICE. If an error occurs on the TFRJOB command, the MONMSG CPF0000 EXEC(RRTJOB RTGDTA(QCMDI)) command reroutes the job in the current subsystem. Figure 19.4d shows how class QOFFICE might be created to provide the performance differences needed for OV/400 users. Look again at Figure 19.4b. The next routing entry in the QINTER subsystem looks for compare value QPGMR. When it finds QPGMR, it calls program QPGMR (Figure 19.4e) to transfer the job into subsystem QPGMR. Routing data QCMDI calls program QCMD and then processes the initial program or menu of the user profile. The same is true for routing data *ANY. In our example, subsystems QOFFICE and QPGMR use similar routing entries to make sure each job enters the correct subsystem. Notice that each subsystem has a routing entry that searches for QINTER. If this compare value is found, program QINTER (Figure 19.4f) is called to transfer the job into subsystem QINTER. As intimidating as they may at first appear, routing entries are really plain and simple. Basically, you can use them to intercept jobs as they enter the subsystem and then control the jobs using various run-time variables. I strongly recommend that you take the time to learn how your system uses routing entries. Start by studying subsystem descriptions to learn what each routing entry controls. Once you understand them, you will find that you can use routing entries as solutions to numerous work management problems.
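Figures 19.4a through 19.4f aren't reproduced here, but based on the description above, the transfer program called by the QOFFICE routing entry might look something like this CL sketch (the library and job queue names are assumptions):

PGM
  /* Move the job to subsystem QOFFICE and change its routing data */
  TFRJOB JOBQ(QGPL/QOFFICE) RTGDTA(QCMDI)
  /* If the transfer fails, reroute the job in the current subsystem */
  MONMSG MSGID(CPF0000) EXEC(RRTJOB RTGDTA(QCMDI))
ENDPGM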
Chapter 20 - File Structures

Getting a handle on AS/400 file types can be puzzling. If you count the various types of files the AS/400 supports, how many do you get? The answer is five. And 10. The AS/400 supports five types of files — database files, source files, device files, DDM files, and save files. So if you count types, you get five. However, if you count the file subtypes — all the objects designated as OBJTYPE(*FILE) — you get 10. Still puzzled? Figure 20.1 lists the five file types that exist on the AS/400, as well as the 10 subtypes and the specific CRTxxxF (Create xxx File) commands used to create them. Each file type (and subtype) contains unique characteristics that provide unique functions on the AS/400. In this chapter, I look at the various types of files and describe the way each file type functions.
Structure Fundamentals If there is any one AS/400 concept that is the key to unlocking a basic understanding of application development, it is the concept of AS/400 file structure. It's not that the concept is difficult to grasp; it's just that there are quite a few facts to digest. So let's start by looking at how files are described. On the AS/400, all files are described at four levels (Figure 20.2). First is the object-level description. A file is an AS/400 object whose object type is *FILE. The AS/400 maintains the same object description information for a file (e.g., its library and size) as it does for any other object on the system. You can look at the object-level information with the DSPOBJD (Display Object Description) command. The second level of description the system maintains for *FILE objects is a file-level description. The file description is created along with the file when you execute a CRTxxxF command. It describes the attributes or characteristics of a particular file and is embedded within the file itself. You can display or print a file description with the DSPFD (Display File Description) command. The file subtype is one of the attributes maintained as part of the file description. This allows OS/400 to present the correct format for the description when using the DSPFD command. This also provides OS/400 with the ability to determine which commands can operate on which types of files. For instance, the DLTF (Delete File) command
works for any type of file on the system, but the ADDPFM (Add Physical File Member) command only works for physical files. OS/400 uses the description of the file to maintain and enforce each file's object identity. The third level of descriptive information the system maintains for files is the record-level description. This level describes the record format (or formats, if the file has more than one) that exist in the file. A record format describes a set of fields that make a record. If the fourth level of description — field descriptions — is not used when creating the file, the record format is described by a specific record length. All files have at least one record format, and logical files can have multiple record formats (we'll cover this topic in a future chapter). Applications perform I/O by using specific record formats. An application can further break the record format into fields by either explicitly defining those fields within the application or by working with the external field definitions if they are defined for a record format. Although there are DSPOBJD and DSPFD commands, there is no Display Record Description command; you use the DSPFD command and the DSPFFD (Display File Field Description) command to display or print the record-level information. The final level of descriptive information the system maintains for files is the field-level description. Field descriptions do not exist for all types of files; tape files, diskette files, DDM files, and save files have no field descriptions because they have no fields. (In the case of DDM files, the field descriptions of the target system file are used.) For the remaining files — physical, logical, source, display, printer, and ICF — a description of each field and field attribute is maintained. You can use the DSPFFD command to display or print the field-level descriptions for a file.
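For example, you could examine each level of description for a hypothetical physical file MYLIB/CUSTOMER with commands such as:

DSPOBJD OBJ(MYLIB/CUSTOMER) OBJTYPE(*FILE)
DSPFD FILE(MYLIB/CUSTOMER)
DSPFD FILE(MYLIB/CUSTOMER) TYPE(*RCDFMT)
DSPFFD FILE(MYLIB/CUSTOMER)

DSPOBJD shows the object level, DSPFD the file level (and, with TYPE(*RCDFMT), the record formats), and DSPFFD the field level.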
Data Members: A Challenge Now that you know how files are described, you need a challenge! We now need to consider a particular organizational element that applies only to database and source files, the two types of files that actually contain records of data. You may be saying, 'Wait, you don't have to tell us that. Each file is described (as discussed), and each file has records, right?' I wish it were that simple, but on the AS/400 there is an additional element of file organization, the data member, that has caused even the best application programmers to cry in anguish, just as Martin Luther did, until they discover the truth. Now that I have your attention (and you're trying to remember just who Martin Luther was — look under Church History: The Reformation), I will impart the truth to you and save you any future anguish. Examine Figure 20.3, which introduces you to the concept of the file data member. You traditionally think of a file containing a set of records, and usually an AS/400 database file has a description and a data member that contains all the records that exist in that database file. If you create a physical file using the CRTPF (Create Physical File) command and take the defaults for member name and maximum number of members, which are MBR(*FILE) and MAXMBRS(1), respectively, you will create a file that contains only one data member, and the name of that member will be the same name as the file itself. So far, so good. Now comes the tricky part. Believe it or not, AS/400 database and source files can have no data members. If you create a physical file and specify MBR(*NONE), the file will be created without any data member for records. If you try to add records to that file, the system will issue an error stating that no data member exists. You would have to use the ADDPFM command to create a data member in the file before you could add records to the file. At the other end of the scale is the fact that you can have multiple data members in a file. A source file offers a good example. Figure 20.4 represents the way a source file is organized. Each source member is a different data member in the file. When you create a new source member, you are actually creating another data member in this physical source file. Whether you are using PDM (Programming Development Manager) or SEU (Source Entry Utility), by specifying the name of the source member you want to work with, you are instructing the software to override the file to use that particular member for record retrieval. Consider another example — a user application that views both current and historical data by year. Each year represents a unique set of records. This type of application might use a database file to store each year's records in separate data members, using the year itself to construct the name of the data member. Figure 20.5 represents how this application might use a single physical file to store these records. As you can see, each year has a unique data member, and each member has a varying number of records. All members have the same description in terms of record format and fields, but each member contains unique data. The applications that access this data must use the OVRDBF (Override with Database File) command to open the correct data member for record retrieval.
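To make this concrete, here is a sketch (the file, library, and member names are hypothetical) of creating a file with no members, adding a member, and overriding to it:

CRTPF FILE(MYLIB/SALESHST) RCDLEN(132) MBR(*NONE) MAXMBRS(*NOMAX)
ADDPFM FILE(MYLIB/SALESHST) MBR(Y1998) TEXT('1998 sales history')
OVRDBF FILE(SALESHST) TOFILE(MYLIB/SALESHST) MBR(Y1998)

An application that opens SALESHST after the override would then read and write records in member Y1998.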
Wow! No database members... one database member... multiple database members... Why? That's a fair question. Using multiple data members provides a unique manner to handle data that uses the same record format and same field descriptions and yet must be maintained separately for business reasons. One set of software can be written to support the effort, but the data can be maintained, even saved, separately. Having sorted through the structure of AS/400 files and dealt with data members, let's look specifically at the types of files and how they are used.
Database Files Database files are AS/400 objects that actually contain data or provide access to data. Two types of files are considered database files — physical files and logical files. A physical file, denoted as TYPE(*FILE) and ATTR(PF), has file-, record-, and field-level descriptions and can be created with or without using externally described source specifications. Physical files — so called because they contain your actual data (e.g., customer records) — can have only one record format. The data entered into the physical file is assigned a relative record number based on arrival sequence. As I indicated earlier, database files can have multiple data members, and special program considerations must be implemented to ensure that applications work with the correct data members. You can view the data that exists in a specific data member of a file using the DSPPFM (Display Physical File Member) command. A logical file, denoted as TYPE(*FILE) and ATTR(LF), is created in conjunction with physical files to determine how data will be presented to the requester. For those of you coming from an S/36, the nearest kin to a logical file is an index or alternate index. Logical files contain no data but instead are used to specify key fields, select/omit logic, field selection, or field manipulation. The key fields serve to specify the access paths to use for accessing the actual data records that reside in physical files. Logical files must be externally described using DDS and can be used only in conjunction with externally described physical files.
Source Files A source file, like QRPGSRC where RPG source members are maintained, is simply a customized form of a physical file; and as such, source files are denoted as TYPE(*FILE) and ATTR(PF). (Note: If you work with objects using PDM, physical data files and physical source files are distinguished by using two specific attributes — PF-DTA and PF-SRC.) All source files created using the CRTSRCPF (Create Source Physical File) command have the same record format and thus the same fields. When you use the CRTSRCPF command, the system creates a physical file that allows multiple data members. Each program source is one physical file member. When you edit a particular source member, you are simply editing a specific data member in the file.
Device Files Device files contain no actual data. They are files whose descriptions provide information about how an application is to use particular devices. The device file must contain information valid for the device type the application is accessing. The types of device files are display, printer, tape, diskette, and ICF. Display files, denoted by the system as TYPE(*FILE) and ATTR(DSPF), provide specific information relating to how an application can interact with a workstation. While a display file contains no data, the display file does contain various record formats that represent the screens the application will present to the workstation. Each specific record format can be viewed and maintained using IBM's Screen Design Aid (SDA), which is part of the Application Development Tools licensed program product. Interactive high-level language (HLL) programs include the workstation display file as one of the files to be used in the application. The HLL program writes a display file record format to the screen to present the end user with formatted data and then reads that format from the screen when the end user presses Enter or another appropriate function key. Whereas I/O to a database file accesses disk storage, I/O to a display file accesses a workstation. Printer files, denoted by the system as TYPE(*FILE) and ATTR(PRTF), provide specific information relating to how an application can spool data for output to a writer. The print file can be created with a maximum record length specified and one format to be used with an HLL program and program-described printing, or the print file can be created from external source statements that define the formats to be used for printing. Like display files, the print files themselves contain no data and therefore have no data member associated with them. When an application
program performs output operations to a print file, the output becomes spooled data that can be printed on a writer device. Tape files, denoted by the system as TYPE(*FILE) and ATTR(TAPF), provide specific information relating to how an application can read or write data using tape media. The description of the tape file contains information such as the device name for tape read/write operations, the specific tape volume requested (if a specific volume is desired), the density of the tape to be processed, the record and block length to be used, and other essential information relating to tape processing. Without the use of a tape file, HLL programs cannot access the tape media devices. Diskette files, denoted by the system as TYPE(*FILE) and ATTR(DKTF), are identical to tape files except that these files support diskette devices. Diskette files have attributes that describe the volume to be used and the record and block length. ICF (Intersystem Communications Function) files, denoted by the system as TYPE(*FILE) and ATTR(ICFF), provide specific attributes to describe the physical communications device used for application peer-to-peer communications programming. When a local application wants to communicate with an application on a remote system, the local application turns to the ICF file for information regarding the physical device to use for those communications. The ICF file also contains record formats used to read and write data from and to the device and the peer program.
DDM Files DDM (Distributed Data Management) files, denoted by the system as TYPE(*FILE) and ATTR(DDMF), are objects that represent files that exist on a remote system. For instance, if your customer file exists on a remote system, you can create a DDM file on the local system that specifically points to that customer file on the remote system. DDM files provide you with an interface that lets you access the remote file just as you would if it were on your local system. You can compile programs using the file, read records, write records, and update records while the system handles the communications. Figure 20.6 represents a typical DDM file implementation.
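For instance, assuming the remote location is named CHICAGO and the customer file lives in library CUSTLIB on that system (all names here are hypothetical), the DDM file might be created like this:

CRTDDMF FILE(MYLIB/CUSTDDM) RMTFILE(CUSTLIB/CUSTOMER) RMTLOCNAME(CHICAGO)

Local programs then open CUSTDDM as though it were a local file, and the system handles the communications to the remote system.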
Save Files Save files, denoted by the system as TYPE(*FILE) and ATTR(SAVF), are a special form of file designed specifically to handle save/restore data. You cannot determine the file-, record-, and field-level descriptions for a save file. The system creates a specific description used for all save files to make them compatible with save/restore operations. Save files can be used to receive the output from a save operation and then be used as input for a restore operation. This works just like performing save/restore operations with tape or diskette, except that the saved data is maintained on disk, which enhances the save/restore process because I/O to the disk file is faster than I/O to a tape or diskette device. Save file data also can be transmitted electronically or transported via a sneaker network or overnight courier network to another system and then restored. We have briefly looked at the various types of files that exist on the AS/400. Understanding these objects is critical to effective application development and maintenance on the AS/400. One excellent source for further reading is IBM's Programming: Data
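Before moving on, here is a brief sketch (library and file names are hypothetical) of the save file approach just described:

CRTSAVF FILE(MYLIB/PAYSAVF)
SAVLIB LIB(PAYROLL) DEV(*SAVF) SAVF(MYLIB/PAYSAVF)
RSTLIB SAVLIB(PAYROLL) DEV(*SAVF) SAVF(MYLIB/PAYSAVF)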
Chapter 21 - So You Think You Understand File Overrides

'Try using OvrScope(*Job).'
How many times have you heard this advice when a file override wasn't working as intended? Changing your application to use a job-level override may produce the intended results, but doing so is a bit like replacing a car's engine because it has a fouled spark plug. Actually, with a fully functional new engine, the car will always run right again. A job-level override, on the other hand, may or may not produce the desired results, depending on your application's design. And even if the application works today, an ill-advised job-level override coupled with modifications may introduce application problems in the future. If you're considering skipping this article because you believe you already understand file overrides, think again! I know many programmers, some excellent, who sincerely believe they understand this powerful feature of OS/400 — after all, they've been using overrides in their applications for years. However, I've yet to find anyone who does fully understand overrides. So, read on, surprise yourself, and learn once and for all how the system processes file overrides. Then put this knowledge to work to get the most out of overrides in your applications.
Anatomy of Jobs Before examining file overrides closely, you need to be familiar with the parts of a job's anatomy integral to the function of overrides. The call stack and activation groups both play a key role in determining the effect overrides have in your applications. Jobs typically consist of a chain of active programs, with one program calling another. The call stack is simply an ordered list of these active programs. When a job starts, the system routes it to the beginning program to execute and designates that program as the first entry in the call stack. If the program then calls another program, the system assigns the newly called program to the second call stack entry. This process can continue, with the second program calling a third, the third calling a fourth, and so on, each time adding the new program to the end of the call stack. The call stack therefore reflects the depth of program calls. Consider the following call stack: ProgramA ProgramB ProgramC ProgramD You can see four active programs in this call stack. In this example, the system called ProgramA as its first program when the job started. ProgramA then called ProgramB, which in turn called ProgramC. Last, ProgramC called ProgramD. Because these are nested program calls, each program is at a different layer in the call stack. These layers are known as call levels. In the example, ProgramA is at call level 1, indicating the fact that it is the first program called when the job started. ProgramB, ProgramC, and ProgramD are at call levels 2, 3, and 4, respectively. As programs end, the system removes them from the call stack, reducing the number of call levels. For instance, when ProgramD ends, the system removes it from the call stack, and the job then consists of only three call levels. If ProgramC then ends, the job consists of only two call levels, with ProgramA and ProgramB making up the call stack. This process continues until ProgramA ends, at which time the job ends. So far, you've seen that when one program calls another, the system creates a new, higher call level at which the called program runs. The called program then begins execution, and when it ends, the system removes it from the call stack, returning control to the calling program at the previous call level. That's the simple version, but there's a little more to the picture. First, it's possible for one program to pass control to another program without the newly invoked program running at a higher call level. For instance, with CL's TfrCtl (Transfer Control) command, the system replaces (in the call stack) the program issuing the command with the program to which control is to be transferred. Not only does this action result in the invoked program running at the same call level as the invoking program, but the invoking program is also completely removed from the chain of programs making up the call stack. Hence, control can't be returned to the program that issued the TfrCtl command. Instead, when the newly invoked program ends, control returns to the program at the immediately preceding call level.
You may recall that earlier I said that as programs end, the system removes them from the call stack. In reality, when a program ends, the system removes from the call stack not only the ending program but also any program at a call level higher than that of the ending program. You might be thinking about our example and scratching your head, wondering, 'How can ProgramB end before ProgramC?' Consider the fact that ProgramD can send an escape message to ProgramB's program message queue. This event results in the system returning control to ProgramB's error handler. This return of control to ProgramB results in the system removing from the call stack all programs at a call level higher than ProgramB — namely, ProgramC and ProgramD. ProgramB's design then determines whether it is removed from the call stack. If it handles the exception, ProgramB is not removed from the call stack; instead, processing continues in ProgramB. You should also note that under normal circumstances, the call stack begins with several system programs before any user-written programs appear. In fact, system programs will likely appear throughout your call stack. This point is important only to demonstrate that the call stack isn't simply a representation of user-written programs as they are called. In addition to an understanding of a job's call levels, you need a basic familiarity with activation groups to comprehend file overrides. You're probably familiar with the fact that a job is a structure with its own allocated system resources, such as open data paths (ODPs) and storage for program variables. These resources are available to programs executed within that job but are not available to other jobs. Activation groups, introduced with the Integrated Language Environment (ILE), are a further division of jobs into smaller substructures. As is the case with jobs, activation groups consist of private system resources, such as ODPs and storage for program variables. An activation group's allocated resources are available only to program objects that are assigned to, and running in, that particular activation group within the job. You assign ILE program objects to an activation group when you create the program objects. Then, when you execute these programs, the system creates the activation group (or groups) to which the programs are assigned. A job can consist of multiple activation groups, none of which can access the resources unique to the other activation groups within the job. For example, although multiple activation groups within a job may open the same file, each activation group can maintain its own private ODP. In such a case, programs assigned to the same activation group can use the ODP, but programs assigned to a different activation group don't have access to the same ODP. A complete discussion of activation groups could span volumes. For now, it's sufficient simply to note that activation groups exist, that they are substructures of a job, and that they can contain their own set of resources not available to other activation groups within the job.
Override Rules The rules governing the effect overrides have on your applications fall into three primary areas: the override scope, overrides to the same file, and the order in which the system processes overrides. After examining the details of each of these areas, we'll look at a few miscellaneous rules. Scoping an Override An override's scope determines the range of influence the override will have on your applications. You can scope an override to the following levels:
• Call level — A call-level override exists at the level of the process that issues the override, unless the override is issued using a call to program QCmdExc; in that case, the call level is that of the process that called QCmdExc. A call-level override remains in effect from the time it is issued until the system replaces or deletes it or until the call level in which the override was issued ends.
• Activation group level — An activation-group-level override applies to all programs running in the activation group associated with the issuing program, regardless of the call level in which the override is issued. Only the most recently issued activation-group-level override is in effect. An activation-group-level override remains in effect from the time the override is issued until the system replaces it, deletes it, or deletes the activation group. These rules apply only when the override is issued from an activation group other than the default activation group. Activation-group-level overrides issued from the default activation group are scoped as call-level overrides.
• Job level — A job-level override applies to all programs running in the job, regardless of the activation group or call level in which the override is issued. Only the most recently issued job-level override is in effect. A job-level override remains in effect from the time it is issued until the system replaces or deletes it or until the job in which the override was issued ends.
You specify an override's scope when you issue the override, by using the override command's OvrScope (Override scope) parameter. Figure 1 depicts an ILE application's view of a job's structure, along with the manner in which you can specify overrides. First, notice that two activation groups, the default activation group and a named activation group, make up the job. All jobs have as part of their structure the default activation group and can optionally have one or more named activation groups. Original Program Model (OPM) programs can run only in the default activation group. Figure 1 shows two OPM programs, Program1 and Program2, both running in the default activation group. Because OPM programs can't be assigned to a named activation group, jobs that run only OPM programs consist solely of the default activation group. ILE program objects, on the other hand, can run in either the default activation group or a named activation group, depending on how you assign the program objects to activation groups. If any of a job's program objects are assigned to a named activation group, the job will have as part of its structure that named activation group. In fact, if the job's program objects are assigned to different named activation groups, the job will have each different named activation group as part of its structure. Figure 1 shows five ILE programs: Program3 and Program4 are both running in the default activation group, and Program5, Program6, and Program7 are running in a named activation group. The figure not only depicts the types of program objects that can run in the default activation group and in a named activation group; it also shows the valid levels to which you can scope overrides. Programs running in the default activation group, whether OPM or ILE, can issue overrides scoped to the job level or to the call level. ILE programs running in a named activation group can scope overrides not only to these two levels but to the activation group level as well. Figure 1 portrays each of these possibilities.
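As an illustration of the three scope choices, the same printer-file override can be issued at any level simply by changing the OvrScope value; the file and output queue names here are hypothetical.

OvrPrtF File(Report) OutQ(Sales01) OvrScope(*CallLvl)    /* call level       */
OvrPrtF File(Report) OutQ(Sales01) OvrScope(*ActGrpDfn)  /* activation group */
OvrPrtF File(Report) OutQ(Sales01) OvrScope(*Job)        /* job level        */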
Overriding the Same File Multiple Times One feature of call-level overrides is the ability to combine multiple overrides for the same file so that each of the different overridden attributes applies. Consider the following program fragments, which issue the OvrPrtF (Override with Printer File) command: ProgramA:
OvrPrtF  File(Report) OutQ(Sales01) +
           OvrScope(*CallLvl)
Call     Pgm(ProgramB)
ProgramB:
OvrPrtF  File(Report) Copies(3) +
           OvrScope(*CallLvl)
Call     Pgm(PrintPgm)
When program PrintPgm opens and spools printer file Report, the overrides from both programs are combined, resulting in the spooled file being placed in output queue Sales01 with three copies set to be printed. Now, consider the following program fragment: ProgramC:
OvrPrtF  File(Report) OutQ(Sales01) +
           OvrScope(*CallLvl)
OvrPrtF  File(Report) Copies(3) +
           OvrScope(*CallLvl)
Call     Pgm(PrintPgm)
What do you think happens? You might expect this program to be functionally equivalent to the two previous programs, but it isn't. Within a single call level, only the most recent override is in effect. In other words, the most recent override replaces the previous override in effect. In the case of ProgramC, the Copies(3) override is in effect, but the OutQ(Sales01) override is not. This feature provides a convenient way to replace an override within a single call level without the need to first delete the previous override. It's also fun to show programmers ProgramA and ProgramB, explain that things worked flawlessly, and then ask them to help you figure out why things didn't work right after you changed the application to look like ProgramC! When they finally figure out that only the most recent override within a program is in effect, show them your latest modification — ProgramA:
OvrPrtF  File(Report) OutQ(Sales01) +
           OvrScope(*CallLvl)
TfrCtl   Pgm(ProgramB)
ProgramB:
OvrPrtF  File(Report) Copies(3) +
           OvrScope(*CallLvl)
Call     Pgm(PrintPgm)
— and watch them go berserk again! This latest change is identical to the first iteration of ProgramA and ProgramB, except that rather than issue a Call to ProgramB from ProgramA, you use the TfrCtl command to invoke ProgramB. Remember, TfrCtl doesn't start a new call level. ProgramB will simply replace ProgramA on the call stack, thereby running at the same call level as ProgramA. Because the call level doesn't change, the overrides aren't combined. You may need to point out to the programmers that they didn't really figure it out at all when they determined that only the most recent override within a program is in effect. The rule is: Only the most recent override within a call level is in effect.
The Order of Applying Overrides You've seen the rules concerning the applicability of overrides. In the course of a job, many overrides may be issued. In fact, as you've seen, many may be issued for a single file. When many overrides are issued for a single file, the system constructs a single override from the overridden attributes in effect from all the overrides. This type of override is called a merged override. Merged overrides aren't simply the result of accumulating the different overridden file attributes, though. The system must also modify, or replace, applicable attributes that have been overridden multiple times and remove overrides when an applicable request to delete overrides is issued. To determine the merged override, the system follows a distinct set of rules that govern the order in which overrides are processed. The system processes the overrides for a file when it opens the file and uses the following sequence to check and apply overrides:

1. call-level overrides up to and including the call level of the oldest procedure in the activation group containing the file open (beginning with the call level that opens the file and progressing in decreasing call-level sequence)
2. the most recent activation-group-level overrides for the activation group containing the file open
3. call-level overrides lower than the call level of the oldest procedure in the activation group containing the file open (beginning with the call level immediately preceding the call level of the oldest procedure in the activation group containing the file open and progressing in decreasing call-level sequence)
4. the most recent job-level overrides
This ordering of overrides can get tricky! It is without a doubt the least-understood aspect of file overrides and the source of considerable confusion and errors. To aid your understanding, let's look at an example. Figure 2A shows a job with 10 call levels, programs in the default activation group and in two named activation groups (AG1 and AG2), and overrides within each call level and each activation group. Before we look at how the system processes these overrides, see whether you can determine the file that ProgramJ at call level 10 will open, as well as the attribute values that will be in effect due to the job's overrides. In fact, try the exercise twice, the first time without referring to the ordering rules.
Figure 2B reveals the results of the job's overrides. Did you arrive at these results in either of your tries? Let's walk, step by step, through the process of determining the overrides in effect for this example.

Step 1 — call-level overrides up to and including the call level of the oldest procedure in the activation group containing the file open

Checking call level 10 shows that the system opens file Report1 in activation group AG1. The oldest procedure in activation group AG1 appears at call level 2. Therefore, in step 1, the system processes call-level overrides beginning with call level 10 and working up the call stack through call level 2. When the system processes call level 2, step 1 is complete.

a. There is no call-level override for file Report1 at call level 10.
b. There is no call-level override for file Report1 at call level 9.
c. There is no call-level override for file Report1 at call level 8.
d. There is no call-level override for file Report1 at call level 7.
e. Call level 6 contains a call-level override for file Report1. The Copies attribute for file Report1 is overridden to 7. Active overrides at this point: Copies(7)
f. Call level 5 shows an activation-group-level override, but the program is running in the default activation group. Remember, activation-group-level overrides issued from the default activation group are scoped as call-level overrides. Therefore, the system processes this override as a call-level override. The CPI attribute for file Report1 is overridden to 13.3, and the previous Copies attribute value is replaced with this latest value of 6. Active overrides at this point: CPI(13.3) Copies(6)
g. There is no call-level override for file Report1 at call level 4.
h. Call level 3 contains a call-level override for file Report1. The LPI attribute for file Report1 is overridden to 9, and the previous Copies attribute value is replaced with this latest value of 4. Active overrides at this point: LPI(9) CPI(13.3) Copies(4)
i. There is no call-level override for file Report1 at call level 2.
Step 1 is now complete. Call level 2 contains the oldest procedure in activation group AG1 (the activation group containing the file open).

Step 2 — the most recent activation-group-level overrides for the activation group containing the file open

The system now checks for the most recently issued activation-group-level override within activation group AG1, where file Report1 was opened.

a. There is no activation-group-level override for file Report1 at call level 10.
b. There is no activation-group-level override for file Report1 in activation group AG1 at call level 9. The activation-group-level override in call level 9 is in activation group AG2 and is therefore not applicable.
c. Call level 8 contains an activation-group-level override in activation group AG1 for file Report1. The FormFeed attribute for file Report1 is overridden to *Cut, the previous LPI attribute value is replaced with this latest value of 12, and the previous Copies attribute value is replaced with this latest value of 9.
Active overrides at this point: LPI(12) CPI(13.3) FormFeed(*Cut) Copies(9)

Step 2 is now complete. The system discontinues searching for activation-group-level overrides because this is the most recently issued activation-group-level override in activation group AG1.

Step 3 — call-level overrides lower than the call level of the oldest procedure in the activation group containing the file open

Remember, call level 2 is the call level of the oldest procedure in activation group AG1. The system begins processing call-level overrides at the call level preceding call level 2. In this case, there is only one call level lower than call level 2.
a. Call level 1 contains a call-level override for file Report1. The OutQ attribute for Report1 is overridden to Prt01, and the previous Copies attribute value is replaced with this latest value of 2.
Active overrides at this point: LPI(12) CPI(13.3) FormFeed(*Cut) OutQ(Prt01) Copies(2)

Step 3 is now complete. The call stack has been processed through call level 1.

Step 4 — the most recent job-level overrides

The system finishes processing overrides by checking for the most recently issued job-level override for file Report1.

a. There is no job-level override for file Report1 at call level 10.
b. There is no job-level override for file Report1 at call level 9.
c. There is no job-level override for file Report1 at call level 8.
d. Call level 7 contains a job-level override for file Report1. Notice that the program runs in activation group AG2 rather than AG1. Job-level overrides can come from any activation group. The previous Copies attribute value is replaced with this latest value of 8.
Active overrides at this point: LPI(12) CPI(13.3) FormFeed(*Cut) OutQ(Prt01) Copies(8) Step 4 is now complete. The system discontinues searching for job-level overrides because this is the most recently issued job-level override. This completes the application of overrides. The final merged override that will be applied in call level 10 is
LPI(12) CPI(13.3) FormFeed(*Cut) OutQ(Prt01) Copies(8)

All other attribute values come from the file description for printer file Report1. It's easy to see how this process could be confusing and lead to the introduction of errors in applications! Now, let's make the process even more confusing!

In the previous example, our HLL program (ProgramJ) opened file Report1, and no programs issued an override to the file name. What do you think happens when you override the file name to a different file using the ToFile parameter on the OvrPrtF command? Once the system processes an override that changes the file, it searches for overrides to the new file, not the original. Let's look at a slightly modified version of our example. Figure 2C contains the new programs. Only two of the original programs have been changed in this new example. In ProgramC at call level 3, the ToFile parameter has been added to the OvrPrtF command, changing the file to be opened from Report1 to Report2. And ProgramB at call level 2 now overrides printer file Report2 rather than Report1. Figure 2D shows the results of the overrides. Again, let's step through the process of determining the overrides in effect for this example.

Step 1 — call-level overrides up to and including the call level of the oldest procedure in the activation group containing the file open

Checking call level 10 shows that the system opens file Report1 in activation group AG1. The oldest procedure in activation group AG1 appears at call level 2. Therefore, in step 1, the system processes call-level overrides beginning with call level 10 and working up the call stack through call level 2. When the system processes call level 2, step 1 is complete.

a. There is no call-level override for file Report1 at call level 10.
b. There is no call-level override for file Report1 at call level 9.
c. There is no call-level override for file Report1 at call level 8.
d. There is no call-level override for file Report1 at call level 7.
e. Call level 6 contains a call-level override for file Report1. The Copies attribute for file Report1 is overridden to 7. Active overrides at this point: Copies(7)
f. Call level 5 shows an activation-group-level override, but the program is running in the default activation group. Again, activation-group-level overrides issued from the default activation group are scoped as call-level overrides. Therefore, the system processes this override as a call-level override. The CPI attribute for file Report1 is overridden to 13.3, and the previous Copies attribute value is replaced with this latest value of 6. Active overrides at this point: CPI(13.3) Copies(6)
g. There is no call-level override for file Report1 at call level 4.
h. Call level 3 contains a call-level override for file Report1. The LPI attribute for file Report1 is overridden to 9, and the previous Copies attribute value is replaced with this latest value of 4. Notice that the printer file has also been overridden to Report2. This is especially noteworthy because the system will now begin searching for overrides to file Report2 rather than file Report1. Active overrides at this point: ToFile(Report2) LPI(9) CPI(13.3) Copies(4)
i. There is no call-level override for file Report2 at call level 2.
Step 1 is now complete. Call level 2 contains the oldest procedure in activation group AG1 (the activation group containing the file open).

Step 2 — the most recent activation-group-level overrides for the activation group containing the file open

The system now checks for the most recently issued activation-group-level override within activation group AG1, where file Report1 (actually Report2 now) was opened.

a. There is no activation-group-level override for file Report2 at call level 10.
b. There is no activation-group-level override for file Report2 in activation group AG1 at call level 9. The activation-group-level override in call level 9 is in activation group AG2 and is therefore not applicable.
c. There is no activation-group-level override for file Report2 at call level 8.
d. There is no activation-group-level override for file Report2 at call level 7.
e. There is no activation-group-level override for file Report2 at call level 6.
f. There is no activation-group-level override for file Report2 at call level 5.
g. There is no activation-group-level override for file Report2 at call level 4.
h. There is no activation-group-level override for file Report2 at call level 3.
i. Call level 2 contains an activation-group-level override in activation group AG1 for file Report2. The FormType attribute for file Report2 is overridden to FormB, the previous LPI attribute value is replaced with this latest value of 7.5, and the previous Copies attribute value is replaced with this latest value of 3.
Active overrides at this point: ToFile(Report2) LPI(7.5) CPI(13.3) FormType(FormB) Copies(3)

Step 2 is now complete. The system discontinues searching for activation-group-level overrides because this is the most recently issued activation-group-level override in activation group AG1.

Step 3 — call-level overrides lower than the call level of the oldest procedure in the activation group containing the file open

Again, call level 2 is the call level of the oldest procedure in activation group AG1. The system begins processing call-level overrides at the call level preceding call level 2 (i.e., call level 1).

a. There is no call-level override for file Report2 at call level 1.
Step 3 is now complete. The call stack has been processed through call level 1.
Step 4 — the most recent job-level overrides

The system finishes processing overrides by checking for the most recently issued job-level override for file Report2.

a. There is no job-level override for file Report2 at call level 10.
b. There is no job-level override for file Report2 at call level 9.
c. There is no job-level override for file Report2 at call level 8.
d. There is no job-level override for file Report2 at call level 7.
e. There is no job-level override for file Report2 at call level 6.
f. There is no job-level override for file Report2 at call level 5.
g. There is no job-level override for file Report2 at call level 4.
h. There is no job-level override for file Report2 at call level 3.
i. There is no job-level override for file Report2 at call level 2.
j. There is no job-level override for file Report2 at call level 1.
Step 4 is now complete. There are no job-level overrides for file Report2. This completes the application of overrides. The final merged override that will be applied to printer file Report2 in call level 10 is
LPI(7.5) CPI(13.3) FormType(FormB) Copies(3) All other attribute values come from the file description for printer file Report2.
Protecting an Override In some cases, you may want to protect an override from the effect of other overrides to the same file. In other words, you want to ensure that an override issued in a program is the override that will be applied when you open the overridden file. You can protect an override from being changed by overrides from lower call levels, the activation group level, and the job level by specifying Secure(*Yes) on the override command.

Figure 3 shows excerpts from two programs, ProgramA and ProgramB, running in the default activation group and with call-level overrides only. ProgramA simply issues an override to set the output queue attribute value for printer file Report1 and then calls ProgramB. ProgramB in turn calls two HLL programs, HLLPrtPgm1 and HLLPrtPgm2, both of which function to print report Report1. Before the call to each of these programs, ProgramB issues an override to file Report1 to change the output queue attribute value.

When you call ProgramA, the system first issues a call-level override that sets Report1's output queue attribute to value Prt01. Next, ProgramA calls ProgramB, thereby creating a new call level. ProgramB begins by issuing a call-level override, setting Report1's output queue attribute value to Prt02. Notice that the OvrPrtF command specifies the Secure parameter with a value of *Yes. ProgramB then calls HLL program HLLPrtPgm1 to open and print Report1. Because this call-level OvrPrtF command specifies Secure(*Yes), the system does not apply call-level overrides from lower call levels — namely, the override in ProgramA that sets the output queue attribute value to Prt01. HLLPrtPgm1 therefore places the report in output queue Prt02. ProgramB continues with yet another call-level override, setting Report1's output queue attribute value to Prt03. Because this override occurs at the same call level as the first override in ProgramB, the system replaces the call level's override. However, this new override doesn't specify Secure(*Yes). Therefore, the system uses the call-level override from call level 1. This override changes the output queue attribute value from Prt03 to Prt01. ProgramB finally calls HLLPrtPgm2 to open and spool Report1 to output queue Prt01. These two overrides in ProgramB clearly demonstrate the behavioral difference between an unsecured and a secured override.
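Figure 3 isn't reproduced here, but the ProgramB excerpt it describes would look roughly like the following sketch (the program, file, and output queue names are those used in the discussion above):

/* ProgramB, called by ProgramA, which issued OvrPrtF File(Report1) OutQ(Prt01) */
OvrPrtF File(Report1) OutQ(Prt02) Secure(*Yes)  /* protected from lower call levels  */
Call    Pgm(HLLPrtPgm1)                         /* Report1 spools to Prt02           */
OvrPrtF File(Report1) OutQ(Prt03)               /* replaces the override; not secure */
Call    Pgm(HLLPrtPgm2)                         /* ProgramA's Prt01 override applies */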
Explicitly Removing an Override The system automatically removes overrides at certain times, such as when a call level ends, when an activation group ends, and when the job ends. However, you may want to remove the effect of an override at some other time. The DltOvr (Delete Override) command makes this possible, letting you explicitly remove overrides. With this command, you can delete overrides at the call level, the activation group level, or the job level as follows:
Call level:
DltOvr File(File1) OvrScope(*)

Activation group level:
DltOvr File(File2) OvrScope(*ActGrpDfn)

Job level:
DltOvr File(File3) OvrScope(*Job)

Value *ActGrpDfn is the default value for the DltOvr command's OvrScope (Override scope) parameter. If you don't specify parameter OvrScope on the DltOvr command, this value is used. The command's File parameter also supports special value *All, letting you extend the reach of the DltOvr command. This option gives you a convenient way to remove overrides for several files with a single command.
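For example, this single command removes every file override issued at the job level:

DltOvr File(*All) OvrScope(*Job)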
Miscellanea I've covered quite a bit of ground with these rules of overriding files. In addition to the rules you've already seen, I'd like to introduce you to a few tidbits you might find useful. You've probably grown accustomed to the way a CL program lets you know when you've coded something erroneously — the program crashes with an exception! However, specify a valid, yet wrong, file name on an override, and the system gives you no warning you've done so. This seemingly odd behavior is easily explained. Consider the following code:
OvrPrtF  File(Report1) OutQ(Prt01)
Call     Pgm(HLLPrtPgm)
HLLPrtPgm, however, opens file Report2, not Report1. The system happily spools Report2 without any regard to the override. Although this is clearly a mistake in that you've specified the wrong file name in the OvrPrtF command, the system has no way of knowing this. The system can't know your intentions. Remember, this override could be used somewhere else in the job, perhaps even in a different call level.

The second tidbit involves a unique override capability that exists with the OvrPrtF command. OvrPrtF's File parameter supports special value *PrtF, letting you extend the reach of an override to all printer files (within the override scoping rules, of course). All rules concerning the application of overrides still apply. Special value *PrtF simply gives you a way to include multiple files with a single override command.

Also, you may recall an earlier reference to program QCmdExc and how its use affects the scope of an override. This program's primary purpose is to serve as a vehicle that lets HLL programs execute system commands. You can therefore use QCmdExc from within an HLL program to issue a file override. Remember that when you issue an override using this method, the call level is that of the process that invoked QCmdExc. You should note that override commands may or may not affect system commands. For more information about overrides and system commands, see 'Overrides and System Commands.'
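To make the QCmdExc mechanics concrete, here is a hedged sketch of the call; QCmdExc takes the command string and the string's length (a packed 15,5 value), and the override shown is only an example. The call is written in CL for brevity, but an HLL program passes the same two parameters.

Call Pgm(QCmdExc) Parm('OVRPRTF FILE(REPORT1) OUTQ(PRT01)' 33)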
Important Additional Override Information With the major considerations of file overrides covered, let's now take a brief look at some additional override information of note.

Overriding the Scope of an Open File At times, you'll want to share a file's ODP among programs in your application. For instance, when you use the OpnQryF (Open Query File) command, your programs must share the ODP that OpnQryF creates; otherwise, they won't use it. To share the ODP, you specify Share(*Yes) on the OvrDbF (Override with Database File) command. You can also explicitly control the scope of open files (ODPs) using the OpnScope (Open scope) parameter on the OvrDbF command. You can override the open scope to the activation group level and the job level.
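As a brief sketch of that requirement, an OpnQryF sequence typically shares the ODP like this; the file, field, and program names are hypothetical.

OvrDbF  File(CustMast) Share(*Yes)
OpnQryF File((CustMast)) QrySlt('CUSBAL > 1000') KeyFld((CUSNAM))
Call    Pgm(CustRpt)          /* reads CustMast through the shared ODP */
Clof    OpnId(CustMast)
DltOvr  File(CustMast)

Non-File Overrides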
In addition to file overrides, the system provides support for overriding message files and program device entries used in communications applications. You can override the message file used by programs by using the OvrMsgF (Override with Message File) command. However, the rules for applying overrides with OvrMsgF are quite different from those with other override commands. You can override only the name of the message file used, not the attributes. During the course of normal operations, the system frequently sends various types of messages to various types of message queues. OvrMsgF provides a way for you to specify that when sending a message for a particular message ID, the system should first check the message file specified in the OvrMsgF for the identified message. If the message is found, the system sends the message using the information from this message file. If the message isn't found, the system sends the message using the information from the original message file. Using the OvrICFDevE (Override ICF Program Device Entry) command, you can issue overrides for program device entries. Overrides for program device entries let you override attributes of the Intersystem Communications Function (ICF) file that provides the link between your programs and the remote systems or devices with which your program communicates. Overrides and Multithreaded Jobs The system provides limited support for overrides in multithreaded jobs. Some restrictions apply to the provided support. The system supports the following override commands:
• OvrDbF — You can issue this command from the initial thread of a multithreaded job. Only overrides scoped to the job level or an activation group level affect open operations performed in a secondary thread.
• OvrPrtF — You can issue this command from the initial thread of a multithreaded job. As with OvrDbF, only overrides scoped to the job level or an activation group level affect open operations performed in a secondary thread.
• OvrMsgF — You can issue this command from the initial thread of a multithreaded job. This command affects only message file references in the initial thread. Message file references performed in secondary threads are not affected.
• DltOvr — You can issue this command from the initial thread of a multithreaded job.
The system ignores any other override commands in multithreaded jobs. File Redirection You can use overrides to redirect input or output to a file of a different type. For instance, you may have an application that writes directly to tape using a tape file. If at some time you'd like to print the information that's written to tape, you can use an override to accomplish your task. When you redirect data to a different file type, you use the override appropriate for the new target file. In the case of our example, you would override from the tape file to a printer file using the OvrPrtF command. I mention file redirection so that you know it's a possibility. Of course, many restrictions apply when using file redirection, so if you decide you'd like to use the technique, refer to the documentation. IBM's File Management provides more information about file redirection. You can find this manual on the Internet at IBM's iSeries Information Center (http://publib.boulder.ibm.com/pubs/html/as400/infocenter.htm).
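For instance, the redirection described above (printing the data an application normally writes to a tape file) might be sketched like this, with hypothetical file and program names; QSysPrt is the IBM-supplied printer file.

OvrPrtF File(ArchTapF) ToFile(QSysPrt)  /* redirect the tape file to a printer file */
Call    Pgm(ArchivePgm)                 /* the program's tape output now spools     */
DltOvr  File(ArchTapF)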
Chapter 22 - Logical Files For many years, IBM sold the S/38 on the premise that it was the 'logical choice.' Yes, that play on words was corny, but true. One of the S/38's strongest selling points was the relational database implementation provided by logical files, and the AS/400 has inherited that feature. Logical files on the AS/400 provide the flexibility needed to build a database for an interactive multiuser environment. As I said in the last chapter, there are two kinds of database files: physical files and logical files. Physical files contain data; logical files do not. Logical files control how data in physical files is presented, most commonly using
key fields (whose counterpart on the S/36 is the alternate index) so that data can be retrieved in key-field sequence. However, the use of key fields is not the only function logical files provide. Let me introduce you to the following basic concepts about logical files:
• record format definition/physical file selection
• key fields
• select/omit logic
• multiple logical file members
Record Format Definition/Physical File Selection To define a logical file, you must select the record formats to be used and the physical files to be referenced. You can use the record format found in the physical file, or you can define a new record format. If you use the physical file record format, every field in that record format is accessible through the logical file. If you create a new record format, you must specify which fields will exist in the logical file. A logical file field must either reference a field in the physical file record format or be derived by using concatenation or substring functions. Because the logical file does not contain any data, it must know which physical file to access for the requested data. You use the DDS PFILE keyword to select the physical file referenced by the logical file record format. You specify the physical file in the PFILE keyword as a qualified name (i.e., library_name/file_name) or as the file name alone.

Figure 22.1a lists the DDS for physical file HREMFP, and Figure 22.1b shows the DDS for logical file HREMFL1. Notice that the logical file references the physical file's record format (HREMFR). Consequently, every field in the physical file will be presented in logical file HREMFL1. Also notice that the PFILE keyword in Figure 22.1b references physical file HREMFP. In Figure 22.1c, logical file HREMFL2 defines a record format not found in PFILE-referenced HREMFP. Therefore, this logical file must define each physical file field it will use. A logical file can thus be a projection of the physical file -- that is, contain only selected physical file fields. Notice that fields EMEMP#, EMSSN#, and EMPAYR all appear in the physical file but are not included in file HREMFL2.
Key Fields Let's look at Figures 22.1b and 22.1c again to see how key fields are used. File HREMFL1 identifies field EMEMP# as a key field (in DDS, key fields are identified by a K in position 17 and the name of the field in positions 19 through 28). When you access this logical file by key, the records will be presented in employee number sequence. The logical file simply defines an access path for the access sequence -- it does not physically sort the records. The UNIQUE keyword in this source member tells the system to require a unique value for EMEMP# for each record in the file, thus establishing EMEMP# as the primary key to physical file HREMFP. Should the logical file be deleted, records could be added to the physical file with a non-unique key, giving rise to a question that has been debated over the years: Is it better to use a keyed physical file or a keyed logical file to establish a file's primary key?

You could specify EMEMP# as the key in the DDS for physical file HREMFP and enforce it as the primary key using the UNIQUE keyword. Making the primary key a part of the physical file has a distinct advantage: The primary key is always enforced because the physical file cannot be deleted without deleting the data. Even if all dependent logical files were deleted, the primary key would be enforced. However, placing the key in the physical file also has a disadvantage. Should the access path for a physical file data member be damaged (a rare, but possible, occurrence), the damaged access path prevents access to the data. Your only recourse in that case would be to delete the member and restore it from a backup. Another minor inconvenience is that any time you want to process the file in arrival sequence (e.g., to maximize retrieval performance), you must use the OVRDBF (Override with Database File) command or specify arrival sequence in your high-level language program.

Placing the primary key in a logical file, as I did in Figure 22.1b, ensures that access path damage results only in the need to recompile the logical file -- the physical file remains intact. This method also means that you can access the physical file in arrival sequence. As I mentioned earlier, the negative effect is that deleting the logical file results in leaving the physical file without a primary key.
Let me make a few comments concerning the issue of where to place the primary key. Access path maintenance is costly; when records are updated, the system must determine whether any key fields have been modified, requiring the access path to be updated. The overhead for this operation is relatively small in an interactive environment where changes are made randomly based on business demands. However, for files where batch purges or updates result in many access path updates, the overhead can be quite detrimental to performance. With that in mind, here are some suggestions.
• For work files, which are frequently cleared and reloaded, create the physical file with no keys, and place the primary and alternate keys in logical files. Then delete the logical files (access paths) before you clear and reload the file. The update will be much faster with no access path maintenance to perform. After the update, rebuild or restore the logical files (a CL sketch of this sequence appears after this list).
• The same method works best for very large files. When you need to update the entire file, you can delete the logical files, perform the update, and then rebuild or restore the logical files.
• For files updated primarily through interactive maintenance programs, putting the key in the physical file poses no performance problems.
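Here is one hedged way the work-file sequence in the first suggestion might look in CL; the library, file, and program names are hypothetical, and the CRTLF step assumes the logical file's DDS source is in QDDSSRC.

DLTF   FILE(WRKLIB/WORKLF)     /* drop the access path before the purge        */
CLRPFM FILE(WRKLIB/WORKPF)
CALL   PGM(RELOADPGM)          /* mass reload with no access path maintenance  */
CRTLF  FILE(WRKLIB/WORKLF) SRCFILE(WRKLIB/QDDSSRC) SRCMBR(WORKLF)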
The UNIQUE keyword is also expensive in terms of system overhead, so you should use it only to maintain the primary key. Logical file HREMFL2 specifies three key fields -- EMLNAM (employee last name), EMFNAM (employee first name), and EMMINT (employee middle initial). The UNIQUE keyword is not used here because the primary key is the employee number and there is no advantage in requiring unique names (even if you could ensure that no two employees had the same name). A primary key protects the integrity of the file, while alternative keys provide additional views of the same data.
Select/Omit Logic Another feature that logical files offer is the ability to select or omit records from the referenced physical file. You can use the keywords COMP, VALUES, and RANGE to provide select or omit statements when you build logical files. Figure 22.2 shows logical file HREMFL3. Field EMTRMD (employee termination date) is used with keyword COMP to compare values, forming a SELECT statement (notice the S in position 17). This DDS line tells the system to select records from the physical file in which field EMTRMD is equal to 0 (i.e., no termination date has been entered for that employee). Therefore, when you create logical file HREMFL3, OS/400 builds indexed entries in the logical file only for records in which employee termination date is equal to zero, thus omitting terminated employees (EMTRMD NE 0). When a program accesses the logical file, it reads only the selected records. Before looking at some examples, I want to go over some of the basic rules for using select/omit statements. 1. You can use select/omit statements only if the logical file specifies key fields (the value *NONE in positions 19 through 23 satisfies the requirement for a key field) or if the logical file uses the DYNSLT keyword. (I'll go into more detail about this keyword later.) 2. To locate the field definitions for fields named on a select/omit statement, OS/400 first checks the field name specified in positions 19 through 28 in the record format definition and then checks fields specified as parameters on CONCAT (concatenate) or RENAME keywords. If the field name is found in more than one place, OS/400 uses the first occurrence of the field name. 3. Select/omit statements are specified by an S or an O in position 17. Multiple statements coded with an S or an O form an OR connective relationship. The first true statement is used for select/omit purposes. 4. You can follow a select/omit statement with other statements containing a blank in position 17. Such additional statements form an AND connective relationship with the initial select or omit statement. All related statements must be true before the record is selected or omitted. 5. You can specify both select and omit statements in the same file, but the following rules apply: a. If you specify both select and omit for a record format, OS/400 processes the statements only until one of the conditions is met. Thus, if a record satisfies the first statement or group of related statements, the record is processed without being tested against the subsequent select/omit statements.
b. If you specify both select and omit, you can use the ALL keyword to specify whether records that do not meet any of the specified conditions should be selected or omitted. c. If you do not use the ALL keyword, the action taken for records not satisfying any of the conditions is the converse of the last statement specified. For example, if the last statement was an omit, the record is selected. Now let's work through a few select/omit examples to see how some of these rules apply. Consider the statements in Figure 22.3. Based on rule 3, OS/400 selects any record in which employee termination date equals 0 or employee type equals H (i.e., hourly). Both statements have an S coded in position 17, representing an OR connective relationship. Contrast the statements in Figure 22.3 with the statements in Figure 22.4. Notice that the second statement in Figure 22.4 does not have an S or an O in position 17. According to rule 4, the second statement is related to the previous statement by an AND connective relationship. Therefore, both comparisons must be true for a record to be selected, so all current hourly employees will be selected. To keep it interesting, let's change the statements to appear as they do in Figure 22.5. At first glance, you might think this combination of select and omit would provide the same result as the statements in Figure 22.4. However, it doesn't -- for two reasons. As rule 5a explains, the order of the statements is significant. In Figure 22.5, the first statement determines whether employee type equals H. If it does, the record is selected and the second test is not performed, thus allowing records for terminated hourly employees to be selected. The second reason the statement in Figures 22.4 and 22.5 produce different results is because of the absence of the ALL keyword, which specifies how to handle records that do not meet either condition. According to rule 5c, records that do not meet either comparison are selected because the system performs the converse of the last statement listed (e.g., the omit statement). Figure 22.6 shows the correct way to select records for current hourly employees using both select and omit statements. The ALL keyword in the last statement tells the system to omit records that don't meet the conditions specified by the first two statements. In general, however, it is best to use only one type of statement (either select or omit) when you define a logical file. By limiting your definitions this way, you will avoid introducing errors that result when the rules governing the use of select and omit are violated. Select/omit statements give you dynamic selection capabilities via the DDS DYNSLT keyword. DYNSLT lets you defer the select/omit process until a program requests input from the logical file. When the program reads the file, OS/400 presents only the records that meet the select/omit criteria. Figure 22.7 shows how to code the DYNSLT keyword. So now I guess you are wondering just how this differs from an example without the DYNSLT keyword. It differs in one significant way: performance. In the absence of the DYNSLT keyword, OS/400 builds indexed entries only for those records that meet the stated select/omit criteria. Access to the correct records is faster, but the overhead of maintaining the logical file is increased. When you use DYNSLT, all records in the physical file are indexed, and the select/omit logic is not performed until the file is accessed. 
You only retrieve records that meet the select/omit criteria, but the process is dynamic. Because DYNSLT decreases the overhead associated with access path maintenance, it can improve performance in cases where that overhead is considerable. As a guideline, if you have a select/omit logical file that uses more than 75 percent of the records in the physical file member, the DYNSLT keyword can reduce the overhead required to maintain that logical file without significantly affecting the retrieval performance of the file, because most records will be selected anyway. If the logical file uses less than 75 percent of the records in the physical file member, you can usually maximize performance by omitting the DYNSLT keyword and letting the select/omit process occur when the file is created.
Multiple Logical File Members The last basic concept you should understand is the way logical file members work. The CRTLF (Create Logical File) command has several parameters related to establishing the member or members that will exist in the logical file. These parameters are MBR (the logical file member name), DTAMBRS (the physical file data members upon which the logical file member is based), and MAXMBRS (the maximum number of data members the logical file can contain). The default values for these parameters are *FILE, *ALL, and 1, respectively.
Typically, a physical file has one data member. When you create a logical file to reference such a physical file, these default values instruct the system to create a logical file member with the same name as the logical file itself, base this logical file member on the single physical file data member, and specify that a maximum of one logical file member can exist in this file. When creating applications with multiple-data-member physical files, you often don't know precisely what physical and logical members you will eventually need. For example, for each user you might add members to a temporary work file for each session when the user signs on. Obviously, you (or, more accurately, your program) don't know in advance what members to create. In such a case, you would normally
• Create the physical file with no members:

CRTPF FILE(TESTPF) MBR(*NONE)

• Create the logical file with no members:

CRTLF FILE(TESTLF) MBR(*NONE)

• For every user that signs on, add a physical file member to the physical file:

ADDPFM FILE(TESTPF) MBR(TESTMBR) TEXT('Test PF Data Member')

• For every physical file member, add a member to the logical file and specify the physical file member on which to base the logical member:

ADDLFM FILE(TESTLF) MBR(TESTMBR) DTAMBRS((TESTPF TESTMBR)) +
       TEXT('Test LF Data Member')

When a logical file member references more than one physical file member, and your application finds duplicate records in the multiple members, the application processes those records in the order in which the members are specified on the DTAMBRS parameter. For instance, if the CRTLF command specifies
CRTLF FILE(TESTLIB/TESTLF) MBR(ALLYEARS) +
      DTAMBRS((YRPF DT1988) (YRPF DT1989) (YRPF DT1990))

a program that processes logical file member ALLYEARS first reads the records in member DT1988, then in member DT1989, and finally in member DT1990.
Keys to the AS/400 Database Understanding logical files will take you a long way toward creating effective database implementations on the AS/400. Since I have introduced the basic concepts only, I strongly recommend that you spend some time in the manuals to increase your knowledge about logical files. Start with the description of the CRTLF command in IBM's Programming: Control Language Reference (SC41-0030) and also refer to Chapter 3, 'Setting Up Logical Files,' in the AS/400 Database Guide (SC41-9659). As you master the methods presented, you will discover many ways in which logical files can enhance your applications.
Chapter 23 - File Sharing As the father of two young children (ages 4 and 9), I have learned that to maintain peace in the house, my wife and I must either teach our children to share or buy two of everything. Those of you who can identify with this predicament know that in reality peace occurs only when you do a little of both -- sometimes you teach, and sometimes you buy. The AS/400 inherited a performance-related virtue from the S/38 that lets you 'teach' your programs to share file resources. I call it a performance-related virtue because the benefit of teaching your programs to share boosts performance for many applications. However, as is the case with children, there will be times when sharing doesn't provide any benefits and, in fact, is more trouble than it's worth. In this chapter, as we continue to examine files on
the AS/400, we will focus on the SHARE (Share Open Data Path) attribute and how you can use it effectively in your applications. You may already be familiar with the general concept of file sharing, a common feature for many operating systems that lets more than one program open the same file. When each program opens the file, a unique set of resources is established to prevent conflict between programs. This type of file sharing is automatic on the AS/400 unless you specifically prevent it by allocating a file for exclusive operations (using the ALCOBJ (Allocate Object) command). The SHARE attribute does not control this automatic function. On the AS/400, SHARE is a file attribute. It goes beyond normal file sharing to let programs within the same job share the open data path (ODP) established when the file was originally opened in the job. This means that programs share the file status information (i.e., the general and file-dependent I/O feedback areas), as well as the file pointer (i.e., a program's current record position in a file). As we further examine the SHARE attribute, you will see that this type of sharing enhances modular programming performance, but that you must manage it effectively to prevent conflicts between programs. The SHARE attribute is valid for database, source, device, distributed data management, and save files. You can establish the SHARE attribute or modify it for a file using any of the CRTxxxF (Create File), CHGxxxF (Change File), or OVRxxxF (Override with File) commands. The valid values are *YES and *NO. If SHARE(*NO) is specified for a file, each program operating on that file in the same job must establish a unique ODP.
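For example, you might establish the attribute when you create a file or change it temporarily with an override; file TEST and print file QPRINT are the ones used in this chapter's examples, and APPLIB is a hypothetical library.

CRTPF   FILE(APPLIB/TEST) RCDLEN(80) SHARE(*YES)  /* permanent attribute, set at creation */
OVRDBF  FILE(TEST) SHARE(*YES)                    /* temporary, within the issuing job    */
OVRPRTF FILE(QPRINT) SHARE(*YES)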
Sharing Fundamentals While sharing ODPs can be a window to enhancing performance, doing so can also generate programming errors if you try to share without understanding a few simple fundamentals. The first fundamental pertains to open options that programs establish. When a program opens a file, the options specified on the OPNDBF (Open Data Base File) command or by the high-level language definition of the file determine the open options. The open options are *INP (input only), *OUT (output only), and *ALL (input, output, update, and delete operations). These options are significant when you use shared ODPs. If you specify SHARE(*YES) for a file, the initial program's open of the file must use all the open options required for any subsequent programs in the same job. For example, if PGMA opens file TEST (specified with SHARE(*YES)) with open option *INP (input only), and PGMB, which requires open option *ALL (for an update or delete function), is then called, PGMB's open will fail.

Besides sharing open options, programs also share the file pointer, a capability that is both powerful and problematic. Figure 23.1 displays the eight records that exist in file TEST. In Figures 23.2a and 23.2b are RPG programs TESTRPG1 and TESTRPG2, respectively, which alternately read a record in file TEST. After TESTRPG1 reads a record, it calls TESTRPG2, which then reads a record in file TEST. TESTRPG2 calls TESTRPG1, which reads another record, and so on. Both programs use print device file QPRINT to generate a list of the records read.
When the SHARE attribute for both file TEST and file QPRINT is SHARE(*NO), the output generated appears as displayed in Figure 23.3. Each program reads all eight records because each program uses a unique ODP. If you change file TEST or override it to specify SHARE(*YES), the programs generate the lists displayed in Figure 23.4. Each program reads only four records, because the programs share the same ODP. Finally, if you also change or override the attribute of file QPRINT to be SHARE(*YES), the output appears as shown in Figure 23.5. Both programs share print file QPRINT and, while each program reads only four records, the output is combined in a single output file. One common misconception is that using SHARE(*YES) alters the way in which the database manager performs record locking -- a conclusion you could easily reach if you confuse record locking with file locking. It is true that when you specify SHARE(*YES), file locking is handled differently than when you specify SHARE(*NO); when you specify SHARE(*YES), the first open establishes the open options. Thus, if the first open of a file with SHARE(*YES) uses option *ALL, every program using that file obtains a SHRUPD (Shared Update) lock on that file. This lock occurs even when a particular program normally opens the file with *INP open options.
Record locking, on the other hand, is not controlled by the open options, but by the RPG compiler. The program compiler determines which locks are needed for any input operations in the program and creates the object code to make them happen during program execution. Thus, programs perform record locking on files with SHARE(*YES) the same way they perform record locking on files with SHARE(*NO). Let me stress that this fact alone does not prevent the problems you must address when you write multiple programs to perform with files having SHARE(*YES) in an on-line update environment. But record locking, in and of itself, is not a serious concern. The real hazard is that because SHARE(*YES) lets programs share the file pointer, programs can easily become confused about which record is actually being retrieved, updated, or output if you fail to write the programs so they recognize and manage the shared pointer. The following example illustrates this potential problem. PGMA first reads file TEST for update purposes. Then PGMA calls PGMB, which also reads file TEST for update. If PGMB ends before performing the update, the file pointer remains positioned at the record read by PGMB. If PGMA then performs an update, PGMA updates the values of the current record variables (from the first read in PGMA) into the record PGMB read because that is where the file pointer is currently positioned. While you would never purposely code this badly, you might accidentally cause the same problem in your application if you fit program modules together without considering the current value of the SHARE attribute on the files. The moral of the story is this: When calling programs that use the same file, always reposition the file pointer after the called program ends, unless you are specifically coding to take advantage of file pointer positioning within those applications.
Sharing Examples The most popular use of the SHARE attribute is to open files at the menu level when users frequently enter and exit applications on that menu. Figure 23.6 illustrates a simple order-entry menu with five options, each of which represents a program that uses one or more of the described files. If SHARE(*NO) is defined for each file, then each time one of these programs is called, an ODP is created for each program file. If users frequently switch between menu options, they experience delays each time a file is opened. The coding example in Figure 23.7 provides a solution to this problem. First, the OVRDBF (Override with Database File) command specifies SHARE(*YES) for each file identified. Then, OPNDBF opens each file with the maximum open options required for the various applications. The overhead required to open the files affects the menu program only. When users select an option on the menu, the respective program need not open the file, and thus the programs are initiated more quickly. Remember, however, to plan carefully when using SHARE to open files, keeping in mind the above-mentioned guidelines about placing the file pointer. The SHARE attribute also comes in handy when you write applications that provide on-line inquiries into related files. Figure 23.8 outlines an order-entry program that opens several files and that lets the end user call a customer inquiry program or item master inquiry program to look up specific customers or items. Either program uses a file already opened by the initial program. By including the statements in Figure 23.9 in a CL program that calls the order-entry program, you can ensure that the ODP for these files is shared, reducing the time needed to access the two inquiry programs. There is no doubt that SHARE is a powerful attribute. Unfortunately, the power it provides can introduce errors (specifically, the wrong selection of records due to file pointer position) unless you understand it and use it carefully. SHARE(*YES) can shorten program initiation steps and can let programs share vital I/O feedback information. If you're using batch programs that typically open files, process the records, and then remain idle until the next night, SHARE(*YES) will buy you nothing. But if you're considering highly modular programming designs, SHARE(*YES) is a must. For more information about SHARE, see IBM's Programming: Data Base Guide (SC41-9659) and Programming: Control Language Reference (SC41-0030).
Chapter 24 - CL Programming: You're Stylin' Now! The key to creating readable, maintainable code is establishing and adhering to a set of standards about how the code should look. Standards give your programs a consistent appearance -- a style -- and create a comfortable environment for the person reading and maintaining the code. They also boost productivity. Programmers with a consistent style don't think about how to arrange code; they simply follow clearly defined coding standards, which become like second nature through habit. And programmers reading such code can directly interpret the program's actions without the distraction of bad style. Good coding style transcends any one language. It's a matter of professionalism, of doing your work to the best of your abilities and with pride.
Although most CL programs are short and to the point, a consistent programming style is as essential to CL as it is to any other language. When I started writing CL, I used the prompter to enter values for command parameters. Today, I still use the prompter for more complex commands or to prompt for valid values when I'm not sure what to specify. The prompter produces a standard of sorts. Every command begins in column 14, labels are to the left of the commands, and the editor wraps the parameters onto continuation lines like a word processor wraps words when you've reached the margin. While using the prompter is convenient, code generated this way can be extremely difficult to read and maintain. Let's look at CL program CVTOUTQCL (Figure 24.1), which converts the entries of an output queue listing into a database file. Another application can then read the database file and individually process each spool file (e.g., copy the contents of the spool file to a database file for saving or downloading to a PC). Without a program such as CVTOUTQCL, you would have to jot down the name of each output queue entry and enter each name into the CPYSPLF (Copy Spool File) command or any other command you use to process the entry. Now compare the code in Figure 24.1 to the version of CVTOUTQCL shown in Figure 24.2. The programs' styles are dramatically different. Figure 24.1's code is crowded and difficult to read, primarily because of the CL prompter's default layout. In addition, this style lacks elements such as helpful spacing, code alignment, and comments that help you break the code down into logical, readable chunks. Figure 24.2's code is much more readable and comprehensible. An informative program header relates the program's purpose and basic functions. The program also features more attractive code alignment, spacing that divides the code into distinct sections, indentation for nested DO-ENDDO groups, and mnemonic variable names. Let's take a closer look at the elements responsible for Figure 24.2's clarity and some coding guidelines you can use to produce sharp CL code with a consistent appearance.
Write a descriptive program header. If the first source statement in your CL program is the PGM statement, something's missing. All programs, including CL programs, need an introduction. To create a stylish CL program, first write a program header that describes the program's purpose and basic function. Figure 24.2's program header provides the basic information a programmer needs to become familiar with the program's purpose and function. An accurate introduction helps programmers who come after you feel more comfortable as they debug or enhance your code. The program header begins with the program's name, followed by the author's name and the date created. An essential piece of the program header is the 'program type,' which identifies the type of code that follows. CL program types include the CPP (command processing program), the VCP (validity checking program), the CPO (command prompt override program), the MENU (menu program), and the PROMPT (prompter). You may use other categories or different names to describe the types of CL programs. But whatever you call it, you should identify the type of program you are writing and label it appropriately in the header. Another important part of the introduction is a description of what the program does. State the program's purpose concisely, and, in the program summary, outline the basic program functions to familiarize the programmer with how the program works. You should detail the summary only in terms of what happens and what events occur (e.g., building a file or copying records). A good program header also includes a revision summary, featuring a list of revisions, the dates they were made, and the names of those who made them. If you don't have a standard CL program header, create a template of one in a source member called CLHEADER (or some other obvious name) and copy the member into each CL program. Fill in the current information for each program, and remember to maintain the information as part of the quality control checks you perform on production code. While an up-to-date program header is valuable, an outdated one can be misleading and harmful.
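If you don't already have a shop-standard header, something along these lines can serve as a starting point. The layout is just one possibility, not the book's CLHEADER member, and the summary and revision lines are placeholders to show the format rather than details taken from Figure 24.2.

/*==================================================================*/
/*  Program . . . : CVTOUTQCL                                        */
/*  Author  . . . : (your name)        Date created : mm/dd/yy      */
/*  Type  . . . . : CPP (command processing program)                 */
/*  Purpose . . . : Convert the entries of an output queue listing   */
/*                  into a database file.                            */
/*  Summary . . . : - Validate the parameters                        */
/*                  - Build the output file                          */
/*                  - Send completion or escape messages to caller   */
/*  Revisions . . : mm/dd/yy  xxx  (description of change)           */
/*==================================================================*/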
Format your programs to aid understanding. Determining where to start each statement is one of the most basic coding decisions you can make. If you're used to prompting each CL statement, your first inclination would be to begin each one in column 14. While you should use prompting when necessary to enter proper parameter values, the resulting alignment of commands, keywords, and values creates code that is difficult, at best, to read and maintain. Over the years, I've collected several guidelines about where to place code and comments within CL programs.
For starters, begin all comments in column 1, and make comment lines a standard length. Beginning comments in column 1 gives you the maximum number of columns to type the comment. And establishing a consistent comment line length (i.e., the number of spaces between the beginning /* and the closing */) makes the program look neat and orderly. Comments should also stand out in the source. In Figure 24.2, a blank line precedes and follows each comment line to make it more visible. Notice that comments describing a process are boxed in by lines of special characters (I use the = character). Nobody wants to read code in which comments outnumber program statements. But descriptive (not cryptic) comments that define and describe the program's basic sections and functions are helpful road signs. A second guideline is to begin all label names in column 1 on a line with no other code (or at the appropriate nesting level, if located within an IF-THEN or IF-THEN-ELSE construct). Labels in CL programs serve as targets of GOTO statements. The AS/400 implementation of CL requires you to use GOTO statements to perform certain tasks that other languages can accomplish through a subroutine or a DO WHILE construct. (CL/free, a precompiler for CL that supports subroutines and other language enhancements, lets you create more-structured CL programs.) Because labels provide such a basic function, they should clearly reveal entry points into specific statements. Starting a label name in column 1 and placing it alone on the line helps separate it from subsequent code. Notice in Figure 24.2 how you can quickly scan down column 1 and locate the labels (e.g., GLOBAL_ERR, CLEAN_UP, RSND_BGN). However, notice the placement of labels RSND_RPT and RSND_END (at B and C). Instead of beginning these two labels in column 1, I indented them to the expected nesting level to promote comprehension of the overall process. The code following the indented labels remains indented to help the labels stand out and to make the IF-THEN construct more readable. To offset command statements from comments and labels, start commands in column 3. Beginning commands in column 3 -- rather than the prompter's default of column 14 -- gives you much more room to enter keywords and values. It also gives you more room to arrange your code. The exception to this guideline concerns using the DO command as part of an IF-THEN or IF-THEN-ELSE construct. To help identify what code is executed in a DO group, I recommend that you indent the code in each DO group. A simple indented DO-ENDDO group might appear as follows:
IF ('condition') DO
   CL statement
   CL statement
ENDDO

A multilevel set of DO-ENDDO groups, including an ELSE statement, might appear like this:
IF ('condition') DO
   IF ('condition') DO
      CL statement
      CL statement
      IF ('condition') DO
         CL statement
      ENDDO
   ENDDO
   ELSE DO
      CL statement
      CL statement
   ENDDO
ENDDO

Notice that the IF and ENDDO statements -- and thus the logic -- are clearly visible.
Simplify and align command parameters. When you use the prompter to enter values for command parameters, Source Entry Utility (SEU) automatically places the selected keywords and values into the code. Several simple guidelines can greatly enhance the way commands, keywords, and values appear in your CL programs. First, omit the following common keywords when using the associated commands:
Command     Keywords
DCL         VAR, TYPE, LEN
CHGVAR      VAR, VALUE
IF          COND, THEN
ELSE        CMD
GOTO        CMDLBL
MONMSG      MSGID
The meanings of the parameter values are always obvious by position. Thus, the keywords just clutter up your code. The following statements omit unneeded keywords:
DCL     &outq   *CHAR 10
CHGVAR  &outq   (%SST(&i_ql_outq 1 10))
IF      (&flag) GOTO FINISH
GOTO    RSND_RPT

By starting commands in column 3 and following the indentation guidelines, you can type most commands on one line. But when you must continue the command to another line, you have several alternatives, as Figure 24.3 shows. The first alternative is to use the + continuation symbol, indent a couple of spaces on the next line, and continue entering command keywords and values. This is the simplest way to continue a command but the most difficult to read. The second alternative is to place as many keywords and values as possible on the first line and arrange the continuation lines so the additional keywords and values appear as columns under those on the first line. Although this option may be the easiest to read, creating the alignment is a major headache. The third alternative is simply to place each keyword and associated value on a separate line. This method is both simple to implement and easy to read. Thus, a second guideline is to place the entire command on one line when possible; otherwise, place the command and first keyword on the first line and each subsequent keyword on a separate line, using the + continuation symbol. A third guideline is to align the command and its parameters in columns when you repeat the same single-line command statement. This rule of thumb applies when you have a group of statements involving the same command. The DCL statement is a good example. Normally, one or more groups of DCL statements appear at the beginning of each CL program to define variables the program uses. Figure 24.2 shows how placing the DCL statement and parameter values in columns creates more readable code. This alignment rule also applies to multiple CHGVAR (Change Variable) commands. While you can apply the above rules to most commands, the IF command may require special alignment consideration. If the IF statement won't fit on a single line, use the DO-ENDDO construct. For example, the IF statement
IF (&fl2exist) CRTDUPOBJ OBJ(QACVTOTQ) FROMLIB(KWMLIB)      +
                 OBJTYPE(*FILE) TOLIB(&outlib)              +
                 NEWOBJ(&outfile)

should be written

IF (&fl2exist) DO
   CRTDUPOBJ OBJ(QACVTOTQ)        +
             FROMLIB(KWMLIB)      +
             OBJTYPE(*FILE)       +
             TOLIB(&outlib)       +
             NEWOBJ(&outfile)
ENDDO
This construction implements guidelines discussed earlier and presents highly accessible code.
Align, shorten, and simplify for neatness. One of the most common symptoms of poor CL style is a general overcrowding of code. Such code moves from one statement to the next without any thought to organization, spacing, or neatness. The result looks more like a
blob of commands than a flowing stream of clear, orderly statements. To save yourself and others the eyestrain of trying to read a jumble of code, follow these suggestions for clean, crisp CL programs: Align all + continuation symbols so they stand out in the source code. In Figure 24.2, I've aligned all + continuation symbols in column 69. Not only does alignment give your programs a uniform appearance, but it also clearly identifies commands that are continued on several lines. I use the + symbol instead of the - for continuation because the + better controls the number of blanks that appear when continuing a string of characters. Both symbols include as part of the string blanks that immediately precede or follow the symbol on the same line. But when continuing a string onto the next line in your source, the + symbol ignores blanks that precede the first nonblank character in the next record. The - continuation symbol includes them. Use blank lines liberally to make code more accessible. Spacing between blocks of code and between comment lines and code can really help programmers identify sections of code, distinguish one command from another, and generally 'get into' the program. Blank lines don't cost you processing time, so feel free to space, space, space. Use the shorthand symbols ||, |<, and |> instead of the corresponding *CAT, *TCAT, and *BCAT operatives. Concatenation can be messy when you use a mixture of strings, variables, and the *CAT, *BCAT, and *TCAT operators. The shorthand symbols shorten and simplify expressions that use concatenation keywords and commands and clearly identify breaks between strings and variables.
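The difference between the two continuation symbols is easiest to see inside a quoted string. The following sketch is an illustration only (the variable names and literal are invented); it shows why the + symbol keeps a continued literal compact while the - symbol pulls in the leading blanks of the next source line.

DCL     &msg1  *CHAR 30
DCL     &msg2  *CHAR 30

CHGVAR  &msg1  ('END OF +
                 REPORT')    /* + : leading blanks on the second line  */
                             /*     are ignored, so &msg1 = 'END OF REPORT' */

CHGVAR  &msg2  ('END OF -
                 REPORT')    /* - : the leading blanks on the second   */
                             /*     line stay in the string            */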
Highlight variables with distinct, lowercase names. An essential part of CL style concerns how you use variables in your programs. I don't have any hard-and-fast rules, but I do have some suggestions. First, consider the names you assign variables. Give program parameters distinct names that identify them as parameters. In Figure 24.2, the two parameters processed by the CPP are &i_ql_outq and &i_ql_outf. The i_ in the names tells me both parameters are input-only (io_ would have indicated a return variable). The ql_ tells me the parameters' values are qualified names (i.e., they include the library name). When program CVTOUTQCL calls program CVTOUTQR, it uses parameter &io_rtncode (A in Figure 24.2). The prefix indicates the parameter is both an input and an output variable, and the rest of the name tells me program CVTOUTQR will return a value to the calling program. A second guideline concerns variables that contain more than one value (e.g., a qualified name or the contents of a data area). You should extract the values into separate variables before using the values in your program. In Figure 24.2, input parameter &i_ql_outq is the qualified name of the output queue. Later in the program, you find the following two statements:
CHGVAR  &outq     (%SST(&i_ql_outq  1 10))
CHGVAR  &outqlib  (%SST(&i_ql_outq 11 10))
These two statements divide the qualified name into separate variables. The separate variables let you code a statement such as
CHKOBJ OBJ(&outqlib/&outq) OBJTYPE(*OUTQ)

instead of
CHKOBJ OBJ(%SST(&i_ql_outq 11 10)/                    +
           %SST(&i_ql_outq 1 10)) OBJTYPE(*OUTQ)
You should also define variables to represent frequently used literal values. For example, define values such as X, ' ', and 0 as variables (e.g., &x, &blank, and &zero), and then use the variable names in tests instead of repeatedly coding the constants as part of the test condition. This guideline lets you define all of a program's constants in one set of DCL statements, which you can easily create and maintain at the start of the program source. In addition, notice the difference between the following statements:
IF (&value = ' ') DO
IF (&value = &blank) DO
You can more easily digest the second statement because it explicitly tells you what value will result in execution of the DO statement. You may find that defining frequently used variables not only improves productivity, but also promotes consistency as programmers simply copy the variable DCL statements into new source members. A final guideline concerning variables is to type variable names in lowercase. The lowercase variable names contrast nicely with the uppercase commands/parameters. Although typing the names in lowercase may not be easy using SEU, the contrast in type will greatly improve the program's readability. Compare Figure 24.1 with Figure 24.2 again. Which code would you like to encounter the next time you examine a CL program for the first time? I hope you can use these guidelines to create a consistent CL style from which everyone in your shop can benefit. Remember: When you're trying to read a program you didn't write, appearance can be everything.
Sidebar: CL Coding Suggestions
Sidebar: Command, RPG program, and physical file associated with CL program CVTOUTQCL shown in Figure 24.2.
Chapter 25 - CL Programming: The Classics Since the inception of CL on the S/38 in the early eighties, programmers have been collecting their favorite and most useful CL techniques and programs. Over time, some of these have become classics. In this chapter, we'll visit three timeless programs and five techniques essential to writing classic CL. The five techniques:
• Error/exception message handling
• String manipulation
• Outfile processing
• IF-THEN-ELSE and DO groups
• OPNQRYF (Open Query File) command processing
When I consider the CL programs I would label as classic, I find these techniques being employed to some degree. You may recognize the classic programs we'll visit as similar to something you have created. They provide functions almost always needed and welcomed by MIS personnel at any AS/400 installation. If you are new to the AS/400, I guarantee you will get excited about CL programming after you experience the power of these tools. And if you are an old hand at CL, you may have missed one of these classics. These programs are useful and the techniques valid on the S/38 as well, although some of the details will be different (e.g., the syntax of qualified object names and some outfile file and field names).
Classic Program #1: Changing Ownership If you ever face the problem of cleaning up ownership of objects on your system, you will find the CHGOBJOWN (Change Object Owner) command quite useful. You will also quickly discover that this command works for only one object at a time. Let's see . . . that means you must identify the objects that will have a new owner and then enter the CHGOBJOWN command for each of those objects. Or is there another way? When the solution includes the repetitious use of a CL command, you can almost always use a CL program to improve or automate that solution. To that end, try this first classic CL program, CHGOWNCPP. CHGOWNCPP demonstrates three of the fundamental CL programming techniques: message monitoring, string handling, and outfile processing. Let's take a quick look at how the program logic works and then examine how each technique is implemented. Program Logic. When you execute the command CHGOWN (Figure 25.1a), it invokes the command-processing program CHGOWNCPP (Figure 25.1b). A program-level message monitor traps any unexpected function check messages caused by unmonitored errors during program
execution. If it encounters an unexpected function check message, the MONMSG (Monitor Message) command directs the program to continue at the RSND_LOOP label. The CHKOBJ (Check Object) command verifies that the value in &NEWOWN is an actual user profile on the system. If the CHKOBJ command can't find the user profile on the system, a MONMSG command traps CPF9801. If this happens, an escape message is then sent to the calling program using the SNDPGMMSG command, and the CPP terminates. The DSPOBJD (Display Object Description) command generates the outfile QTEMP/CHGOWN based on the values for variables &OBJ and &OBJTYPE received from command CHGOWN. The program then processes the outfile until message CPF0864 ('End of file') is issued. For each record in the outfile, the CPP executes a CHGOBJOWN command to give ownership to the user profile specified in variable &NEWOWN. The variables &ODLBNM and &ODOBNM contain the object's library and object name, obtained from fields in the outfile file format QLIDOBJD. The value in variable &CUROWNAUT specifies whether the old owner's authority should be revoked or retained. When the CHGOBJOWN command is successful, the program sends a completion message to the calling program's message queue and reads the next record from the file. If the CHGOBJOWN command fails, the error message causes a function check, and the program-level message monitor passes control to the RSND_LOOP label. (Note: The CUROWNAUT parameter does not exist on the S/38 CHGOBJOWN command, so you would need to eliminate it, along with variable &CUROWNAUT in CHGOWNCPP.) After all records have been read, the next RCVF command generates error message CPF0864, and the command-level message monitor causes the program to branch to the FINISH label. The RSND_LOOP label is encountered only if an unexpected error occurs. This section of the program is a loop to receive the unexpected error messages and resend them to the calling program's message queue. The Technique: Message Monitoring. The first fundamental technique we will examine is error/exception message handling. Monitoring for system messages within a CL program is a technique that both traps error/exception conditions and directs the execution of the program based on the error conditions detected. The CL MONMSG command provides this function. Program CHGOWNCPP uses both command-level and program-level message monitoring. A command-level message monitor lets you monitor for specific messages that might occur during the execution of a single command. For instance, in program CHGOWNCPP, MONMSG CPF9801 EXEC(DO) immediately follows the CHKOBJ command to monitor specifically for message CPF9801 ('Object not found'). If CPF9801 is issued as a result of the CHKOBJ command, the message monitor traps the message and invokes the EXEC portion of the MONMSG command -- in this instance, a DO command. Another example in the same program is the MONMSG command that comes immediately after the RCVF statement. If the RCVF command causes error message CPF0864, the message monitor traps the error and invokes the EXEC portion of that MONMSG -- in this instance, GOTO FINISH. What happens if an error occurs on a command and there is no command-level MONMSG to trap the error? If there is also no program-level MONMSG for that specific error message, the unexpected error causes function check message CPF9999, and if no program-level MONMSG for CPF9999 exists, the program ends in error.
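To make the two levels concrete, here is a hedged skeleton (not the book's CHGOWNCPP source, which appears in Figure 25.1b) showing where each kind of monitor sits; the message text is invented for the example.

PGM        PARM(&OBJ &OBJTYPE &NEWOWN &CUROWNAUT)
  DCL        &OBJ        *CHAR 20
  DCL        &OBJTYPE    *CHAR 10
  DCL        &NEWOWN     *CHAR 10
  DCL        &CUROWNAUT  *CHAR 10
  DCL        &MSGTXT     *CHAR 80

  /* Program-level monitor: placed right after the declares, it     */
  /* catches the function check raised by any unmonitored error     */
  MONMSG     MSGID(CPF9999) EXEC(GOTO RSND_LOOP)

  /* Command-level monitor: applies only to the command above it    */
  CHKOBJ     OBJ(&NEWOWN) OBJTYPE(*USRPRF)
  MONMSG     MSGID(CPF9801) EXEC(DO)
    CHGVAR     &MSGTXT ('User profile' *BCAT &NEWOWN *BCAT +
                 'does not exist.')
    SNDPGMMSG  MSGID(CPF9898) MSGF(QCPFMSG) MSGDTA(&MSGTXT) +
                 MSGTYPE(*ESCAPE)
  ENDDO

  /* ... rest of the program ...                                    */

RSND_LOOP:
  /* Receive the unexpected messages and resend them to the caller  */
  RETURN
ENDPGM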
A program-level message monitor is a MONMSG command placed immediately after the last declare statement in a CL program. In our program example, there is a program-level MONMSG CPF9999 EXEC(GOTO RSND_LOOP). This MONMSG handles any unexpected error since all errors that are unmonitored at the command level eventually cause a function check. For instance, if the CHGOBJOWN command fails, an error message is issued that then generates function check message CPF9999. The program-level MONMSG traps this function check, and the EXEC command instructs the program to resume at label RSND_LOOP and process those error messages. For more information on monitoring messages, see IBM's AS/400 manual Programming: Control Language Programmer's Guide (SC41-8077), or Appendix E of the AS/400 manual Programming: Control Language Reference (SC41-0030). String Handling. Another fundamental technique program CHGOWNCPP employs is string manipulation. The program demonstrates two forms of string handling -- substring manipulation and concatenation. The first is the
%SST (Substring) function. (%SST is a valid abbreviated form of the function %SUBSTRING -- both perform the same job.) The %SST function, which returns to the program a portion of a character string, has three arguments: the name of the variable containing the string, the starting position, and the number of characters in the string to extract. For instance, when the command CHGOWN passes the argument &OBJ to the CL program, the variable exists as a 20-character string containing the object name in positions 1 through 10 and the library name in positions 11 through 20. The CL program uses the %SST function in the CHGVAR (Change Variable) command (A in Figure 25.1b) to extract the library name and object name from the &OBJ variable into the &OBJNAM and &OBJLIB variables. The second form of string handling in this program is concatenation. The control language interface supports three distinct, built-in concatenation functions:
• *CAT (||): Concatenate -- concatenates two string variables end to end.
• *TCAT (|<): Trim and concatenate -- concatenates two strings after trimming all blanks off the end of the first string.
• *BCAT (|>): Blank insert and concatenate -- concatenates two strings after trimming all blanks off the end of the first string and then adding a single blank character to the end of the first string.
To see how these functions work, let's apply them to these variables (where /b designates a blank):
&VAR1 *CHAR 10 VALUE('John/b/b/b/b/b/b')
&VAR2 *CHAR 10 VALUE('Doe/b/b/b/b/b/b/b')

The results of each operation are as follows:
&VAR1 || &VAR2 = John/b/b/b/b/b/bDoe
&VAR1 |< &VAR2 = JohnDoe
&VAR1 |> &VAR2 = John Doe

The SNDPGMMSG command (B in Figure 25.1b) uses concatenation to build a string for the MSGDTA (Message Data) parameter. Notice that you can use a combination of constants and program variables to construct a single string during execution. The only limitation is that variables used with concatenation functions must be character variables because they will be treated as strings for these functions. You must convert any numeric variables to character variables before you can use them in concatenation. If the variables &ODLBNM, &ODOBNM, and &NEWOWN in the SNDPGMMSG command contain the values MYLIB, MYPROGRAM, and USERNAME, respectively, the SNDPGMMSG statement generates the message 'Ownership of object MYLIB/MYPROGRAM granted to user USERNAME.' Outfile Processing. The final fundamental technique demonstrated in program CHGOWNCPP is how to use an outfile. You can direct certain OS/400 commands to send output to a database file instead of to a display or printer. In this program, the DSPOBJD command generates the outfile QTEMP/CHGOWN. This file contains the full description of any objects selected. The file declared in the DCLF (Declare File) command is QADSPOBJ, the system-supplied file in library QSYS that serves as the externally defined model for the outfile generated by the DSPOBJD command. (Note: To get a list of the model outfiles provided by the system, you can execute the command 'DSPOBJD QSYS/QA* *FILE'.) Because file QADSPOBJ is declared in this program, the program will include the externally defined field descriptions when you compile it, allowing it to recognize and use those field names during execution. The next step in using an outfile in this program is creating the contents of the outfile using the DSPOBJD command. DSPOBJD uses the object name and type passed from command CHGOWN to create outfile QTEMP/CHGOWN. The outfile name is arbitrary, so I make a practice of giving an outfile the same name as the command or program that creates it. The program then executes the OVRDBF (Override with Database File) command to specify that the file QTEMP/CHGOWN is to be accessed whenever a reference is made to QADSPOBJ. This works because
QTEMP/CHGOWN is created with the same record format and fields as QADSPOBJ. Now when the program reads record format QLIDOBJD in file QADSPOBJ, the actual file it reads will be QTEMP/CHGOWN. These three fundamental CL techniques give you a good start in building your CL library, and the 'Change Owner of Object(s)' tool is definitely handy. You may have discovered the CHGLIBOWN (Change Library Owner) tool in library QUSRTOOL. This IBM-provided tool offers a similar function.
Classic Program #2: Delete Database Relationships The second utility is a real timesaver: the 'Delete Database Relationships' tool provided by command DLTDBR and CL program DLTDBRCPP. DLTDBR uses the same three fundamental techniques described above and adds a fourth: the IF-THEN clause. Let's take a quick look at the program logic and then discuss the IF-THEN technique. Program Logic. When you execute command DLTDBR (Figure 25.2a), the command-processing program DLTDBRCPP (Figure 25.2b) is invoked. As in CHGOWNCPP, a program-level MONMSG handles unexpected errors. The DSPDBR (Display Database Relations) command generates an outfile based on the file you specify when you execute the command DLTDBR. The CPP then processes this outfile until message CPF0864 ('End of File') is issued. For each record in the outfile, the program performs two tests as decision mechanisms for program actions. Both tests check whether or not the record read is a reference to a physical file (&WHRTYP = &PFTYPE). If the file is not a physical file, the program takes no action for that record; it just reads the next record. The first test (A in Figure 25.2b) determines whether dependencies exist for this physical file. &WHNO represents the total number of dependencies. If &WHNO is equal to zero, there are no dependencies for this file, and the program sends a message (using the SNDPGMMSG command) to that effect. The second test (B) checks whether &WHNO is greater than zero. If it is, the record represents a dependent file, and you can delete the file name specified in variables &WHRELI (dependent file library) and &WHREFI (dependent file name) with the DLTF (Delete File) command. When the DLTF is successful, the program sends a completion message to the calling program's message queue. The GOTO RCD_LOOP command sends control to the RCD_LOOP label to read the next record. If the DLTF command fails, the error message causes a function check, and the program-level message monitor directs the program to resume at the RSND_LOOP label. After all records have been read, the RCVF command generates error message CPF0864, and the command-level message monitor causes the program to branch to the FINISH label, where the program ends. As with the first program, you will encounter the RSND_LOOP label only if an unexpected error occurs. The Technique: IF-THEN-ELSE and DO Groups. The IF-THEN clause lets you add decision support to your CL coding via the IF command, which has two parameters: COND (the conditional statement) and THEN (the action to be taken when the condition is satisfied). A simple IF-THEN statement would be
IF COND(&CODE = 'A') THEN(CHGVAR VAR(&CODE) VALUE('B'))

In this example, if the value of variable &CODE is A, the CHGVAR command changes that value to B. To create code that is easier to read and interpret, it is usually best to omit the use of the keywords COND and THEN. The above example is much clearer when written as
IF (&CODE = 'A') CHGVAR VAR(&CODE) VALUE('B')

Conditions can also take more complex forms, such as
IF ((&CODE = 'A' *OR &CODE = 'B') *AND      +
    (&NUMBER = 1)) GOTO CODEA
This example demonstrates several conditional tests. The *OR connective requires at least one of the alternatives -- (&CODE = 'A') or (&CODE = 'B') -- to be true to satisfy the first condition. The *AND connective then requires that (&NUMBER = 1) also be true before the THEN clause can be executed. If both conditions are met, the program executes the GOTO command. (For more information about how to use *AND and *OR connectives, see Chapter 2 of the AS/400 manual Programming: Control Language Programmer's Guide.) The ELSE command provides additional function to the IF command. Examine these statements:
IF (&CODE = 'A') CALL PGMA
ELSE CALL PGMB

The program executes the ELSE command if the preceding condition is false. You can also use the IF command to process a DO group. Examine the following statements:
IF (&CODE = 'A') DO
   CALL PGMA
   CALL PGMB
   CALL PGMC
ENDDO

If the condition in the IF command is true, the program executes the DO group until it encounters an ENDDO. The DO command also works with the ELSE command, as this example shows:
IF (&CODE = 'A') DO
   CALL PGMA
   CALL PGMB
ENDDO
ELSE DO
   CALL PGMD
   CALL PGME
   CALL PGMF
ENDDO

For more information about IF and ELSE commands, see the AS/400 manual Programming: Control Language Reference (SC41-0030) or the Programming: Control Language Programmer's Guide.
Classic Program #3: List Program-File References The last of the classic CL programs and fundamental techniques, the 'Display Program References' tool, brings us face-to-face with one of the most powerful influences on CL programming -- the one and only OPNQRYF (Open Query File) command. As this program demonstrates, this classic technique is one of the richest and most powerful tools available through CL. Let's take a quick look at the program logic for this tool, provided via the LSTPGMREF (List Program References) command and the LSTPRCPP CL program. Then we can take a close look at the OPNQRYF command. Program Logic. When you execute command LSTPGMREF (Figure 25.3a), the command-processing program LSTPRCPP (Figure 25.3b) is invoked. LSTPRCPP uses the DSPPGMREF (Display Program References) command to generate an outfile based on the value you entered for the PGM parameter. The outfile LSTPGMREF then contains information about the specified programs and the objects they reference. Notice that this program does not use the DCLF statement. There is no need to declare the file format because the program will not access the file directly. You will also notice that the program uses the OVRDBF command, but the SHARE(*YES) parameter has been added. Because a CL program cannot send output to a printer, LSTPRCPP must call a high-level language (HLL) program to print the output. The OVRDBF is required so the HLL program, which references file QADSPPGM, can find outfile QTEMP/LSTPGMREF. The override must specify SHARE(*YES) to ensure that the HLL program will use the Open Data Path (ODP) created by the OPNQRYF
command instead of creating a new ODP and ignoring the work the OPNQRYF has performed. Files used with OPNQRYF require SHARE(*YES). After the DSPPGMREF command is executed, file LSTPGMREF contains records for program-file references as well as program references to other types of objects. The next step is to build an OPNQRYF selection statement in variable &QRYSLT that selects only *FILE object-type references and optionally selects the particular files named in the FILE parameter. LSTPRCPP uses IF tests to construct the selection statement. Then the CPP determines the sequence of records desired (based on the value entered for the OPT parameter in the LSTPGMREF command) and uses the OPNQRYF command to select the records and create access paths that will allow the HLL program to read the records in the desired sequence. The CL program then calls HLL program LSTPRRPG to print the selected records (I haven't provided code here -- you will need to build your own version based on your desired output format). The outfile will appear to contain only the selected records, and they will appear to be sorted in the desired sequence. The Technique: The OPNQRYF Command. Without doubt, one of the more powerful commands available to CL programmers is the OPNQRYF command. OPNQRYF uses the same system database query interface SQL uses on the AS/400. The command provides many functions, including selecting records and establishing keyed access paths without using an actual logical file or DDS. These two basic functions are the bread-and-butter classic techniques demonstrated in program LSTPRCPP. Record selection is accomplished with OPNQRYF's QRYSLT parameter. If you know the exact record selection criteria when you write the program, filling in the QRYSLT parameter is easy, and the selection string will be compiled with the program. But the real strength of OPNQRYF's record selection capability is that you can construct the QRYSLT parameter at runtime to match the particular user requirements specified during execution. Program LSTPRCPP demonstrates both the compile-time and runtime capabilities of OPNQRYF. When you write program LSTPRCPP, the requirement to include only references to physical files is a given. Therefore, you can use the statement CHGVAR VAR(&QRYSLT) VALUE('WHOBJT = 'F') to initially provide a value for &QRYSLT to satisfy that requirement. The &FILE value is unknown until execution time, so the code must allow this selection criterion to be specified dynamically. First, what are the possible values for the FILE parameter on command LSTPGMREF?
• You may specify a value of *ALL. If you do, you should not add any selection criteria to the QRYSLT parameter. The &QRYSLT value would be

  'WHOBJT = 'F'

• You may specify a generic value, such as IC* or AP??F*. If you enter a generic value, the CL program must determine that &FILE contains a generic name and then use OPNQRYF's %WLDCRD (wildcard) function to build the appropriate QRYSLT selection criteria. The %WLDCRD function lets you select a group of similarly named objects by specifying an argument containing a wildcard (e.g., * or ?). For instance, if you wanted to select all files beginning with the characters IC, you would use the argument IC*. An example of the &QRYSLT variable for this generic selection would be

  'WHOBJT = 'F' *AND WHFNAM = %WLDCRD('IC*')'

• You may specify an actual file name. If you do, the CL program must first determine that fact and then simply use the compare function in OPNQRYF to build the value for the QRYSLT parameter. An example for this &QRYSLT variable would be

  'WHOBJT = 'F' *AND WHFNAM = 'FILE_NAME'

Examining the program, you will see that it performs a series of tests on the variable &FILE to determine how to build the QRYSLT parameter. If *ALL is the value for &FILE, all other IF tests are bypassed, and the program continues. If the program QCLSCAN finds the character * in the string &FILE, it uses the %WLDCRD function to build the appropriate QRYSLT parameter. If the program does not find *ALL and does not find a * in the name, the value of &FILE is assumed to represent an actual file name, and the program compares the value of &FILE to the
field WHFNAM for record selection. Obviously, the power of the QRYSLT parameter is in the hands of those who can successfully build the selection value based on execution-time selections. The second basic bread-and-butter technique is using OPNQRYF to build a key sequence without requiring additional DDS. Program LSTPRCPP tests the value of &OPT to determine whether the requester wants the records listed in *FILE (file library/file name) or *PGM (program library/program name) sequence. The appropriate OPNQRYF statement is executed based on the result of these tests (see A in Figure 25.3b). When &OPT is equal to &FILESEQ (which was declared with the value F), the OPNQRYF statement sequences the file using the field order of WHLNAM (file library), WHFNAM (file name), WHLIB (program library), WHPNAM (program name). When &OPT equals &PGMSEQ (declared with the value P), the key fields are in the order WHLIB, WHPNAM, WHLNAM, WHFNAM. No DDS is required. The HLL program called to process the opened file can provide internal level breaks based on the option selected. For more information concerning the use of the OPNQRYF command with database files, refer to IBM's Programming: Control Language Reference or Programming: Data Base Guide (SC41-9659). Classic CL programs and techniques are a part of the S/38 and now AS/400 heritage, but they're not simply oldies to be looked at and forgotten. Studying these programs and mastering these techniques will help you hone your skills and write some classic CL code of your own.
Chapter 26 - Processing Database Files with CL Once you've learned to write basic CL programs, you'll probably try to find more ways to use CL as part of your iSeries applications. In contrast to operations languages such as a mainframe's Job Control Language (JCL), which serves primarily to control steps, sorts, and parameters in a job stream, CL offers more. CL is more procedural, supports both database file (read-only) and display file (both read and write) processing, and lets you extend the operating-system command set with your own user-written commands. In this article, we examine one of those fundamental differences of CL: its ability to process database files. You'll learn how to declare a file, extract the field definitions from a file, read a file sequentially, and position a file by key to read a specific record. With this overview, you should be able to begin processing database files in your next CL program.
Why Use CL to Process Database Files? Before we talk about how to process database files in CL, let's address the question you're probably asking yourself: 'Why would I want to read records in CL instead of in an HLL program?' In most cases, you probably wouldn't. But sometimes, such as when you want to use data from a database file as a substitute value in a CL command, reading records in CL is a sensible programming solution. Say you want to perform a DspObjD (Display Object Description) command to an output file and then read the records from that output file and process each object using another CL command, such as DspObjAut (Display Object Authority) or MovObj (Move Object). Because executing a CL command is much easier and clearer from a CL program than from an HLL program, you'd probably prefer to write a single CL program that can handle the entire task. We'll show you just such a program a little later, after going over the basics of file processing in CL.
I DCLare! Perhaps the most crucial point in understanding how CL programs process database files is knowing when you need to declare a file in the program. The rule is simple: If your CL program uses the RcvF (Receive File) command to read a file, you must use the DclF (Declare File) command to declare that file to your program. DclF tells the compiler to retrieve the file and field descriptions during compilation and make the field definitions available to the program. The command has only one required parameter: the file name. To declare a file, you need only code in your program either
DclF File(YourFile)

or
DclF File(YourLib/YourFile)

When using the DclF command, you must remember three implementation rules. First, you can declare only one file - either a database file or a display file - in any CL program. This doesn't mean your program can't operate on other files - for example, using the CpyF (Copy File), OvrDbF (Override with Database File), or OpnQryF (Open Query File) command. It can. However, you can use the RcvF command to process only the file named in the DclF statement. Second, the DclF statement must come after the Pgm (Program) command in your program and must precede all executable commands (the Pgm and Dcl, or Declare CL Variable, commands are not executable). The third rule is that the declared file must exist when you compile the CL program. If you don't qualify the file name, the compiler must be able to find the file in the current library list during compilation.
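A tiny skeleton makes the ordering rule concrete; it assumes the sample file TestPF used later in this chapter exists at compile time.

Pgm
  Dcl        &Count  *Dec (10 0)       /* Pgm, Dcl, and DclF are not executable      */
  DclF       File(TestPF)              /* after Pgm, before any executable command;  */
                                       /* TestPF must exist when you compile         */
  RcvF                                 /* RcvF can read only the declared file       */
  ChgVar     &Count (&Count + 1)
EndPgm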
Extracting Field Definitions When you declare a file to a CL program, the program can access the fields associated with that file. Fields in a declared file automatically become available to the program as CL variables - there's no need to declare the variables separately. When the file is externally described, the compiler uses the external record-format definition associated with the file object to identify each field and its data type and length. Figure 1 shows the DDS for sample file TestPF. To declare this file in a program, you code

DclF TestPF

The system then makes the following variables available to the program:

Variable    Type
&Code       *Char 1
&Number     *Dec 5,0
&Field      *Char 30
Your program can then use these variables despite the fact that they're not explicitly declared. For instance, you could include in the program the statements
If     ((&Code *Eq 'A') *And     +
        (&Number *GT 10))        +
       ChgVar &Code ('B')
Notice that when you refer to the field in the program, you must prefix the field name with the ampersand character (&). All CL variables, including those implicitly defined using the DclF command and the file field definitions, require the & prefix when referenced in a program. What about program-described files - that is, files with no external data definition? Suppose you create the following file using the CrtPF (Create Physical File) command
CrtPF File(DiskFile) RcdLen(258)

and then you declare file DiskFile in your CL program. As it does with externally defined files, the CL compiler automatically provides access to program-described files. Because there's no externally defined record format, however, the compiler recognizes each record in the file as consisting of a single field. That field is always named &FileName, where FileName is the name of the file. Therefore, if you code
DclF DiskFile

your CL program recognizes one field, &DiskFile, with a length equal to DiskFile's record length. You can then extract the subfields with which you need to work. In CL, you extract the fields using the built-in function %Sst (or %Substring). The statements
ChgVar &Field1 (%Sst(&DiskFile  1 10))
ChgVar &Field2 (%Sst(&DiskFile 11 25))
ChgVar &Field3 (%Sst(&DiskFile 50  1))

extract three subfields from &DiskFile's single field. You'll need to remember two rules when using program-described files. First, you must extract the subfields every time you read a record from the file. Unlike RPG, CL has no global

When the MonMsg (Monitor Message) command traps this message, control skips to ReadEnd, thus ending the loop. Unlike HLLs, CL doesn't let you reposition the file for additional processing after the program receives an end-of-file message. Although you can execute an OvrDbF command containing a Position parameter after your program
receives an end-of-file message, any ensuing RcvF command simply elicits another end-of-file message. Two possible workarounds to this potential problem exist, but each has its restriction. You can use the first workaround if, and only if, you can ensure that the data in the file will remain static for the duration of the read cycles. The technique involves use of the RtvMbrD (Retrieve Member Description) command. Using this command's NbrCurRcd (CL variable for NBRCURRCD) parameter, you can retrieve into a program variable the number of records currently in the file. Then, in your loop to read records, you can use another variable to count the number of records read, comparing it with the number of records currently in the file. When the two numbers are equal, the program has read the last record in the file. Although the program has read the last record, the end-of-file condition is not yet set. The system sets this condition and issues the CPF0864 message indicating end-of-file only after attempting to read a record beyond the last record. Therefore, this technique gives you a way to avoid the end-of-file condition. You can then use the PosDbF (Position Database File) command to set the file cursor back to the beginning of the file. Simply specify *Start for the Position parameter, and you can read the file again! Remember, use this technique only when you can ensure that the data will in no way change while you're reading the file. The second circumvention is perhaps even trickier because it requires a little application design planning. Consider a simple CL program that does nothing more than perform a loop that reads all the records in a database file and exits when the end-of-file condition occurs (i.e., when the system issues message CPF0864). If you replace the statement
MonMsg (CPF0864) Exec(GoTo End)

with
MonMsg  (CPF0864) Exec(Do)
  If      (&Stop *Eq 'Y') GoTo End
  ChgVar  &Stop ('Y')
  TfrCtl  Pgm(YourPgm) Parm(&Stop)
EndDo
where YourPgm is the name of the program containing the command, the system starts the program over again, thereby reading the file again. Notice that with this technique, you must add code to the program to prevent an infinite loop. In addition to the changes shown above, the program should accept the &Stop parameter. Fail to add these groups of code, and each time the system detects end-of-file, the process restarts. You also must add code to ensure that only those portions of the code that you want to be executed are executed. When possible, if you need to read a database file multiple times, we advise you to construct your application in such a way that you can call multiple CL programs (or one program multiple times, as appropriate). Each of these programs (or instances of a program) can then process the file once. This approach is the clearest and least error-prone method.
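For completeness, here is a hedged sketch of the first workaround, the record-count technique. The file and variable names are invented for the example, and the approach is safe only if the data cannot change while you read it.

Pgm
  DclF       File(MyLib/TestPF)           /* file to be read more than once */
  Dcl        &NbrRcd  *Dec (10 0)
  Dcl        &RcdCnt  *Dec (10 0) Value(0)

  /* How many records are in the member right now?                   */
  RtvMbrD    File(MyLib/TestPF) NbrCurRcd(&NbrRcd)

ReadLoop:
  If         (&RcdCnt *GE &NbrRcd) GoTo ReadEnd  /* stop before end-of-file */
  RcvF
  MonMsg     MsgId(CPF0864) Exec(GoTo ReadEnd)   /* safety net              */
  ChgVar     &RcdCnt (&RcdCnt + 1)
  /* ... process the record here ...                                 */
  GoTo       ReadLoop

ReadEnd:
  /* End-of-file was never signaled, so the file can be repositioned */
  /* with PosDbF, as described above, and read again                 */
EndPgm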
File Positioning One well-kept secret of CL file processing is that you can use it to retrieve records by key . . . sort of. The OvrDbF command's Position parameter lets you specify the position from which to start retrieving database file records. You can position the file to *Start or *End (you can also use the PosDbF command to position to *Start or *End), to a particular record using relative record number, or to a particular record using a key. To retrieve records by key, you supply four search values in the Position parameter: a key-search type, the number of key fields, the name of the record format that contains the key fields, and the key value. The key-search type determines where in the file the read-by-key begins by specifying which record the system is to read first. You can specify one of the following five key-search types:
• *KeyB (key-before) — The first record retrieved is the one that immediately precedes the record identified by the other Position parameter search values.
• *KeyBE (key-before or equal) — The first record retrieved is the one identified by the search values. If no record matches those values, the system retrieves the record that matches the largest previous value.
• *Key (key-equal) — The first record retrieved is the one identified by the search values. (If your CL program calls an HLL program that issues a read-previous operation, the called program will retrieve the preceding record.)
• *KeyAE (key-after or equal) — The first record retrieved is the one identified by the search values. If no record matches those values, the system retrieves the record with the next highest key value.
• *KeyA (key-after) — The first record retrieved is the one that immediately follows the record identified by the search values.
As a simple example, let's assume that file TestPF has one key field, Code, and contains the following records:

Code   Number   Field
A      1        Text in Record 1
B      100      Text in Record 2
C      50       Text in Record 3
E      27       Text in Record 4
The statements
OvrDbF Position(*Key 1 TestPFR 'B')
RcvF   RcdFmt(TestPFR)

specify that the record to be retrieved has one key field as defined in DDS record format TestPFR (Figure 1) and that the key field contains the value B. These statements will retrieve the second record (Code = B) from file TestPF. If the key-search type were *KeyB instead of *Key, the same statements would cause the RcvF command to retrieve the first record (Code = A). Key-search types *KeyBE, *KeyAE, and *KeyA would cause the RcvF statement to retrieve records 2 (Code = B), 2 (Code = B), and 3 (Code = C), respectively. Now let's suppose that the program contains these statements:
OvrDbF Position(&KeySearch 1 TestPFR 'D')
RcvF RcdFmt(TestPFR)

Here's how each &KeySearch value affects the RcvF results:
• *KeyB — returns record 3 (Code = C)
• *KeyBE — returns record 3 (Code = C)
• *Key — causes an exception error because no match is found
• *KeyAE — returns record 4 (Code = E)
• *KeyA — returns record 4 (Code = E)
Using the Position parameter with a key consisting of more than one field gets tricky, especially when one of the key fields is a packed numeric field. You must code the key string to match the key's definition in the file, and if any key field is other than a character or signed-decimal field, you must code the key string in hexadecimal form. For example, suppose the key consists of two fields: a one-character field and a five-digit packed numeric field with two decimal positions. You must code the key value in the Position parameter as a hex string equal in length to the length of the two key fields together (i.e., 1 + 3; a packed 5,2 field occupies three positions). For instance, the value
Position(*Key 2 YourFormat X'C323519F')

tells the system to retrieve the record that contains values for the character and packed-numeric key fields of C and 235.19, respectively. As we've mentioned, a CL program can position the database file and then call an HLL program to process the records. For instance, the CL program can use OvrDbF's Position parameter to set the starting point in a file and then call an RPG program that issues a read or read-previous to start reading records at that position.
Having this capability doesn't necessarily mean you should use it, though. One of our fundamental rules of programming is this: Make your program explicit and its purpose clear. Thus, we avoid using the OvrDbF or PosDbF command to position a file before we process it with an HLL program when we can more explicitly and clearly position the file within the HLL program itself. There's just no good reason to hide the positioning function in a CL program that may not clearly belong with the program that actually reads the file. However, when you process a file in a CL program, positioning the file therein can simplify the solution.
What About Record Output?
Just about the time you get the hang of reading database files, you suddenly realize that your CL program can't perform any other form of I/O with them. CL provides no direct support for updating, writing, or printing records in a database file. Some programmers use the StrQMQry (Start Query Management Query) command to execute a query management query or use the RunSQLStm (Run SQL Statement) command to effect one of these operations from within CL. To use these techniques, you must first create the query management query or enter the SQL source statements to execute with RunSQLStm.
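As a rough sketch of the RunSQLStm approach (the library, file, and member names here are illustrative, not from the book's figures), suppose source member UpdCodes in MyLib/QSQLSRC contains this SQL statement:

UPDATE MYLIB/MYFILE SET DLTCDE = 'D' WHERE CURBAL = 0

Your CL program could then run it like this:

RunSQLStm SrcFile(MyLib/QSQLSRC) SrcMbr(UpdCodes) Commit(*None)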
A Useful Example
Now that you know how to process database files in a CL program, let's look at a practical example. Security administrators would likely find useful a program that prints the object authorities for selected objects in one or more libraries. Figure 3A shows the command definition for the PrtObjAut (Print Object Authorities) command, which does just that. Figure 3B shows PrtObjAut's command processing program (CPP), PrtObjAut1.

Notice that the CPP declares file QADspObj in the DclF statement. This IBM-supplied file resides in library QSys and is a model for the output file that the DspObjD command creates. In other words, when you use DspObjD to create an output file, that output file is modeled on QADspObj's record format and associated fields. In the CPP, the DspObjD command creates output file ObjList, whose file description includes record format QLiDObjD and fields from the QADspObj file description. Because we declare file QADspObj in the program, that's the file we must process. (Remember: You can declare only one file in the program, and file ObjList did not exist at compile time.) The CPP uses the OvrDbF command to override QADspObj to newly created file ObjList in library QTemp. When the RcvF command reads record format QLiDObjD, the override causes the RcvF to read records from file ObjList. As it reads each record, the CL program substitutes data from the appropriate fields into the DspObjAut command and prints a separate authority report for each object represented in the file.

We're sure you'll find uses for the CL techniques you've learned in this article. Processing database files in CL is a handy ability that, at times, may be just the solution you need.
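Because Figure 3B isn't reproduced here, the following minimal sketch shows the shape of the CPP's read loop; the library parameter is hypothetical, and the field names (ODOBNM, ODLBNM, ODOBTP) come from the QADspObj record format:

Pgm Parm(&Lib)
Dcl Var(&Lib) Type(*Char) Len(10)
DclF File(QSys/QADspObj)
DspObjD Obj(&Lib/*All) ObjType(*All) Output(*OutFile) OutFile(QTemp/ObjList)
OvrDbF File(QADspObj) ToFile(QTemp/ObjList) /* redirect RcvF to the outfile */
Read: RcvF
MonMsg MsgID(CPF0864) Exec(GoTo CmdLbl(Done)) /* end of file */
DspObjAut Obj(&ODLBNM/&ODOBNM) ObjType(&ODOBTP) Output(*Print)
GoTo CmdLbl(Read)
Done: DltOvr File(QADspObj)
EndPgm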
Chapter 27 - CL Programs and Display Files

In Chapter 26, I talked about processing database files using a CL program. I discussed declaring a file, extracting field definitions (both externally described and program-described), and processing database records. In this chapter, I want to examine how CL programs work with display files.

CL is an appropriate choice for certain situations that require displays. For example, CL works well with display files for menus because CL is the language used to override files, modify a user's library list, submit jobs, and check authorities -- all common tasks in a menu environment. CL is also a popular choice for implementing a friendly interface at which users can enter parameters for commands or programs that print reports or execute inquiries. For example, a CL program can present an easily understood panel to prompt the user for a beginning and ending date; the program can then format and substitute those dates into a STRQMQRY (Start Query Management Query) command to produce a report covering a certain time period. When you want users to enter substitution values for use in an arcane command such as OPNQRYF (Open Query File), it is almost imperative that you let them enter selections in a format they understand (e.g., a prompt screen) and then build the command string in CL. It is much easier to build and execute complex CL commands in CL than it is in other languages, especially RPG/400 and COBOL/400.
CL Display File Basics As with a database file, you must use the DCLF (Declare File) command to tell your CL program which display file you want to work with (for more details about declaring a file, see Chapter 26). Declaring the file lets the compiler locate it and retrieve the field and format definitions. Figure 27.1 shows the DDS for a sample display file, USERMENUF, and Figure 27.2 shows part of a compiler listing for a CL program that declares USERMENUF. The default for DCLF's RCDFMT parameter, *ALL, tells the compiler to identify and retrieve the descriptions for all record formats in the file. Notice that the field and format definitions immediately follow the DCLF statement on the compiler listing. If your display file has many formats and you plan to use only one or a few of them, you can specify up to 50 different record formats in the RCDFMT parameter instead of using the *ALL default value. Doing so reduces the size of the compiled program object by eliminating unnecessary definitions. After you declare a display file, you can output record formats to a display device using the SNDF (Send File) command, read formats from the display device using the RCVF (Receive File) command, or perform both functions with the SNDRCVF (Send/Receive File) command. These commands parallel RPG/400's WRITE, READ, and EXFMT opcodes, respectively. For instance, to present a record format named PROMPT on the display, you could code your CL as
SNDF RCDFMT(PROMPT)
RCVF RCDFMT(PROMPT)

or as
SNDRCVF RCDFMT(PROMPT)

To send more than one format to the screen at once (e.g., a standard header format, a function key format, and an input-capable field), you use a combination of the SNDF and SNDRCVF commands as you would use a combination of WRITE and EXFMT in RPG/400:
SNDF RCDFMT(HEADER)
SNDF RCDFMT(FKEYS)
SNDRCVF RCDFMT(DETAIL)

Notice that the RCDFMT parameter value in each statement specifies the particular format for the operation. If there is only one format in the file, you can use RCDFMT's default value, *FILE, and then simply use the SNDF, RCVF, or SNDRCVF command without coding a parameter.
CL Display File Examples
Let's look at an example of how to use CL with a display file for a menu and a prompt screen. Figure 27.3 shows a menu based on the DDS in Figure 27.1. From the DDS, you can see that record format MENU displays the list of menu options, and record formats MSGSFL and MSGCTL control the IBM-supplied message subfile function that sends messages to the program message queue. Record format PROMPT01 is a panel that lets the user enter selection values for Batch Report 1.
Figure 27.4 shows CL program USERMENU, the program driver for this menu. As you can see, USERMENU sets up work variable &pgmq, displays the menu, and then, depending on user input, either executes the code that corresponds to the menu option selected or exits the menu.
The sample menu's menu options, option field, and function key description are all part of the MENU record format on the DDS. To display these fields to the user and allow input, program USERMENU uses the SNDRCVF command (C in Figure 27.4). Should the user enter an invalid menu option, select an option (s)he is not authorized to, or encounter an error, the program displays the appropriate message at the bottom of the screen by displaying message subfile record format MSGCTL (D). (I discuss this record format in more detail in a moment.) Figure 27.5 shows a completion message at the bottom of the sample menu.
The message subfile is a special form of subfile whose definition includes some predefined variables and special keywords. The message subfile record format is format MSGSFL (B in Figure 27.1). The keyword SFLMSGRCD(23) tells the display file to display the messages in this subfile beginning on line 23 of the panel. You can specify any line number for this keyword that is valid for the panel you are displaying. The associated SFLMSGKEY keyword and the IBM-supplied variable MSGKEY support the task of retrieving a message from the program message queue associated with the SFLPGMQ keyword (i.e., the message queue named in variable PGMQ) and displaying the message in the form of a subfile. The CL program assigns the value USERMENU to variable &pgmq (A in Figure 27.4), thus specifying that the program message queue to be displayed is the one associated with program USERMENU.

MSGCTL, the next record format in the DDS, uses the standard subfile keywords (e.g., SFLSIZ, SFLINZ, SFLDSP) along with the SFLPGMQ keyword. This record format establishes the message subfile for this display file with a SFLSIZ value of 10 and a SFLPAG value of 1. In other words, the message subfile will hold up to 10 messages and will display one message on each page. Because of the value of the SFLMSGRCD keyword in the MSGSFL format, the message will be displayed on line 23. You can alter the SFLMSGRCD and SFLPAG values to display as many messages as you like and have room for on the screen. If more than one page of messages exists, the user can scroll through the pages by pressing Page up and Page down.

You might be asking, 'What does program USERMENU have to do to fill the message subfile?' The answer: Absolutely nothing! This fact often confuses programmers new to message subfiles because they can't figure out how to load the subfile. You can think of the message subfile as simply a mechanism by which you can view the messages on the program message queue. By changing the value of variable &pgmq to USERMENU, I specified which program message queue to associate with the message subfile. That's all it takes.

Immediately after D in Figure 27.4, you can see that I change indicator 40 (variable &in40) to '1' and then output format MSGCTL using the SNDF command. In the DDS, indicator 40 controls the SFLINZ and SFLEND keywords (C in Figure 27.1) to initialize the subfile before loading it and to display the appropriate + or blank to let the user know whether more subfile records exist beyond those currently displayed. (You can specify SFLEND(*MORE) if you prefer to have the message subfile use the 'More...' and 'Bottom' subfile controls after the last record, but be sure your screen has a blank line at the bottom so that these subfile controls can be displayed.) When the program outputs the MSGCTL format, the PGMQ and MSGKEY variables coded in the MSGSFL record format cause all messages to be retrieved from the program message queue and presented in the subfile. The user can move the cursor onto a message and press the Help key to get secondary text, when it is available, and can scroll through all the error messages in the subfile.

At B in Figure 27.4, the RMVMSG command clears *ALL messages from the current program queue (i.e., queue USERMENU). Clearing the queue at the beginning of the program ensures that old messages from a previous invocation do not remain in the queue.
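Since Figure 27.4 isn't reproduced here, the following is a minimal sketch of the message-subfile handling just described, under the assumption that the display file, formats, and variable names match the chapter's example:

DCLF FILE(USERMENUF)
CHGVAR VAR(&PGMQ) VALUE('USERMENU')   /* associate the message subfile with this program's queue */
RMVMSG PGMQ(*SAME) CLEAR(*ALL)        /* clear leftover messages from a prior invocation */
DSPMENU: CHGVAR VAR(&IN40) VALUE('1') /* turn on SFLINZ/SFLEND for the message subfile */
SNDF RCDFMT(MSGCTL)                   /* show any queued messages on line 23 */
SNDRCVF RCDFMT(MENU)                  /* display the menu and wait for an option */
/* ...validate the option; on an error, send a message to this program's queue and GOTO DSPMENU... */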
Figure 27.6 shows a prompt screen a user might receive to specify selections for a menu option that submits a report program to batch. The user keys the appropriate values and presses Enter to submit the report. If the program encounters an error when validating the values, the display file uses an error subfile to display the error message at the bottom of the screen, like the error message in Figure 27.7.
You use the ERRSFL keyword in the DDS (A in Figure 27.1) to indicate an error subfile. An error subfile provides a different function than a message subfile. The error subfile automatically presents any error messages generated as a result of DDS message or validity-checking keywords (e.g., ERRMSG, SFLMSG, CHECK, VALUES). The purpose of the error subfile is to group error messages generated by these keywords for a particular record format, not to view messages on the program message queue. (For more information about error subfiles, see the Data Description Specifications Reference, SC41-9620.)
Considerations
The drawbacks to using CL for display file processing are CL's limited database file I/O capabilities and its lack of support for user-written subfiles. As I explained in Chapter 26, CL can only read database files. The fact that you cannot write or update database file records greatly reduces CL's usefulness in an interactive environment. The lack of support for user-written subfiles also limits its usefulness in applications that require user interaction. But in many common situations, CL's strengths more than offset these limitations. CL's command processing, message handling, and string manipulation capabilities make it a good choice for menus, prompt screens, and other nondatabase-related screen functions. While not always appropriate, for many basic interactive applications CL offers a simple alternative to a high-level language for display file processing. With this knowledge under your belt, you can choose the best and easiest language for applications that use display files.
Chapter 28 - OPNQRYF Fundamentals

In this chapter, I give you the foundation you need to use the OPNQRYF (Open Query File) command, and then I leave you to discover the rewards as you apply this knowledge to your own applications. OPNQRYF's basic function is to open one or more database files and present records in response to a query request. Once opened, the resulting file or files appear to high-level language (HLL) programs as a single database file containing only the records that satisfy query selection criteria. In essence, OPNQRYF works as a filter that determines the way your programs see the file or files being opened.

You can use the OPNQRYF command to perform a variety of database functions: joining records from more than one file, grouping records, performing aggregate calculations such as sum and average, selecting records before or after grouping, sorting records by one or more key fields, and calculating new fields using numeric or character string operations.

One crucial point to remember when using OPNQRYF is that you must use the SHARE(*YES) file attribute for each file opened by the OPNQRYF command. When you specify SHARE(*YES), subsequent opens of the same file will share the original open data path and thus see the file as presented by the OPNQRYF process. If OPNQRYF opens a file using the SHARE(*NO) attribute, the next open of the file will not use the open data path created by the OPNQRYF command, but instead will perform another full open of the file. Don't assume the file description already has the SHARE(*YES) value when you use the OPNQRYF command. Instead, always use the OVRDBF (Override with Database File) command just before executing OPNQRYF to explicitly specify SHARE(*YES) for each file to be opened. Be aware that the OPNQRYF command ignores any
parameters on the OVRDBF command other than TOFILE, MBR, LVLCHK, WAITRCD, SEQONLY, INHWRT, and SHARE.
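To make the flow concrete, here is a minimal sketch of the typical OVRDBF/OPNQRYF sequence; the file, field, and program names are illustrative, not from the book's figures:

OVRDBF FILE(MYFILE) SHARE(*YES)   /* force a shared open data path */
OPNQRYF FILE((MYLIB/MYFILE)) +
  QRYSLT('DLTCDE *NE "D"') +
  KEYFLD((CSTNBR))
CALL PGM(MYRPGPGM)                /* the HLL program opens MYFILE and sees only the selected records */
CLOF OPNID(MYFILE)                /* close the query open data path */
DLTOVR FILE(MYFILE)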
The Command
Figure 28.1 shows the entire OPNQRYF command. OPNQRYF has five major groups of parameters (specifications for file, format, key field, join field, and mapped field) and a few extra parameters not in a group. Using the OPNQRYF command is easier once you master the parameter groups. There are some strong, but awkwardly structured, parallels between OPNQRYF parameters and specific SQL concepts. For instance, the file and format specifications parallel the more basic functions of the SQL SELECT and FROM statements; the query selection expression parallels SQL's WHERE statement; the key field specifications parallel SQL's ORDER BY statement; and the grouping field names expression parallels the GROUP BY statement. If you compare OPNQRYF to SQL (page 351), you'll see that the OPNQRYF command is basically a complicated SQL front end that offers a few extra parameters.
Start with a File and a Format For every query, there must be data -- and for data, there must be a file. OPNQRYF's file specifications parameters identify the file or files that contain the data. A simple OPNQRYF command might name a single file, like this:
OPNQRYF FILE(MYLIB/MYFILE) ...

This partial command identifies MYLIB/MYFILE as the file to be queried. Notice that the FILE parameter in Figure 28.1 has three separate parameter elements: the qualified file name, data member, and record format. A specified file must be a physical or logical file, an SQL view, or a Distributed Data Management file. In the sample command above, I specify the qualified file name only and do not enter a specific value for the second and third elements of the FILE parameter. Therefore, the default values of *FIRST and *ONLY are used for the member and record format, respectively. You can select a particular data member to be queried by supplying a member name. The default value of *ONLY for record format tells the database manager to use the only record format named in file MYFILE in our example. When you have more than one record format, you must use the record format element of the FILE parameter to name the particular record format to open.

You can enter a plus sign in the '+ for more values' field and enter multiple file specifications to be dynamically joined (as opposed to creating a permanent join logical file on the system). When joining more than one record format, you must enter values in the join field specifications parameter (JFLD) to specify the field the database manager will use to perform the join.

The FORMAT parameter specifies the format for records made available by the OPNQRYF command. The fields defined in this record format must be unique from those named in the FILE or MAPFLD parameter. When you use the default value of *FILE for the FORMAT parameter, the record format of the file defined in the FILE parameter is used for records selected. You cannot use FORMAT(*FILE) when the FILE parameter references more than one file, member, or record format. To return to our example, if you key
OPNQRYF FILE(MYLIB/MYFILE) ...

the record format of file MYFILE would be used for the records presented by the OPNQRYF command. On the other hand, if you use the command
OVRDBF FILE(MYJOIN) TOFILE(MYLIB/MYFILE) SHARE(*YES)

with this OPNQRYF command
OPNQRYF FILE(MYLIB/MYFILE) FORMAT(MYJOIN)
the database manager uses the record format for file MYJOIN. The FORMAT parameter can specify a qualified file name and a record format (e.g., (MYLIB/MYJOIN JOINR)), or it can simply name the file containing the format to be used (e.g., (MYJOIN)). Although you can select (via the QRYSLT parameter) any fields defined in the record format of the file named in the FILE parameter, OPNQRYF will make available only those fields defined by the record format named in the FORMAT parameter. In the previous example, the HLL program would open file MYJOIN, and the OVRDBF command would redirect the open to the queried file, MYLIB/MYFILE. The format for MYJOIN would present records from MYFILE. Later, in the discussion of field mapping, I'll explain why you might want to do this. Because this chapter is only an introduction to OPNQRYF, I won't talk any more about join files. Instead, let's focus on creating queries for single file record selection, sorting, mapping fields, and HLL processing.
Record Selection
As I said earlier, the record selection portion of the OPNQRYF command parallels SQL's WHERE statement. The QRYSLT parameter provides record selection before record grouping occurs (record grouping is controlled by the GRPFLD parameter). The query selection expression can be up to 2,000 characters long, must be enclosed in apostrophes (because it comprises a character string for the command to evaluate), and can consist of one or more logical expressions connected by *AND or *OR. Each logical expression must use at least one field from the files being queried. The OPNQRYF command also offers built-in functions that you can include in your expressions (e.g., %SST, %RANGE, %VALUES, and %WILDCARD). This simple logical expression
QRYSLT('DLTCDE = "D"')

instructs the database manager to select only records for which the field DLTCDE contains the constant value D. A more complex query might use the following expression:
QRYSLT('CSTNBR *EQ %RANGE(10000 49999) *AND +
  CURDUE *GT CRDLIM *AND CRDFLG *EQ "Y"')

In this example, CSTNBR (customer number), CURDUE (current due), and CRDLIM (credit limit) are numeric fields, and CRDFLG (credit flag) is a character field. The QRYSLT expression uses the %RANGE function to determine whether the CSTNBR field is in the range of 10000 to 49999 and then checks whether CURDUE is greater than the credit limit. Finally, it tests CRDFLG against the value Y. When all tests are true for a record in the file, that record is selected.

You can minimize trips to the manual by remembering a few rules about the QRYSLT parameter. First, enclose all character constants in apostrophes or quotation marks (e.g., 'char-constant' or "char-constant"). For example, consider the following logical expression comparing a field to a character constant:
CRDFLG *EQ "Y"

If you want to substitute runtime CL variable &CODE for the constant, you would code the expression as:
'CRDFLG *EQ "' *CAT &CODE *CAT '"'

After substitution and concatenation, quotation marks enclose the value supplied by the &CODE variable, and the expression is valid. Second, differentiate between uppercase and lowercase when specifying character variables. Character variables in the QRYSLT parameter are case-sensitive; in other words, you must either specify 'Y' or 'y' or provide for both possibilities. Numeric constants and variables cause undue anxiety for newcomers to the OPNQRYF command. Look again at this example:
QRYSLT('CSTNBR *EQ %RANGE(10000 49999) *AND +
  CURDUE *GT CRDLIM *AND CRDFLG *EQ "Y"')

Two of the logical expressions use numeric fields or constants. In the first expression
'CSTNBR *EQ %RANGE(10000 49999)'

notice there are no apostrophes or quotation marks around the numeric constants. Although these numbers appear in a character string (the QRYSLT parameter), they must appear as numbers for the system to recognize and process them, which brings us to the third QRYSLT parameter rule: Don't enclose numeric or character variables in quotation marks if the value of a variable should be evaluated as numeric. The second logical expression
CURDUE *GT CRDLIM

compares two fields defined in the record format or mapped fields. Again, there are no quotation marks around the names of these numeric fields.

A dragon could rear its ugly head when you create a dynamic query selection in a CL or HLL program. Suppose you want to let the user enter the range of customer numbers to select from rather than hard-coding the range. To build a dynamic QRYSLT, you must use concatenation, and concatenation can be performed only on character fields. However, you would probably require the user to enter numeric values so you could ensure that all positions in the field are numeric. This means that the variables that define the range of customer numbers must be converted to characters before concatenation, but later they must appear as numbers in the QRYSLT parameter so they can be compared to the numeric CSTNBR field.

Figure 28.2 shows one way to create the correct QRYSLT value; a rough sketch of the same technique appears below. Suppose the user enters the numeric values at a prompt provided by display file USERDSP. First, you use the CHGVAR (Change Variable) command to move these numeric values into character variables &LOWCHR and &HIHCHR. You can then use the character variables and concatenation to build the QRYSLT string in variable &QRYSLT. When the substitution is made, the numeric values appear without quotation marks, just as though the numbers were entered as constants.

The GRPSLT parameter functions exactly like the QRYSLT parameter, except the selection is performed after records have been grouped. The same QRYSLT functions are available for the GRPSLT expression, and the same rules apply.
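Since Figure 28.2 isn't reproduced here, the following is a minimal sketch of the dynamic QRYSLT technique described above; the variable lengths, file names, and prompt are illustrative assumptions:

DCL VAR(&LOWNBR) TYPE(*DEC) LEN(5 0)
DCL VAR(&HIHNBR) TYPE(*DEC) LEN(5 0)
DCL VAR(&LOWCHR) TYPE(*CHAR) LEN(5)
DCL VAR(&HIHCHR) TYPE(*CHAR) LEN(5)
DCL VAR(&QRYSLT) TYPE(*CHAR) LEN(256)
/* ...the user enters &LOWNBR and &HIHNBR at the USERDSP prompt... */
CHGVAR VAR(&LOWCHR) VALUE(&LOWNBR)   /* numeric-to-character conversion */
CHGVAR VAR(&HIHCHR) VALUE(&HIHNBR)
CHGVAR VAR(&QRYSLT) +
  VALUE('CSTNBR *EQ %RANGE(' *CAT &LOWCHR *BCAT &HIHCHR *TCAT ')')
OVRDBF FILE(MYFILE) SHARE(*YES)
OPNQRYF FILE((MYLIB/MYFILE)) QRYSLT(&QRYSLT)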
Key Fields
Besides selecting records, you can establish the order of the records OPNQRYF presents to your HLL program by entering one or more key fields in the key field specifications. The KEYFLD parameter consists of several elements. You must specify the field name, whether to sequence the field in ascending or descending order, whether or not to use absolute values for sequencing, and whether or not to enforce uniqueness. Let's look at a couple of examples. The following OPNQRYF command:
OPNQRYF FILE(MYLIB/MYFILE) QRYSLT('....') KEYFLD(CSTNBR)

would cause the selected records to appear in ascending order by customer number because *ASCEND is the default for the key field order. The command
OPNQRYF FILE(MYLIB/MYFILE) QRYSLT('....') +
  KEYFLD((CURBAL *DESCEND) (CSTNBR))

would present the selected records in descending order by current balance and then in ascending order by customer number. Any key field you name in the KEYFLD parameter must exist in the record format referenced by the FORMAT parameter. The key fields specified in the KEYFLD parameter can be mapped from existing fields, so long as the referenced field definition exists in the referenced record format. The KEYFLD default value of *NONE tells the
database manager to present the selected records in any order. Entering the value *FILE tells the query to use the access path definition of the file named in the FILE parameter to order the records.
Mapping Virtual Fields
One of the richer features of the OPNQRYF command is its support of field mapping. The mapped field specifications let you derive new fields (known as 'virtual' fields in relational database terms) from fields in the record format being queried. You can map fields using a variety of powerful built-in functions. For example, %SST returns a substring of the field argument, %DIGITS converts numbers to characters, and %XLATE performs character translation using a translation table. You can use the resulting fields to select records and to sequence the selected records. Look at the following OPNQRYF statement:
OPNQRYF FILE(INPDTL) FORMAT(DETAIL) QRYSLT('LINTOT *GT 10000') +
  KEYFLD((CSTNBR) (INVDTE)) MAPFLD((LINTOT 'INVQTY * IPRICE'))

Fields INVQTY (invoice item quantity) and IPRICE (invoice item price) exist in physical file INPDTL. Mapped field LINTOT (line total) exists in the DETAIL format, which is used as the format for the selected records. As each record is read from the INPDTL file, the calculation defined in the MAPFLD parameter ('INVQTY * IPRICE') is performed, and the value is placed in field LINTOT. The database manager then uses the value in LINTOT to determine whether to select or reject the record.
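As a further illustration of field mapping with a built-in function (a sketch under assumed field definitions, not from the book's figures), suppose INVDTE is a six-character YYMMDD field and the DETAIL format also contains a four-character field YYMM; you could map just the year and month and sequence on it:

OPNQRYF FILE(INPDTL) FORMAT(DETAIL) +
  QRYSLT('LINTOT *GT 10000') +
  KEYFLD((YYMM)) +
  MAPFLD((LINTOT 'INVQTY * IPRICE') +
         (YYMM '%SST(INVDTE 1 4)'))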
OPNQRYF Command Performance Whenever possible, the OPNQRYF command uses an existing access path for record selection and sequencing. In other words, if you select all customer numbers in a specific range and an access path exists for CSTNBR, the database manager will use that access path to perform the selection, thus enhancing the performance of the OPNQRYF command. However, if the system finds no access path it can use, it creates a temporary one; and creating an access path takes a long time at the machine level, especially if the file is large. Likewise, when you specify one or more key fields in your query, the database manager will use an existing access path if possible; otherwise, the database manager must create a temporary one, again degrading performance. Overall, the OPNQRYF command provides flexibility that is sometimes difficult to emulate using only HLL programming and the native database. However, OPNQRYF is a poor performer when many temporary access paths must be created to support the query request. You may also need to weigh flexibility against performance to decide which record-selection method is best for a particular application. To help you make a decision, you can use these guidelines:
• If the application is interactive, use OPNQRYF sparingly; and, unless the file is relatively small (i.e., fewer than 10,000 records), ensure that existing access paths support the selection and sequencing.
• If the application is a batch application run infrequently or only at night, you can use OPNQRYF without hesitation, especially if it eliminates the need for logical files used only to support those infrequent or night jobs.
• If the application runs frequently and in batch during normal business hours, use OPNQRYF when existing access paths support the selection and sequencing or when the files are relatively small. Use native database and HLL programming when the files are large (greater than 10,000 records) or when many (more than three or four) temporary access paths are required.
The next time a user requests a report that requires more than a few selections and whose records must be in four different sequences, use the OPNQRYF command to do the work and write one HLL program to do the reporting... But remember, to be on the safe side, run the report at night!
Sidebar: SQL Special Features
Chapter 29 - Teaching Programs to Talk

'Speak, program! Speak!' That's one way to try to get your program to talk (perhaps success is more likely if you reward good behavior with a treat). However, to avoid finding you actually barking orders, I want to introduce SNDUSRMSG (Send User Message), an OS/400 command you can use to 'train' your programs to communicate.
The SNDUSRMSG command exists for the sole purpose of communicating from program to user and includes the built-in ability to let the user talk back. In Chapter 7, I covered the commands you can use to send impromptu messages from one user to another: SNDMSG (Send Message), SNDBRKMSG (Send Break Message), and SNDNETMSG (Send Network Message). Programs can also use these commands to send an informational message to a user, but because these commands provide no means for the sending program to receive a user response, their use for communication between programs and users is limited. In contrast, the SNDUSRMSG command lets a CL program send a message to a user or a message queue and then receive a reply as a program variable.
Basic Training Figure 29.1 shows the SNDUSRMSG command screen. The message can be an impromptu message or one you've defined in a message file. To send an impromptu message, just type a message of up to 512 characters in the MSG parameter. To use a predefined message, enter a message ID in the MSGID parameter. The message you identify must exist in the message file named in the MSGF parameter. The MSGDTA parameter lets you specify values to take the place of substitution variables in a predefined message. For example, message CPF2105
(Object &1 in &2 type *&3 not found) has three data substitution variables: &1, &2, and &3. When you use the SNDUSRMSG command to send this message, you can also send a MSGDTA string that contains the substitution values for these variables. If you supply these values in the MSGDTA string:
'CSTMAST   ARLIB     FILE   '
the message appears as
Object CSTMAST in ARLIB type *FILE not found

If you do not supply any MSGDTA values, the original message is sent without values (e.g., Object in type * not found). The character string specified in the MSGDTA parameter is valid only for messages that have data substitution variables. It is important that the character string you supply is the correct length and that each substitution variable is positioned properly within that string. The previous example assumes that the message is expecting three variables (&1, &2, and &3) and that the expected length of each variable is 10, 10, and 7, respectively, making the entire MSGDTA string 27 characters long. How do I know that? Because each system-defined message has a message description that includes detailed information about substitution variables, and I used the DSPMSGD (Display Message Description) command to get this information.

Every AS/400 is shipped with QCPFMSG (a message file for OS/400 messages) and several other message files that support particular products. You can also create your own message files and message IDs that your applications can use to communicate with users or other programs. For more information about creating and using messages, see the AS/400 Control Language Reference (SC41-0030) and the AS/400 Control Language Programmer's Guide (SC41-8077).

The next parameter on the SNDUSRMSG command is VALUES, which lets you specify the value or values that will be accepted as the response to your message, if one is requested. When you specify MSGTYPE(*INQ) and a CL variable in the MSGRPY parameter (discussed later), the system automatically supplies a prompt for a response when it displays the message. The system then verifies the response against the valid values listed in the VALUES parameter. If the user enters an invalid value, the system displays a message saying that the reply was not valid and resends the inquiry message. To make sure the user knows what values are valid, you should list the valid values as part of your inquiry message.

In the DFT parameter, you can supply a default reply to be used for an inquiry message when the message queue that receives the message is in the *DFT delivery mode or when an unanswered message is deleted from the message queue. The default value in the SNDUSRMSG command overrides defaults specified in the message description of predefined messages. The system uses the default value when the message is sent to a message
queue that is in the *DFT delivery mode, when the message is inadvertently removed from a message queue without a reply, or when a system reply list entry is used that specifies the *DFT reply. Oddly enough, this value need not match any of the supplied values in the VALUES parameter. This oddity presents some subtle problems for programmers. If the system supplies a default value not listed in the VALUES parameter, it is accepted. However, if a user types the default value as a reply, and the default is not listed in the VALUES parameter, the system will notify the user that the reply was invalid. To avoid such a mess, I strongly recommend that you use only valid values (those listed in the VALUES parameter) when you supply a default value. The MSGTYPE parameter lets you specify whether the message you are sending is an *INFO (informational, the default) or *INQ (inquiry) message. Both kinds appear on the destination message queue as text, but an inquiry message also supplies a response line and waits for a reply. The TOMSGQ parameter names the message queue that will receive the message. You can enter the name of any message queue on the local system, or you can use one of the following special values:
• * -- instructs the system to send the message to the external message queue (*EXT) if the job is interactive or to message queue QSYS/QSYSOPR if the program is being executed in batch.
• *SYSOPR -- tells the system to send the message to the system operator message queue, QSYS/QSYSOPR.
• *EXT -- instructs the system to send the message to the job's external message queue. Inquiry messages to batch jobs will automatically be answered with the default value, or with a null value (*N) if no default is specified. Keep in mind that although messages can be up to 512 characters long for first-level text, only the first 76 characters will be displayed when messages are sent to *EXT.
The TOUSR parameter is similar to TOMSGQ but lets you specify the recipient by user profile instead of by message queue. You can enter the recipient's user profile, specify *SYSOPR to send the message to the system operator at message queue QSYS/QSYSOPR, or enter *REQUESTER to send the message to the current user profile for an interactive job or to the system operator message queue for a batch job.

One problem emerges when using the SNDUSRMSG command to communicate with a user from the batch job environment. In the interactive environment, both the TOUSR and TOMSGQ parameters supply values that let you communicate easily with the external user of the job. In the batch environment, the only values provided for TOUSR and TOMSGQ direct messages to the system operator as the external user. There are no parameters to communicate with the user who submitted the job. The CL code in Figure 29.2 solves this problem. When you submit a job, the MSGQ parameter on the SBMJOB (Submit Job) command tells the system where to send a job completion message. You can retrieve this value using the RTVJOBA (Retrieve Job Attributes) command and the SBMMSGQ and SBMMSGQLIB return variables. The program in Figure 29.2 uses the RTVJOBA command to retrieve the name of the message queue and tests variable &type to determine whether the current job is a batch job (&type = '0'). If so, SNDUSRMSG can send the message to the message queue defined by the &sbmmsgq and &sbmmsgqlib variables. If the job is interactive, the SNDUSRMSG command can simply direct the message to the external user by specifying TOUSR(*REQUESTER).

You can use the MSGRPY parameter to specify a CL character variable (up to 132 characters long) to receive the reply to an inquiry message. Make sure that the length of the variable is at least as long as the expected length of the reply; if the reply is too short, it will be padded with blanks to the right, but if the reply exceeds the length of the variable, it will be truncated. The first result causes no problem, whereas a truncated reply may cause an unexpected glitch in your program. An inquiry message reply must be a character (alphanumeric) reply. If your application requires the retrieval of a numeric value, it is best to use DDS and a CL or high-level language (HLL) program to prompt the user for a reply. This approach ensures that validity checking is performed for numeric values.

Alas, the SNDUSRMSG command also exhibits another oddity: If you don't specify a MSGRPY variable but do specify MSGTYPE(*INQ), the command causes the job to wait for a reply from the message queue but doesn't retrieve the reply into your program.

The last parameter on the SNDUSRMSG command is TRNTBL, which lets you specify a translation table to process the response automatically. The default translation table is QSYSTRNTBL, which translates lowercase
characters (X'81' through X'A9') to uppercase characters. Therefore, you can check only for uppercase replies (e.g., Y or N) rather than having to code painstakingly for all lowercase and uppercase possibilities (e.g., Y, y, N, n).
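Figure 29.2 isn't reproduced here, but a minimal sketch of that batch-versus-interactive technique might look like the following; the message text is illustrative, and the sketch assumes the submitter named a message queue on SBMJOB's MSGQ parameter:

DCL VAR(&TYPE) TYPE(*CHAR) LEN(1)
DCL VAR(&SBMMSGQ) TYPE(*CHAR) LEN(10)
DCL VAR(&SBMMSGQLIB) TYPE(*CHAR) LEN(10)
RTVJOBA TYPE(&TYPE) SBMMSGQ(&SBMMSGQ) SBMMSGQLIB(&SBMMSGQLIB)
IF COND(&TYPE *EQ '0') THEN(DO)      /* '0' = batch job */
  SNDUSRMSG MSG('Daily report is complete.') +
    TOMSGQ(&SBMMSGQLIB/&SBMMSGQ) MSGTYPE(*INFO)
ENDDO
ELSE CMD(DO)                         /* interactive job -- message the requester */
  SNDUSRMSG MSG('Daily report is complete.') +
    TOUSR(*REQUESTER) MSGTYPE(*INFO)
ENDDO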
Putting the Command to Work
Figure 29.3 shows how the SNDUSRMSG command might be implemented in a CL program. Notice that SNDUSRMSG is first used for an inquiry message. The message is sent to *REQUESTER to make sure the entire message text is displayed on the queue. The job determines whether or not the daily report has already been run for that day and, if it has, prompts the user to verify that the report should indeed be run again. The program explicitly checks for a reply of Y or N and takes appropriate action. Some people might argue that this is overcoding, because if you specified VALUES('Y' 'N') and you check for Y first, you can assume that N is the only other possibility. Although you can make assumptions, it is best if all the logical tests are explicit and obvious to the person who maintains the program.

Also notice that the SNDUSRMSG command is used again in Figure 29.3 to send informational messages that let the user know which action the program has completed (the completion of the task or the cancellation of the request to process the daily report, depending on the user reply). You will find that supplying informational program-to-user messages will endear you to your users and help you avoid headaches (e.g., multiple submissions of the same job because the user wasn't sure the first job submission worked).
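Because Figure 29.3 isn't reproduced here, the following is a minimal sketch of the inquiry-and-reply pattern it describes; the message text and the hypothetical DAILYRPT program are illustrative:

DCL VAR(&REPLY) TYPE(*CHAR) LEN(1)
SNDUSRMSG MSG('The daily report has already been run today. +
  Run it again? (Y or N)') VALUES('Y' 'N') DFT('N') +
  MSGTYPE(*INQ) TOUSR(*REQUESTER) MSGRPY(&REPLY)
IF COND(&REPLY *EQ 'Y') THEN(DO)
  SBMJOB CMD(CALL PGM(DAILYRPT)) JOB(DAILYRPT)
  SNDUSRMSG MSG('Daily report submitted. You will receive a +
    message when it is complete.') MSGTYPE(*INFO) TOUSR(*REQUESTER)
ENDDO
IF COND(&REPLY *EQ 'N') THEN( +
  SNDUSRMSG MSG('Request to run the daily report was canceled.') +
    MSGTYPE(*INFO) TOUSR(*REQUESTER))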
Knowing When To Speak As shown in the CL program example, using SNDUSRMSG to prompt the user for a simple reply makes good use of the command's capabilities. This function is somewhat different from prompting for data when you submit a job. I don't recommend using the SNDUSRMSG command to retrieve data for program execution (e.g., branch number, order number range, date range), because SNDUSRMSG offers minimal validity checking and is not as user-friendly as a DDS-coded display file prompt can be. Instead, you should create prompts for data as display files (using DDS) and process them with either a CL or HLL program. In a nutshell, the SNDUSRMSG command is best suited to sending an informational message to the user to relate useful information (e.g., 'Your job has been submitted. You will receive a message when your job is complete.') or to sending an inquiry message that lets the user choose further program action. The SNDUSRMSG command can teach your programs to talk, but the vocabulary associated with this command is specific to these two tasks. Now that you know how to train your program to talk to users, you can save the biscuits for the family pooch. The next challenge: teaching your programs to communicate with each other! You'll be able to master that after I explain how to use the SNDPGMMSG (Send Program Message) command, which lets you send messages from program to program with information such as detected program errors and requirements for continued processing. Who says you can't teach an old dog new tricks?
Chapter 30 - Just Between Us Programs

In Chapter 7, I explained that you can use the SNDMSG (Send Message), SNDBRKMSG (Send Break Message), or SNDNETMSG (Send Network Message) command to communicate with someone else on your AS/400. In Chapter 29, I showed how to use one of these commands or the SNDUSRMSG (Send User Message) command to have a program send a message to a user. But when you want to establish communications between programs, none of these commands will do the job; you need the SNDPGMMSG (Send Program Message) and RCVMSG (Receive Message) commands. Now I want to introduce the SNDPGMMSG command (see Chapter 31 for a discussion of the RCVMSG command).

Program messages are normally used for one of two reasons: to send error messages to the calling program (so it knows when a function has not been completed) or to communicate the status or successful completion of a process to the calling program. In this chapter, you'll learn how a job stores messages, how to have one program send a message to another, what types of messages a program can send, and what actions they can require a job to perform. But first, you need to understand the importance of job message queues.
Job Message Queues
All messages on the AS/400 must be sent to and received from a message queue. User-to-user and program-to-user messages are exchanged primarily via nonprogram message queues (i.e., either a workstation or a user message queue). OS/400 creates a nonprogram message queue when a workstation device or a user profile is created. You can also use the CRTMSGQ (Create Message Queue) command to create nonprogram message queues. For example, you might want to create a message queue for communication between programs that aren't part of the same job. Or you might want to create a central message queue to handle all print messages. Both users and programs can send messages to and receive them from nonprogram message queues.

Although programs can use nonprogram message queues to communicate with other programs, OS/400 provides a more convenient means of communication between programs in the same job. For each job on the system, OS/400 automatically creates a job message queue that consists of an external message queue (*EXT, through which a program communicates with the job's user) and a program message queue (for each program invocation in that job). Figure 30.1 illustrates a sample job message queue. OS/400 creates an external message queue when a job is initialized and deletes the queue when the job ends. OS/400 also creates a program message queue when a program is invoked and deletes it when the program ends (before removing the program from the job invocation stack). The job message queue becomes the basis for the job log produced when a job is completed. The job log includes all messages from the job message queue, as well as other essential job information. (For more information about job logs, see 'Understanding Job Logs'.)
The SNDPGMMSG Command
Figure 30.2 shows the parameters associated with the SNDPGMMSG command prior to OS/400 V2R3. Because of the introduction of ILE (Integrated Language Environment) support in OS/400 V2R3, the SNDPGMMSG command now includes some additional parameter elements that address specific ILE requirements (see the section 'ILE-Induced Changes'). You can use the SNDPGMMSG command in a CL program to send a program message to a nonprogram or program message queue. You can enter an impromptu message (up to 512 characters long) on the MSG parameter, or you can use the MSGID, MSGF, and MSGDTA parameters to send a predefined message. (To review predefined messages, see Chapter 29, 'Teaching Programs to Talk.')

The TOPGMQ parameter is unique to the SNDPGMMSG command and identifies the program queue to which a message will be sent. TOPGMQ consists of two values: relationship and program. The first value specifies the relationship between the target program and the sending program. For this value, you can specify *PRV (indicating the message is to go to the target program's caller or requester), *SAME (the message is to be sent to the target program itself), or *EXT (the message is to go to the target job's external message queue). The second value specifies the target program and can be either the name of a program within the sending program's job or the special value *, which tells OS/400 to use the sending program's name. The default value for the TOMSGQ parameter, *TOPGMQ, tells the system to refer to the TOPGMQ parameter to determine the destination of the message.

Let's look at the job message queues shown in Figure 30.1. Assuming that PGM_D is the active program, let's suppose PGM_D executes the following SNDPGMMSG command:
SNDPGMMSG MSG('Test message') TOMSGQ(*SYSOPR) MSGTYPE(*INFO)

PGM_D would send the message 'Test message' to the system operator's workstation message queue because the value *SYSOPR was specified for the TOMSGQ parameter. In the following SNDPGMMSG command,
SNDPGMMSG MSG('Test message') TOMSGQ(*TOPGMQ) +
  TOPGMQ(*SAME *) MSGTYPE(*INFO)

the parameter TOMSGQ(*TOPGMQ) tells OS/400 to use the TOPGMQ parameter to determine the message destination. Because TOPGMQ specifies *SAME for the relationship and * for the target program, the system sends the message 'Test message' to program message queue PGM_D. Now consider the command
SNDPGMMSG MSG('Test message') +
  TOPGMQ(*PRV *) MSGTYPE(*INFO)

In this case, the message is sent to program message queue PGM_C, because PGM_C is PGM_D's calling program (*PRV). (Notice that this time I chose not to specify the TOMSGQ parameter, but to let it default to *TOPGMQ.)

As on the SNDUSRMSG command, SNDPGMMSG's TOUSR parameter lets your program send a message to a particular user profile. You can specify *ALLACT for the TOUSR parameter to send a message to each active user's message queue. Although this value provides an easy way for a program to send a message to all active users, it does not guarantee that users immediately see the message. Each user message queue processes the message based on the DLVRY attribute specified for the message queue.

ILE-Induced Changes
In OS/400 V2R3, the SNDPGMMSG TOPGMQ parameter is expanded to include ILE (Integrated Language Environment) support. Figure 30.3 presents the new TOPGMQ parameter structure, which contains two elements. The first element, 'relationship,' works just the same as in V2R2. The second element is now called 'Call stack entry identifier' and is expanded to multiple fields that help identify the exact program message queue to receive the message. The first entry field is the 'Call stack entry' field and is similar to the V2R2 implementation. This field represents the name of the program or procedure message queue. If this entry is a procedure name, the name can be a maximum of 256 characters. The system will begin searching for this procedure in the most recently called program or procedure. If more qualifications are needed to correctly identify the procedure message queue, you can use the next two items, 'Module name' and 'Bound program name,' to specifically point to the exact procedure message queue. The module identifies the module into which the procedure was compiled. The bound program name identifies the program into which this procedure was bound.
Note that, when using the new SNDPGMMSG command, it is in this third item that you can enter the single value '*EXT' to tell OS/400 to send the message to the external message queue of the current job. Prior to V2R3, you entered this special value in the 'relationship' parameter element of the SNDPGMMSG command.
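As a hedged illustration of the expanded parameter (the procedure, module, and program names here are hypothetical, and the exact qualification you need depends on your call stack), a fully qualified call stack entry might look like this:

SNDPGMMSG MSGID(CPF9898) MSGF(QCPFMSG) +
  MSGDTA('Posting step complete') +
  TOPGMQ(*SAME (UPDPROC UPDMOD APPGM)) MSGTYPE(*INFO)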
Message Types The next parameter on the SNDPGMMSG command is MSGTYPE. You can use six types of messages in addition to the informational and inquiry message types that you can create with the SNDMSG and SNDUSRMSG commands. Figure 30.4 lists the message types and describes the limitations (message content and destination) and normal uses of each. Each message type has a distinct purpose and communicates specific kinds of information to other programs or to the job's user. You can send an informational message (*INFO) to any user, workstation, or program message queue. Because inquiry messages (*INQ) expect a reply, you can send an inquiry message only to a nonprogram message queue (i.e., a user or workstation message queue) or to the current job's external message queue.
A completion message (*COMP) is usually sent to inform the calling program that the requested work is complete. It can be sent to a program message queue or to the job's external message queue. Diagnostic messages (*DIAG) are sent to program or external message queues to describe errors detected during program execution. Typically, escape messages follow diagnostic messages, telling the calling program that diagnostic messages are present and that the requested function has failed.

You can send a request message (*RQS) to any message queue as a command request. You must use an impromptu message on the MSG parameter to send the request. (For more information about request messages and request message processing, see Chapter 8 of the Control Language Programmer's Guide, SC41-8077.)

An escape message (*ESCAPE) specifically identifies the error that caused the sending program to fail. An escape message can be sent only to a program message queue, and the escape message terminates the sending program, returning control to the calling program. MSGTYPE(*ESCAPE) cannot be specified if the MSG parameter is specified -- in other words, all escape messages must be predefined.

Status messages (*STATUS) describe the status of the work that the sending program performs. When a program sends a status message to an interactive job's external message queue, the message is displayed on the workstation screen, processing continues, and the sending program does not require a response. When a status message is sent to a program message queue, the message functions as a warning message. If the program receiving the message monitors for this message (using the MONMSG (Monitor Message) command, which I will discuss in Chapter 31), the message functions as an escape message by terminating the sending program. If the program receiving the status message does not monitor for that message, the system immediately returns control to the sending program.

OS/400 uses notify messages (*NOTIFY) to describe a condition in the sending program that requires a correction or a reply. If the notify message is sent to an interactive job's external message queue, the message acts like an inquiry message and waits for a reply, which the sending program can then receive. When a notify message is sent to a program message queue, the message functions as a warning. If the program receiving the notify message monitors for it, the message causes the sending program to end, and control returns to the receiving program. If the receiving program doesn't monitor for the message, or if the message is sent to a batch job's external message queue, the default reply for that message is sent, and control returns to the sending program. You can either define the default reply in the message description or specify it on the system reply list.
The Receiving End
The next parameter on the SNDPGMMSG command is RPYMSGQ, which lets you specify the program or nonprogram message queue to which the reply should go. The only valid values are *PGMQ, which specifies that the reply is to go to the sending program's message queue, or a qualified nonprogram message queue name. You can receive or remove a specific message by using a key value to identify that message. The KEYVAR parameter specifies the CL return variable containing the message key value of the message sent by the SNDPGMMSG command. To understand how key variables work, examine the following CL statement:
SNDPGMMSG MSG('Test message') TOPGMQ(*PRV *) +
  MSGTYPE(*INFO) KEYVAR(&MSGKEY)

The SNDPGMMSG command places the message on the calling program's message queue, and OS/400 assigns to that message a unique message identifier that is returned in the &MSGKEY variable. In the example
RMVMSG PGMQ(*PRV *) +
  MSGKEY(&MSGKEY) CLEAR(*BYKEY)

the RMVMSG (Remove Message) command uses the &MSGKEY value to remove the correct message from the queue. The return variable must be defined as TYPE(*CHAR) and LEN(4).
Program Message Uses
Now that you're acquainted with SNDPGMMSG parameters, let's look at a few examples that demonstrate how to use this command. The following is a sample diagnostic message:
SNDPGMMSG MSGID(CPF9898) +
  MSGF(QSYS/QCPFMSG) +
  MSGDTA('Output queue' |> +
    &outqlib |< +
    '/' || +
    &outq |> +
    'not found') +
  TOPGMQ(*PRV) MSGTYPE(*DIAG)

In this example, I have concatenated constants (e.g., output queue and /) and two variables (&outqlib and &outq) to construct the diagnostic message 'Output queue &outqlib/&outq not found.' The current program sends this message to the calling program, which can receive it from the program message queue after control returns to the calling program.

As I mentioned in my discussion of the MSGTYPE parameter, you must supply a valid message ID for the MSGID keyword when you send certain message types (to review which types require a message ID, see Figure 30.4). Because this means you cannot simply use the MSG parameter to construct text for these message types, OS/400 provides a special message ID, CPF9898, to handle this particular requirement. The message text for CPF9898 -- &1. -- means that substitution variable &1 will supply the message text, which you can construct using the MSGDTA parameter. Notice that the message text in the preceding example is constructed in the MSGDTA parameter. When the program sends the message, the MSGDTA text becomes the message through substitution into the &1 data variable. (For a more complete explanation of message variables, see Chapter 29, 'Teaching Programs to Talk;' the Programming: Control Language Reference, SC41-0030; and the Programming: Control Language Programmer's Guide, SC41-8077.) The next example is an escape message that might follow such a diagnostic message:
SNDPGMMSG  MSGID(CPF9898) +
             MSGF(QCPFMSG) +
             MSGDTA('Operations ended in error.' |> +
                    'See previously listed messages') +
             TOPGMQ(*PRV) +
             MSGTYPE(*ESCAPE)
OS/400 uses an escape message to terminate a program when it encounters an error. When a program sends an escape message, the sending program is immediately terminated, and control returns to the calling program. In the following example, the current program sends a completion message to the calling program to confirm the successful completion of a task.
SNDPGMMSG  MSGID(CPF9898) +
             MSGF(QCPFMSG) +
             MSGDTA('Copy of spooled files is complete') +
             TOPGMQ(*PRV) +
             MSGTYPE(*COMP)

The following sample status message goes to the job's external message queue and tells the interactive user what progress the job is making.
SNDPGMMSG  MSGID(CPF9898) +
             MSGF(QCPFMSG) +
             MSGDTA('Copy of spooled files in progress') +
             TOPGMQ(*EXT) +
             MSGTYPE(*STATUS)

When you send a status message to an interactive job's external message queue, OS/400 displays the message on the screen until another program message replaces it or until the message line on the display is cleared. Although you may be ready to send messages to another program, you have only half the picture. In Chapter 31, you will learn how programs receive and manipulate messages, and I'll give you some sample code that contains helpful messaging techniques.
Chapter 31 - Hello, Any Messages?

On the AS/400, sending and receiving program messages functions much like phone mail. Within a job, each program, as well as each job, has its own 'mailbox.' One program within the job can leave a message for another program or for the job; each program or job can 'listen' to messages in its mailbox; and programs can remove old messages from the mailbox. In Chapter 30, I explained how programs can send messages to other program message queues or to the job's external message queue. In this chapter, we look at the 'listening' side of the equation -- the RCVMSG (Receive Message) and MONMSG (Monitor Message) commands.
Receiving the Right Message
You can use the RCVMSG command in a CL program to receive a message from a message queue and copy the message contents and attributes into CL variables. Why would you want to do this? You may want to look for a particular message in a message queue to trigger an event on your system. Or you may want to look for messages that would normally require an operator reply and instead, have your program supply the reply. Or you may want to log specific messages received at a message queue. Whatever the reason, the place to begin is the RCVMSG command.

Figure 31.1 lists the RCVMSG command parameters. The first six parameters -- PGMQ (program queue), MSGQ (message queue), MSGTYPE (message type), MSGKEY (message key), WAIT (time to wait), and RMV (remove message) -- determine which message your program will receive from which message queue and how your program processes a message. Figure 31.2 illustrates a job message queue comprised of the job's external message queue and five program message queues. For our purposes, each message queue contains one message. Let's suppose that PGM_D is the active program and that it issues the following command:

RCVMSG
Because no specific parameter values are provided, OS/400 would use the following default values for the first six parameters:
RCVMSG     PGMQ(*SAME *) +
             MSGQ(*PGMQ) +
             MSGTYPE(*ANY) +
             MSGKEY(*NONE) +
             WAIT(0) +
             RMV(*YES)
The PGMQ parameter of the pre-V2R3 RCVMSG command, which consists of two values -- relationship and program -- lets you receive a message from any program queue active within the same job or from the job's external message queue (see Figure 31.3). The first value specifies the relationship between the program named in the second value and the receiving program. You can specify one of three values:
• *PRV to indicate the program is to receive the message from the message queue of the program that called the program named in the second value
• *SAME to indicate the program is to receive the message from the message queue of the named program
• *EXT to indicate the program is to receive the message from the job's external message queue
The value for the second element of the PGMQ parameter can be either the name of a program within the current program's job or the special value *, which tells OS/400 to use the current program's name. In the example above, because the PGMQ value is (*SAME *), PGM_D would receive a message from the PGM_D message queue. According to Figure 31.2, there is only one message to receive -- 'First message on PGM_D queue.'

In our example, the value MSGTYPE(*ANY), combined with the value MSGKEY(*NONE), instructs the program to receive the first message of any message type found on the queue, regardless of the key value (for more information about the MSGTYPE and MSGKEY parameters, see 'RCVMSG and the MSGTYPE and MSGKEY Parameters').

The value WAIT(0) in the example tells the program to wait 0 seconds for a message to arrive on the message queue. You can use the WAIT parameter to specify a length of time in seconds (0 to 9999) that RCVMSG will wait for the arrival of a message. (You can also specify *MAX, which means the program will wait indefinitely to receive a message.) If RCVMSG finds a message immediately, or before the number of seconds specified in the WAIT value elapses, RCVMSG receives the message. If RCVMSG finds no message on the queue during the WAIT period, it returns blanks or zeroed values for any return variables.

The last parameter value in the sample command, RMV(*YES), tells the program to delete the message from the queue after processing the command. You can use RMV(*NO) to instruct OS/400 to leave the message on the queue after RCVMSG receives the message. OS/400 then marks the message as an 'old' message on the queue. A program can receive an old message again only by using the specific message key value to receive the message or by using the value *FIRST, *LAST, *NEXT, or *PRV for the MSGTYPE parameter.

Note on the V2R3 RCVMSG Command Parameter Changes
As with the SNDPGMMSG command, the RCVMSG command parameters also changed in V2R3 to accommodate ILE. For more information about the new parameter items you see for the PGMQ parameter in Figure 31.3, see Chapter 30. Because this book is at an introductory level and many of you will not use ILE until ILE RPG and/or ILE COBOL is available, I will not discuss these parameter changes in detail.
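Before moving on, here is a minimal sketch of the old-message technique just described, assuming a nonprogram message queue named MYMSGQ and a key variable declared as TYPE(*CHAR) LEN(4):

DCL        VAR(&KEY) TYPE(*CHAR) LEN(4)

/* Receive the first message but leave it on the queue,     */
/* capturing its message key for later use                  */
RCVMSG     MSGQ(MYMSGQ) MSGTYPE(*ANY) RMV(*NO) KEYVAR(&KEY)

/* ... later, receive that same (now 'old') message by key  */
/* and remove it from the queue                             */
RCVMSG     MSGQ(MYMSGQ) MSGTYPE(*ANY) MSGKEY(&KEY) RMV(*YES)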
Receiving the Right Values
All the remaining RCVMSG parameters listed in Figure 31.1 provide CL return variables to hold copies of the actual message data or message attributes. You normally use the RCVMSG command to retrieve the actual message text or attributes to evaluate that message and then take appropriate actions. For example, the following command:
RCVMSG     MSGQ(MYMSGQ) +
             MSGTYPE(*COMP) +
             RMV(*NO) +
             MSG(&MSG) +
             MSGDTA(&MSGDTA) +
             MSGID(&MSGID) +
             SENDER(&SENDER)

retrieves the actual message text, the message data, the message identifier, and the message sender data into the return variables &MSG, &MSGDTA, &MSGID, and &SENDER, respectively. After RCVMSG is processed, the program can use these return variables. The program may receive messages looking for a particular message identifier. In this particular example, the current program might be looking for a particular completion message on a nonprogram message queue (MYMSGQ) to determine whether or not a job has completed before starting another job.

Notice the SENDER parameter used in this example. When you create a return variable for the SENDER parameter, the variable must be at least 80 characters long and will return the following information:

Positions 1 through 26 identify the sending job:
   1-10  = job name
   11-20 = user name
   21-26 = job number
Positions 27 through 42 identify the sending program:
   27-38 = program name
   39-42 = statement number
Positions 43 through 55 provide the date and time stamp of the message:
   43-49 = date (Cyymmdd)
   50-55 = time (hhmmss)
Positions 56 through 69 identify the receiving program (when the message is sent to a program message queue):
   56-65 = program name
   66-69 = statement number
Positions 70 through 80 are reserved for future use.

The SENDER return variable can be extremely helpful when processing messages. For example, during the execution of certain programs, it is helpful to know the name of the calling program without having to pass this information as a parameter or hardcode the program name into the current program. You can use the technique in Figure 31.4 to retrieve that information. The current program sends a message to the calling program. The current program then immediately uses RCVMSG to receive that message from the *PRV message queue. Positions 56 through 65 of the &SENDER return value contain the name of the program that received the original message; thus, you have the name of the calling program.

Another RCVMSG command parameter that you will use frequently is RTNTYPE (return message type). When you use RCVMSG to receive messages with MSGTYPE(*ANY), your program can use a return variable to capture and interrogate the message type value. For instance, in the following command:
RCVMSG     PGMQ(*SAME *) +
             MSGTYPE(*ANY) +
             MSG(&MSG) +
             RTNTYPE(&RTNTYPE)
the variable &RTNTYPE returns a code that provides the type of the message that RCVMSG is receiving. The possible codes that are returned are:

01  Completion
02  Diagnostic
04  Information
05  Inquiry
08  Request
10  Request with prompting
14  Notify
15  Escape
21  Reply, not checked for validity
22  Reply, checked for validity
23  Reply, message default used
24  Reply, system default used
25  Reply, from system reply list

As you can see, IBM did not choose to return the 'word' values (e.g., *ESCAPE, *DIAG, *NOTIFY) that are used with the MSGTYPE parameter on the SNDPGMMSG (Send Program Message) command but instead chose to use codes. However, when you write a CL program that must test the RTNTYPE return variable, you should avoid writing code that appears something like
IF (&rtntype = '02') DO ... ENDDO
ELSE IF (&rtntype = '15') DO ... ENDDO

Instead, to make your CL program easier to read and maintain, you should include a standard list of variables, such as the CL code listed in Figure 31.5, in the program. Then, you can change the code above to appear as
IF (&rtntype = &diag) DO ... ENDDO
ELSE IF (&rtntype = &escape) DO ... ENDDO
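Figure 31.5 is not reproduced here, but a minimal version of such a standard list might look like the following sketch; the variable names (&comp, &diag, and so on) are only examples -- use whatever mnemonics fit your shop:

DCL        VAR(&comp)    TYPE(*CHAR) LEN(2) VALUE('01')
DCL        VAR(&diag)    TYPE(*CHAR) LEN(2) VALUE('02')
DCL        VAR(&info)    TYPE(*CHAR) LEN(2) VALUE('04')
DCL        VAR(&inq)     TYPE(*CHAR) LEN(2) VALUE('05')
DCL        VAR(&rqs)     TYPE(*CHAR) LEN(2) VALUE('08')
DCL        VAR(&notify)  TYPE(*CHAR) LEN(2) VALUE('14')
DCL        VAR(&escape)  TYPE(*CHAR) LEN(2) VALUE('15')
DCL        VAR(&rtntype) TYPE(*CHAR) LEN(2)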
Monitoring for a Message
The MONMSG command is available only in a CL program. It provides a technique to trap error/exception conditions by monitoring for escape, notify, and status messages. It also provides a technique to direct the execution of the program based on the particular error conditions detected. Figure 31.6 lists the MONMSG command parameters.

You can use the MSGID parameter to name from one to 50 specific or generic message identifiers for which the command will monitor. A specific message identifier is a message ID that represents only one message, such as CPF9802, which is the message ID for the message 'Not authorized to object &2.' A generic message is a message ID that represents a group of messages, such as CPF9800, which includes all messages in the CPF9801 through CPF9899 range. Thus, the command
MONMSG     CPF9802 EXEC(GOTO ERROR)

monitors for the specific message CPF9802, whereas the command
MONMSG     CPF9800 EXEC(GOTO ERROR)

monitors for all escape, notify, and status messages in the CPF9801 through CPF9899 range. The second parameter on the MONMSG command is CMPDTA. You can use this optional parameter to specify comparison data to be checked against the message data of the message trapped by the MONMSG command. If the message data matches the comparison data (only the first 28 positions are compared), the MONMSG command is successful and the action specified by the EXEC parameter is taken. For example, the command
MONMSG     CPF9802 CMPDTA('MAINMENU') EXEC(DO)

monitors for the CPF9802 message identifier but executes the command found in the EXEC parameter only if the CMPDTA value 'MAINMENU' matches the first eight positions of the actual message data of the trapped CPF9802 message. The EXEC parameter lets you specify a CL command that is processed when the MONMSG traps a valid message. If no EXEC value is specified, the program simply continues with the next statement after the MONMSG command.

You can use the MONMSG command to monitor for messages that might occur during the execution of a single command. This form of MONMSG use is called a command-level message monitor. It is placed immediately after the CL command that might generate the message and might appear as
CHKOBJ     &OBJLIB/&OBJ &OBJTYPE
MONMSG     CPF9801 EXEC(GOTO NOTFOUND)
MONMSG     CPF9802 EXEC(GOTO NOTAUTH)

The MONMSG commands here monitor only for messages that might occur during the execution of the CHKOBJ command. You should use this implementation to anticipate error conditions in your programs. When a command-level MONMSG traps a message, you can then take the appropriate action in the program to continue or end processing. For example, you might code the following:
DLTF       QTEMP/WORKF
MONMSG     CPF2105

to monitor for the CPF2105 'File not found' message. In this example, if the CPF2105 error is found, the program simply continues processing as if no error occurred. That may be appropriate for some programs. Now, examine the following code:
CHKOBJ     QTEMP/WORK *FILE
MONMSG     CPF9801 EXEC(DO)
   CRTPF      FILE(QTEMP/WORK) RCDLEN(80)
ENDDO
CLRPFM     QTEMP/WORK
This code uses the MONMSG command to determine whether or not a particular file exists. If the file does not exist, the program uses the CRTPF (Create Physical File) command to create the file. The program then uses the CLRPFM (Clear Physical File Member) command to clear the existing file (if the program just created the new file, the member will already be empty).

In addition to using the command-level message monitor to plan for errors from specific commands, you can use another form of MONMSG to catch other errors that might occur. This form of MONMSG use is called a program-level message monitor, and you must position it immediately after the last declare statement and before any other CL commands. Figure 31.7 illustrates the placement of a program-level message monitor.
When you implement a program-level message monitor, I recommend that you use the message identifier CPF9999 instead of the widely used CPF0000. Using CPF9999 provides two important advantages over CPF0000. First, CPF9999 catches some messages that CPF0000 will not catch, because CPF9999 is the 'Function Check' error, which occurs only after some other program error, including errors triggered by CPFxxxx escape messages, MCHxxxx escape messages (machine errors), and escape messages from other message identifier groups; CPF0000 monitors only for actual CPFxxxx messages. Second, the CPF9999 'Function Check' message provides the actual failing statement number, which is not available from the CPFxxxx error message. Specifying the CPF9999 message ID as the program-level message monitor makes this additional information available.
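To show that placement, here is a minimal skeleton of a program-level message monitor; the parameter, variable, and label names are only examples, not the code in Figure 31.7:

PGM        PARM(&outq)
DCL        VAR(&outq)    TYPE(*CHAR) LEN(10)
DCL        VAR(&rtntype) TYPE(*CHAR) LEN(2)

/* Program-level monitor: immediately after the last DCL */
/* and before any other CL command                        */
MONMSG     MSGID(CPF9999) EXEC(GOTO CMDLBL(GLOBAL_ERR))

/* ... normal processing ... */
RETURN

GLOBAL_ERR:
SNDPGMMSG  MSGID(CPF9898) MSGF(QCPFMSG) +
             MSGDTA('Unexpected error; see previous messages') +
             TOPGMQ(*PRV) MSGTYPE(*ESCAPE)
ENDPGM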
Working with Examples
Figure 31.8 is a portion of a CL program that provides several examples of program message processing to help you tie together the information I've presented here and in Chapter 30. The program contains a standard list of DCL statements for defining variables used in normal message processing. You may want to place these variables in a source member that you can copy into CL programs as needed. (Remember, you have to use your editor to do the copying because CL has no /COPY equivalent.)

The program-level message monitor (A in Figure 31.8) is coded to handle any unexpected errors. If an unexpected error occurs during program execution, this MONMSG causes execution to continue at the label GLOBAL_ERR. At GLOBAL_ERR, the program first prevents an infinite loop by testing whether the program has already initiated the error-handling process. An infinite loop might occur when an unexpected error occurs during the error-handling process that the program-level MONMSG controls. The &msg_flag variable controls the overall message process. The program sets &msg_flag to its error value and continues at label CLEAN_UP. Your programs should have a mechanism for cleaning up any temporary objects, whether the program ends normally or abnormally with an error.

After processing the statements at CLEAN_UP, the program continues at label RSND_BGN. If &msg_flag contains the error value, the program has found an error condition. The program then sends each message on the current program message queue to the calling program (which in turn might continue to send the messages back up the program stack to the command processor or some other program that either ends abnormally or displays the messages to the user who requested the function). Notice that RCVMSG is used to receive each message from the program queue (D). The MONMSG CPF0000 is used here to catch any error that might occur during the RCVMSG command process and immediately go to the end of the program without attempting to receive any other messages. As the program receives each message, the return variable &RTNTYPE is tested, and only messages with a diagnostic ('02') or escape ('15') return type are processed. The SNDPGMMSG command sends each processed message to the calling program's message queue as a *DIAG message. Finally, at the RSND_END label, the program sends one generic escape message 'Operation ended in error ....' to the calling program. That escape message terminates the current program and returns control to the calling program.

The sample code in Figure 31.8 contains several examples of command-level message monitors. The first example is the MONMSG CPF9801 that follows the CHKOBJ command (B). If the CPF9801 'Object not found' message is trapped, the program first removes this message by using RCVMSG with RMV(*YES), and then sends a more meaningful message to the program queue using the SNDPGMMSG command. Notice that the value for the TOPGMQ parameter on the SNDPGMMSG command is *SAME, to direct the message to the current program queue. The program sets &msg_flag to the error value, and GOTO CLEAN_UP passes control of the program to the CLEAN_UP label, where cleanup and then error message processing occur.

Another example of the command-level message monitor is the MONMSG CPF0864 that appears immediately after the RCVF statement (C). If the MONMSG traps the CPF0864 'End of file' message, the program removes this message from the current program message queue using RCVMSG with RMV(*YES).
Because the 'End of file' message is expected and not an error, it is appropriate to remove that message from the program queue to prevent confusion in debugging any errors. Next the program uses the GOTO RCD_END statement to pass control of the program to the RCD_END label where the program sends a normal completion message to the calling program message queue.
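To make the resend loop concrete, here is a minimal sketch of the idea; the label names, the 512-byte &msg variable, and the use of CPF9898 to carry the resent text are illustrative choices, not the exact code in Figure 31.8:

DCL        VAR(&msg)     TYPE(*CHAR) LEN(512)  /* with the other DCLs */
DCL        VAR(&rtntype) TYPE(*CHAR) LEN(2)

RSND_BGN:
RCVMSG     PGMQ(*SAME *) MSGTYPE(*ANY) RMV(*YES) +
             MSG(&msg) RTNTYPE(&rtntype)
MONMSG     MSGID(CPF0000) EXEC(GOTO CMDLBL(RSND_END))
IF         COND(&msg *EQ ' ') THEN(GOTO CMDLBL(RSND_END))
IF         COND(&rtntype *EQ '02' *OR &rtntype *EQ '15') +
             THEN(SNDPGMMSG MSGID(CPF9898) MSGF(QCPFMSG) +
                    MSGDTA(&msg) TOPGMQ(*PRV) MSGTYPE(*DIAG))
GOTO       CMDLBL(RSND_BGN)

RSND_END:
SNDPGMMSG  MSGID(CPF9898) MSGF(QCPFMSG) +
             MSGDTA('Operation ended in error') +
             TOPGMQ(*PRV) MSGTYPE(*ESCAPE)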
What Else Can You Do with Messages?
Now that you understand the mechanics, you may want to know what else you can do with messages. Listed below are three possible uses for messages:
• Create a message break-handling program for your message queue. See Chapter 8.
• Create a request message processor (a command processor like QCMD). See Chapter 8 of the Control Language Programmer's Guide (SC41-8077).
• Use the SNDPGMMSG and RCVMSG commands to send and receive data strings between programs. For instance, you might send a string of order data to a message queue where the order print program uses RCVMSG to receive and print the order data. This avoids having to submit a job or call a program. The order print program simply waits for messages to arrive on the queue. This functions much like data queue processing, but is simplified because you can display message information (you cannot display a data queue without writing a special program to perform that task). A minimal sketch of this approach follows the list.
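Here is that sketch; the ORDLIB library, the ORDMSGQ message queue, and the 80-byte order string are made-up names used only for illustration:

/* In the program that captures an order:                     */
DCL        VAR(&orddata) TYPE(*CHAR) LEN(80)
SNDPGMMSG  MSG(&orddata) TOMSGQ(ORDLIB/ORDMSGQ) MSGTYPE(*INFO)

/* In the order print program, which simply waits on the queue: */
DCL        VAR(&orddata) TYPE(*CHAR) LEN(80)
NXT_ORD:
RCVMSG     MSGQ(ORDLIB/ORDMSGQ) MSGTYPE(*ANY) WAIT(*MAX) +
             RMV(*YES) MSG(&orddata)
/* ... format and print the order described in &orddata ...   */
GOTO       CMDLBL(NXT_ORD)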
These are only examples of how you might use messages to perform tasks on the system. With the mechanics under your belt, it's time for you to explore how you can use messages to enhance your own applications.
Chapter 32 - OS/400 Commands

OS/400 commands -- friend or foe? That's the big question for anyone new to the AS/400. It is certainly understandable to look at the IBM-supplied system commands and wonder just how many there are, why so many are needed, and how you are ever going to remember them all. You might easily decide that the procedures you've already memorized on another system are certainly better and fail to see why IBM would think the OS/400 commands could possibly be helpful! Well, after recently trying to navigate my way around an HP3000, I can empathize with you. I kept thinking, 'Why didn't Hewlett-Packard think to provide the WRKSPLF (Work with Spooled Files) command, or why not say DSPFD (Display File Description) instead of this 'LISTF ,2' stuff?' Anyway, after stumbling around for days, calling everyone I could think of, and scouring the books for information, I finally managed to memorize a few of the needed commands and complete the 'short' job I had set out to do. So if you get frustrated when you find the procedures you are accustomed to have been twisted into something that seems foreign, remember that being uncomfortable doesn't mean you're incompetent; it only makes you feel that way!

With that said, and realizing that many of you need to master the AS/400 sooner or later, let me introduce OS/400 commands and give you a few helpful tips and suggestions for customizing system commands to make them seem more friendly.
Commands: The Heart of the System
The command is at the heart of the AS/400 operating system. Whether you are working with an output queue, creating an object, displaying messages, or creating a subsystem, you are using an OS/400 command. When you select an option from an OS/400 menu or from a list panel display, you are executing a command.

Let me give you a couple of examples. Figure 32.1 shows the AS/400 User Tasks menu. Next to each menu option I have added the command the system executes when you select that option. You can simply key in the command to achieve the same results. In Figure 32.2 you see the familiar Work with Output Queue display. Below the screen format, I have listed the available options and the command the system executes for each. For instance, if you enter a '6' next to a spooled file entry on the list, the system releases that spooled file. If you are familiar with the system commands, you can type in RLSSPLF (Release Spooled File), prompt it, and fill in the appropriate parameters to accomplish the same thing. Obviously, typing in the RLSSPLF command is much more time consuming than entering a '6' in the appropriate blank. However, this example is not typical of all OS/400 commands. In many cases, it's quicker and easier to key in the command than it is to use the menus. To know which technique to use, it's helpful to have a firm grasp of how commands are organized and how they can be used, and to know which commands are worth learning.

Before I continue with this chapter, let me say something about how system commands are organized and named. OS/400 commands consist basically of a verb and a noun (e.g., CRTOUTQ -- Create Output Queue), and more than two-thirds of the existing commands are constructed using just 10 verbs (CRT, CHG, DLT, ADD, RMV, DSP, WRK, CPY, STR, and END). This is good news if you are worried about remembering all the commands. I recommend that you first familiarize yourself with the various objects that can exist on the system. Once you understand most of those objects, you can quickly figure out what verbs can operate upon each object type. For example, you can't delete a job, but you can cancel one. For help identifying and using OS/400 commands, try using one or more of the following resources:
• On any command line, press F4 (Prompt). OS/400 will present you with a menu of the major command groups. You can choose menu options to find and select the command you need.
• On any command line, type 'GO CMDxxx', where you fill in the xxx with either a verb or an object (e.g., GO CMDPTF for PTF-related commands, GO CMDWRK for 'work with' commands). OS/400 will present you with a list of those commands.
• Type a command on the command line and press F1 (Help). OS/400 offers online help for all CL commands.
• Execute the SLTCMD (Select Command) command to find commands using a generic name (e.g., WRK*, STR*).
• If you are on V2R3 (or beyond), you can enter a generic name directly on the command line (e.g., WRK*, STR*, CRTDEV*) and press Enter. OS/400 will present you with a list of commands that begin with the same letters you specify before the asterisk.
• If you are on V2R3 (or beyond), use InfoSeeker. You access InfoSeeker by pressing F11 on any Help Display panel, by typing STRSCHIDX (Start Search Index) on a command line and pressing Enter, or by selecting option 20 from the Information Assistant menu (to get this menu, type 'GO INFO'). InfoSeeker helps you find further command help and related information.
• Refer to the IBM reference guide Programming: Reference Summary (SX41-0028).
Tips for Entering Commands
By putting a little time and effort into learning a few phrases in this new language, you'll be comfortable and productive with day-to-day tasks on the AS/400. Once you've become acquainted with some of the most frequently used commands, it's often easier to key them in on the system command line than it is to go through the menus. Following these tips for entering commands will help ensure correct syntax and get you up to speed:
• Be sure to enter values for required parameters.
• Specify values for positional parameters unless you want to use the default values.
• When entering parameter values positionally (i.e., without keywords), key them in the same order as they appear in the command syntax diagram. If you exceed the number of allowed positional parameters, an error message is issued. The number of allowed positional parameters is designated in the syntax diagram by a 'P' in a box. If the symbol does not appear in the syntax diagram, you can code all parameters positionally.
Keeping the above guidelines in mind, let's practice a few commands. First, consider the DSPOBJD (Display Object Description) command. Type 'DSPOBJD' and press F4 to prompt the command. In the resulting screen (Figure 32.3), the line next to 'Object' will be in bold, indicating that Object is a required parameter. Now press F11, and you will see the screen shown in Figure 32.4. Notice that the keywords appear beside each field (e.g., OBJ for object name and OBJTYPE for object type). The OBJ keyword requires a qualified value, which means that you must supply the name of the library in which the object is found. The default value *LIBL indicates that if you don't enter a specific library name, the system will search for the object in the job's library list. Notice that the keyword OUTPUT is not in bold, showing that it is an optional parameter. The default value for OUTPUT is an asterisk (*), which instructs the system to display the results of the command on the screen. Now you can key in the values QGPL and QSYS for the object name and the library name, respectively, and the value *LIB for the OBJTYPE parameter. Then press Enter, and the screen displays the object description for library QGPL, which exists in library QSYS. Now, using only the command line, type in the same command as follows:
DSPOBJD QSYS/QGPL *LIB

or

DSPOBJD QGPL *LIB

Either command meets the syntax requirements. Keywords aren't needed because all the parameters used are positional, and the order of the values is correct. Suppose you type
DSPOBJD QGPL *LIB *FULL

Will this work? Sure. In this example, you have entered, in the correct order, values for the two required parameters and the value (*FULL) for the optional, positional parameter (DETAIL). What if you want to direct the output to the printer, and you type
DSPOBJD QGPL *LIB *FULL *PRINT
Will this work? No! You have to use the keyword (OUTPUT) in addition to the value (*PRINT), because OUTPUT is beyond the positional coding limit. Let's say you skip *FULL and just enter
DSPOBJD QGPL *LIB OUTPUT(*PRINT)

Because you haven't specified a value for the positional parameter DETAIL, you would get the description specified by the default value (*BASIC). Most of the time you will probably prompt commands, but learning how to enter a few frequently used commands with minimal keystrokes can save you time. For example, which would be faster: to prompt WRKOUTQ just to enter the output queue name, or to enter 'WRKOUTQ outq_name'? Should you prompt the WRKJOBQ (Work with Job Queue) command just to enter the job queue name, or should you simply enter 'WRKJOBQ jobq_name'? In both cases you will save yourself a step (or more) if you simply enter the command.
Customizing Commands
Taking our discussion one step further, let's explore how you might create friendlier versions of certain useful system commands. Why would you want to? Well, some (translation: 'many') IBM-supplied commands are long, requiring multiple keystrokes. You might want to shorten the commands you use most often. For example, you could shorten the command WRKSBMJOB (Work with Submitted Jobs) to WSJ or JOBS. The command WRKOUTQ could become WO, and the command DSPMSG (Display Messages) could become MSG.

How can you accomplish this without renaming the actual IBM commands or having to create your own command to execute the real system command? Easy! Just use the CRTDUPOBJ (Create Duplicate Object) command. Before trying this command, take a few minutes to look over the CRTDUPOBJ command description in Volume 3 of IBM's Programming: Control Language Reference manual. Then create a library to hold all your new customized versions of IBM-supplied commands. Don't place the new commands in library QSYS or any other system-supplied library: New releases of OS/400 replace these libraries, and your modified commands will be lost. You should name your new library USRCMDLIB, or CMDLIB, or anything that describes the purpose of the library, and you should include the new library in the library list of those who will use your modified commands.

When the destination library is ready, use the CRTDUPOBJ command to copy the commands you want to customize into the new library. CRTDUPOBJ lets you duplicate individual objects; or you can duplicate objects generically (i.e., by an initial character string common to a group of objects, followed by an asterisk), all objects in a particular library, or multiple object types. To create the shorter WO version of the WRKOUTQ command, enter
CRTDUPOBJ WRKOUTQ QSYS *CMD USRCMDLIB WO

In this example, WRKOUTQ, QSYS, and *CMD are values for required parameters that specify the object, the originating library, and the object type, respectively. If you prompt for the parameters, enter
CRTDUPOBJ  OBJ(WRKOUTQ) FROMLIB(QSYS) OBJTYPE(*CMD) +
             TOLIB(USRCMDLIB) NEWOBJ(WO)

Either of these commands places the new command (WO) into library USRCMDLIB. When you duplicate an object, all the object's attributes are duplicated. This means that the command processing program for WO is the same as for WRKOUTQ, so the new command functions just the same as the IBM-supplied command.
Modifying Default Values
The final touch for tailoring commands is to modify certain parameter default values when you know that you will normally use different standard values for those parameters. You may want to change default values for the CRTxxx (Create) commands especially. For example, for every physical file created, you may want to specify the SIZE parameter as (1000 1000 999). Or you may want the SHARE parameter to contain the value *YES rather than the IBM-supplied default *NO. You can change these defaults by using one of two methods.

The first method requires that everyone who uses a command remember to specify the desired values instead of the defaults for certain parameters. Although you can place such requirements in a data processing handbook or a standards guide, this method relies on your staff to either remember the substitute values or look up the values each time they need to key them in.
The other method for modifying the default values of IBM-supplied commands is to use the CHGCMDDFT (Change Command Default) command. Take a few minutes to read the command description in IBM's Programming: Control Language Reference, Volume 2. CHGCMDDFT simply modifies the default values that will be used when the command is processed. For instance, to make the changes mentioned above for the CRTPF (Create Physical File) command, you would type
CHGCMDDFT CMD(CRTPF) NEWDFT('SIZE(1000 1000 999) SHARE(*YES)')

You could use CHGCMDDFT to enhance the WO command you created earlier. Suppose that you usually use the WO command to work with your own output queue. Why not change the default value of *ALL for the OUTQ parameter to be the name of your own output queue? Then, rather than having to type
WO your_outq

you can simply type 'WO' (of course, this personalized command should only exist in your library list). If you want to work with another output queue, you can still type in the queue name to override the default value. See? Commands can be fun!

To modify system command parameter defaults using the CHGCMDDFT command, you should duplicate the command into a different library. Then change the command defaults and, if you have retained the CL command names rather than renaming the commands, list the library before QSYS on the system library list.

When you use the CHGCMDDFT or CRTDUPOBJ command to customize CL commands, you should create a CL source program that performs those changes. Then, whenever a new release of OS/400 is installed, you should run the CL program, thus duplicating or modifying the new version of the system commands. The system commands on the new release might have new parameters, different command processing programs, or new default values.

Using CHGCMDDFT is an effective way to control standards. However, you should be cautious when using this command because it affects all uses of the changed command (e.g., a vendor-supplied software package might be affected by a change you make). You might want to use a good documentation package to find all uses of specific commands and to evaluate the risk of changing certain default values.

You can modify your user profile attribute USROPT to include the value *CLKWD if you want the CL keywords to be displayed automatically when you prompt commands (rather than having to press F11 to see them). To modify this user profile attribute, someone with the proper authority should enter the CHGUSRPRF (Change User Profile) command as follows:
CHGUSRPRF user_profile USROPT(*CLKWD)

For more information about the USROPT keyword, see IBM's Programming: Control Language Reference. The AS/400 provides a function-rich command structure that lets you maneuver through the operations of your system. I don't happen to believe that everyone should be able to enter every command without prompting or using any keywords. But I am convinced that having a good working knowledge of the available OS/400 commands not only will help you save time, but also will make you more productive on the system.
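As mentioned above, you should keep these customizations in a CL program that you rerun after each release. A minimal sketch of such a program, assuming the USRCMDLIB library and the WO and CRTPF changes discussed in this chapter (MYOUTQ is a placeholder for your own output queue name), might look like this:

PGM
/* Rerun after each OS/400 release to recreate customized commands */

/* Start clean in the customization library                        */
DLTCMD     CMD(USRCMDLIB/WO)
MONMSG     MSGID(CPF2105)          /* ignore 'not found'           */
DLTCMD     CMD(USRCMDLIB/CRTPF)
MONMSG     MSGID(CPF2105)

/* WO: a short form of WRKOUTQ that defaults to my own output queue */
CRTDUPOBJ  OBJ(WRKOUTQ) FROMLIB(QSYS) OBJTYPE(*CMD) +
             TOLIB(USRCMDLIB) NEWOBJ(WO)
CHGCMDDFT  CMD(USRCMDLIB/WO) NEWDFT('OUTQ(MYOUTQ)')

/* A CRTPF copy with shop-standard defaults                         */
CRTDUPOBJ  OBJ(CRTPF) FROMLIB(QSYS) OBJTYPE(*CMD) +
             TOLIB(USRCMDLIB)
CHGCMDDFT  CMD(USRCMDLIB/CRTPF) +
             NEWDFT('SIZE(1000 1000 999) SHARE(*YES)')
ENDPGM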
Chapter 33 - OS/400 Data Areas

I'd like to have a dollar for every time I've put something in a special place only to forget where I put it. Someone living in my old house in Florida will someday find that special outlet adaptor I never used. He will probably also find the casters I removed from the baby bed, a stash of new golf balls, and several little metal doohickeys I removed from the back of my PS/2 when I installed feature cards. I hope he gets some use out of them! This tendency to misplace things also finds its way into the world of computer automation; but fortunately for those of us who need a place to keep some essential chunk of information, OS/400 provides a simple solution. If you ever write applications that use data such as the next available order number, the current job step in progress for a long-running job, a software version identification number, or a serial number, you should know about OS/400 data areas.
A data area is an AS/400 object you can create to store information of a limited size. A data area exists independently of programs and files and therefore can be created and deleted independently of any other objects on the system. Data areas typically are used to store some incremental number. For instance, a payroll application might use a data area to store the next available check number. Each time the application writes or records a check, it can get the next check number from the data area and then update the data area to reflect the use of that check number. Another use for data areas is to emulate the S/36's IF-ACTIVE feature, which lets you check a program's execution status. You can create and name a data area for each program whose status you need to know. For example, if PRP101 is a program to be checked, you can create a data area named PRP101 in a user library. Then you can modify program PRP101 to acquire a shared update (*SHRUPD) lock on the data area using the ALCOBJ (Allocate Object) command. An application needing to check the execution status of program PRP101 can simply try to acquire an exclusive (*EXCL) lock on the data area. If the allocation attempt fails, it means the data area object is currently allocated by program PRP101, indicating that the program is active.
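Here is a minimal sketch of that technique; APPLIB is a made-up library name, and CPF1002 is the escape message ALCOBJ sends when it cannot obtain the requested lock:

/* In program PRP101: hold a shared update lock while running   */
ALCOBJ     OBJ((APPLIB/PRP101 *DTAARA *SHRUPD))
/* ... program processing ...                                    */
DLCOBJ     OBJ((APPLIB/PRP101 *DTAARA *SHRUPD))

/* In the checking program: try for an exclusive lock            */
ALCOBJ     OBJ((APPLIB/PRP101 *DTAARA *EXCL)) WAIT(0)
MONMSG     MSGID(CPF1002) EXEC(DO)
   /* Allocation failed, so PRP101 is active                     */
   RETURN
ENDDO
/* Allocation succeeded, so PRP101 is not active; release it     */
DLCOBJ     OBJ((APPLIB/PRP101 *DTAARA *EXCL))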
Creating a Data Area
The best way to acquaint you with data areas is to walk you through the process of creating one. To create a data area object named MYDTAARA in library QGPL and initialize it with the value 'ABCDEFGHIJ', you would type the following command:
CRTDTAARA  DTAARA(QGPL/MYDTAARA) TYPE(*CHAR) +
             LEN(10) VALUE('ABCDEFGHIJ') +
             TEXT('Data Area to store ABCDEFGHIJ')
This data area object can contain 10 characters of data and can be referenced by any user or program authorized to use library QGPL. Notice that you can specifically identify the data area with the TEXT parameter on the CRTDTAARA (Create Data Area) command. Just as you may forget where you've placed a special object in your home, you can easily forget what you've created a data area for. Wise use of the TEXT parameter can help you effectively document data area objects. Besides CRTDTAARA, other OS/400 commands associated with data areas are the DSPDTAARA (Display Data Area), CHGDTAARA (Change Data Area), RTVDTAARA (Retrieve Data Area), and DLTDTAARA (Delete Data Area) commands. Suppose you wanted to display the data area we just created. You would execute the following command:
DSPDTAARA QGPL/MYDTAARA

The system would then display the description and contents of the data area on your workstation. You can use the CHGDTAARA command interactively or from within a program. The DTAARA parameter on this command lets you either replace the contents of a data area or change only a portion (substring) of the data area. For example, the command
CHGDTAARA DTAARA(QGPL/MYDTAARA) VALUE('123')

would replace the entire contents of the data area. If the value you supply is shorter than the data area, the value is padded on the right with blanks. Therefore, the new value of the data area would be '123' followed by seven blanks. However, the command
CHGDTAARA DTAARA(QGPL/MYDTAARA (1 3)) VALUE('123')

replaces only the first three positions of the data area with the value '123'. Thus, the original value of MYDTAARA would be modified to '123DEFGHIJ'. The RTVDTAARA command provides a simple way for CL programs to retrieve the data area value. Because the command provides return variables, it can be executed only from within a CL program. Here again, the DTAARA parameter lets you reference all or only a portion of the data area. The CL program statement
RTVDTAARA DTAARA(QGPL/MYDTAARA) RTNVAR(&ALL)
would retrieve the entire contents of MYDTAARA ('ABCDEFGHIJ') into return variable &ALL, starting in the left-most position of the return variable. Now consider the following CL program statement:
RTVDTAARA DTAARA(QGPL/MYDTAARA (4 2)) RTNVAR(&JUST2)

This RTVDTAARA command retrieves from MYDTAARA a substring of two characters, starting with position 4. The variable &JUST2 would return the value 'DE'. One performance tip to remember when using the RTVDTAARA command is that it is more efficient to retrieve the entire data area into a single CL variable and use several CHGVAR (Change Variable) commands to pull substrings from that variable than it is to execute several RTVDTAARA commands to retrieve multiple substrings. Every RTVDTAARA command must access the data area -- a time-consuming operation. To delete this data area from your system, type the command
DLTDTAARA QGPL/MYDTAARA

You can also use high-level language programs to retrieve and modify data area values. Figure 33.1 provides sample RPG/400 code to retrieve information from a data area named INPUT. In this example, the program implicitly reads and locks the data area when the program is initialized and then implicitly writes and unlocks it when the program ends. During program execution, the INPUT data area data structure defines fields internally to the program. RPG's DEFN, IN, and OUT opcodes provide a method for explicitly retrieving and updating a data area and for explicitly controlling the lock status of a data area object. For more information about how to use these opcodes with data areas, see IBM's AS/400 Languages: RPG/400 Reference (SC09-1349) and AS/400 Languages: RPG/400 User's Guide (SC09-1348).
Local Data Areas
A local data area (LDA) is a special kind of data area automatically created for each job on the system. The LDA is a character-type data area 1,024 characters long and initialized with blanks. As long as the job is running, the LDA is associated with that job. When one job submits another job, the system creates an LDA for the submitted job and copies the contents of the submitting job's LDA into it. Thus, you can pass a data string from a given job to every job it submits.

Unlike the data area object discussed earlier, the LDA is dependent on a particular job; it cannot be displayed or modified by any other job on the system. You cannot manually create or delete an LDA, nor does it have an associated library (not even QTEMP). The LDA is simply maintained as part of the job's process access group (a group of internal objects that is associated with a job and that holds essential information about that job).

The RTVDTAARA statements in Figure 33.2 retrieve two substrings from the LDA, putting the first into variable &FIELD1 and the second into &FIELD2. The CHGDTAARA command replaces positions 101 through 150 of the LDA with the value of variable &NEWVAL. You can perform any number of retrievals and changes on the LDA; however, keep in mind that only one copy of the LDA exists for each job.
The LDA is often used to store static information that must be available to many different programs executed within a job. For example, when an employee signs on to a workstation, an initial program might retrieve information relating to that employee (e.g., branch number or employee number) and put it into the LDA. Any subsequent programs the job invokes that require this information can simply retrieve it from the LDA rather than performing additional file I/O.
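Applying the earlier performance tip to the LDA, a program can make one trip to the data area and then substring the copy in storage with CHGVAR; the field positions below (1-7 for branch, 8-17 for employee number) are made up for illustration:

DCL        VAR(&lda)    TYPE(*CHAR) LEN(1024)
DCL        VAR(&branch) TYPE(*CHAR) LEN(7)
DCL        VAR(&empno)  TYPE(*CHAR) LEN(10)

/* One RTVDTAARA, then pull the fields from the copy */
RTVDTAARA  DTAARA(*LDA) RTNVAR(&lda)
CHGVAR     VAR(&branch) VALUE(%SST(&lda 1 7))
CHGVAR     VAR(&empno)  VALUE(%SST(&lda 8 10))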
Group Data Areas
If an interactive job becomes a group job (via the CHGGRPA (Change Group Attributes) command), the system creates an additional data area called the group data area (GDA). Like the LDA, the GDA is a blank-initialized, character-type data area, but it is only 512 characters long. The GDA is accessible from any job in the group and is deleted when the last job in the group has ended or when the job is no longer part of a group job. You cannot create or delete a GDA (although you can modify it), and it has no associated library. Another unique limitation is that you cannot reference a substring of the GDA on the DTAARA parameter. However, you can retrieve the entire GDA and then use the CHGVAR command to reference particular portions of the data. For jobs that run as group jobs, the GDA simply provides additional temporary storage (beyond the LDA that exists for each job).

These are the basics you need to begin exploring OS/400 data areas. As you begin to think of reasons to use data areas on your system, you may want to look at data areas that already exist there. Check your libraries to see whether your software provider has supplied any data areas. If so, how are they used? You may discover that you have used OS/400 data areas all along.
Chapter 1 - Before the Power Is On
With the AS/400, IBM has tried to graft the S/36's ease of use onto the S/38's integrated database and productivity features. In many respects, Big Blue has succeeded -- the AS/400 provides extensive help text, highly developed menu functions, on-line education, and electronic customer support. But the machine's friendliness stops short of 'plug and go' installation, especially for shops converting from a non-IBM system or migrating from an IBM system other than a S/38. Even S/38 migration is not completely plug and go, although the AS/400 has inherited many S/38 characteristics: a complex structure of system objects used to support security, work environment, performance tuning, backup, recovery, and other functions. These objects let you configure a finely tuned and productive machine, but they do not readily lend themselves to education on the fly. As a result, the AS/400 requires thought, foresight, planning, and preparation for a successful installation. Believe me, I know. I have experienced the AS/400 planning and installation process as both a customer and a vendor, and I'd like to share what I've learned by suggesting a step-by-step approach for planning, installing, and configuring your AS/400. First I discuss the steps you can and should take before your system arrives. In subsequent chapters, I take you through your first session on the machine, address how to establish your work environment, and show you how to customize your system. I have outlined the installation process in the AS/400 setup checklist in Figure 1.1. You might want to use this checklist as the cover page to a notebook you could put together to keep track of your AS/400 installation.
Before You Install Your System The first step in implementing anything complex -- especially a computer system -- is thorough planning. A successful AS/400 installation begins long before your system rolls in the door. The first section of the setup checklist in Figure 1.1 lists tasks you should complete before you install your system -- preferably even before it arrives. These items may seem like a great deal of work before you ever see your system, but this work will save you and your company time and trouble when you finally begin installing, configuring, securing, and using your new system. Let's look at each item in this section of the checklist individually.
Develop an Installation Plan A good installation plan serves as a road map. It guides you and your staff and keeps you focused on the work ahead. Figure 1.2 shows a sample installation plan that lists installation details and lets you track the schedule and identify the responsible person for each task. [Although the installation plan includes important considerations about the physical installation -- e.g., electrical, space, and cooling requirements -- these requirements are well documented in IBM manuals, and I do not discuss them here. For details about physical installation, refer to the AS/400 Physical Planning Guide -- Version 2 (GA41-0001), the AS/400 Physical Planning Guide and Reference -- Version 2 (GA41-9571), the AS/400 Migrating from S/36 Planning Guide -- Version 2 (GC41-9623), or the AS/400 Migrating from S/38 Planning Guide -- Version 2 (GC41-9624).] An overall installation plan helps you put the necessary steps for a successful AS/400 setup into writing and tailor them to your organization's specific needs. The plan also helps you identify and involve the right people and gives you a schedule to work with. Identifying and involving the right people is critical to creating an atmosphere that assures a smooth transition to your new system. Management must commit itself to the installation process and must understand and agree to the project's priority. Other pending MIS projects should be examined and assigned a priority based on staff availability in light of the AS/400's installation schedule. Management and the departments you serve must understand and agree on these scheduling changes. On the MIS side, your staff must commit to learning about the AS/400 in preparation for installation and migration. Your staff must also commit itself to completing all assigned tasks, many of which (e.g., time spent verifying the migration or conversion of programs and data) may require extra hours. The time frame outlined in your installation plan will probably change as the delivery date nears. But even as the schedule changes and is refined, it provides a frame of reference for the total time you need to install, configure, and migrate to the new system. You must also answer an important question as part of your plan: Can you run the old and new systems parallel for a period of time? If you can run parallel, you can greatly reduce the time needed for the installation process.
Running parallel also reduces the risk factor involved in your migration and conversion process.
Plan Education I can hear you now: 'We don't have time for classes! We're too busy to commit our people to any education.' I'm sure this will be your response to the suggestion that you plan for training now. I'm also sure that those statements are absolutely true. But education is a vital part of a successful AS/400 installation. Realistically, then, you must schedule key personnel for education. What key groups of personnel need training? The end users, for one. Their education should focus on PC Support and on the AS/400 Office products they will work with. But you and your operations and programming staff will also need some training. If you move to an AS/400 from a S/36, you will see the familiar sign-on screen, the friendly menu format, and the extensive help text associated with the S/36. But the AS/400 also has some unfamiliar territory: You must learn new security concepts, how to modify your work environment to improve performance, and how to control printer output. Training in relational database design and implementation will improve the applications you migrate or write, and learning something about the AS/400's fast-path commands will help you feel more at home and productive in the native environment. If you are moving from a S/38, you will recognize the fast-path commands (with some minor changes), the command entry display (once you find it), the relational database, the work environment objects, and the security concepts. However, you will need additional knowledge about how to implement new security options, the 'current library,' the Programming Development Manager (PDM), available menus, and other new concepts. You'll also have to learn about the new program products and operations on the AS/400. If all this sounds complicated, then you're getting the point: You need system-specific education for a smooth transition to the AS/400. Where can you get such education? Begin by asking your vendor for educational offerings. If you buy from a third party, training support will vary from vendor to vendor. You can also arrange to attend courses at an IBM Guided Learning Center. Another place to get AS/400 education is on the AS/400 itself. To supplement vendor training support, each AS/400 comes with Tutorial System Support (TSS) installed. This on-line tutorial help provides self-paced lessons for programmers, clerical workers, executives, systems analysts, and others (Figure 1.3 lists the various audience paths available by using TSS lessons). You may be able to begin TSS training before your AS/400 arrives by working through your hardware or software vendor.
You can also find a variety of educational offerings in seminars, automated courses, study guides, one-on-one training sessions, and classroom training courses. The key to successful education is matching education to the user. Matching ensures productive use of the time employees spend away from their daily duties.
Prepare Users for Visual and Operational Differences It would be nice if you could assure all your users that they will not find anything different when they sign on to the AS/400 for the first time, but you probably can't. You would be wise to give some thought to the visual and operational differences and explain them to your users in advance. For example, S/38 users used to a single-level sign-on (just entering a password) may be surprised (and unhappy) to find they must sign on to the AS/400 with both a user profile name and a password. Consequently, you could find yourself waist deep in phone calls and complaints on your first day of operation unless you tell your users what to expect. A communication describing the user profile and password and their roles on the system would go a long way toward smoothing the transition for such users. You may encounter another potential problem in the panel interface differences between your former system and the SAA-compliant AS/400. Command key differences, print-control screen differences, help screen differences, and others may cause some initial concern and confusion among your users. The Operation Assistant (OA) interface provided for end-user interaction with the AS/400 is friendly, but telling your users about these
differences before installation will prepare them, head off many complaints, and protect your position.
Develop a Migration Plan The next step in pre-installation planning is to develop a migration or conversion plan. Converted applications almost always make better use of system resources than migrated applications, but you can successfully operate in the AS/400's S/36 or S/38 environment for as long as you need to. Although your goal ultimately should be to 'go native,' most shops choose to migrate first. Migration eases the transition considerably, particularly for S/36 shops, and allows conversion to proceed at a more leisurely pace. For this reason, I recommend most shops migrate first and then convert as time permits. Even if you buy software written for the AS/400 and use your software vendor's expertise to migrate the data, you must still migrate user profiles, your system configuration, and any custom software or utilities on your system. A migration plan organizes this process and, as you carry out the plan, helps you become familiar with the AS/400 and the new features it offers. Figure 1.4 shows a sample migration plan. The key to a successful S/36 migration is knowing what will migrate and what won't. The S/36 Migration Aid software identifies objects that will not migrate to the S/36 environment and keeps audit trails of what has and has not been migrated. The sooner you know what will not migrate, the sooner you can start developing AS/400 solutions for those objects.
One common problem in S/36 migration is expecting all applications to run better in the AS/400's S/36 environment. Unfortunately, the AS/400 cannot cure bad software. Badly written software that runs poorly on your S/36 will still run poorly in the AS/400's S/36 environment. In fact, the AS/400 may accentuate poor performance. IBM has made a commitment to maintain the S/36 environment on the AS/400. Nevertheless, you can -- and should -- gradually convert from the S/36 environment as you find applications that conversion will improve. Successful S/38 migration also begins with the Migration Aid software. As with the S/36, the Migration Aid identifies the objects and products that will not migrate and helps keep track of the migration process. The key to understanding the S/38 migration process is knowing that all S/38 objects are 'object compatible' with the AS/400. Migration is thus a relatively simple process in which you save the objects from the S/38 and restore them onto the AS/400. When a S/38 object is restored onto the AS/400, the system attaches the suffix '38' to the object attribute, as shown in Figure 1.5. The AS/400 uses the suffix to identify the proper environment for the object. For example, when the AS/400 executes a CL program (e.g., SAMPLECL in Figure 1.5), the system uses S/38 environment commands in response to the suffix on the object attribute. If you were to remove the suffix and attempt to recompile the CL program, you would get errors on any S/38 commands that do not exist in the same form on the AS/400 (e.g., DSPOUTQ, DSPACTJOB).
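For instance, after restoring a migrated object you can display its object description to confirm the environment tag the system attached (this is an illustration only -- the library and object names are hypothetical):

  DSPOBJD OBJ(MYLIB/SAMPLECL) OBJTYPE(*PGM)

Following the suffix convention described above, the attribute shown for a migrated S/38 CL program would be CLP38 rather than CLP, telling you the object will run in the S/38 environment.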
Whether you migrate from a S/36 or a S/38, running parallel for a while greatly reduces the risk involved. You can migrate your applications in stages, testing and verifying each program as you go. If you can't run parallel, you must complete your migration process on the first try, a much trickier proposition. In this case, I recommend that you seek an experienced outside source for assistance in the migration and conversion process. If you decide to begin conversion immediately, be sure you know what you're getting into. Depending on your current system, conversion could involve one week to six months of work for your staff. With S/36 conversions, for example, your staff must work through a complete education plan before even beginning to tackle the conversion process. Again, a good outside consultant, used in a way that provides educational benefits for your staff, could be an immense help. True, you could simply pay a consultant to convert your database and programs for you, but that approach doesn't educate your staff about the new system.
Also, let me offer you a warning: If you plan to replace your existing system and completely remove it before installing your AS/400, you are absolutely asking for trouble! If you find yourself forced into such a scenario, get help. Hire a consultant who has successfully migrated systems to the AS/400.
Develop a Security Plan With your migration plan in writing, you are ready to tackle a security plan. Imagine for a moment that you have your AS/400 fully installed and smoothly running -- and that you haven't altered the security settings yet. In this case, the system is at security level 10, and anyone who turns on a workstation, receives a sign-on screen, and presses Enter has full access to all system objects and functions. Obviously, you need a security plan, and you need to implement it as soon as possible after your system is installed.
System Security Level Figure 1.6 shows a basic security plan. The first and most significant step in planning your security is deciding what level you need. The AS/400 provides five levels of security: 10, 20, 30, 40, and 50.

Security Level 10 -- As I implied, system security level 10 might more aptly be called security level zero, or 'physical security only': At level 10, the physical security measures you take, such as locking the door to the computer room, are all you have. If a user has access to a workstation with a sign-on screen, (s)he can simply press Enter, and the system will create a user profile for the session and allow the user to proceed. The profile the system creates in this case has *ALLOBJ (all object) special authority, which is sufficient for the user to modify or delete any object on the system. Although user profiles are not required at level 10, you could still create and assign them and ask each user to type in her assigned user profile at sign-on. You could then tailor the user profiles to have the appropriate special authorities -- you could even grant or revoke authorities to objects. But there is no way to enforce the use of those assigned profiles, and thus no way to enforce restricted special authorities or actual resource security. Level 10 provides no security.

Security Level 20 -- Security level 20 adds password security. At level 20, a user must have a user profile and a valid password to gain access to the system. Level 20 institutes minimum security by requiring that users know a user profile and password, thus deterring unauthorized access. However, as with level 10, the default special authorities for each user class include *ALLOBJ special authority, and therefore resource security is, by default, bypassed. Although you can tailor the user profile, the inherent weakness of level 20 remains: the fact that, by default, resource security is not implemented. The *ALLOBJ special authority assigned by default to every user profile bypasses any form of resource security. To implement resource security at level 20, you must remove the *ALLOBJ special authority from any profiles that do not absolutely require it (only the security officer and security administrator need *ALLOBJ special authority). You must then remember to remove this special authority every time you create a new user profile. This method of systematically removing *ALLOBJ authority is pointless since, by default, level 30 security does this for you. On a production system, you must be able to explicitly authorize or deny user authority to specific objects. Therefore, level 20 security is inadequate in the initial configuration, requiring you to make significant changes to mimic what level 30 provides automatically.

Security Level 30 -- Level 30 by default supports resource security (users do not receive *ALLOBJ authority by default). Resource security allows objects to be accessed only by users who have authority to them. The authority to work with, create, modify, or delete objects must be either specifically granted or received as a result of existing default public authority. All production systems should be set at security level 30 or higher (levels 40 or 50). Production machines require resource security to effectively safeguard corporate data, programs, and other production objects and to prevent unintentional data loss or modification.

Security Level 40 -- The need for level 40 security centers on a security gap on the S/38 that the AS/400 inherited.
This gap allowed languages that could manipulate Machine Interface (MI) objects (i.e., MI itself, C/400, and Pascal) to access objects to which the user was not authorized by stealing an authorized pointer from an unsecured object. In other words, an MI program could access an unsecured object and use its authorized pointer as a passkey to an unauthorized object.
To level 30's resource security, level 40 adds operating system integrity security. System integrity security strengthens level 30 security in four ways:
• By providing program states and object domains
• By preventing use of restricted MI instructions
• By validating job initiation authority
• By preventing restoration of invalid or modified programs
You might wonder what level 40 buys you. In truth, most systems today could run at level 30 and face no significant problems. But in the future, as you purchase more third-party software and as more systems participate in networks, operating-system integrity will become more important. Level 40 provides the security necessary to prevent a vendor or individual from creating or restoring programs on your system that might threaten system integrity at the MI level, thus ensuring an additional level of confidence when you work with products created by outside sources. Yet, if the need arises to create a program that infringes upon system integrity security, you can explicitly change the security level to 30. The advantage of using level 40 is that you control that decision. During installation, set your system level to 30, and monitor the security audit journal for violations that level 40 guards against. If you find none, go to level 40 security. If violations are logged, review them to determine their source. Some packaged software (e.g., some system tools) will require access to restricted MI instructions and will fail. In these cases, you can ask the vendor when his product will be compatible with level 40 and decide what to do based on his response. Security Level 50 -- IBM introduced security level 50 in OS/400 Version 2, Release 3. The primary purpose of security level 50 is to enable OS/400 to comply with the Department of Defense C2 security requirements. IBM added specific features into OS/400 to comply with DOD C2 security as well as to further enhance the system integrity security introduced in level 40. In addition to all the security features/functions found at all prior OS/400 security levels (e.g., 30, 40), level 50 adds
• Restricting user domain object types (*USRSPC, *USRIDX, and *USRQ)
• Validating parameters
• Restricting message handling between user and system state programs
• Preventing modification of internal control blocks
• Making the QTEMP library a temporary object
If your shop requires DOD C2 compliance, you can get more information concerning security level 50 and other OS/400 security features (e.g., auditing capabilities) in two new AS/400 publications: Guide to Enabling C2 Security (SC41-0103) and A Complete Guide to AS/400 Security and Auditing: Including C2, Cryptography, Communications, and PC Implementation (GG24-4200).
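Looking ahead to the level 40 monitoring suggested above: once the security audit journal QAUDJRN is active on your system, you can review logged violations with a command such as the following (a sketch only -- the entry types you select and the auditing setup itself depend on your release and audit configuration):

  DSPJRN JRN(QSYS/QAUDJRN) ENTTYP(AF)

Authority failure (AF) entries cover the kinds of violations -- restricted instructions, domain violations, invalid programs -- that level 40 is designed to stop, so a clean report is a good sign you can move up from level 30.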
Password Format Rules Your next task in security planning is to determine rules for passwords. In other words, what format restrictions should you have for passwords? Without format requirements, you are likely to end up with passwords such as 'joe,' 'sue,' 'xxx,' and '12345.' But are these passwords secret? Will they safeguard your system? You can strengthen your security plan's foundation by instituting some rules that encourage users to create passwords that are secret, hard to guess, and regularly changed. However, you also must remember that sometimes 'hard to guess' translates into 'hard to remember' -- and then users simply write down their passwords so they won't forget. The following password rules will help establish a good starting point for controlling password formats: Rule 1 is that passwords must be a minimum of seven characters and a maximum of 10 characters. This rule deters users who lack the energy to think past three characters when conjuring up that secret, unguessable password. Rule 2 builds on Rule 1: Passwords must have at least one digit. This rule makes passwords become more than just a familiar name, word, or place.
Rule 3 can deter those who think they can remember only one or two characters and thus make their password something like 'XXXXX6' or 'X1X.' Rule 3 simply states that passwords cannot use the same character more than once. On a similar note, Rule 4 states that passwords cannot use adjacent digits. This prevents users from creating passwords such as '1111,' '1234,' or even using their social security number. With these four rules in place, you can feel confident that only sound passwords will be used on the system. But you can enhance your password security still further with one additional rule. Rule 5 says that passwords should be assigned a time frame for expiration. You can set this time frame to allow a password to remain effective for from one to 366 days, thus ensuring that users change their passwords regularly. Passwords are a part of user profiles, which you will create to define the users to the system after the AS/400 is installed. Laying the groundwork for user profiles is the next concern of your security plan.
Identifying System Users Before you install the new machine, you should identify the people who will use the system. Obtain each user's full name and department and the basic applications the user will require on the system. Some users, such as operators and programmers, will need to control jobs and execute save/restore functions on the system. Other users, such as accounts receivable personnel, only need to manipulate spooled files and execute applications from menus. Once you identify the users and determine which system functions they need access to, you can assign each user to one of the following classes (the authorities discussed with each class are granted when the system security level is set to 30 or 40):
• SECOFR (security officer) grants the user all authorities: all object, security administrator, save system, job control, service, spool control, and audit authorities (each of these special authorities is explained below).
• SECADM (security administrator) grants security administrator, save system, and job control authorities.
• PGMR (programmer) grants save system and job control authorities.
• SYSOPR (system operator) grants save system and job control authorities.
• USER (user) grants no special authorities.
Your MIS staff members normally will have either the SYSOPR or the PGMR user class. Your end users should all reside in the USER user class. The USER class carries no special authorities, which is appropriate for most users. They can work within their own job and work with their own spooled files. One rule of thumb when assigning classes is that you should never set up your system such that a user performs regular work with SECOFR authority. The AS/400 has a special QSECOFR profile; when the security officer must perform a duty, the person responsible should sign on using the QSECOFR profile to perform the needed task. Using security officer authority to perform normal work is like playing with a loaded gun. As you plan user profiles, you also need to consider the special authorities you want to grant to the user profiles and user classes. Special authorities allow users to perform certain system functions; without special authority, the functions are unavailable to the user. The AS/400 provides seven special authorities:
• ALLOBJ (all object authority) lets users access any system object. This authority alone, however, does not allow the users to create, modify, or delete user profiles.
• SECADM (security administrator authority) allows users to create and change user profiles.
• SAVSYS (save system authority) lets users save, restore, and free storage for all objects.
• JOBCTL (job control authority) allows users to change, display, hold, release, cancel, and clear all jobs on the system. The user can also control spooled files in output queues where OPRCTL(*YES) is specified.
• SERVICE (service authority) means users can perform functions from the System Service Tools, a group of executable programs used for various service functions (e.g., line traces and run diagnostics).
• SPLCTL (spool control authority) allows users to delete, display, hold, and release their own spooled files and spooled files owned by other users.
• AUDIT (audit authority) allows users to start and stop security auditing as well as control security auditing characteristics.
When you use security level 30, 40, or 50, the AS/400 automatically assigns special authorities based on user class as shown in Figure 1.7. When you create user profiles, you can use the special authorities parameter to
override the authorities granted by the user class, allowing you to tailor authorities as appropriate for specific users. For instance, a user profile might have a user class of SYSOPR, which grants the user special authorities for job control and save/restore functions. By entering only *SAVSYS for the special authorities parameter, you can instruct the system to grant only this special authority, ignoring the normal defaults for the *SYSOPR user class.
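To make this concrete, here is a hypothetical example (the profile name, password, and descriptive text are illustrative only, not values from this book) that creates an operator profile whose special authorities are limited to save/restore work:

  CRTUSRPRF USRPRF(GAOPR01) PASSWORD(OPR4SAVE) PWDEXP(*YES) +
            USRCLS(*SYSOPR) SPCAUT(*SAVSYS) +
            TEXT('Georgia operator - save/restore only')

Because SPCAUT is specified explicitly, the profile receives only *SAVSYS rather than the full set of defaults for the *SYSOPR class, and PWDEXP(*YES) forces the user to choose a new password at first sign-on.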
You must also plan specific authorities, which control the objects a user can work with (e.g., job descriptions, data files, programs, menus). Going through the remainder of the pre-installation security planning process -- checking your applications for security provisions -- will also help you decide which users need which specific authorities and help you finish laying the groundwork for user profiles on your new system.
Develop a Backup and Recovery Plan Although it may seem premature to plan for backup and recovery on your as-yet-undelivered AS/400, I assure you it is not. First, you should not assume that the backup and recovery plan for your existing system will still work with the AS/400. Second, the AS/400 has a variety of powerful backup and recovery options that you may not be familiar with. Some of these options are difficult and time-consuming to install if you wait until you've migrated your applications and data to the new system. Checksum is a case in point. The AS/400's single-level storage minimizes disk head contention and eliminates the need to track and manage the Volume Table of Contents. But single-level storage can also create recovery problems. Because single-level storage fragments objects randomly among all the system's disks, the loss of any one disk can result in damage to every object on the system. After complete backup, checksum is your best protection against this weakness in single-level storage. With checksum, you configure disk units (i.e., one disk actuator arm and its associated storage) into checksum sets, with no more than one unit from each disk device in a single checksum set. Then, if a disk fails, the system can compare the data in the failed unit in each checksum set with the data in the other (intact) units and can reconstruct the data on the failed unit. This description of how checksum works is (obviously) not complete, but should give you an idea of how valuable it can be. Because checksum installation on an installed system requires that you save your entire system and reload everything, don't pass up this opportunity to consider installing checksum when you install your new AS/400. An auxiliary storage pool (ASP) is another of those features that are much easier to implement when you install your system rather than later. An ASP is a group of disk units. Your AS/400 will be delivered with only the system ASP (ASP 1) installed. Figure 1.8a shows auxiliary storage configured only as the system ASP. The system ASP holds all system programs and most user data. You can customize your disk storage configuration by partitioning some auxiliary storage into one or more user ASPs (Figure 1.8b). Like checksum, user ASPs provide protection from disk failures, because you can segregate specific user data or backup data onto user ASPs. Thus, if you lose a disk unit in the system ASP, your restore time is reduced to a minimum time of restoring the operating system and the objects in the system ASP, while data residing in the user ASPs will be available without any restore. If you lose a disk unit in a user ASP, your restore time will include only the time it takes to restore the user data in that user ASP. You can use user ASPs for journaling and to hold save files. Journaling automatically creates a separate copy of file changes as they occur, thus letting you recover every change made to journaled files up to the instant of the failure. If you have on-line data entry -- such as orders taken over the phone -- that lacks backup files for the data entered, you should strongly consider journaling as a part of your backup and recovery plan. Although you do not need user ASPs to implement journaling, they do make recovery (which is difficult under the best of circumstances) easier. If you do not journal to a user ASP, you should save your journal receivers (i.e., the objects that hold all file changes recorded by journaling) to media regularly and frequently. 
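If journaling becomes part of your plan, the basic setup is only a few commands. The following is a minimal sketch (the library, journal, receiver, and file names are hypothetical; in practice you would place the receiver library in a user ASP and size the receivers to suit your transaction volume):

  CRTJRNRCV JRNRCV(JRNLIB/ORDRCV001)
  CRTJRN JRN(JRNLIB/ORDJRN) JRNRCV(JRNLIB/ORDRCV001)
  STRJRNPF FILE(ORDLIB/ORDERS) JRN(JRNLIB/ORDJRN) IMAGES(*BOTH)

CRTJRNRCV creates the journal receiver that holds the recorded changes, CRTJRN creates the journal and attaches the receiver, and STRJRNPF starts journaling the physical file. IMAGES(*BOTH) records both before and after images, which gives you the most recovery flexibility.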
User ASPs also protect save files from disk failures. A save file is a special type of physical file to which you can target your backup operation. Save files have two major advantages over backing up to media. The first is that you can back up unattended, since you don't have to change diskette magazines or tapes. The second advantage is
that backing up to disk is much faster than backing up to tape or diskette. The major (and probably obvious) disadvantage is that save files require additional disk storage. Nevertheless, save files are worthwhile in many cases; and when they are, isolating save files in a user ASP provides that extra measure of protection. User ASPs are required as part of the disk-mirroring feature the AS/400 offers. User data is placed on various user ASPs. Each ASP uses a set of mirrored disk drives. The mirroring protects the user data in the ASP, and the fact that ASPs are used protects the larger system from a complete loss due to any one single disk failure. While disk mirroring has a substantial initial investment for the additional disk drives, the protection offered is significant for companies that rely on providing 24-hour service. One last option to consider is RAID protection. IBM and other AS/400 DASD vendors currently offer either RAID 1 or RAID 5 disk protection. RAID 1 is similar to OS/400's system mirroring option, except that the disk subsystem handles all the necessary read/write operations instead of OS/400. You duplicate each disk drive to protect against a single disk drive failure. If one disk fails, the system still has access to the mirrored disk. RAID 5 protection is similar to OS/400's checksum; however, the disk subsystem handles all the read/write operations. RAID 5 stores parity information on additional disk space and uses that parity information to reconstruct the data in the event that one of the disks in a RAID 5 set fails. The point of this discussion is that you need to plan ahead and decide which type of disk protection you will employ so you can be ready to implement your plan when the system is first delivered, when the disk drives are not yet full of information you would have to save before making any storage configuration changes. For more information about save/restore, and an introduction to a working save/restore plan, see Chapters 15 and 16, 'AS/400 Save and Restore Basics,' and 'Backup Without Downtime.'
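To round out the save-file discussion above, here is a minimal sketch of backing up a library to a save file and moving the data to tape later (the library, save file, and device names are hypothetical):

  CRTSAVF FILE(BACKUP/NIGHTLY)
  SAVLIB LIB(ORDLIB) DEV(*SAVF) SAVF(BACKUP/NIGHTLY)
  SAVSAVFDTA SAVF(BACKUP/NIGHTLY) DEV(TAP01)

The SAVLIB step runs quickly and unattended because it writes to disk; the SAVSAVFDTA step can then be scheduled for whenever a tape drive and an operator are available.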
Establish Naming Conventions Naming conventions vary greatly from one MIS department to the next. The conventions you choose should result in names that are syntactically correct and consistent, yet easily remembered and understood by end users and programmers alike. A good standard does more than simply help you name files, programs, and other objects; it also helps you efficiently locate and identify objects and devices on your system. If your naming conventions are in place before you install your system, they will help installation and migration go smoothly and quickly. The naming convention you choose should be meaningful and should allow for growth of your enterprise. Let's look at an example:
• You have three locations for order entry: Orlando, Florida; Atlanta, Georgia; and Montgomery, Alabama.
• You have five order entry clerks at each location.
• You have one printer at each location.
You could let the AS/400 automatically configure all your workstations and printers, which would result in names such as W1, W2, and P1, or DSP02, DSP03, and PRT02. But, by configuring the devices yourself and assigning meaningful names, your devices can have names such as GADSP01, GADSP02, ALDSP01, ALDSP02, FLPRT01. Because these names contain a two-letter abbreviation for the state, they are more meaningful and useful than the names the AS/400 would assign automatically. But this convention would pose a problem if you had two offices in the same state. So instead, to allow for growth of the enterprise, you might incorporate the branch office number into the names, resulting in names such as C01DSP01 to identify a control unit for branch office 01, display station 01. Such a naming convention would help your operations personnel locate and control devices in multiple locations. You will also need a standard for naming user profiles. There are those who believe that a user profile name should be as similar as possible to the name of the person to whom it belongs (e.g., WMADDEN, MJONES, MARYM, JOHNZ). This method can work well when there are only a few end users. Under such a strategy, only one profile is needed per user, which simplifies design and administration of the security system and lets operations personnel identify employees by their user profiles. The drawback to this method is that it results in profiles that are easily guessed and thus provides a door for unauthorized sign-ons, leaving only the password to guess. A friend of mine was bragging about his new LAN one evening and wanted to show me how it worked, but he did not know his user profile or password. We were sitting at his secretary's desk, so I asked him what her name was. Within one minute we were signed on using her first name as the profile and her initials as the password. Good guess? No. Bad profile and password.
Another opinion holds that user-profile names should be completely meaningless (e.g., SYS23431, 2LR50M3ZT4) and should be maintained in some type of user information file. The use of meaningless names makes profiles difficult to guess and does not link the name to a department or location that might change as the employee moves in the company. The user information file documents security-related information such as the individual to whom the profile belongs and the department in which the user works. This method is the most secure; but it often meets with resistance from the users, who find their profiles difficult to remember.

A third approach is to use a naming standard that aids system administration. Under this strategy, each user profile name identifies the user's location and perhaps function in order to sharpen the ability to audit the system security plan. For instance, if you monitor the history log or use the security journal for auditing, this approach enables you to quickly identify users and the jobs they're doing. To implement this strategy, your naming convention should incorporate the user's location or department and a unique identifier for the user's name. For example, if John Smith works as one of the order entry clerks at the Georgia location, you might assign one of the following profiles:

• GAJSMITH -- In this profile, the first two letters represent the location (GA for Georgia), and the remainder consists of the first letter of the user's first name followed by as much of the last name as will fit in the remaining seven characters.
• GAOEJES -- This example is similar, but the branch is followed by the department (OE) and the user's initials. This method provides more departmental information while reducing the unique name identifier to initials.
• B12OEJES -- This example is identical to the second, but the Georgia branch is numbered (B12).

When profile names provide this type of information, programs in your system that supply user menus or functions can resolve them at run time based on location, department, or group. As a result, both your security plan and your initial program drivers can be dynamic, flexible, and easily maintained. In addition, auditing is more effective because you can easily spot departmental trends; and user profile organization and maintenance are enhanced by having a naming standard to follow. However, such profiles are less secure than meaningless profiles because they are easy to guess once someone understands the naming scheme. This leaves only the password to guess, thus rendering the system less secure.

As you will discover in Chapter 3, I also believe in maintaining user profiles in a user information file. Such a file makes it easy to maintain up-to-date user-profile information such as initial menus, initial values for programs (e.g., initial branch number, department number), and the user's full name formatted for use in outgoing invoices or order confirmations. When a user transfers to another location or moves to a new department, you should deactivate the old profile and assign a new one to maintain a security history. A user information file helps you keep what amounts to a user profile audit trail. Furthermore, your applications can retrieve information from the file and use it to establish the work environment, library list, and initial menu for a user. A final consideration in choosing a naming convention for user profiles is whether or not your users will have access to multiple systems.
If they will, you can simplify Display Station Passthrough functions by using the same name for each user's profile on all systems. To do this, you must consider any limitations the other systems in the network place on user profile names and apply those limitations in creating the user profiles for your system. For instance, another platform in your network may limit the number of characters allowed for user profile names. To allow your user profiles to be valid across the network, you will have to abide by that limitation. You need to determine what user profile naming convention will work best for your environment. For the most secure environment, a 'meaningless' profile name is best. User profiles that consist of the end user's name are the least secure and are often used in small shops where everyone knows (and is on good terms with) everyone else. A convention that incorporates the user's location and function is a compromise between security and system management and implementation that suits many shops.
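As an illustration of how such a convention can pay off at run time, the following CL fragment (a sketch only, assuming the GAJSMITH-style names described above) pulls the two-character location prefix out of the current user profile so an initial program could route the user to a location-specific menu:

  PGM
    DCL VAR(&USER) TYPE(*CHAR) LEN(10)
    DCL VAR(&LOC) TYPE(*CHAR) LEN(2)
    RTVJOBA USER(&USER)
    CHGVAR VAR(&LOC) VALUE(%SST(&USER 1 2))
    /* &LOC now holds the location prefix, e.g., GA            */
    /* ...use it to select a menu, library list, or printer... */
  ENDPGM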
What Next? Okay, you have made it this far. You have planned and prepared, and then planned some more. You have planned education, scheduled classes, and started to prepare your users for the differences they will encounter with the AS/400. You have planned for migration, security, and backup and recovery, and you know how you will name the objects on your system. You feel ready to begin the installation. But after your vendor helps you install
the hardware, how do you go about implementing all those carefully made plans? In the next chapter, I'll go into what happens once the power is on.
Chapter 2: That Important First Session

Your shiny new AS/400 is out of the box. The microcode is all there, the operating system is installed, and all your program products are loaded on the system. The vendor has finished installation and is packing up the tools. Up to this point (if you have done your homework) you have committed, planned, and planned some more for your new AS/400. Planning is a significant portion of the total installation process, but it isn't nearly as much fun as that moment when you turn on the power, watch the little lights start blinking, hear the low hum of the disk drives, and bring the magical screen to life -- giving you access to your new toy (I mean business machine). That's the moment you live for as a midrange MIS professional!
Signing On for the First Time Once the power is on, you might think your previous S/3X experience would let you just feel your way around the system menus and functions. But that's not the case. My experiences with AS/400 installation have taught me that you should take some immediate steps (Figure 2.1) to put your carefully made plans into action.

User ASPs and checksum configuration. First, examine your backup and recovery plan to see whether you have decided to use Auxiliary Storage Pools (ASPs) or checksum. If so, grab your vendor installation team before they leave because the preloaded software on your system is about to be destroyed! As I discussed in Chapter 1, the AS/400 has a S/38-like single-level storage architecture that spreads objects (i.e., programs and data) in auxiliary storage equally over the disk to increase performance during retrieval. When you create a user ASP, you remove a segment of a disk or one or more disk units from the single-level storage area. Therefore, you lose a portion of your objects, and the system must re-initialize the system ASP and start from scratch. This same situation exists when you reserve storage on your disk unit for checksum operations. Thus, after creating a user ASP or checksum, you must reload the microcode, the operating system, and each program product. Work with the installation team to create user ASP(s), to implement checksum, and to reload everything afterward. (Make sure you have all the software product tapes you need. With the advent of preloaded software, the software media may not have been shipped to you with the system.) Reconfiguring your storage and reloading your software may be a pain, but it is much easier during installation than when your machine is working in its production environment. And if ASPs or checksum are part of your backup plans, you can begin breathing easier knowing you are already prepared for disasters.

Verify software installation and PTF levels. Next, verify that the program products you ordered are installed on the system. The vendor should assist you in loading these program products if they are not already preloaded on the system. (If you don't have your program products and manuals, make sure you follow up on their delivery.) Then determine whether or not the latest available cumulative Program Temporary Fix (PTF) release is installed on your system. The vendor should know which is the latest PTF level available and can help you determine whether or not that level exists on your system. If you don't have the latest release, order the tape now so you can apply the PTFs before you move your AS/400 into the production phase of installation. For more information about PTFs and installing PTFs, see Chapter 6, 'Introduction to PTFs.'

Signing on. With ASPs and checksum configured and the latest PTFs installed, you are now ready to sign on to your AS/400. Use the user profile QSECOFR to sign on, and enter QSECOFR -- the preset password for that profile. But don't start playing with your new system yet! You have some important chores to do during your first session.

Set the security level. Your AS/400 is shipped with the security level set at 10. With level 10, anyone who turns on a workstation, receives a sign-on screen, and presses the Enter key has full access to all system objects and functions. Obviously, you need to reset the security level as the first step in implementing your security plan.
In the previous chapter, I strongly suggested that you operate your machine at a minimum of security level 30. Don't wait until you move into a production environment; by then, switching levels will be too much trouble for you and a pain for your users. Change the security level now by keying in the command
CHGSYSVAL SYSVAL(QSECURITY) VALUE(XX)

where XX is either 30, 40, or 50. The change will take effect when you IPL the system. Because you must perform IPLs to implement a number of settings on your AS/400, you might as well practice one now to put level 30 into action. Make sure the key is in the AUTO position and then power down the system with an automatic restart by keying in
PWRDWNSYS OPTION(*IMMED) RESTART(*YES)

When the system is re-IPLed, you can feel confident your AS/400 will operate in a secure environment.

Enforce password format rules. The next important step in implementing your security plan is setting the system values that control password generation. You should already have decided on the password rules, and changing the system values to enforce those rules is relatively easy. In Chapter 1, I recommended five rules to guarantee the use of secure passwords on your system. To implement Rule 1 (passwords must be a minimum of seven characters and a maximum of 10 characters), enter the commands
CHGSYSVAL SYSVAL(QPWDMINLEN) VALUE(7)
CHGSYSVAL SYSVAL(QPWDMAXLEN) VALUE(10)

The system value QPWDMINLEN (Password Minimum Length) sets the minimum length of passwords used on the system, and system value QPWDMAXLEN (Password Maximum Length) specifies the maximum length of passwords used on the system. To implement Rule 2 (passwords must have at least one digit), enter
CHGSYSVAL SYSVAL(QPWDRQDDGT) VALUE('1')

Setting the system value QPWDRQDDGT to 1 requires all passwords to include at least one digit. For Rule 3 (passwords cannot use the same character more than once), enter
CHGSYSVAL SYSVAL(QPWDLMTREP) VALUE('1')

Setting the system value QPWDLMTREP (Limit Character Repetition) to 1 prevents the same character from being used more than once in a password. For Rule 4 (passwords cannot use adjacent digits), enter
CHGSYSVAL SYSVAL(QPWDLMTAJC) VALUE('1')

This prevents users from creating passwords with adjacent numbers, such as their social security number or phone number. Implement Rule 5 (passwords should be assigned a time frame for expiration) by entering the command
CHGSYSVAL SYSVAL(QPWDEXPITV) VALUE(60)

System value QPWDEXPITV (Password Expiration Interval) specifies the length of time in days that a user's password remains valid before the system instructs the user to change passwords. The value can range from 1 to 366. The password expiration interval can also be set individually for user profiles using the PWDEXPITV parameter of the user profile. This is helpful because there are certain profiles, such as the QSECOFR profile, that are particularly sensitive and should require a password change more often for additional security.

Change system-supplied passwords. OS/400 provides several user profiles that serve various system functions. Some of these profiles do not have passwords, which means you cannot sign on as that user profile. For example, the default-owner user profile QDFTOWN doesn't have a password because the profile receives ownership of
objects when no other owner is available. However, every AS/400 is shipped with passwords for the system-supplied profiles listed below, and these passwords are preset to the profile name (e.g., the preset password for the QSECOFR profile is QSECOFR). Therefore, you must change the passwords for these profiles:
• QSECOFR (security officer)
• QPGMR (programmer)
• QUSER (user)
• QSYSOPR (system operator)
• QSRVBAS (basic service representative)
• QSRV (service representative)
To enter new passwords, sign on as the QSECOFR profile and execute the following command for each of the above user profiles:
CHGUSRPRF USRPRF(user_profile) PASSWORD(new_password)

This can also be accomplished using the SETUP menu provided in OS/400. Type GO SETUP and then select the 'Change Passwords for IBM-supplied Users' option (option 11) to work with the panel shown in Figure 2.2. You can assign a password of *NONE (you cannot change the QSECOFR password to *NONE), or you can assign new passwords that conform to the password rules you have just implemented. After changing the passwords for the system-supplied profiles, it would be wise to write the new passwords down and store them in a safe place for future reference.

Set auto-configuration control. After you have taken steps to secure your system, the next important action concerns the system value QAUTOCFG, which controls device auto-configuration and helps you establish your naming convention. When your system is delivered, the system value QAUTOCFG is preset to 1, which allows the system to configure devices (e.g., terminals) automatically when the power is turned on. The system identifies the device type, creates a description for that device, and assigns a name to the device. Having QAUTOCFG set to 1 is necessary because the AS/400 then configures itself for your initial sign-on session. When the QAUTOCFG system value is set at its default value of 1, auto-configured devices are named according to the standard specified in the system value QDEVNAMING. The possible values for QDEVNAMING are *STD or *S36. If the system value is left at the default value of *STD, the AS/400 assigns device names according to its own standard (e.g., DSP01 and DSP02 for workstations; PRT01 and PRT02 for printers). If the option *S36 is specified, the AS/400 automatically names devices according to S/36 naming conventions (e.g., W1 and W2 for workstations; P1 and P2 for printers). Although automatic configuration gives you an easy way to configure new devices (you can plug in a new terminal, attach the cable, and -- 'Poof!' -- the system configures it), it can frustrate your efforts to establish a helpful naming convention for your new machine. Therefore, after the system has been IPLed and the initial configuration is complete, you should reset the value of QAUTOCFG to 0, which instructs the system not to auto-configure devices. You can reset auto-configuration by executing the command
CHGSYSVAL SYSVAL(QAUTOCFG) VALUE('0')

This change takes effect when you re-IPL the system. (If you haven't done so already, you should re-IPL the system now to put into effect the changes you have made for security level, password rules, and auto-configuration.) You must now configure devices yourself when needed. Admittedly, configuring devices is much more of a pain than letting the system configure for you. But I recommend this approach because it usually requires more planning, better logic, better structure, a better naming convention, and better documentation. Configuring devices is beyond the scope of this chapter, but the subject is well documented in IBM's AS/400 Device Configuration Guide (SC41-8106).

Setting general system values. Several times now, you have set AS/400 system values. A system value is an object type found in library QSYS, and the AS/400 has many of these useful objects to control basic system functions. To further familiarize you with your new system, let's take a look at a few of the most significant system
values. (You can use the WRKSYSVAL (Work with System Values) command to examine and modify system values.)

QABNORMSW is not a value that you modify; the system itself maintains the proper value. When your system IPLs, this system value contains a 0 if the previous end of system was NORMAL (meaning you powered the system down and there was no error). However, if the previous end of system was ABNORMAL (meaning there was a power outage that caused system failure, some hardware error that stopped the system, or any other abnormal termination of the system), this system value will be 1. The benefit of this system value is that during IPL, your initial start-up program can check this value. If the value is 1, meaning the previous end of system was ABNORMAL, you might want to handle the IPL and the start-up of the user subsystems differently.

QCMNRCYLMT controls the limits for automatic communications recovery. This system value is composed of two numbers. The first number controls how many attempts will be made at error recovery. The second number indicates how many seconds will expire between attempts at recovery. The initial values are '0' '0'. This instructs the system to perform no error recovery when a communications line or control unit fails. If left in this mode, the operator will be prompted with a system message asking whether error recovery should be attempted. The values '5' '5' would instruct the system to attempt recovery five times and wait five seconds between those attempts. Only at the end of those attempts would the operator be prompted with a system message if recovery has not been established. A word about the use of QCMNRCYLMT: If you decide to use the system error recovery by setting this system value, you will add some work overhead to the system, because error recovery has a high priority on the AS/400. In other words, if a communications line or control unit fails and error recovery kicks in, you will see a spike in your response time. If you experience severe communications difficulties, reset this system value to the initial value of '0' '0' and respond manually to the failure messages.

QMAXSIGN specifies the number of invalid sign-on attempts to allow before varying that device off. The initial value is 15, but I recommend a value of 3 for tighter security. Setting QMAXSIGN to 3 means that after three unsuccessful attempts at signing onto the system (because of using an invalid user profile or password), the system will disable either the device or user profile being used (the action performed depends upon the value of the QMAXSGNACN system value). You will have to enable the device or user profile again to make it available.

QPRTDEV specifies which printer device is the default system printer. When a user profile is created, the output will default to this printer (unless a particular output queue or printer device is specified). The initial value is PRT01. If you have a printer device named SYSPRINT, you can change the value of QPRTDEV to SYSPRINT.

These are just a few of the system values available on the AS/400. For a list of system values and their initial values, consult IBM's AS/400 Programming: System Reference Summary (SX41-0028), or its AS/400 Programming: Work Management Guide (SC41-8078). It is worth your time to read about each of these values and determine which ones need to be modified for your particular installation.
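To illustrate the QABNORMSW check mentioned above, a start-up program could include a fragment like the following (a minimal sketch only; the message text and the action you take after an abnormal end are up to you):

  PGM
    DCL VAR(&ABNORM) TYPE(*CHAR) LEN(1)
    RTVSYSVAL SYSVAL(QABNORMSW) RTNVAR(&ABNORM)
    IF COND(&ABNORM *EQ '1') THEN(DO)
      /* Previous end of system was abnormal -- warn the operator */
      /* before (or instead of) starting the user subsystems.     */
      SNDMSG MSG('Previous system end was abnormal') TOUSR(*SYSOPR)
    ENDDO
  ENDPGM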
Establishing Your Work Environment Okay, you have covered a lot of ground so far. You've made the system secure, reset the auto-configuration value, and looked at some general system values. But it's not time for fun and games yet. Now you should establish your work environment. When the system is shipped, your work environment is simple. Memory is divided into the machine pool, subsystem QBASE, and subsystem QSPL. The system uses the machine pool to interface with the hardware. Subsystem QBASE is a memory pool used to execute all the interactive, batch, and communications jobs. QSPL is the spooling subsystem that provides the operating environment (memory and processing priorities and parameters) for programs that read jobs onto job queues to wait for processing and write files from an output queue to an output device. While this simple arrangement is functional, it may not be effective or efficient. For example, if the system value setting the machine pool size is too low, performance is slow; if the value is too high, you waste memory. Thus, you need to customize your work environment for your organization. Let's look at the most important work management objects. QMCHPOOL is the system value that specifies the amount of memory allocated to the machine pool. Examine this value and compare it with the calculated value you arrive at based on the configuration you are operating. Figure 2.3 shows the formula for calculating the machine pool size, and Figure 2.4 shows a sample calculation that
assumes you have an AS/400 with a main storage size of 32 MB, an estimated 150 active jobs, four SDLC communications lines, two controllers on each line, save/restore operations, and one Token-Ring adapter. The resulting machine pool size is 4,918 KB, which you might round off to 5,000 KB. Fudging a little on the calculations won't hurt if you monitor the performance of this pool under normal work loads and adjust either way. (For basic pool performance tuning information, see Chapter 13 of the Work Management Guide).
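Using the rounded figure from the sample calculation (your own number will differ), you would set the machine pool size with:

  CHGSYSVAL SYSVAL(QMCHPOOL) VALUE(5000)

The value is expressed in kilobytes, and you can adjust it later as you monitor pool performance under normal work loads.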
QBASPOOL specifies the minimum size of the base storage pool. Memory not allocated to any other storage pool stays in the base storage pool. This pool supports system jobs (e.g., SCPF, QSYSARB, QSYSWRK, QSPLMAINT, and subsystem monitors) and system transients (such as file OPEN/CLOSE operations). Enter the WRKSYSSTS (Work with System Status) command to see the amount of storage the machine has reserved for these functions (the reserved value will appear on the display as RESERVED). You can use this value as a minimum value for QBASPOOL, but I recommend being a little more generous. For example, if the reserved size is 1,600 KB, you should set the QBASPOOL value higher (a good rule of thumb is to add 400 KB for each activity level QBASPOOL supports) because many more system jobs will be active under normal working conditions. As with QMCHPOOL, monitor the value of QBASPOOL to make sure it remains adequate.

QBASACTLVL sets the maximum activity level of the base storage pool. The initial value for QBASACTLVL depends on your AS/400 model. This default value should be adequate; however, if you elect to run batch jobs in this pool (instead of creating a separate private pool for batch processing), you should make sure that you adjust this value to allow for one activity level for each batch job that you will allow to process simultaneously. Monitor the performance of the base pool to determine whether additional memory or another activity level is required.

QMAXACTLVL sets the maximum activity level of the system by specifying the number of jobs that can compete at the same time for main storage and processor resources. By examining each subsystem, you can establish the total number of activity levels; this value must at least equal that number or be set higher. I suggest you set the QMAXACTLVL value to five above the total number of activity levels allowed in all subsystems, which will let you increase activity levels for individual subsystems for tuning purposes without having to increase QMAXACTLVL. However, if the number of subsystem activity levels exceeds the value in QMAXACTLVL, the system executes only the number of levels specified in QMAXACTLVL, resulting in unnecessary waiting for your users. Therefore, you must increase QMAXACTLVL if you increase the total number of activity levels in your subsystems or if you add subsystems.

QACTJOB is the system value that specifies the initial number of active jobs for which the system should allocate storage during IPL. The amount of storage allocated for each active job is approximately 110 KB (this is in addition to the auxiliary storage allocated due to the QTOTJOB system value, discussed below.) I suggest you set this number to approximately 10 percent above the average number of active jobs (i.e., any user or system job that has started executing but has not ended) that you expect to have on the system. For example, if you have an average of 50 active jobs, set the QACTJOB value at 55. Setting QACTJOB and QTOTJOB to values that closely match your requirements helps the AS/400 correctly allocate resources for your users at the system start-up time instead of continually having to allocate more work space (e.g., for jobs or workstations) and provides more efficient performance.

QTOTJOB specifies the initial number of jobs for which the system should allocate auxiliary storage during IPL.
The number of jobs is the total possible jobs on the system at any one time (e.g., jobs in the job queue, active jobs, and jobs having spooled output in an output queue). QADLACTJ specifies the additional number of active jobs for which the system should allocate storage when the number of active jobs in the QACTJOB system value is exceeded. Setting this value too low may result in delays if your system needs additional jobs, and setting it too high increases the time needed to add the additional jobs.
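Continuing the 50-active-job example above, the corresponding changes might look like the following; the numbers are illustrative only and should come from your own job counts.

CHGSYSVAL SYSVAL(QACTJOB) VALUE(55)
CHGSYSVAL SYSVAL(QADLACTJ) VALUE(10)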
QADLTOTJ specifies the additional number of jobs for which the system should allocate auxiliary storage when the initial value in QTOTJOB is exceeded. As with QADLACTJ, setting this value too low may result in delays and interruptions when your system needs additional jobs, and setting it too high slows the system when new jobs are added.

You will need to document changes to these objects. I suggest you record any commands that change the work management system values (or any other IBM-supplied objects) by keying the same commands into a CL program that can be run each time a new release of the operating system is loaded. This ensures that your system's configuration remains consistent.

Establishing your subsystems. Selecting your controlling subsystem is the next task in establishing your work environment. When your system is shipped, the controlling subsystem for operations is QBASE. It supports interactive, batch, and communications jobs in the same memory storage pool. When the system IPLs, QBASE is started and an auto-start job also starts the spool subsystem QSPL. This default configuration is simple to manage because only these two subsystems are used, apart from the machine pool and the base pool. However, I recommend implementing separate subsystems for each type of job to provide separate memory pools for each activity. One memory pool can support all activities, but when long-running batch jobs and interactive workstations compete for the same memory, system performance is poor and the fight for activity levels and priority becomes hard to manage. My experience with AS/400s has taught me that establishing separate subsystems for batch, interactive, and communications jobs gives you much more control. Using QCTL as the controlling subsystem establishes separate subsystems for batch, interactive, and communications jobs and can be the basis for various customized subsystems. Use the following command to change the controlling subsystem from QBASE to QCTL:
CHGSYSVAL SYSVAL(QCTLSBSD) VALUE('QCTL QGPL')

Alternatively, you can use the WRKSYSVAL command to modify the system value. (The above CHGSYSVAL command changes the value of QCTLSBSD, the system value that specifies what the controlling subsystem will be.) The change takes effect after the next IPL.

Although the QCTL subsystem only supports sign-on at the console, QCTL also begins an auto-start job at IPL. The auto-start job then starts four system-supplied subsystems: QINTER, QBATCH, QCMN, and QSPL (the descriptions for these subsystems are in the QGPL library). The QINTER subsystem supports interactive jobs, QBATCH supports batch jobs, QCMN supports communications jobs, and QSPL still supports its normal functions as the spooling subsystem. You can thus allocate memory to each subsystem based on the need for each type of job and set appropriate activity levels for each subsystem. No system values control the memory pools and activity levels for individual subsystems; instead, the subsystem description contains the parameters that control these functions. For example, when you create a subsystem description with the CRTSBSD (Create Subsystem Description) command, you must specify the memory allocation and the number of activity levels. You can find more information about subsystem descriptions in Chapters 17, 18, and 19, and in the Work Management Guide, and more information about the CRTSBSD command in the Control Language Reference (SC41-0030).

Making QCTL the controlling subsystem will also help if you decide to create your own subsystems. For instance, if your system supports large numbers of remote and local users, you may want to further divide the QINTER subsystem into one subsystem for remote interactive jobs and another for local interactive jobs. Thus, you can establish appropriate execution priorities, time slices, and memory allocations for each type of job and greatly improve performance consistency.

Retrieving and modifying the start-up program QSTRUP. When you IPL your system, the controlling subsystem QBASE or QCTL, whichever you decide to use, submits an auto-start job that runs the program specified in the system value QSTRUPPGM. The initial value for that system value is QSTRUP QSYS. This program starts the appropriate subsystems and starts the print writers on your system. However, you may want to modify QSTRUP to perform custom functions. For instance, you may have created additional subsystems that need to be started at IPL, or you may want to run a job that checks the QABNORMSW system value each time the system is started. Retrieve the CL source code for QSTRUP (Figure 2.5) by executing the command
RTVCLSRC PGM(QSYS/QSTRUP) SRCFIL(QGPL/QCLSRC)
After retrieving the source, use the SEU editor to change QSTRUP to perform other start-up functions for you. Figure 2.6 shows a sample user-modified start-up program that uses QCTL as the controlling subsystem for the additional subsystems of QPGMR, QREMOTE, and QLOCAL. The sample program also checks the status of the QABNORMSW system value. Once you have modified QSTRUP, recompile the program into library QSYS under a different name or to a different library. (I suggest you leave the program in library QSYS, just in case someone deletes the library that contains your new start-up program.) Then change QSTRUPPGM to use your new program. Make sure you test your new start-up program before replacing the original program.
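Because Figure 2.6 is not reproduced here, the fragment below is only a sketch of the kinds of additions a modified start-up program might contain. It assumes hypothetical subsystem descriptions QPGMR, QREMOTE, and QLOCAL in library QGPL; start whatever subsystems and writers your own environment requires.

PGM
  DCL VAR(&ABNORMSW) TYPE(*CHAR) LEN(1)
  /* Start the spooling and standard work subsystems; ignore     */
  /* "already active" and similar errors                         */
  STRSBS SBSD(QSPL)
  MONMSG MSGID(CPF0000)
  STRSBS SBSD(QINTER)
  MONMSG MSGID(CPF0000)
  STRSBS SBSD(QBATCH)
  MONMSG MSGID(CPF0000)
  STRSBS SBSD(QCMN)
  MONMSG MSGID(CPF0000)
  /* Start the additional, site-specific subsystems (hypothetical names) */
  STRSBS SBSD(QGPL/QPGMR)
  MONMSG MSGID(CPF0000)
  STRSBS SBSD(QGPL/QREMOTE)
  MONMSG MSGID(CPF0000)
  STRSBS SBSD(QGPL/QLOCAL)
  MONMSG MSGID(CPF0000)
  /* Start all print writers */
  STRPRTWTR DEV(*ALL)
  MONMSG MSGID(CPF0000)
  /* Tell the operator if the previous system end was abnormal */
  RTVSYSVAL SYSVAL(QABNORMSW) RTNVAR(&ABNORMSW)
  IF COND(&ABNORMSW *EQ '1') THEN( +
     SNDMSG MSG('Previous system end was abnormal.') TOUSR(*SYSOPR))
ENDPGM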
Now What?
Chapter 3 - Access Made Easy If you have followed my recommendations about AS/400 setup to this point, you've carefully planned for installation, education, migration, security, backup, and recovery before you ever received your system. You've established consistent and meaningful naming conventions for system objects and have established your work environment. Now that you have powered on the AS/400, it's time to start thinking about putting it to work. The next step is to set up user profiles. IBM supplies a few user profiles with which to maintain the AS/400, such as QSECOFR (Security Officer), QDFTOWN (Default Owner), and QSRV (Service Profile used by the Customer Engineer). In addition to these profiles, you need profiles for your users so they can sign on to the system and access their programs and data. For this aspect of setting up your AS/400, you first need to understand user profiles and their attributes. With that knowledge you can, if you wish, turn over to a program the job of creating profiles for your users.
What Is a User Profile? To the AS/400, a user profile is an object. While the object's name (e.g., WDAVIS or PGMR0234) is what you normally think of as the user profile, a user profile is much more than a name. The attributes of a user profile object define the user to the system, enabling it to establish a custom initial session (i.e., job) for that user at signon. To make the best use of user profiles, you must understand those attributes and how they can help you control access to your system. You create a user profile using the CRTUSRPRF (Create User Profile) command. Only the security officer profile (QSECOFR) or another profile that has *SECADM (security administrator) special authority can create, change, or delete user profiles. You should restrict authority to the CRTUSRPRF (Create User Profile), CHGUSRPRF (Change User Profile), and DLTUSRPRF (Delete User Profile) commands to those responsible for the creation and maintenance of user profiles on your system. The CRTUSRPRF and CHGUSRPRF commands have a parameter for each user profile attribute. If you prompt the CRTUSRPRF command and then press F10, the system will display the command's parameters (Figure 3.1). But before you create any user profiles, you should first decide how to name them. In Chapter 1, I stressed the importance of developing a strategic naming convention for user profiles. Once you have performed this task, you are ready to create a user profile for each person who needs access to your system.
Creating User Profiles Figure 3.1 represents all the available parameters for creating a user profile. Except for the user profile name (USRPRF) parameter, each parameter has a default value that will be accepted unless you supply a specific value to override that default. Following are the key user profile parameters that you will frequently change to customize a user profile. USRPRF (User Profile) The first parameter is USRPRF, which contains the user profile name you decided on. This is a required parameter and you will enter the name of the user profile you are creating. PASSWORD (User Password)
As I mentioned in Chapter 1, passwords should be secret, hard to guess, and regularly changed. You cannot ensure that users keep their passwords secret, but you can help make them hard to guess by controlling password format, and you can make sure passwords are changed regularly. This discussion assumes you allow users to select and maintain their own passwords. No one in MIS needs to know user passwords. The AS/400 does not allow even the security officer to view existing passwords. To do so would violate the first rule of passwords -- that they be secret!

The PASSWORD parameter lets you specify a value of *NONE, a value of *USRPRF, or the password itself. *NONE, which means that the user profile cannot sign on to the system, is recommended for group profiles, profiles of users who are on vacation and do not need access for a period of time, users who have been terminated but cannot be deleted at the time of termination, and for other situations in which you want to ensure that a profile is not used. The default value, *USRPRF, dictates that the password be the same as the user profile name. You should not use PASSWORD(*USRPRF); otherwise, you will forfeit the layer of security provided by having a password that differs from the user profile name.

You can control the format of passwords by using one or more of the password-related system values discussed in Chapter 2 or by creating your own password validation program (see the discussion of the QPWDVLDPGM system value in IBM's Security Reference manual (SC41-8083)). The format you impose should encourage users to create hard-to-guess passwords but should not result in passwords that are so cryptic users can't remember them without writing them down within arm's reach of the keyboard. As I said in Chapter 1, I suggest the following guidelines:
• Enforce a minimum length of at least seven characters (use the QPWDMINLEN system value).
• Require at least one digit (use the QPWDRQDDGT system value).
• Do not allow adjacent numbers in a password (use the QPWDLMTAJC system value).
• Do not allow an alphabetic character to be repeated in a password (use the QPWDLMTREP system value).
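If you adopt those guidelines, the corresponding system-value changes might look like the following; the specific settings shown are one reasonable choice, not the only one.

CHGSYSVAL SYSVAL(QPWDMINLEN) VALUE('7')
CHGSYSVAL SYSVAL(QPWDRQDDGT) VALUE('1')
CHGSYSVAL SYSVAL(QPWDLMTAJC) VALUE('1')
CHGSYSVAL SYSVAL(QPWDLMTREP) VALUE('1')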
To ensure that users change their passwords regularly, use system value QPWDEXPITV to specify the maximum number of days a password will remain valid before requiring a change. A good value for QPWDEXPITV is 60 or 90 days, which would require all users system-wide to change passwords every two or three months. You can specify a different password expiration interval for selected individual profiles using CRTUSRPRF's PWDEXPITV parameter, which I'll discuss later in this chapter.

PWDEXP (Set Password to Expired)
The PWDEXP parameter lets you set the password for a specific user profile to the expired state. When you create new user profiles, you may want to specify PWDEXP(*YES) to prompt new users to choose a secret password the first time they sign on. The same is true when you reset passwords for users who forget theirs.

STATUS (Profile Status)
This parameter specifies whether a user profile is enabled or disabled for sign-on. When the value of STATUS is *ENABLED, the system allows the user to sign on to the system. If the value is *DISABLED, the system does not allow the user to sign on until an authorized user re-enables the profile (changes the value to *ENABLED). The primary use of this parameter is in conjunction with the QMAXSGNACN system value. If QMAXSGNACN is set to 2 or 3, the system will disable a profile that exceeds the maximum number of invalid sign-on attempts (the QMAXSIGN system value determines the maximum number of sign-on attempts allowed). When a profile is disabled, the system changes the value of STATUS to *DISABLED. An authorized user must reset the value to *ENABLED before the user profile can be used again.

USRCLS (User Class) and SPCAUT (Special Authority)
These two parameters work together to specify the special authorities granted to the user. Special authorities allow users to perform certain system functions, such as save/restore functions, job manipulation, spool file manipulation, and user profile administration (see the discussion of user classes and special authorities in Chapter 1). The USRCLS parameter lets you classify users by type. Figure 3.2 shows the five classes of user recognized on the AS/400: *SECOFR (security officer), *SECADM (security administrator), *PGMR (programmer), *SYSOPR
(system operator), and *USER (user). These classes represent the groups of users that are typical for an installation. By specifying a user class for each user profile, you can classify users based upon their role on the system. When you assign user profiles to classes, the profiles inherit the special authorities associated with their class. Figure 3.2 also shows the default special authorities associated with each user class under security levels 30, 40, and 50. While you can override these special authorities using the SPCAUT (Special Authority) parameter, often the default authorities are sufficient. The default for the SPCAUT parameter is *USRCLS, which instructs the system to refer to the user class parameter and assign the predetermined set of special authorities that appear in Figure 3.2. You can override this default by typing from one to five individual special authorities you want to assign to the user profile. After sending a message that the special authorities assigned do not match the user class, the system will create the user profile as you requested. Here are two examples:
CRTUSRPRF USRPRF(B12ICJES) PASSWORD(password) USRCLS(*PGMR)

User profile B12ICJES will have *SAVSYS and *JOBCTL special authorities.
CRTUSRPRF USRPRF(B12ICJES) PASSWORD(password) USRCLS(*PGMR) +
          SPCAUT(*NONE)

In this case, user profile B12ICJES will be in the *PGMR class but will have no special authorities. Figure 3.3 lists the values allowed for the SPCAUT parameter and what each means.

Special authorities should be given to only a limited number of user profiles because some of the functions provided are powerful and exceed normal object authority. For instance, *ALLOBJ special authority gives the user unlimited access to and control over any object on the system -- a user with *ALLOBJ special authority can perform any function on any object on your system. The danger in letting that power get into the wrong hands is clear. Generally speaking, no profile other than QSECOFR should have *ALLOBJ authority. This is why the security level of any development or production machine should be at least 30, where resource security and *ALLOBJ special authority can be controlled with confidence. Your security implementation should be designed so it does not require *ALLOBJ authority to administer most functions. Reserve this special authority for QSECOFR, and use that profile to make any changes that require that level of authority.

The *SECADM special authority is helpful in designing a security system that gives users no more authority than they need to do their job. *SECADM special authority enables the user profile to create and maintain the system user profiles and to perform various administrative functions in OfficeVision/400. Using *SECADM, you can assign an individual to perform these functions without having to assign that profile to the *SECOFR user class.

The *SAVSYS special authority lets a user profile perform save/restore operations on any object on the system without having the authority to access or manipulate those objects. *SAVSYS shows clearly how the AS/400 lets you grant only the authority a user needs to do a job. What would it do to your system security if your operations staff needed *ALLOBJ special authority to perform save/restore operations? If that were the case, system operators could access such sensitive information as payroll and master files. *SAVSYS avoids that authorization problem while providing operators with the functional authority to perform save/restore operations.

*SERVICE is another special authority that should be guarded. Having *SERVICE special authority enables a user profile to use the System Service Tools. These tools provide the capability to trace data on communications lines and actually view user profiles and passwords being transferred down the line when someone signs on to the system. These tools also provide the capability to display or alter any object on your system. So be stingy with *SERVICE special authority. The QSRV, QBASSRV, and QSECOFR profiles provided with OS/400 have *SERVICE authority. You should check whether or not your systems still have the default passwords for the system profiles QSRV and QBASSRV. If they do, change the passwords to *NONE, and assign a password only when a Customer Engineer needs to use one of these profiles.
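A quick way to close that exposure is to set the service profiles' passwords to *NONE until they are actually needed, for example:

CHGUSRPRF USRPRF(QSRV) PASSWORD(*NONE)
CHGUSRPRF USRPRF(QBASSRV) PASSWORD(*NONE)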
Initial Sign-On Options
CURLIB (Current Library)
INLPGM (Initial Program)
INLMNU (Initial Menu)
LMTCPB (Limit Capabilities)
Three user profile parameters work together to determine the user's initial sign-on options. The CURLIB, INLPGM, and INLMNU parameters determine the user profile's current library, initial program, and initial menu, respectively. Why are these parameters significant to security? They establish how the user interacts with the system initially, and the menu or program executed at sign-on determines the menus and programs available to that user. Let's look at a couple of examples:

Example 1
Consider the user profile USER, which has the following values:
Current library . . . . . . . . . . .   CURLIB     ICLIB
Initial program to call . . . . . . .   INLPGM     *NONE
  Library . . . . . . . . . . . . . .
Initial menu  . . . . . . . . . . . .   INLMNU     ICMENU
  Library . . . . . . . . . . . . . .              ICLIB
When USER signs on to the system, the current library is set to ICLIB and the user receives menu ICMENU in library ICLIB. Any other menus or programs that can be accessed through ICMENU and to which USER is authorized are also available. Example 2
Current library . . . . . . . . . . .   CURLIB     ICLIB
Initial program to call . . . . . . .   INLPGM     ICUSERON
  Library . . . . . . . . . . . . . .              SYSLIB
Initial menu  . . . . . . . . . . . .   INLMNU     *SIGNOFF
  Library . . . . . . . . . . . . . .
When USER signs on to the system, ICLIB is the current library in the library list, and program ICUSERON in library SYSLIB is executed. Again, any other menus or programs accessible through ICUSERON and to which the user is authorized are also available. The value of *SIGNOFF for the INLMNU parameter is worth some discussion. When a user signs on, OS/400 executes the program, if any, specified in the INLPGM parameter. If the user or user program has not actually executed the SIGNOFF command when the initial program ends, the system executes the menu, if any, specified in parameter INLMNU. Thus, if the default value MAIN were given for INLMNU and program SYSLIB/ICUSERON were to end without signing the user off, the system would present the main menu. When *SIGNOFF is the value for INLMNU, OS/400 signs the user off the system. The CURLIB, INLPGM, and INLMNU parameters are significant to security because users can modify their value at sign-on. Users can also execute OS/400 commands from the command line provided on AS/400 menus. Obviously, allowing all users these capabilities is not a good idea from a security point of view, and this is where the LMTCPB parameter enters the picture. LMTCPB controls the user's ability to
• define (using the CHGUSRPRF command) or change (at sign-on) his own initial program,
• define (using the CHGUSRPRF command) or change (at sign-on) his own initial menu,
• define (using the CHGUSRPRF command) or change (at sign-on) his own current library,
• define (using the CHGUSRPRF command) or change (at sign-on) his own attention key program,
• execute OS/400 or user-defined commands from the command line on AS/400 native menus.
Figure 3.4 shows the effect of the possible values for the LMTCPB parameter. You will notice that LMTCPB(*YES) prevents changing any of these values or executing any commands.
Production systems usually enforce LMTCPB(*YES) for most user profiles. The profiles that typically need LMTCPB(*NO) are MIS personnel who frequently use the command line from OS/400 menus. These user profiles can still be secured from sensitive data using resource security. Although you could specify LMTCPB(*PARTIAL) for those MIS personnel and thus ensure that they cannot change their initial program, they could still change their initial menu, which would be executed at the conclusion of the initial program.
System Value Overrides
DSPSGNINF (Display Sign-on Information)
PWDEXPITV (Password Expiration Interval)
LMTDEVSSN (Limit Device Sessions)
The system values QDSPSGNINF, QPWDEXPITV, and QLMTDEVSSN can be overridden through user profile parameters that control these functions. You'll notice that each of these parameters has a default value of *SYSVAL. The default lets the system value control these functions. To override the system values, specify the desired values in the user profile parameters. The available choices are the same as those for the system values themselves.
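For example, to make one profile show sign-on information, expire its password more often than the system-wide interval, and limit it to one device session, you might use a change like this (AUDITOR is a hypothetical profile name):

CHGUSRPRF USRPRF(AUDITOR) DSPSGNINF(*YES) PWDEXPITV(30) LMTDEVSSN(*YES)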
Group Profiles GRPPRF (Group Profile) OWNER (Owner) GRPAUT (Group Authority) All the parameters discussed to this point are used to define profiles for individual users. The GRPPRF, OWNER, and GRPAUT parameters let you associate an individual with a group of user profiles via a group profile. When you authorize a group profile to objects on the system, the authorization applies to all profiles in the group. How is this accomplished? You create a user profile for the group. The group profile should specify PASSWORD(*NONE) to prevent it from actually being used to sign on to the system -- all members of the group should sign on using their own individual profiles. For instance, you might create a profile called DEVPGMR to be the group profile for your programming staff. Then for each user profile belonging to a member of the staff, use the CHGUSRPRF command and the GRPPRF, OWNER, and GRPAUT parameters to place them in the DEVPGMR group. The GRPPRF parameter names the group profile with which this user profile will be associated. If you create the group profile DEVPGMR, you would specify DEVPGMR as the GRPPRF value for the user profiles you put into that group. The OWNER parameter specifies who owns the objects created by the group profile. The parameter value determines whether the user profile or the group profile will own the objects created by profiles that belong to the group. There is an advantage to having the group profile own all objects created by its constituent user profiles. When the group profile owns the objects, then every member of the group has *ALL authority to the objects. This
is helpful, for instance, in a programming environment where more than one programmer works on the same projects. However, there is a way to provide authority to group members without giving them *ALL authority. If you specify OWNER(*USRPRF), individual user profiles own the objects they create. If a user profile owns an object, the group profile and other members in the group have only the authority specified in the GRPAUT parameter to the object. The GRPAUT parameter specifies the authority to be granted to the group profile and to members of the group when *USRPRF is specified as the owner of the objects created. Valid values are *ALL, *CHANGE, *USE, *EXCLUDE, and *NONE. The first four of these values are authority classes, each of which represents a set of specific object and data authorities that will be granted; these values are discussed in detail in Chapter 4 as part of the discussion of specific authorities. If you specify one of the authority class values for the GRPAUT parameter, the individual user profile that creates an object owns it, and the other members of the group, including the group profile, have the specified set of authorities to the object. *NONE is the value required when *GRPPRF is specified as the owner of objects created by the user. Because the group profile automatically owns the object, all members of the group will share that authority. JOBD (Job Description) The JOBD parameter on the CRTUSRPRF command determines the job description associated with the user profile. The job description specifies a set of attributes that determine how the system will process the job. Not only is the job description you specify used when the user profile submits a batch job to the system, but values in the job description determine the attributes of the user profile's workstation session. For instance, the initial library list that you specify for the job description becomes the user portion of the library list for the workstation session. If you don't specify a particular job description for the user profile on the JOBD parameter, the system defaults to JOBD(QDFTJOBD), an IBM-supplied job description that uses the QUSRLIBL system value to determine the user portion of the library list. The JOBD parameter does not affect any other portion of the library list. After the user profile signs on, the initial program can manipulate the library list. One way to manage the user portion of the library list is to use QUSRLIBL to establish all user libraries. Then when someone signs on to the system, QUSRLIBL supplies all possible libraries, and users can always find the programs and data they need. However, this approach disregards security because it lets all users access all libraries, even those they don't need. Another approach to setting up user libraries is to create a job description for each user type on the system. Then when you create the user profile, you can specify the appropriate job description for the JOBD parameter, and that job description's library list becomes the user library list when that profile signs on to the system. The approach I recommend is to specify only general-purpose user libraries in QUSRLIBL. These libraries should contain only general utility programs (e.g., date routines, extended math functions, a random number generator). Each profile's initial program should then add only the application libraries needed by that particular user profile. You can use department name or some other trigger kept in a database file to determine library need. 
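As an illustration of the job-description-per-user-type approach, a job description for order-entry users might be created as follows and then named on the JOBD parameter of each order-entry profile. All the object names here are hypothetical.

CRTJOBD JOBD(OELIB/OEJOBD) JOBQ(QBATCH) +
        INLLIBL(GENLIB OELIB) +
        TEXT('Order entry job description')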
SPCENV (Special Environment)
CRTUSRPRF's SPCENV parameter determines which operating environment the user profile is in after signing on. The values for SPCENV are *SYSVAL, *S36, or *NONE. The value *SYSVAL indicates that the system value QSPCENV will be referenced to retrieve the operating environment. If you specify *S36, the user profile will enter the S/36 environment at sign-on. If you specify *NONE, the user profile will be in the native environment at sign-on and the user will have to enter either a STRS36E or CALL QCL command to enter the S/36 or S/38 environment.
Message Handling
MSGQ (Message queue)
DLVRY (Delivery)
SEV (Severity code filter)
When you create a user profile, the system automatically creates a message queue by the same name in library QUSRSYS. The user receives job completion messages, system messages, and messages from other system
users via this message queue. Three CRTUSRPRF parameters relate to handling user messages. The MSGQ parameter specifies the message queue for the user. Except in very unusual circumstances, you should use the default value (*USRPRF) for this parameter. If you keep the message queue name the same as the user profile name, system operators and other users can more easily remember the message queue name when sending messages. The DLVRY parameter specifies how the system should deliver messages to the user. The value *BREAK specifies that the message will interrupt the user's job upon arrival. This interruption may annoy users, but it does help to ensure that they notice messages. The value *HOLD causes the queue to hold messages until a user or program requests them. The value *NOTIFY specifies that the system will notify the job of a message by sounding the alarm and displaying the message-waiting light. Users can then view messages at their convenience. The value *DFT specifies that the system will answer with the default reply any message that requires a response; information messages are ignored. The last parameter of the message group, SEV, specifies the lowest severity code of a message that the system will deliver when the message queue is in *BREAK or *NOTIFY mode. Messages of lower severity are delivered to the user profile's message queue but do not sound the alarm or turn on the message-waiting light. The default severity code is 00, meaning that the user will receive all messages. You should usually leave the SEV value at 00. But if you do not want certain users, because of their operational responsibilities, for instance, to be bothered by a lot of low-severity messages, you can assign another value (up to 99).
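For example, to have a user notified of new messages (alarm and message-waiting light) rather than interrupted, and to limit break/notify handling to messages of severity 40 and above, you might change the profile like this (OPERATOR1 is a hypothetical profile name):

CHGUSRPRF USRPRF(OPERATOR1) DLVRY(*NOTIFY) SEV(40)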
Printed Output Handling PRTDEV (Print Device) OUTQ (Output Queue) The PRTDEV and OUTQ parameters are important to a basic understanding of directing printed output on the AS/400. If the user does not specifically direct a particular spooled output file to an output queue or device via an override statement (i.e., with an OVRPRTF (Override with Printer File) command; the S/36 environment procedure statements PRINT or SET; the OS/400 CHGJOB (Change Job) command; or by naming a specific output queue in a job description or print file), the system directs printed output according to the values of these two parameters. PRTDEV specifies the name of the printer to which output is directed. This might be an actual printer name or the default value of *WRKSTN that instructs the system to get the name of the printer from the workstation device description. Although PRTDEV refers to a specific device, an output queue with the same name as the device specified for PRTDEV must exist on the system. If the device specified does not exist (and thus no output queue exists for that device), and if no output queue is specified in the OUTQ parameter of the user profile, then spooled output is sent to the default system printer specified in the system value QPRTDEV. If the value of PRTDEV is *SYSVAL, output also goes to the default system printer. The OUTQ parameter specifies the qualified name of the output queue the profile will use. Here again the default value of *WRKSTN instructs the system to get the name of the output queue from the workstation device description. The OUTQ parameter takes precedence over the PRTDEV parameter. In other words, if the OUTQ parameter contains the name of a valid output queue (or *WRKSTN refers to an actual output queue), the system ignores the parameter PRTDEV for this user profile and places into the specified output queue any printed output not specifically directed (via an OVRPRTF or CHGJOB command during job execution) to another output queue or printer. When the OUTQ parameter has the value *DEV, the printed output file is placed on the output queue named in the DEV attribute of the current printer file (this attribute is determined by the DEV parameter of the CRTPRTF (Create Printer File), CHGPRTF (Change Printer File), or OVRPRTF (Override Printer File) command). I follow two basic rules to determine who is on the system and to direct printed output. First, I use the user profile to determine who is on the system and the resources (e.g., libraries, menus, programs, authority) that user needs. Regardless of where users sign on to the system, they need to see their own menus, work with their usual objects, and have the same authority they always do. Those resources relate directly to the user's function. Second, I don't direct spooled output by user profile, but by the workstation being used. If a user signs on to a terminal in another department because his or her workstation is broken, spooled output should print according to the user's location. These two rules are good standards for setting up your system, yet they give you the flexibility to handle special cases, such as sending output to a printer that can handle a special form.
Documenting User Profiles TEXT (Text Description) The last parameter we will look at on the CRTUSRPRF command is TEXT. TEXT gives you 50 characters in which to meaningfully describe the user profile. The information you include and its format should be consistent for each user profile to ensure readability and usability. You can retrieve, print, or display this text to identify who requests a report or uses a program. Before you actually create any user profiles, consider each parameter and develop a plan to best use it. Once you determine your company's needs, devise standards for creating your user profiles. Figure 3.5 creates a sample user profile for an order-entry clerk at branch location 01. Notice that I specified an output queue for the user profile in spite of my rule that the user's location at sign-on should control spooled output. I specified the output queue in this example so the directory will know where to send output when the user is using directory functions such as E-mail, network spooled files, or network messages. With minor changes in the user profile name, output queue, and text, I could use the same code to create user profiles for all order-entry clerks. Before you create your user profiles, it helps to chart the various profile types and the parameter values you will use to create them. Figure 3.6 is a sample table that lists values you could use if your company had order entry, inventory control, accounting, purchasing, MIS operations, and MIS programming departments. A table such as this serves as part of your security strategy and as a reference document for creating user profiles.
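Figure 3.5 itself is not reproduced here, but a profile for that order-entry clerk might be created along the following lines. Every name shown -- the profile, menu, libraries, job description, group profile, and output queue -- is hypothetical and should follow your own naming conventions.

CRTUSRPRF USRPRF(B01OEC01) PASSWORD(password) PWDEXP(*YES) +
          USRCLS(*USER) CURLIB(OELIB) INLMNU(OEMENU) +
          LMTCPB(*YES) JOBD(OEJOBD) GRPPRF(USERS) +
          OUTQ(B01PRT) TEXT('Branch 01 - Order entry clerk')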
Maintaining User Profiles After you set up your user profiles, you will need to maintain them as users come and go or as their responsibilities change. You can change a user profile with the CHGUSRPRF (Change User Profile) command. As with CRTUSRPRF, you must have *SECADM special authority to use CHGUSRPRF. The CHGUSRPRF command is the same as the CRTUSRPRF command, except that the CHGUSRPRF command does not have an AUT (authority) parameter, and the parameter default values for CHGUSRPRF are the parameter values you assigned when you executed the CRTUSRPRF command. Typically, you might employ CHGUSRPRF when a user forgets a password. Because the system won't display a password, you would need to use CHGUSRPRF to change the forgetful user's password temporarily and then require the user to choose a new password at the next sign-on. To accomplish this, execute the command
CHGUSRPRF USRPRF(profile_name) PASSWORD(password) PWDEXP(*YES)

This command resets the password to a known value and sets the password expiration to *YES, so that the system prompts the user to choose a new secret password at the next sign-on.

It is not uncommon to delete a user profile. When an employee leaves, the security administrator should promptly remove the employee's user profile from the system, or at least set the password to *NONE. To delete a user profile, use the DLTUSRPRF (Delete User Profile) command. This command has been much improved since its introduction in S/38 CPF. Many S/38 shops share a mutual problem when a user leaves, especially an MIS staff member: The user profile cannot be deleted if it owns any objects. If there are no automated methods for deleting or transferring objects owned by the former user profile, this cleanup process can take several hours. The OS/400 version of the DLTUSRPRF command has a parameter, OWNOBJOPT, that tells the system how to handle any objects owned by the user profile you asked to delete. The system will not delete a profile that owns objects if you specify the default *NODLT for OWNOBJOPT. However, you can specify *DLT to delete those objects. Avoid the option *DLT unless you have used the DSPUSRPRF (Display User Profile) command to identify the owned objects and are sure you want to delete them. Remember: A backup of these objects is an easy way to cover yourself in case of error. The remaining option for OWNOBJOPT is *CHGOWN, which instructs the system to transfer ownership of any objects owned by the profile you want to delete. You must specify the new owner of these objects in the second part of this parameter. For instance, if a programmer owns some objects privately and you want to delete that programmer's profile, you might specify
DLTUSRPRF USRPRF(profile_name) OWNOBJOPT(*CHGOWN MIS)

to transfer ownership of the objects to your MIS group profile.

If you write a program to help you maintain user profiles, you may find the RTVUSRPRF (Retrieve User Profile) command helpful. You can use RTVUSRPRF to retrieve into a CL variable one or more of the parameter values associated with a user profile. (See IBM's AS/400 Programming: Control Language Reference (SC41-0030) for details about this command's parameters. You can also prompt this command on your screen and then use the help text to learn more about each variable you can retrieve.) Figure 3.7 shows the prompt screen for RTVUSRPRF. The prompt lists the length of each variable next to the parameter whose value is retrieved in that variable. This command is valid only within a CL program because the parameters actually return variables to the program, and return variables cannot be accepted when you enter a command from an interactive command line.

You might use this command to retrieve the user's actual user profile name for testing. For example, the code segment in Figure 3.8 retrieves the current user profile name into the variable &USRPRF and tests the first character to see whether or not it is the letter B. When this condition is met, the code might display a certain menu. Or you could use the test to determine what application libraries to put in the user's library list, based on user location or department.
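The code in Figure 3.8 is not shown here, but a minimal sketch of that technique -- retrieve the current profile name and branch on its first character -- might look like this (BRANCHLIB is a hypothetical library name):

PGM
  DCL VAR(&USRPRF) TYPE(*CHAR) LEN(10)
  RTVUSRPRF USRPRF(*CURRENT) RTNUSRPRF(&USRPRF)
  IF COND(%SST(&USRPRF 1 1) *EQ 'B') THEN(DO)
     ADDLIBLE LIB(BRANCHLIB)  /* add the branch application library */
     MONMSG MSGID(CPF2103)    /* ignore "already in library list"   */
  ENDDO
ENDPGM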
Flexibility: The CRTUSR Command
One option for retaining certain information about past and current user profiles is to use a database file for that purpose. By modifying this database file, you can also use it to automate user profile creation and establish a session when a user signs on to the system. Figure 3.9 shows sample Data Description Specifications for file USRINF. You can use the information in this file not only for audit purposes, but to track authorized users and to establish initial values for programs (such as providing the correct branch location number in an inquiry program) or to identify the user requesting printed output (the name can then be placed on the report). The USRPRF and AUTEDT fields together serve as the primary key. As a result, you can maintain one or more records for every system user profile.

Figure 3.10 shows the source for a user-written CRTUSR command. The command processing program (CPP), CRTUSRCL, actually creates the user profile on the AS/400 and calls RPG program CRTUSRR to write a record to file USRINF. The CPP (Figure 3.11) begins by deriving the user's initials from the user's name and putting them into variable &INITS. Then the CPP uses variable &INITS and the variables &LEVEL (user's company level) and &LOCATION to create the user profile name, which is stored in variable &USRPRF. For example, if Jane P. Doe is a branch employee at the Kalamazoo branch office (office number 12), her user profile name becomes B12JPD. For regional employee Jack J. Jones at the Sacramento (20) office, the user profile name is R20JJJ.
After concatenating the user profile name, the program concatenates the user's first name, middle initial, and last name and stores the value in variable &NAME, which will be used to create the TEXT parameter for the CRTUSRPRF command. The CPP sets up the TEXT parameter by combining the values from variables &LEVEL (branch, regional, or corporate), &LOCATION, and &NAME, thus providing consistent text for every user profile and making it easy to identify one particular user profile from a list. The next three variables -- &GRPPRF, &LMTCPB, and &USRCLS -- are all determined from the user's department. If the user works in the MIS department, the group profile becomes MIS and variable &LMTCPB is assigned the value *NO. The program further determines an MIS user's class by testing whether the user works in
operations (OP) or on the programming staff (PG) and then assigning the appropriate value (*SYSOPR or *PGMR) to variable &USRCLS. The CPP assigns non-MIS personnel to the USERS group profile and to the *USER user class. Next, if &DEPT is equal to OP or PG, the CPP checks whether or not a personal library and output queue already exist for the user profile being created. If these objects do not exist, the CPP creates them and transfers their ownership to the group profile. The program then creates the user profile by executing the CRTUSRPRF command, substituting the variables established in the program. The CPP requires the user to have *SECADM special authority and authority to the CRTUSRPRF command. If the user does not have these authorities, the program must be compiled with the attribute USRPRF(*OWNER) to adopt the authority of the owner, who does have the proper authorities. If an error occurs during the execution of the CRTUSRPRF command, the global message monitor passes control to label DIAG. When the command is successful, the program calls the RPG program CRTUSRR (Figure 3.12), which establishes the correct date for variable AUTBDT (authorization beginning date) and writes the record to disk. If an error occurs on the WRITE statement, the program sets field OFLAG (output flag) to 1, and control returns to the CPP. The system sends the appropriate message to the requester based on the value of the field &FLAG. Then the diagnostic routine reads any messages from the program queue and sends them to the requester, and the program ends. The CRTUSR command ensures that you create each user profile similarly, according to shop standards. You can create your own CHGUSR and DLTUSR commands and programs to maintain the records in USRINF and to change or delete the user profile on the system. Keep in mind there will be exceptions you will have to handle individually. You should usually use these commands to create and maintain user profiles. Only in an exceptional case should you directly use the OS/400-supplied commands.
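Figure 3.11 is not reproduced here, but the name-building step it describes might be sketched as follows, assuming the name and location variables (&FNAME, &MIDINIT, &LNAME, &LEVEL, &LOCATION) have already been declared and filled in; those variable names are assumptions, not necessarily the ones used in the actual CPP.

/* Derive the user's initials from first name, middle initial, last name */
CHGVAR VAR(&INITS) VALUE(%SST(&FNAME 1 1) *CAT &MIDINIT *CAT %SST(&LNAME 1 1))
/* Profile name = level (B/R/C) + two-digit location + initials, e.g., B12JPD */
CHGVAR VAR(&USRPRF) VALUE(&LEVEL *CAT &LOCATION *CAT &INITS)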
Making User Profiles Work for You Whether you create user profiles with CL commands or employ user-written commands, it is important to plan. Careful planning saves literally hundreds of hours during the system's lifetime. If you maintain a database file like USRINF with the appropriate user information, it provides essential historical data for auditing and a way to extract significant information about the user profile during a workstation session. You will have a consistent method for creating and maintaining user profiles, and you can easily train others to create and maintain user profiles for their departments. Moreover, you will be able to retrieve information from file USRINF via a high-level language program; and you can use that information in applications to establish the work environment, library list, and initial menu for a user profile. When you set up your AS/400, take the time to examine your current standards for establishing user profiles, and make your user profiles work for you!
Chapter 4 - The Facts About Public Authorities
by Gary Guthrie and Wayne Madden

High among the many strengths of the AS/400 and iSeries 400 is a robust resource security mechanism. Resource security defines users’ authority to objects. There are three categories of authority to an object:
• Object authority defines the operations that can be performed on an object. Figure 1A describes object authorities.
• Data authority defines the operations that can be performed on the object’s contents. Figure 1B describes data authorities.
• Field authority defines the operations that can be performed on data fields. Figure 1C describes field authorities.
Figure 1A – Object authorities

Authority   Description                      Allowed operations
*ObjOpr     Object operational               Examine object description
                                             Use object as determined by data authorities
*ObjMgt     Object management                Specify security for object
                                             Move or rename object
                                             All operations allowed by *ObjAlter and *ObjRef
*ObjExist   Object existence                 Delete object
                                             Free storage for object
                                             Save and restore object
                                             Transfer object ownership
*ObjAlter   Object alter                     Add, clear, initialize, and reorganize database file members
                                             Alter and add database file attributes
                                             Add and remove triggers
                                             Change SQL package attributes
*ObjRef     Object reference                 Specify referential constraint parent
*AutLMgt    Authorization list management    Add and remove users and their authorities from authorization lists
Figure 1B – Data authorities

Authority   Description   Allowed operations
*Read       Read          Display object’s contents
*Add        Add           Add entries to object
*Upd        Update        Modify object’s entries
*Dlt        Delete        Remove object’s entries
*Execute    Execute       Run a program, service program, or SQL package
                          Locate object in library or directory
Figure 1C – Field authorities

Authority   Description   Allowed operations
*Mgt        Management    Specify field’s security
*Alter      Alter         Change field’s attributes
*Ref        Reference     Specify field as part of parent key in referential constraint
*Read       Read          Access field’s contents
*Add        Add           Add entries to data
*Update     Update        Modify field’s existing entries
Because of the number of options available, resource security is reasonably complex. It’s important to examine the potential risks — as well as the benefits — of resource security’s default public authority to ensure you maintain a secure production environment.
What Are Public Authorities? Public authority to an object is that default authority given to users who have no specific, or private, authority to the object. That is, the users have no specific authority granted for their user profiles, are not on an authorization list that supplies specific authority, and are not part of a group profile with specific authority. When you create an object, either by restoring an object to the system or by using one of the many CrtXxx (Create) commands, public authorities are established. If an object is restored to the system, the public authorities
stored with that object are the ones granted to the object. If a CrtXxx command is used to create an object, the Aut (Authority) parameter of that command establishes the public authorities that will be granted to the object. Public authority is granted to users in one of several standard authority sets described by the special values *All, *Change, *Use, and *Exclude. Following is a description of each of these values:
• *All — The user can perform all operations on the object except those limited to the owner or controlled by authorization list management authority. The user can control the object’s existence, grant and revoke authorities for the object, change the object, and use the object. However, unless the user is also the owner of the object, he or she can’t transfer ownership of the object.
• *Change — The user can perform all operations on the object except those limited to the owner or controlled by object management authority, object existence authority, object alter authority, and object reference authority. The user can perform basic functions on the object; however, he or she cannot change the attributes of the object. Change authority provides object operational authority and all data authority when the object has associated data.
• *Use — The user can perform basic operations on the object (e.g., open a file, read the records, and execute a program). However, although the user can read and add associated data records or entries, he or she will be prevented from updating or deleting data records or entries. This authority provides object operational authority, read data authority, add data authority, and execute data authority.
• *Exclude — The user is specifically denied any access to the object.
Figure 2A shows the individual object authorities defined by the above authority sets. Figure 2B shows the individual data authorities.
Figure 2A – Individual object authorities

Authority set   *ObjOpr   *ObjMgt   *ObjExist   *ObjAlter   *ObjRef
*All               X         X          X           X          X
*Change            X
*Use               X
*Exclude
Figure 2B – Individual data authorities

Authority set   *Read   *Add   *Upd   *Dlt   *Execute
*All              X       X      X      X       X
*Change           X       X      X      X       X
*Use              X       X                     X
*Exclude
Creating Public Authority by Default When your system arrives, OS/400 offers a means of creating public authorities. This default implementation uses the QCrtAut (Create default public authority) system value, the CrtAut (Create authority) attribute of each library, and the Aut (Public authority) parameter on each of the CrtXxx commands that exist in OS/400. System value QCrtAut provides a vehicle for systemwide default public authority. It can have the value *All, *Change, *Use, or *Exclude. *Change is the default for system value QCrtAut when OS/400 is loaded onto your system. QCrtAut alone, though, doesn’t control the public authority of objects created on the system.
The library attribute CrtAut found on the CrtLib (Create Library) and ChgLib (Change Library) commands defines the default public authority for all objects created in that library. Although the possible values for CrtAut include *All, *Change, *Use, *Exclude, and an authorization list name, the default for CrtAut is *SysVal, which references the value specified in system value QCrtAut. Therefore, when you create a library and don’t specify a value for parameter CrtAut, the system uses the default value *SysVal. The value found in system value QCrtAut is then used to set the default public authority for objects created in the library. You should note, however, that the CrtAut value of the library isn’t used when you create a duplicate object or move or restore an object in the library. Instead, the public authority of the existing object is used. The Aut parameter of the CrtXxx commands accepts the values *All, *Change, *Use, *Exclude, and an authorization list name, as well as the special value *LibCrtAut, which is the default value for most of the CrtXxx commands. *LibCrtAut instructs OS/400 to use the default public authority defined by the CrtAut attribute of the library in which the object will exist. In turn, the CrtAut attribute might have a specific value defined at the library level, or it might simply reference system value QCrtAut to get the value. Figure 3 shows the effect of the new default values provided for the CrtAut library attribute and the Aut object attribute. The lines and arrows on the right show how each object’s Aut attribute references, by default, the CrtAut attribute of the library in which the object exists. The lines and arrows on the left show how each CrtAut attribute references, by default, the QCrtAut system value. The values specified in Figure 3 for the QCrtAut system value, the CrtAut library attribute, and the Aut parameter are the shipped default values. Unless you change those defaults, every object you create on the system with the default value of Aut(*LibCrtAut) will automatically grant *Change authority to the public. (If you use the Replace(*Yes) parameter on the CrtXxx command, the authority of the existing object is used rather than the CrtAut value of the library.) If you look closely at Figure 3, you’ll see that although this method may seem to make object authority easier to manage, it’s a little tricky to grasp. First of all, consider that all libraries are defined by a library description that resides in library QSys (even the description of library QSys itself must reside in library QSys). Therefore, the QSys definition of the CrtAut attribute controls the default public authority for every library on the system (not the objects in the libraries, just the library objects themselves) as long as each library uses the default value Aut(*LibCrtAut). Executing the command
DspLibD QSys

displays the library description of QSys, which reveals that *SysVal is the value for CrtAut. Therefore, if you create a new library using the CrtLib command and specify Aut(*LibCrtAut), users will have the default public authority defined originally in the QCrtAut system value. Remember, at this point the Aut parameter on the CrtLib command is defining only the public authority to the library object. As you can see in Figure 3, for each new object created in a library, the Aut(*LibCrtAut) value tells the system to use the default public authority defined by the CrtAut attribute of the library in which the object will exist.

When implementing default public authorities, consider these facts:
• You can use the CrtAut library attribute to determine the default public authority for any object created in a given library, provided the object being created specifies *LibCrtAut as the value for the Aut parameter of the CrtXxx command (see the sketch following this list).
• You can elect to override the *LibCrtAut value on the CrtXxx command and still define the public authority using *All, *Change, *Use, *Exclude, or an authorization list name.
• The default value for the CrtAut library attribute for new libraries will be *SysVal, instructing the system to use the value found in system value QCrtAut (in effect, controlling new object default public authority at the system level).
• You can choose to replace the default value *SysVal with a specific default public authority value for that library (i.e., *All, *Change, *Use, *Exclude, or an authorization list name).
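To illustrate the chain, the commands below (using hypothetical names) give a library an explicit CrtAut value and then let a new file in that library inherit it through Aut(*LibCrtAut):

CrtLib Lib(AppLib) Aut(*Use) CrtAut(*Exclude)
CrtPf File(AppLib/CustMast) RcdLen(400) Aut(*LibCrtAut)

Here the file CustMast ends up with *Exclude public authority because its Aut value defers to the library’s CrtAut attribute rather than to QCrtAut.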
Limiting Public Authority
The fact that public authority can be created by certain default values brings us to an interesting point. The existence of default values indicates that they are the “suggested” or “normal” values for parameters. In terms of security, you may want to look at default values differently. Default values that define the public authority for objects created on your system are effective only if planned as part of your overall security implementation. Your first inclination may be to change QCrtAut to *Use or even *Exclude to reduce the amount of public authority given to new libraries and objects. However, let us warn you that doing so could cause problems with some IBM-supplied functions. Another tendency might be to change this system value to *All, hoping that every system object can then be “easily” accessed. Unfortunately, this would be like opening Pandora’s box! Here are a few suggestions for effectively planning and implementing object security for your libraries and the objects in those libraries.
Public Authority by Design The most significant threat of OS/400’s default public authority implementation is the possible misuse of the QCrtAut system value. There is no doubt that changing this system value to *All would simplify security, but doing so would simply eliminate security for new libraries and objects — an unacceptable situation for any production machine. Therefore, leave this system value as *Change. The first step in effectively implementing public authorities is to examine your user-defined libraries and determine whether the current public authorities are appropriate for the libraries and the objects within those libraries. Then, modify the CrtAut attribute of your libraries to reflect the default public authority that should be used for objects created in each library. By doing so, you’re providing the public authority at the library level instead of using the CrtAut(*SysVal) default, which references the QCrtAut system value. As a general rule, use the level of public authority given to the library object (the Aut library attribute) as the default value for the CrtAut library attribute. This is a good starting point for that library. Consider this example. Perhaps a library contains only utility program objects that are used by various applications on your system (e.g., date-conversion programs, a binary-to-decimal conversion program, a check object or check authority program). Because all the programs should be available for execution, it’s logical that the CrtAut attribute of this library be set to *Use so that any new objects created in the library will have *Use default public authority. Suppose the library you’re working with contains all the payroll and employee data files. You probably want to restrict access to this library and secure it by user profile, group profile, or an authorization list. Any new objects created in this library should probably also have *Exclude public authority unless the program or person creating the object specifically selects a public authority by using the object’s Aut attribute. In this case, you would change the CrtAut attribute to *Exclude. The point is this: Public authority at the library level and public authority for objects created in that library must be specifically planned and implemented — not just implemented by default via the QCrtAut system value.
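Concretely, the two adjustments just described might be made with commands such as these (the library names are hypothetical):

ChgLib Lib(UtilLib) CrtAut(*Use)
ChgLib Lib(PayLib) CrtAut(*Exclude)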
Object-Level Public Authority If you follow the suggestions above concerning the QCrtAut system value and the CrtAut library attribute, Aut(*LibCrtAut) will work well as the default for each object you create. In many cases, the level of public authority at the object level coincides with the public authorities established at the library level. However, it’s important to plan this rather than simply use the default value to save time. We hope you now recognize the significance of public authorities and understand the process of establishing them. If you’ve already installed OS/400, examine your user-defined libraries and objects to determine which, if any, changes to public authority are needed.
Chapter 5 - Installing a New Release One task you’ll perform at some time on your AS/400 is installing a new release of OS/400 and your IBM licensed program products. The good news is that this process is “a piece of cake” today compared with the effort it
required back when IBM first announced and delivered the AS/400 product family. No longer must you IPL the system more than a dozen times to complete the installation. When you load a new operating-system release today, you can have the system perform an automatic installation or you can perform a manual installation — and either method normally requires only one machine IPL. To prepare you for today’s approach, here’s a step-by-step guide to planning for and installing a new release of OS/400 and new IBM licensed program products. I cover the essential planning tasks you should accomplish before the installation, as well as the installation process itself.
Planning Is Preventive Medicine Just as planning is important when you install your AS/400 system the first time, planning for the installation of a new release offers the benefits of any preventive medicine — and it’s painless! You’ll no doubt be on a tight upgrade schedule, with little time for unexpected problems. By planning ahead and following the suggestions in this chapter, you can avoid having to tell your manager that the AS/400 will be down longer than expected while you recover the operating system because something was missing or damaged and prevented completing the installation. Before I describe the specific steps that will ensure a successful system upgrade, there’s one other important preventive measure to note: Unless it’s impossible, you should avoid mixing a hardware upgrade and a software upgrade — don’t perform both tasks at the same time. If a new AS/400 model requires a particular release of OS/400 and that release is compatible with your older hardware, first install the new release on your older hardware, and then upgrade your hardware at another time to avoid compounding any problems you might encounter.
The Planning Checklist Every good plan needs a checklist, and the list of steps in Figure 1, below, is your guide in this case. You can find a similar list in IBM’s AS/400 Software Installation (SC41-5120).
Figure 1 Installation planning checklist

Pre-installation-day tasks
Step 1: When you receive the new release, verify your order (make sure you have the correct release, the right products on the media, and software keys for any locked licensed programs), and review the appropriate installation documents shipped with the release. If these documents weren't shipped with the release, you should order them; they may contain additional items you'll need to order before the installation.
Step 2: Determine whether you'll perform the automatic or manual installation.
Step 3: Permanently apply any temporarily applied PTFs.
Step 4: A few days before installing the new release, remove unused objects from the system.
Step 5: Verify disk storage requirements.
Step 6: A few days before installation, document or save changes to IBM-supplied objects.
Step 7: A few days before installation, order the latest cumulative PTF package if you don't have the latest. You should also order the latest appropriate group packages, particularly the HIPER PTF group package.
Step 8: A day before or on the same day as the installation, save the system.

Installation-day tasks
Step 9: If your system participates in a network, resolve any pending database resynchronizations. If your system uses a 3995 optical library, check for and resolve any held optical files.
Step 10: If your system has an active Integrated Netfinity Server for AS/400, deactivate the server.
Step 11: Verify the integrity of system objects (user profiles QSECOFR and QLPINSTALL, as well as the database cross-reference files).
Step 12: Verify and set appropriate system values.
Because IBM makes minor changes and improvements to the installation process for each release of the operating system, each new release means a new edition of the Software Installation manual. To ensure you have the latest information about installing a new release, you should read this chapter along with the manual. Read the chapter entirely to get a complete overview of the process before performing the items on the checklist. Note: If IBM’s instructions conflict with those given here, follow IBM’s instructions.
Step 1: Is Your Order Complete? One of the first things you’ll do is check the materials IBM shipped to you to make sure you have all the pieces you need for the installation. As of this writing, you should receive these items:
• distribution media (normally CD-ROM)
• Media Distribution Report
• Read This First
• Memo to Users for OS/400
• AS/400 PTF Shipping Information Letter
• individual product documentation
• AS/400 Software Installation
Don’t underestimate the importance of each of these items. Examine the CD-ROMs to make sure they’re not physically damaged, and then use the Media Distribution Report to determine whether all listed volumes are actually present. For each item on the CD-ROMs, the Media Distribution Report identifies the version, release, and modification level; licensed program name; feature number (e.g., 5769SS1, 5769RG1); and language feature code. For V4R5, you’ll find the version number listed as V4 (Version 4) in the product name; the release number and modification level are represented as R05M00 (Release 5, Modification Level 0) on the report. Note that the Media Distribution Report lists only priced features. Some features, such as licensed internal code and base OS/400, are shipped with no additional charge. The report contains no entries for these items, nor does it contain entries for locked products.

The Read This First document is just what it sounds like: a document IBM wants you to read before you install the release, and preferably as soon as possible. This document contains any last-minute information that may not have been available for publication in the Memo to Users for OS/400 or in any manual. The Memo to Users for OS/400 describes any significant changes in the new release that could affect your programs or system operations. You can use this memo to prepare for changes in the release. You’ll find a specific section pertaining to licensed programs that you have installed or plan to install on your system. You’ll want to read the AS/400 PTF Shipping Information Letter for instructions on applying the cumulative program temporary fix (PTF) package. You also may receive additional documentation for some individual products; you should review any such documents because they may contain information unique to a product that could affect its installation.

In addition to reviewing the deliverables listed above, you may want to review pertinent information found in the AS/400 Preventive Service Planning Information document. This document lists additional preventive service planning documents you may want to order. To obtain it, order PTF SF98vrm, where v = version, r = release, and m = modification level for the new release. (For information about PTF ordering options, see Chapter 6, “Introduction to PTFs.”)

After reviewing this information, you should verify not only that you can read the CD-ROMs but also that they contain all necessary features. An automated procedure, Prepare for Install (available through an option on the Work with Licensed Programs panel), greatly simplifies this verification process compared with earlier releases, which involved considerable manual effort. The panel in Figure 2, below, shows the installation-preparation procedures supported by Prepare for Install. One of the panel’s options compares the programs installed on your system with those on the CD-ROMs, generating a list of preselected programs that will be replaced during installation. You can inspect this list to determine whether you have all the necessary features.
Figure 2 Prepare for Install screen

                              Prepare for Install
                                                             System:   AS400
Type option, press Enter.
  1=Select

Opt  Description
 _   Work with user profiles
 _   Work with licensed programs for target release
 _   Display licensed programs for target release
 _   Work with licensed programs to delete
 _   List licensed programs not found on media
 _   Verify system objects
 _   Estimated storage requirements for system ASP

                                                                      Bottom
F3=Exit   F9=Command line   F10=Display job log   F12=Cancel
To perform this verification, take these steps:
1. Arrange the CD-ROMs in the proper order. Chapter 3 of AS/400 Software Installation contains a table specifying the correct order. You should refer to this table not only for sequencing information but also for any potential special instructions.

2. From the command line, execute the following CHGMSGQ (Change Message Queue) command to put your message queue in break mode:

   CHGMSGQ QSYSOPR *BREAK SEV(95)

3. From the command line, execute

   GO LICPGM

4. You’ll see the Work with Licensed Programs panel. Select option 5 (Prepare for install), and press Enter.

5. Select the option “Work with licensed programs for target release,” and press Enter.

6. You’ll see the Work with Licensed Programs for Target Release panel. You should
   a. load the first CD-ROM
   b. specify 1 (Distribution media) for the Generate list from prompt
   c. specify the appropriate value for the Optical device prompt
   d. specify the appropriate value for the Target release prompt
   e. press Enter
   When the system has read the CD-ROM, you’ll receive a message asking you to load the next volume. If you have more CD-ROMs, load the next volume and reply G to the message to continue processing; otherwise, reply X to indicate that all CD-ROMs have been processed.

7. Once you’ve processed all the CD-ROMs, the Work with Licensed Programs for Target Release panel will display a list of the licensed programs that are on the distribution media and installed on your system. Preselected licensed programs (those with a 1 in the option column) indicate that a product on the distribution media can replace a product installed on your system. You can use F11 to display alternate views that provide more detail and use option 5 (Display release-to-release mapping) to see what installed products can be replaced.

8. Press Enter until the Prepare for Install panel appears.

9. Select the option “List licensed programs not found on media,” and press Enter.

10. You’ll see the Licensed Programs Not Found on Media panel. If no products appear in the panel’s list, you have all the media necessary to replace your existing products. If products do appear in the list, you must determine whether they’re necessary. If they’re not, you can delete them (I describe this procedure later when I talk about cleaning up your system). If the products are necessary, you must obtain them before installation. Make sure you didn’t omit any CD-ROMs during the verification process. If you didn’t omit any CD-ROMs, compare your media labels with the product tables in AS/400 Software Installation and check the Media Distribution Report to determine whether the products were shipped (or should have been shipped) with your order.

11. Exit the procedure.
Step 2: Manual or Automatic? Before installing the new release, you need to determine whether you’ll perform an automated or a manual installation. The automatic installation process is the recommended method and the one that minimizes the time required for installation. However, if you’re performing any of the tasks listed below, you should use the manual installation process instead.
• adding a disk device using device parity protection, mirrored protection, or user auxiliary storage pools (ASPs)
• changing the primary language that the operating system and programs support (e.g., changing from English to French)
• creating logical partitions during the installation
• using tapes created with the SAVSYS (Save System) command
• changing the environment (AS/400 or System/36), system values, or configuration values. These changes differ from the others listed here because you can make them either during or after the new-release installation. To simplify the installation, it’s best to automatically install the release and then manually make these changes.
The automatic installation will install the new release of the operating system and any currently installed licensed program products.
Step 3: Permanently Apply PTFs One step that will save you time later is to permanently apply any PTFs that remain temporarily applied on your system. Doing so cleans up the disk space occupied by the temporarily applied PTFs. That disk space may not be much, but now is an opportune time to perform cleanup tasks. For more specific information about applying PTFs, see Chapter 6.
Step 4: Clean Up Your System In addition to permanently applying PTFs, you should complete several other cleanup procedures. These tasks not only promote overall tidiness but also help ensure you have enough disk space for the installation. Consider these tasks:
• Delete PTF save files and cover letters. To delete these items, you’ll use command DLTPTF. Typically, you’ll issue this command only for products 5769999 (licensed internal code) and 5769SS1 (OS/400).
• Delete unnecessary spooled files, and reclaim associated storage. Check all output queues for unnecessary spooled files. A prime candidate for housing unnecessary spooled files is output queue QEZJOBLOG. After deleting these spooled files, reclaim spool storage using command RCLSPLSTG. (A short command sketch for these first two tasks follows this list.)
• Have each user delete any unnecessary objects he or she owns. You’d be surprised just how much storage some users can unnecessarily consume. If at all possible, have users perform a bit of personal housekeeping by deleting spooled files and owned objects they no longer need.
• Delete unnecessary licensed programs or optional parts. Some licensed programs may be unnecessary for reasons such as lack of support at the target release. To review candidates for deletion, you can use the Prepare for Install panel’s “Work with licensed programs to delete” option. To reach this option, display menu LICPGM (GO LICPGM) and select option 5 (Prepare for install). The “Work with licensed programs to delete” option preselects licensed programs to delete. You can use F11 (Display reasons) to determine why licensed programs are selected for deletion. I rarely see a system that doesn’t contain unused licensed programs or licensed program parts. For instance, it’s not uncommon to see systems with many unused language dictionaries or unnecessary double-byte character set options. Prepare for Install’s “Work with licensed programs to delete” option won’t preselect such unnecessary options because they are valid options. If for any reason you’re unable to use this procedure to delete licensed programs, you can use option 12 (Delete licensed programs) from menu LICPGM.
• Delete unnecessary user profiles. It’s rarely necessary to delete user profiles as part of installation cleanup, but if this action is appropriate in your environment, consider taking care of it now. The Prepare for install option on menu LICPGM also offers procedures for cleaning up user profiles.
• Use the automatic cleanup options in Operational Assistant. These options provide a general method for tidying your system on a periodic basis.
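Here is a minimal sketch of the first two cleanup tasks; adjust the licensed program identifiers to match your own installed products:

/* Delete PTF save files and cover letters for the licensed internal code and OS/400 */
DLTPTF PTF(*ALL) LICPGM(5769999)
DLTPTF PTF(*ALL) LICPGM(5769SS1)

/* Reclaim the storage left behind by deleted spooled files */
RCLSPLSTG DAYS(*NONE)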
Step 5: Is There Enough Room? Once you’ve cleaned up your system, you should verify that you have enough storage to complete the installation. Like most installation-related tasks today, this one is much easier than in earlier releases. To determine whether you have adequate storage, perform these steps:

1. From the command line, execute

   GO LICPGM

2. You’ll see the Work with Licensed Programs panel. Select option 5 (Prepare for install), and press Enter.

3. Select the option “Estimated storage requirements for system ASP,” and press Enter.

4. You’ll see the Estimated Storage Requirements for System ASP panel. At the Additional storage required prompt, enter storage requirements for any additional software (e.g., third-party vendor software) that you’ll be installing. Include storage requirements only for software that will be stored in the system ASP. Press Enter to continue.

5. You’ll see the second Estimated Storage Requirements for System ASP panel. This panel displays information you can use to determine whether enough storage is available. Compare the value shown for “Storage required to install target release” with the value shown for “Current supported system capacity.” If the value for “Current supported system capacity” is greater than the value for “Storage required to install target release,” you can continue with the installation. Otherwise, you must make additional storage available by removing items from your system or by adding DASD to your system.

6. Exit the procedure.
If you make changes to your system that affect the available storage, you should repeat these steps.
Step 6: Document System Changes When you load a new release of the operating system, all IBM-supplied objects are replaced on the system. The installation procedure saves any changes you’ve made in libraries QUSRSYS (e.g., message queues, output
queues) and QGPL (e.g., subsystem descriptions, job queue descriptions, other work management–related objects). However, any changes you make to objects in library QSYS are lost because all those objects are replaced. To minimize the possible loss of modified system objects, you should document any changes you make to these objects so that you can reimplement them after installing the new release. I strongly suggest maintaining a CL program that contains code to reinstate customized changes, such as command defaults; you can then execute this program with each release update. When possible, implement these customizations in a user-created library rather than in QSYS. Although the installation won’t replace the user-created library’s contents, you should regenerate the custom objects it contains to avoid potential problems. Such problems might occur, for example, if IBM adds a parameter to a command. Unless you duplicate the new command and then apply your customization, you’ll be operating with an outdated command structure. In some cases, this difference could be critical. The CL program that customizes IBM-shipped objects should therefore first duplicate each object (when appropriate) and then change the newly created copy.
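A skeleton of such a CL program might look like the following; the library MYTOOLS and the particular customization shown (defaulting CRTCLPGM to LOG(*YES)) are hypothetical examples only:

PGM
  /* Remove any old customized copy of the command; ignore "not found" */
  DLTCMD CMD(MYTOOLS/CRTCLPGM)
  MONMSG MSGID(CPF0000)
  /* Duplicate the new release's command so the copy picks up any new parameters */
  CRTDUPOBJ OBJ(CRTCLPGM) FROMLIB(QSYS) OBJTYPE(*CMD) TOLIB(MYTOOLS)
  /* Reapply the local customization to the copy */
  CHGCMDDFT CMD(MYTOOLS/CRTCLPGM) NEWDFT('LOG(*YES)')
ENDPGM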
Step 7: Get the Latest Fixes Normally, some time passes between the time you order and receive a new release and the date when you actually install it. During this elapsed time, PTFs to the operating system and licensed program products usually become available. To ensure you have the latest of these PTFs during installation, order PTFs for the new release the week before you install the release. Obtain the latest cumulative PTF package and appropriate group packages. Of the group packages, you should at least order the HIPER group package. (IBM releases HIPER, or High-Impact PERvasive, PTFs regularly — often daily — as necessary to correct high-risk problems.) For more information about ordering PTFs, see Chapter 6.
Step 8: Save Your System Just before installing the new release (either on installation day or the day before), you should save your system. To be safe, I recommend performing a complete system save (option 21 from the SAVE menu), but this isn’t a requirement. I advise performing at least these two types of saves:
• SAVSYS — saves OS/400 and configuration and security information
• SAVLIB LIB(*IBM) — saves all IBM product libraries
It’s also wise to schedule the installation so that it immediately follows your normally scheduled backup of data and programs. This approach guarantees that you have a current copy of all your most critical information in case any problems with the new installation require you to reinstall the old data and programs.
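Assuming a tape device named TAP01 (substitute your own device name) and QCTL as the controlling subsystem (yours may be QBASE), the two minimum saves might look like this; note that SAVSYS requires the system to be in a restricted state:

/* End all subsystems to reach a restricted state */
ENDSBS SBS(*ALL) OPTION(*IMMED)

/* Save the operating system, configuration, and security information */
SAVSYS DEV(TAP01)

/* Save all IBM product libraries */
SAVLIB LIB(*IBM) DEV(TAP01)

/* Restart the controlling subsystem when the saves are complete */
STRSBS SBSD(QCTL)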
Installation-Day Tasks Once you’ve completed step 8, you’re nearly ready to start installing the new AS/400 release. The remaining steps (9 through 12) are best performed on the day of the installation (if they apply in your environment). They, together with the installation process itself, are the focus of the remainder of this chapter. (If you’ll be using a tape drive on installation day, see “Installing from Tape?” (below) for some additional tips.)
Step 9: Resolve Pending Operations First, if your system participates in a network and runs applications that use two-phase commit support, you should resolve any pending database resynchronizations before starting the installation. Two-phase commit support, used when an application updates database files on more than one system, ensures that the databases remain synchronized. To determine whether your system uses two-phase commit support, issue the following WRKCMTDFN (Work with Commitment Definitions) command:
WRKCMTDFN JOB(*ALL) STATUS(*RESYNC)
If the system responds with a message indicating that no commitment definitions are active, you need do nothing further. Because the typical AS/400 environment isn’t concerned with two-phase commit support, I don’t provide details about database resynchronization here. For this information, refer to AS/400 Software Installation (SC41-5120). Next, if your system has a 3995 optical library, check for and resolve any held optical files — that is, files that haven’t yet been successfully written to media. Use the WRKHLDOPTF (Work with Held Optical Files) command to check for such files and either save or release the files.
Step 10: Shut Down the INS If your system has an active Integrated Netfinity Server for AS/400 (INS), the installation may fail. You should therefore deactivate this server before starting the installation. To do so, access the Network Server Administration menu (GO NWSADM) and select option 3.
Step 11: Verify System Integrity You should also verify the integrity of system objects required by the installation process. Among the requirements for the installation process are
• System distribution directory entries must exist for user profiles QSECOFR and QLPINSTALL.
• Database cross-reference files can’t be in error.
• User profile QSECOFR can’t contain secondary language libraries or alternate initial menus.
To verify the integrity of these objects, you can use the Prepare for install option on menu LICPGM. This option adds user profiles QSECOFR and QLPINSTALL to the system distribution directory if necessary and checks for errors in the database cross-reference files. To use the option, follow these steps:

1. From the command line, execute command GO LICPGM.

2. The Work with Licensed Programs panel will appear. Select option 5 (Prepare for install), and press Enter.

3. From the resulting panel (Figure 3, below), select the Verify system objects option, and press Enter.

4. If errors exist in the database cross-reference files, the system will issue message “CPI3DA3 Database cross-reference files are in error.” Follow the instructions provided by this message to resolve the errors before continuing.

5. Exit the procedure.
Figure 3 Prepare for Install screen

                              Prepare for Install
                                                             System:   AS400
Type option, press Enter.
  1=Select

Opt  Description
 _   Work with user profiles
 _   Work with licensed programs for target release
 _   Display licensed programs for target release
 _   Work with licensed programs to delete
 _   List licensed programs not found on media
 _   Verify system objects
 _   Estimated storage requirements for system ASP

                                                                      Bottom
F3=Exit   F9=Command line   F10=Display job log   F12=Cancel
A couple of items remain to check before you’re finished with this step. If you’re operating in the System/36 environment, check to see whether user profile QSECOFR has a menu or program specified. If so, you must remove the menu or program from the user profile before installing licensed programs. Also, user profile QSECOFR can’t have a secondary language library (named QSYS29xx) at a previous release in its library list when you install a new release. If QSECOFR has an initial program, ensure that the program doesn’t add a secondary language library to the system library list.
Step 12: Check System Values Your next step is to check and set certain system values. Remove from system values QSYSLIBL (System Library List) and QUSRLIBL (User Library List) any licensed program libraries and any secondary language libraries (QSYS29xx). Do not remove library QSYS, QUSRSYS, QGPL, or QTEMP from either of these system values. In addition, set system value QALWOBJRST (Allow Object Restore) to *ALL. Once the installation is complete, reset the QALWOBJRST value to ensure system security.
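For example (a sketch only), you might review the library-list values and set QALWOBJRST like this, remembering to restore QALWOBJRST to your normal value once the installation is finished:

/* Review the library-list system values for licensed program or QSYS29xx libraries */
DSPSYSVAL SYSVAL(QSYSLIBL)
DSPSYSVAL SYSVAL(QUSRLIBL)

/* Allow all object restores for the duration of the installation */
CHGSYSVAL SYSVAL(QALWOBJRST) VALUE(*ALL)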
Ready, Set, Go! With the planning behind you, you’re ready to install your new release! The rest of this chapter provides basic instructions for the automatic installation procedure, which is the recommended method. If you must use the manual method (based on the criteria stated in planning step 2), see AS/400 Software Installation for detailed instructions about this process. When you perform an automatic installation of a new release of the operating system and licensed program products, the process retains the current operating environment (AS/400 or System/36), system values, and configuration while replacing these items:
• IBM licensed internal code
• OS/400 operating system
• licensed programs and optional parts of licensed programs currently installed on your system
• language feature code on the distribution media that’s installed as the primary language on the system
If, during the installation process, the System Attention light on the control panel comes on, you should refer to Chapter 5 of AS/400 Software Installation for a list of system reference codes (SRCs) and instructions about how to continue. The only exception is if the attention light comes on and the SRC begins with A6. The A6 codes indicate that the system is waiting for you to do something, such as reply to a message or make a device ready.

To install the new release, take the following steps.

Step 1. Arrange the CD-ROMs in the order you’ll use them.

Step 2. Load the CD-ROM that contains the licensed internal code. Wait for the CD-ROM In-Use indicator to go out.

Step 3. At the control panel, set the mode to Normal.

Step 4. Execute the following PWRDWNSYS (Power Down System) command:
PWRDWNSYS *IMMED RESTART(*YES) IPLSRC(D)
This command will start an IPL process. Note that SRC codes will continue to appear in the display area of the control panel.

Step 5. You’ll see the Licensed Internal Code – Status panel. Upon 100 percent completion of the install, the display may be blank for approximately five minutes and the IPL in Progress panel may appear. You needn’t respond to any of these panels.

Step 6. Load the next volume when prompted to do so. You’ll receive this prompt several times during the installation process. After loading the volume, you must respond to the prompt to continue processing. The response value you specify depends on whether you have more volumes to process: A response of G instructs the installation process to continue with the next volume, and a response of X indicates that no more volumes exist.

Step 7. Next, the installation process loads the operating system followed by licensed programs. During this process, you may see panels with status information. One of these panels, Licensed Internal Code IPL in Progress, lists several IPL steps, some of which can take a long time (two hours or more). The amount of time needed depends on the amount of recovery your system requires. As the installation process proceeds, you needn’t respond to the status information panels you see. Once all your CD-ROMs have been read, be prepared to wait for quite some time while the installation process continues. The process is hands-free until the Sign On panel appears.

Step 8. When installation is complete, you’ll see the Sign On panel. If you receive the message “Automatic installation not complete,” you should sign on using the QSECOFR user profile and refer to Appendix A, “Recovery Procedures,” in AS/400 Software Installation for instructions about how to proceed. If the automatic installation process was completed normally, sign on using user profile QSECOFR and continue by verifying the installation, loading additional products, loading PTFs, and updating software license keys.

Verify the installation. To verify the installation, execute the GO LICPGM command. On the Work with Licensed Programs display, choose option 50 (Display log for messages). The Display Install History panel (Figure 4, below) will appear. Press Enter on this panel, and scan the messages found on the History Log Contents display. If any messages indicate a failure or a partially installed product, refer to “Recovery Procedures” in AS/400 Software Installation.
Figure 4 Display Install History screen

                            Display Install History

Type choices, press Enter.

  Start date . . . . . .   07/17/00    MM/DD/YY
  Start time . . . . . .   09:32:35    HH:MM:SS
  Output . . . . . . . .   *______

F3=Exit   F12=Cancel
(C) COPYRIGHT IBM CORP. 1980, 1998.
Next, verify the status and check the compatibility of the installed licensed programs. To do so, use option 10 (Display licensed programs) from menu LICPGM to display the release and installed status values of the licensed programs. A status of *COMPATIBLE indicates a licensed program is ready to use. If you see a different status value for any licensed program, refer to the “Installed Status Values” section of Appendix E in AS/400 Software Installation.
Load additional products. You’re now ready to load any additional licensed programs and secondary languages. Return to the Work with Licensed Programs menu, and select option 11 (Install licensed programs). You’ll see the Install Licensed Programs display that appears in Figure 5, below. The installation steps for loading additional products are similar to the steps you’ve already taken. Select a licensed program to install, and continue. If you don’t see a desired product in the list, follow the specific instructions delivered with the distribution media containing the new product.
Figure 5 Install Licensed Programs screen

                           Install Licensed Programs
                                                             System:   AS400
Type options, press Enter.
  1=Install

         Licensed    Installed
Option   Program     Status         Description
  _      5769SS1     *COMPATIBLE    OS/400 - Library QGPL
  _      5769SS1     *COMPATIBLE    OS/400 - Library QUSRSYS
  _      5769SS1     *COMPATIBLE    OS/400 - Extended Base Support
  _      5769SS1     *COMPATIBLE    OS/400 - Online Information
  _      5769SS1     *COMPATIBLE    OS/400 - Extended Base Directory Support
  _      5769SS1                    OS/400 - S/36 and S/38 Migration
  _      5769SS1                    OS/400 - System/36 Environment
  _      5769SS1                    OS/400 - System/38 Environment
  _      5769SS1                    OS/400 - Example Tools Library
  _      5769SS1                    OS/400 - AFP Compatibility Fonts
  _      5769SS1                    OS/400 - *PRV CL Compiler Support
  _      5769SS1                    OS/400 - S/36 Migration Assistant
  _      5769SS1                    OS/400 - Host Servers
                                                                      More...
F3=Exit   F11=Display release   F12=Cancel   F19=Display trademarks
(C) COPYRIGHT IBM CORP. 1980, 1998.

Load PTFs. Next, install the cumulative PTF package (either the one that arrived with the new release or a new one you ordered, as suggested in the planning steps above). The shipping letter that accompanies the PTF tape will have specific instructions about how to install the PTF package. Note: To complete the installation process, you must install a cumulative PTF package or perform an IPL. An IPL is required to start the Initialize System (INZSYS) process (the INZSYS process can take two hours or more on some systems, but for most systems it’s completed in a few minutes). In addition to installing a cumulative PTF package, you should install any group PTFs you have — particularly the HIPER (High-Impact PERvasive) group PTF package. (For information about installing PTFs, see Chapter 6.)

After the IPL is completed, sign on as QSECOFR and check the install history (using option 50 on menu LICPGM) for status messages relating to the INZSYS process. You should look for a message indicating that INZSYS has started or a message indicating its completion. If you see neither message, wait a few minutes and try option 50 again. Continue checking the install history until you see the message indicating INZSYS completion. If the message doesn’t appear in a reasonable amount of time, refer to the “INZSYS Recovery Information” section of Appendix A in AS/400 Software Installation.

Update software license keys. To install software license keys, use the WRKLICINF (Work with License Information) command. For each product, update the license key and the usage limit to match the usage limit you ordered. The license information is part of the upgrade media. You must install license keys within 70 days of your release installation.
Step 9. The installation of your new release is now complete! The only thing left to do before restarting production activities is to perform another SAVSYS to save the new release and the new IBM program products. Just think how much trouble it would be if you had a disk crash soon after loading the new release and, with no current SAVSYS, were forced to restore the old release and repeat the installation process. To make sure you don’t suffer this fate, perform the SAVSYS and the SAVLIB LIB(*IBM) operations now. Before starting the save, determine whether system jobs that decompress objects are running. You should start your save only if these jobs are in an inactive state. To make this determination, use the WRKACTJOB (Work with Active Jobs) command and check the status of QDCPOBJx jobs (more than one may exist). You can ensure these jobs are inactive by placing the system in restricted state. Don’t worry — the QDCPOBJx jobs will become active again when the system is no longer in restricted state.
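A quick way to make that check (a sketch; the generic job name covers however many QDCPOBJx jobs exist):

/* List any active object-decompression jobs before starting the save */
WRKACTJOB JOB(QDCPOBJ*)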
Final Advice The only risk you take when installing a new release is not being prepared for failure. It’s rare that a new-release installation must be aborted midway through, but it does happen. If you take the precautions mentioned in the planning suggestions and turn to “Recovery Procedures” in AS/400 Software Installation in the event of trouble, you won’t find yourself losing anything but time should you encounter an unrecoverable error. For the most part, installing a new release costs you little more than time.
Chapter 6 - Introduction to PTFs Updated June 2000 Here's a step-by-step guide to ordering and installing PTFs — and to knowing when you need them Much as we'd like to think the AS/400 is invincible, from time to time even the best of systems needs a little repair. IBM provides such assistance for the AS/400 in the form of PTFs. A PTF, or program temporary fix, is one or more objects (most often program code) that IBM creates to correct a problem in the IBM licensed internal code, in the OS/400 operating system, or in an IBM licensed program product. In addition to issuing PTFs to correct problems, IBM uses PTFs to add function or enhance existing function in these products. The fixes are called 'temporary' because a PTF fixes a problem or adds an enhancement only until the next release of that code or product becomes available; at that time, the fix becomes part of the base product itself, or 'permanent.' Hardware and software service providers distribute PTFs. Your hardware maintenance vendor is typically responsible for providing microcode PTFs, while your software service provider furnishes system software PTFs. Because IBM is both the hardware and the software provider for most shops, the focus here is on IBM distribution of PTFs. In this introduction to PTFs, you'll learn the necessary information to determine when PTFs are required on your system, what PTFs you need, how to order PTFs, and how to install and apply those PTFs.
When Do You Need a PTF? Perhaps the most difficult hurdle to get over in understanding PTFs is knowing when you need one. Basically, there are three ways to determine when you need one or more PTFs. The first way is simple: You should regularly order and install the latest cumulative PTF package, group PTFs, Client Access service pack, and necessary individual HIPER PTFs. A cumulative PTF package is an ever-growing collection of significant PTFs. You might wonder what criteria IBM uses to determine whether a PTF is significant. In general, a PTF is deemed significant, and therefore included in a cumulative package, when it has a large audience or is critical to operations. IBM releases cumulative packages on a regular basis, and you should stay up-to-date with them, loading each package fairly soon after it becomes available. You should also load the latest cumulative package any time you load a new release of OS/400. To order the latest cumulative PTF package, you use the special PTF identifier SF99vrm, where v = OS/400 version, r = release, and m = modification. A group PTF is a logical grouping of PTFs related to a specific function, such as database or Java. Each group
has a single PTF identifier assigned to it so that you can download all PTFs for the group by specifying only one identifier. Client Access service packs are important if you access your system using Client Access. Like a group PTF, a service pack is a logical grouping of multiple PTFs available under a single PTF identifier for easy download. HIPER, or High-Impact PERvasive, PTFs are released regularly (often daily) as necessary to correct high-risk problems. Ignore these important PTFs, and you chance catastrophic consequences, such as data loss or a system outage. A second way you may discover you need a PTF is by encountering a problem. To identify and analyze the problem, you might use the ANZPRB (Analyze Problem) command, or you might investigate error messages issued by the system. If you report a system problem to IBM based on your analysis, you may receive a PTF immediately if someone else has already reported the problem and IBM has issued a PTF to resolve it. The third way to discover you might need particular PTFs is by regularly examining the latest Preventive Service Planning (PSP) information. You can download PSP information by ordering special PTFs. (To learn more about PSP documents and for helpful guidelines for managing PTFs, see the section 'Developing a Proactive PTF Management Strategy' near the end of this article.)
How Do You Order a PTF? You can order individual PTFs, a set of PTFs (e.g., a cumulative PTF package, a group PTF), and PSP information from IBM by mail, telephone, fax, or electronic communications. Each PTF you receive has two parts: a cover letter that describes both the PTF and any prerequisites for loading the PTF, and the actual fix. You have two choices when ordering PTFs electronically. You can use Electronic Customer Support (ECS) and the CL SNDPTFORD (Send PTF Order) command, or you can order PTFs on the Internet. Electronically ordered PTFs are delivered electronically only when they're small enough that they can be transmitted within a reasonable connect time. When electronic means are not practical, IBM sends the PTFs via mail on selected media, as it does for PTFs ordered by non-electronic means.
SNDPTFORD Basics The SNDPTFORD command is a simple command to use; however, a brief introduction here may point out a couple of the command's finer points to simplify its use. Figure 1 shows the prompted SNDPTFORD command. For parameter PTFID, you enter one, or up to 20, PTF identifiers (e.g., SF98440, MF98440). The parameter actually has three elements or parts. First is the actual PTF identifier, a required entry. The second element is the Product identifier, which determines whether the PTF order is for a specific product or for all products installed on your system. The default value you see in Figure 1, *ONLYPRD, indicates that the order is for all products installed or supported on your system. Instead of this value, you can enter a specific product ID (e.g., 5769RG1, 5769PW1) to limit your order to PTFs specific to that product. The third PTFID element, Release, determines whether the PTF order is for the current release levels of products on your system or a specific release level, which may or may not be the current release level installed for your products. For example, you might order a different release-level PTF for products you support on remote AS/400s. A Release value of *ONLYRLS indicates that the order is for the release levels of the products installed or supported on your system. If you prefer, you can enter a specific release identifier (e.g., V4R4M0, V4R3M0) to limit the PTF order to that release. Two restrictions apply to the Product and Release elements of the PTFID parameter. First, if you specify a particular product, you also must specify a particular release level. Second, if you specify *ONLYPRD for the product element, you also must specify *ONLYRLS for the release element. From time to time, you may want to download only a cover letter to determine whether a particular PTF is necessary for your system. The next SNDPTFORD parameter, PTFPART (PTF parts), makes this possible. Use
value *ALL to request both PTF(s) and cover letter(s) or value *CVRLTR to request cover letter(s) only. The next two parameters, RMTCPNAME (Remote control point) and RMTNETID (Remote network identifier), identify the remote service provider and the remote service provider network. You should change parameter RMTCPNAME (default value *IBMSRV) only if you are using a service provider other than IBM or are temporarily accessing another service provider to obtain application-specific PTFs. Parameter RMTNETID must correctly identify the remote service provider network. The value *NETATR causes the system to refer to the network attributes to retrieve the local network identifier (you can view the network attributes using the DSPNETA, or Display Network Attributes, command). If you change the local network identifier in the network attributes, you may then have to override this default value when you order PTFs. Your network provider can give you the correct RMTNETID if the default does not work. SNDPTFORD's DELIVERY parameter determines how PTFs are delivered to you. A value of *LINKONLY tells ECS to deliver PTFs only via the electronic link. The value *ANY specifies that the PTFs should be delivered by any available method. Most PTFs ordered using SNDPTFORD are downloaded immediately using ECS; however, PTFs that are too large are instead shipped via mail. The next parameter, ORDER, specifies whether only the PTFs ordered are sent or also any requisite PTFs that you must apply before, or along with, applying the PTFs you're ordering. Value *REQUIRED requests the PTFs you're ordering as well as any other required PTFs that accompany the ordered PTFs. Value *PTFID specifies that only those PTFs you are ordering are to be sent. The last parameter, REORDER, specifies whether you want to reorder a PTF that is currently installed or currently ordered. Valid values are *NO and *YES. Note that REORDER(*YES) is necessary if you've previously sent for the cover letter only and now want to order the PTF itself. If you permit REORDER to default to *NO, OS/400 won't order the PTF because it thinks it has already ordered it when, in fact, you've received only the cover letter.
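A few hedged examples tie these parameters together; the PTF identifier SF12345 is hypothetical, and SF99450 simply follows the SF99vrm convention for a V4R5M0 cumulative package:

/* Order only the cover letter to decide whether you need the fix */
SNDPTFORD PTFID((SF12345)) PTFPART(*CVRLTR)

/* Later, order the PTF itself; REORDER(*YES) is needed because the cover letter was already received */
SNDPTFORD PTFID((SF12345)) REORDER(*YES)

/* Order the latest cumulative package plus any requisite PTFs, delivered by any available method */
SNDPTFORD PTFID((SF99450)) ORDER(*REQUIRED) DELIVERY(*ANY)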
Ordering PTFs on the Internet IBM provides a detailed overview of the Internet PTF download process, along with detailed instructions, at the IBM Service Web site, http://www.as400service.ibm.com. The service is free and available to all AS/400 owners. When you visit the site, expand the 'Fixes Downloads and Updates' branch and then select 'Internet PTF Downloads' to reach the AS/400 Internet PTF Downloads (iPTF) page. Then simply complete the following few steps, and you're ready to download PTFs:

1. Register for the service.
2. Configure your AS/400, and start the appropriate services.
3. Test your PC's Internet browser to ensure it supports the JavaScript programs used in the download process.
4. Log on, identify the PTFs you want to download, and begin the download.
5. After you've downloaded the PTFs, you simply continue normal PTF application procedures.
For a more detailed description of the Internet PTF process, see 'Working with the AS/400 iPTF Function'.
How Do You Install and Apply a PTF? Installing a PTF includes two basic steps: loading the PTF and applying the PTF. The process outlined here performs both the loading and the application of the PTF. Note one caution concerning the process of loading and applying PTFs: You must not interrupt any step in this process. Interrupting a step can cause problems significant enough to require reloading the current version of the licensed internal code or the operating system. Make sure, for example, that your electrical power is protected with a UPS. Also note that for systems with logical partitions, the PTF process differs in some critical ways; if you have such a system, be sure to read
'PTFs and Logical Partitioning (LPAR)' (below) for more information. First, we'll look at loading and applying PTFs for the IBM licensed internal code. Then we'll examine the process for loading and applying PTFs for licensed program products.
Installing Licensed Internal Code PTFs Step 1. Print and review any cover letters that accompany the PTFs. Look especially for any specific pre-installation instructions. You can do this by entering the DSPPTF (Display Program Temporary Fix) command and specifying the parameters COVERONLY(*YES) and either OUTPUT(*) or OUTPUT(*PRINT), depending on whether you want to view the cover letter on your workstation or print the cover letter. For example, to print the cover letter for PTF MF12345, you would enter the following DSPPTF command:
DSPPTF LICPGM(5769999) SELECT(MF12345) COVERONLY(*YES) OUTPUT(*PRINT)
Note: You can also access cover letters at the IBM Service Web site by selecting from the Tech Info & Databases branch. Step 2. Determine which storage area your machine is currently using. The system maintains two copies of all the IBM licensed internal code on your system. This lets your system maintain one permanent copy while you temporarily apply changes (PTFs) to the other area. Only when you're certain you want to keep the changes are those changes permanently applied to the control copy of the licensed internal code. The permanent copy is stored in system storage area A, and the copy considered temporary is stored in system storage area B. When the system is running, it uses the copy you selected on the control panel before the last IPL. Except for rare circumstances, such as when serious operating system problems occur, the system should always run using storage area B. If you currently see a B in the Data portion of the control-panel display, this means that the next system IPL will use storage area B for the licensed internal code. To apply PTFs to the B storage area, the system must actually IPL from the A storage area and then IPL again on the B storage area to begin using those applied PTFs. On older releases of OS/400, you had to manually IPL to the A side, apply PTFs, and then manually IPL to the B side again. The system now handles this IPL process automatically during the PTF install and apply process. To determine which storage area you're currently using, execute the command
DSPPTF 5769999

and check the IPL source field to determine which storage area is current. You will see either ##MACH#A or ##MACH#B, which tells you whether you are running on storage area A or B, respectively. If you are not running on the B storage area, execute the following PWRDWNSYS (Power Down System) command before continuing with your PTF installation:
PWRDWNSYS OPTION(*IMMED) RESTART(*YES) IPLSRC(B)

Step 3. Enter GO PTF and press Enter to reach the Program Temporary Fix (PTF) panel. Select the 'Install program temporary fix package' option.

Step 4. Supply the correct value for the Device parameter, depending on whether you received the PTF(s) on media or electronically. If you received the PTF(s) on media, enter the name of the device you're using. If you received the PTF(s) electronically, enter the value *SERVICE. Then press Enter.
Step 5. The system then performs the necessary steps to temporarily apply the PTFs and re-IPL to the B storage area. Once the IPL is complete, verify the PTF installation (see the section 'Verifying Your PTF Installation').
Installing Licensed Program Product PTFs Installing PTFs for licensed program products is almost identical to installing licensed internal code PTFs except that you don't have to determine the storage area on which you're currently running. The separate storage areas apply only to licensed internal code. The abbreviated process for licensed program products is as follows: Step 1. Review any cover letters that accompany the PTFs. Look especially for any specific pre-installation instructions. Step 2. Enter GO PTF and press Enter to reach the Program Temporary Fix (PTF) panel. Select the 'Install program temporary fix package' option. Step 3. Supply the correct value for the Device parameter, depending on whether you received the PTF(s) on media or electronically. If you received the PTF(s) on media, enter the name of the device you're using. If you received the PTF(s) electronically, enter the value *SERVICE. Then press Enter. Step 4. After the IPL is complete, verify the PTF installation (see 'Verifying Your PTF Installation').
Verifying Your PTF Installation After installing one or more PTFs, you should verify the installation process before resuming either normal system operations or use of the affected product. Use the system-supplied history log to verify PTF installations by executing the DSPLOG (Display Log) command, specifying the time and date you want to start with in the log:
DSPLOG LOG(QHST) PERIOD((start_time start_date))

Be sure to specify a starting time early enough to include your PTF installation information. On the Display Log panel, look for any messages regarding PTF installation. If you have messages that describe problems, see AS/400 Basic System Operation, Administration, and Problem Handling (SC41-5206) for more information about what to do when your PTF installation fails. When installing a cumulative PTF package, you can also use option 50, 'Display log for messages,' on the Work with Licensed Programs panel (to reach this panel, issue the command GO LICPGM). The message log will display messages that indicate whether the install was successful.
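For instance (the time and date shown are examples only; the date must follow your job's date format):

/* Show history-log entries from 6:00 a.m. on July 17, 2000, onward */
DSPLOG LOG(QHST) PERIOD((060000 071700))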
How Current Are You? One last thing that will help you stay current with your PTFs is knowing what cumulative PTF package you currently have installed. To determine your current cumulative PTF package level, execute the command
DSPPTF LICPGM(5769SS1)

The ensuing display panel shows the identifiers for PTFs on your system. The panel lists PTFs in decreasing sequence, showing cumulative package information first, before individual PTFs. Cumulative packages start with the letters TC or TA and end with five digits that represent the Julian date (in yyddd format) for the particular package. PTF identifiers that start with TC indicate that the entire cumulative package has been applied; those starting with TA indicate that HIPER PTFs and HIPER licensed internal code fixes have been applied. To determine the level of licensed internal code fixes on your system, execute the command
DSPPTF LICPGM(5769999)
Identifiers beginning with the letters TL and ending with the five-digit Julian date indicate the cumulative level. Typically, you want the levels for TC, TA, and TL packages to match. This circumstance indicates that you've applied the cumulative package to licensed program products as well as to licensed internal code.
Developing a Proactive PTF Management Strategy The importance of developing sound PTF management processes cannot be overstated. A proactive PTF management strategy lessens the impact to your organization that can result from program failures by avoiding those failures, ensuring optimal performance, and maximizing availability. Because environments vary, no single strategy applies to all scenarios. However, you should be aware of certain guidelines when evaluating your environment and establishing scheduled maintenance procedures. Your PTF maintenance strategy should include provisions for preventive service planning, preventive service, and corrective service.
Preventive Service Planning Planning your preventive measures is the first step to effective PTF management. To help you with planning, IBM publishes several Preventive Service Planning documents in the form of informational PTFs. (The easiest and fastest way to obtain these documents is from the IBM Service Web site.) Following are some minimum recommendations for PSP review. You should start with the software and hardware PSP information documents by ordering SF98vrm (Current Cumulative PTF Package) and MF98vrm (Hardware Licensed Internal Code Information), respectively. These documents contain service recommendations concerning critical PTFs or PTFs that are most likely to affect your system, as well as a list of the other PSP documents from which you can choose. You should order and review SF98vrm and MF98vrm at least monthly. Between releases of cumulative PTF packages, you may need to order individual PTFs critical to sound operations. If you review no other additional PSP documents, review the information for HIPER PTFs and Defective PTFs. These documents contain information about critical PTFs. At a minimum, review this information weekly. In years past, PSP documents contained enough detail to let you determine the nature of the problems that PTFs fixed. Unfortunately, that's no longer the case. With problem descriptions such as 'Data Integrity' and 'Usability of a Product's MAJOR Function,' you often must do a little more work to determine the nature of problems described in the PSP documents by referring to PTF cover letters. In addition to reviewing PSP documents, consider subscribing to IBM's AS/400 Alert offering. This service notifies you weekly about HIPER problems, defective PTFs, and the latest cumulative PTF package. You can receive this information by fax or mail. To learn more about this service, go to http://www.ibm.com/services.
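For a V4R5M0 system, ordering both documents in one request might look like this (the identifiers simply follow the vrm convention described above):

/* Order the software and hardware PSP documents for V4R5M0 */
SNDPTFORD PTFID((SF98450) (MF98450))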
Preventive Service Preventive measures are instrumental to your system's health. Remember the old adage 'An ounce of prevention ...'? Suffice it to say I've seen situations where PTFs would have saved tens of thousands of dollars. Avoid problems, and you avoid their associated high costs. Preventive maintenance includes regular application of cumulative and group PTF packages and Client Access service packs. Because all of these are collections of PTFs, your work is actually quite easy. There's no need to wade through thousands of PTFs to determine those you need. Instead, simply order and apply the packages. Cumulative PTF packages are your primary preventive maintenance aid. Released on a periodic basis, they should be applied soon after they become available -- usually every three to four months. This rule of thumb is especially true if you're using the latest hardware or software releases or making significant changes to your environment. In conjunction with cumulative PTF packages, you should stay current with any group PTF packages applicable to your environment, as well as with Client Access service packs if appropriate. You can find Client Access
service pack information and download service packs by following the links at http://www.as400.ibm.com/clientaccess.
Corrective Service Even the most robust and aggressive scheduled maintenance efforts can't thwart all possible problems. When you experience problems, you need to find the corrective PTFs. Ferreting out PSP information about individual problems and fixes is without a doubt the most detailed of the tasks in managing PTFs. However, if you take the time to learn your way around PSP information and PTF cover letters, you'll be able to find timely resolution to your problems. Your goal should be to minimize the corrective measures required. In doing so, your environment will be dramatically more stable operationally. With robust preventive service planning and preventive service measures, your corrective service issues will be minimal. This article is excerpted from a new edition of Wayne Madden's Starter Kit for the AS/400, to be published in the spring of 2001 by NEWS/400 Books. Gary Guthrie is a technical editor for NEWS/400. You can reach him by e-mail at [email protected] as400network.com.
PTFs and Logical Partitioning (LPAR) Although the basic steps of installing PTFs are the same for a system with logical partitions, some important differences exist. Fail to account for these differences when you apply PTFs, and you could find yourself with an inoperable system requiring lengthy recovery procedures. For systems with logical partitions, heed the following warnings: When you load PTFs to a primary partition, shut down all secondary partitions before installing the PTFs. When using the GO PTF command on the primary partition, change the automatic IPL parameter from its default value of *YES to *NO unless the secondary partitions are powered down. These warnings, however, are only the beginning with respect to the differences imposed by logical partitioning. There are also partition-sensitive PTFs that apply specifically to the lowest-level code that controls logical partitions. These PTFs have special instructions that you must follow exactly. These instructions include the following steps:
1. Permanently apply any PTFs superseded by the new PTFs.
2. Perform an IPL of all partitions from the A side.
3. Load the PTFs on all logical partitions using the LODPTF (Load PTF) command; do not use the GO PTF command. (A sketch of the load and apply commands follows this list.)
4. Apply the PTFs temporarily on all logical partitions using the APYPTF (Apply PTF) command.
5. Power down all secondary partitions.
6. Perform a power down and IPL of the primary partition from side B in normal mode.
7. Perform normal-mode IPLs of all secondary partitions from side B.
8. Apply all the PTFs permanently using the APYPTF command.
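As a hedged illustration of steps 3 and 4 only, loading and temporarily applying a single partition-sensitive Licensed Internal Code PTF might look like the following; the PTF number, product ID, and optical device name are placeholders, and the PTF's own special instructions always take precedence:

LODPTF LICPGM(5769999) DEV(OPT01) SELECT(MF12345)
APYPTF LICPGM(5769999) SELECT(MF12345) APY(*TEMP)

You would repeat these commands on each logical partition before powering down the secondary partitions.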
When you receive partition-sensitive PTFs, always refer to any accompanying special instructions before loading the PTFs onto your system.
— G.G.
Chapter 7 - Getting Your Message Across: User to User Sooner or later, you will want to use messages on the AS/400. For instance, you might need to have a program communicate to a user or workstation to request input, report a problem, or simply update the user or system operator on the status of the program (e.g., 'Processing today's invoices'). Another time, your application might need to communicate with another program. Program-to-program messages can include informational, notification, status, diagnostic, escape, and request messages, each of which aids in developing program function, problem determination, or application auditing. 'File YOURLIB/YOUROBJ not found' is an example of a diagnostic program-to-program message. You or your users can also send messages to one or more users or workstations on the spur of the moment. Sometimes called impromptu messages, user-to-user messages are not predefined in a message file to system users. They might simply convey information, or they might require a response (e.g., 'Joe, aliens have just landed and taken the programming manager hostage. What should we do???'). User-to-user messages can serve as a good introduction to AS/400 messaging.
Sending Messages 101 To send user-to-user messages, you use one of three commands: SNDMSG (Send Message), SNDBRKMSG (Send Break Message), or SNDNETMSG (Send Network Message). SNDMSG is the most commonly used (you can use it even if LMTCPB(*YES) is specified on your user profile) and the easiest to learn. The SNDMSG prompt screen is shown in Figure 7.1. To access the SNDMSG command, you can
• key SNDMSG on a command line,
• select option 5 on the System Request menu,
• select option 3 on the User Task menu, or
• select option 4 on the Operational Assistant menu. (This option may be best for end users because Operational Assistant provides the most user-friendly interface to the SNDMSG command.)
The message string you enter in the MSG parameter can be up to 512 characters long. To specify the message destination, you can enter a user profile name in the TOUSR parameter. TOUSR can have any of the following values:
• *SYSOPR -- to request that the message be sent to the system operator's message queue (QSYS/QSYSOPR).
• *REQUESTER -- to request that the message be sent to the interactive user's external message queue or to the system operator's message queue when the command is executed from within a program.
• *ALLACT -- to request that the message be sent to the message queue of every user currently signed on to the system. (*ALLACT is not valid when MSGTYPE(*INQ) is also specified.)
• User_profile_name -- to request that the message be sent to the user's message queue (which may or may not have the same name as the user profile).
For example, if you simply want to inform John, a co-worker, of a meeting, you could enter
SNDMSG MSG('John - Our meeting today will be at 4:00. Jim') +
  TOUSR(JSMITH)

Another way to specify the message destination is to enter up to 50 message queue names in the TOMSGQ parameter. The specified message queue can be any external message queue on your system, including the workstation, user profile, or system history log (QHST) message queue (for more about sending messages to QHST, see 'Sending Messages into History'). Specifying more than one message queue is valid only for informational messages. The MSGTYPE parameter lets you specify whether the message you are sending is an *INFO (informational, the default) or *INQ (inquiry) message. Like the informational message, an inquiry message appears on the destination message queue as text. However, an inquiry message supplies a response line and waits for a reply. If you want to schedule a meeting with John and be sure he receives your message, you could enter
SNDMSG MSG('John - Will 4:00 be a good time for our meeting today? Jim') +
  TOUSR(JSMITH) MSGTYPE(*INQ)

The RPYMSGQ parameter on the SNDMSG command specifies which message queue should receive the response to the inquiry message. Because the default for RPYMSGQ is *WRKSTN, John's reply will return to your (the sender's) workstation message queue. As you can see, the SNDMSG command provides a simple way to send a message or inquiry to someone else on the local system. However, it has one quirk. Although SNDMSG can send a message to a message queue, it is the message queue attributes that define how that message will be received. If the message queue delivery mode is *BREAK and no break-handling program is specified, the message is presented as soon as the message queue receives it. A delivery mode of *NOTIFY causes a workstation alarm to sound and illuminates the 'message wait' block on the screen. A delivery mode of *HOLD does not notify the user or workstation about a message received.
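For that reason, it sometimes makes sense to adjust the receiving queue rather than the sending command. As a quick sketch, assuming a hypothetical user message queue named JSMITH, the recipient (or someone with authority to the queue) could run

CHGMSGQ MSGQ(JSMITH) DLVRY(*NOTIFY)

so that arriving messages sound the alarm and light the message-wait indicator, or DLVRY(*BREAK) to have each message presented as soon as it arrives.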
I Break for Messages The SNDBRKMSG command offers a solution for messages that must get through regardless of the message queue's delivery mode, its break-handling program, or the message's severity. Although SNDBRKMSG provides the same function as the SNDMSG command, the message queue receiving the message handles it in break mode, regardless of the message queue's delivery mode. Figure 7.2 shows the SNDBRKMSG prompt screen.
There are two other differences between the SNDBRKMSG command and the SNDMSG command. First, the SNDBRKMSG command has only the TOMSGQ parameter on which to specify a destination (i.e., only workstation message queues can be named as destinations). Second, the SNDBRKMSG command lets you specify the value *ALLWS (all workstations) in the TOMSGQ parameter to send a message to all workstation message queues.
The following is a sample message intended for all workstations on the system:
SNDBRKMSG MSG('Please sign off the system immediately. The system will be unavailable for the next 30 minutes.') +
  TOMSGQ(*ALLWS)

This message will go immediately to all workstation message queues and be displayed on all active workstations. If a workstation is not active, the message simply will be added to the queue and displayed when the workstation becomes active and the message queue is allocated.
Casting Network Messages The third command you can use to send a message to another user is SNDNETMSG (Figure 7.3). As with SNDMSG and SNDBRKMSG, you can type an impromptu message up to 512 characters long in the MSG parameter. The distinguishing feature of the SNDNETMSG command is the destination parameter, TOUSRID. The value you specify must be either a valid network user ID or a valid distribution list name (i.e., a list of network user IDs). If necessary, you can add network user IDs to the system network directory using the WRKDIRE (Work with Directory Entries) command. Each network user ID is associated with a user profile on a local or remote system in the network.
There are two situations for which the SNDNETMSG command is more appropriate than SNDMSG or SNDBRKMSG. First, you might need this command if your system is in a network because SNDMSG and SNDBRKMSG can't send messages to a remote system. Second, you can use SNDNETMSG to send messages to groups of users on a network -- including users on your local system -- using a distribution list. You can create a distribution list using the CRTDSTL (Create Distribution List) command and add the appropriate network user IDs to the list using the ADDDSTLE (Add Distribution List Entry) command. When you specify a distribution list as the message destination, the message is distributed to the message queue of each network user on the list. For example, if distribution list PGMRS consists of network user IDs for Bob, Sue, Jim, and Linda, you could send the same message to each of them (and give them reason to remember you on Bosses' Day) by executing the following command:
SNDNETMSG MSG('Thanks for your hard work on the order entry project. Go home early today and enjoy a little time off.') +
  TOUSRID(PGMRS)

The only requirements for this method are that user profiles have valid network user IDs on the network directory and that System Network Architecture Distribution Services (SNADS) be active. (You can start SNADS by starting the QSNADS subsystem.)
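A hedged sketch of that setup, using the hypothetical PGMRS list and a hypothetical system name SYSA (the directory entries for each user must already exist, and parameter details can vary by release):

CRTDSTL LSTID(PGMRS SYSA)
ADDDSTLE LSTID(PGMRS SYSA) USRID((BOB SYSA) (SUE SYSA) (JIM SYSA) (LINDA SYSA))
STRSBS SBSD(QSNADS)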
As you can see, you have more than one option when sending user-to-user messages on the AS/400. Now you're ready to move on to program-to-user and program-to-program messages, but these are topics for another day. This introduction to messages should get you started and whet your appetite for learning more.
Chapter 8 - Secrets of a Message Shortstop What makes the OS/400 operating system tick? You could argue that messages are really at the heart of the AS/400. The system uses messages to communicate between processes. It sends messages noting the completion of jobs or updating the status of ongoing jobs. Messages tell when a job needs some attention or intervention. The computer dispatches messages to a problem log so the operator can analyze any problems the system may be experiencing. You send requests in the form of messages to the command processor when you execute AS/400 commands. OfficeVision uses a message to sound an alarm when a calendar event is imminent. You can design screens and reports that use messages instead of constants, thus enabling multilingual support. And, of course, users can send impromptu messages to and receive them from other workstation users on the system. With hundreds of messages flying around your computer at any given moment, it's important to have some means of catching those that relate to you -- and that might require some action. IBM provides several facilities to organize and handle messages, and you can create programs to further define how to process messages. In this chapter, I'll explore three methods of message processing: the system reply list, break handling programs, and default replies. The system reply list lets you specify that the operating system is to respond automatically to certain predefined inquiry messages without requiring that the user reply to them. A break handling program lets you receive messages and process them according to their content. The reply list and the break handling program have similar functions and can, under some conditions, accomplish the same result. The reply list tends to be easier to implement, while a break handling program can be much more flexible in the way it handles different kinds of messages. The third message handling technique, the default reply, lets you predefine an action that the computer will take when it encounters a specific message; the reply becomes a built-in part of the message description.
Return Reply Requested The general concept of the system reply list is quite simple. The reply list primarily consists of message identifiers and reply values for each message. There is only one reply list on the system (hence the official name: system reply list). When a job using the reply list encounters a predefined inquiry message, OS/400 searches the reply list for an entry that matches the message ID (and the comparison data, which we'll cover later). When a matching entry exists, the system sends the listed reply without intervention from the user or the system operator. When the system finds no match, it sends the message to the user (for interactive jobs) or to the system operator (for batch jobs). A job does not automatically use the system reply list -- you must specify that the reply list will handle inquiry messages. To do this, indicate INQMSGRPY(*SYSRPYL) within any of the following CL commands:
• BCHJOB (Batch Job)
• SBMJOB (Submit Job)
• CHGJOB (Change Job)
• CRTJOBD (Create Job Description)
• CHGJOBD (Change Job Description)
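For instance, a nightly batch job could be submitted so that any predefined inquiry messages it issues are answered from the reply list; the library, program, and job names here are hypothetical:

SBMJOB CMD(CALL PGM(MYLIB/NIGHTLY)) JOB(NIGHTLY) INQMSGRPY(*SYSRPYL)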
IBM ships the AS/400 with the system reply list already defined as illustrated in Figure 8.1. This predefined reply list issues a 'D' (job dump) reply for inquiry messages that indicate a program failure. Note that the reply list uses the same convention as the MONMSG (Monitor Message) CL command for indicating generic ranges of messages; for example, 'RPG0000' matches all messages that begin with the letters 'RPG,' from RPG0001 through RPG9999. You can modify the supplied reply list by adding your own entries using the following CL commands:
• WRKRPYLE (Work with Reply List Entries)
• ADDRPYLE (Add Reply List Entry)
• CHGRPYLE (Change Reply List Entry)
• RMVRPYLE (Remove Reply List Entry)
Figure 8.2 lists some possibilities to consider for your own reply list. Each entry consists of a unique sequence number (SEQNBR), a message identifier (MSGID), optional comparison data (CMPDTA) and starting position (START), a reply value (RPY), and a dump attribute (DUMP). Let's look at each component individually.
A Table of Matches The system searches the reply list in ascending sequence number order. Therefore, if you have two list entries that would satisfy a match condition, the system uses the one with the lowest sequence number. The message identifier can indicate a specific message (e.g., RPG1241) or a range of messages (e.g., RPG1200 for any RPG messages from RPG1201 through RPG1299), or you can use *ANY as the message identifier for an entry that will match any inquiry message, regardless of its identifier. The reply list message identifiers are independent of the message files. If you have two message files with a message ID USR9876, for example (usually not a good idea), the system reply list treats both messages the same. Use the *ANY message identifier with great care. It is a catch-all entry that ensures the system reply list handles all messages, regardless of their message identifier. If you use it, it should be at the end of your reply list, with sequence number 9999. You should also be confident that the reply in the entry will be appropriate for any error condition that might occur. If the system reply list gets control of any message other than the listed ones, it performs a dump and then replies to the message with the default reply from the message description. If you don't use *ANY, the system sends unmonitored messages to the operator. The comparison data is an optional component of the reply list. You use comparison values when you want to send different replies for the same message, according to the contents of the message data. The format of the message data is defined when you or IBM creates the message. To look at the format, use the DSPMSGD (Display Message Descriptions) command. When a reply list entry contains comparison values, the system compares the values with the message data from the inquiry message. If you indicate a starting position in the system reply list, the comparison begins at that position in the message data. If the message data comparison value matches the list entry comparison value, the system uses the list entry to reply to the message; otherwise, it continues to search the list. For example, Figure 8.2 shows three list entries for the CPA4002 (Align forms) message. When the system encounters this message, it checks the message data for the name of the printer device. If the device name matches either the 'PRT3816' or 'PRTHPLASER' comparison data, the system automatically replies with the 'I' (Ignore) response; otherwise, it requires the user or the system operator to respond to the message. You use the reply value portion of the list entry to indicate how the system should handle the message in this entry. Your three choices are:
• Indicate a specific reply (up to 32 characters) that the system automatically sends back to the job in response to the message (e.g., I, R, D, and G in Figure 8.2).
• Use *DFT (Default) to have the system send the message default reply from the message description.
• Use *RQD (Required) to require the user or system operator to respond to the message, just as if the job were not using the reply list.
The dump attribute in the system reply list tells the system whether or not to perform a job dump when it encounters the message matching this entry. Specify DUMP(*YES) or DUMP(*NO) for the list entry. You may request a job dump no matter what you specified for a reply value. The system dumps the job before it replies to the message and returns control to the program that originated the message. The dump then serves as a snapshot of the conditions that caused a particular inquiry message to appear. Although the reply list is a system-wide entity, you can use it with a narrower focus. Figure 8.3 shows portions of a CL program that temporarily changes the system reply list and then uses the changed list for message handling, checking for certain inquiry messages, and issuing replies appropriate to the program. At the end, the program returns the system reply list to its original condition. You should probably limit this approach to programs run on a dedicated or at least a fairly quiet system. Since the program temporarily changes the system reply list, any other jobs that use the reply list may use the changed reply list while this program is active. However, this technique does work well for such tasks as software installation and nighttime unattended operations.
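To make the mechanics concrete, here is a hedged sketch of adding forms-alignment entries like the CPA4002 printer entries described earlier and then pointing the current job at the reply list; the sequence numbers and printer names are illustrative only:

ADDRPYLE SEQNBR(110) MSGID(CPA4002) CMPDTA('PRT3816') RPY(I)
ADDRPYLE SEQNBR(120) MSGID(CPA4002) CMPDTA('PRTHPLASER') RPY(I)
CHGJOB INQMSGRPY(*SYSRPYL)

Remember that no job uses these entries until it specifies INQMSGRPY(*SYSRPYL), done here with CHGJOB for the current job.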
Give Me a Break Message Another means of processing messages is to use a break handling program, which processes messages arriving at a message queue in *BREAK mode. IBM supplies a default break handling program; it's the same command processing program used by the DSPMSG (Display Messages) command. But you can write your own break handling program if you want break messages to do more than just interrupt your normal work with the Display Messages screen. Both the system reply list and a break handling program customize your shop's method of handling messages that arrive on a message queue, but there are several differences. The system reply list handles only inquiry messages, while a break handler can process any type of message, such as a completion message or an informational message. The system reply list has a specific purpose: to send a reply back to a job in response to a specific message. The break handler's function, on the other hand, is limited only by your programming ability. It can send customized replies for inquiry messages, it can convert messages to status messages, it can process command request messages, it can initiate a conversational mode of messaging between workstations, it can redirect messages to another message queue -- it can perform any number of functions. Unlike the system reply list, the break handler interrupts the job in which the message occurs and processes the message; it then returns control to the job. The interruption can, however, be transparent to the user. Like the reply list, a break handler does not take control of break messages unless you first tell it to do so. To turn control over to a break handling program, use the following CL command:
CHGMSGQ MSGQ(library/msgq_name) DLVRY(*BREAK) +
  PGM(program_name) SEV(severity_code)

OS/400 calls the break handler if a message of high enough severity reaches the message queue. If you use a break handler in a job that is already using the system reply list, the reply list will get control of the messages first, and it will pass to the break handler only those messages it cannot process.
Take a Break Figure 8.4 shows a sample break handling program. To make the break handler work, OS/400 passes it three arguments:
• the name of the message queue
• the library containing the message queue
• the reference key of the received message
The only requirement of the break handler is that it must receive the referenced message with the RCVMSG (Receive Message) command. You can then do nearly anything you want with the message before you end the break handler and let the original program resume. The example in Figure 8.4 displays any notify or inquiry messages, allowing you to send a reply, if appropriate. It also checks for any calendar alarms sent by OfficeVision and displays them. In addition, it monitors for and displays messages that could indicate potentially severe conditions, such as running out of DASD space. For any other messages, it simply resends the message as a status message, which appears quietly at the bottom of the user's display without interrupting work (unless display of status messages is suppressed in the user profile, the job, or the system value QSTSMSG). Figure 8.5 shows a portion of an initial program that puts a break handler into action. The initial program first displays all messages that exist in a user's message queue, and then it clears all but unanswered messages from the queue and activates the break handling program. Note that the initial program also checks whether the user is the system operator; if so, it activates the break handler for the system operator message queue.
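To show the minimum plumbing involved, here is a much simpler break-handler sketch than the program in Figure 8.4; it assumes no reply is needed and simply resends the message as a status message (all names are illustrative):

PGM        PARM(&MSGQ &MSGQLIB &MSGKEY)
  /* Minimal break-handler sketch; variable names are illustrative */
  DCL      VAR(&MSGQ)    TYPE(*CHAR) LEN(10)
  DCL      VAR(&MSGQLIB) TYPE(*CHAR) LEN(10)
  DCL      VAR(&MSGKEY)  TYPE(*CHAR) LEN(4)
  DCL      VAR(&MSG)     TYPE(*CHAR) LEN(512)
  /* Receive the message that caused the break (the one required step) */
  RCVMSG   MSGQ(&MSGQLIB/&MSGQ) MSGKEY(&MSGKEY) RMV(*NO) MSG(&MSG)
  /* Resend it quietly as a status message on the job's external queue */
  SNDPGMMSG MSG(&MSG) TOPGMQ(*EXT) MSGTYPE(*STATUS)
ENDPGM

Once compiled, you would name this program on the PGM parameter of the CHGMSGQ command shown earlier.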
It's Your Own Default One of the easiest methods of processing message replies automatically is also one of the most often overlooked. The message descriptions for inquiry or notify messages can contain default replies, which you can tell the system to use when the message occurs. The default reply must be among the valid replies for the message. You specify the message's default reply using either the ADDMSGD (Add Message Description) or CHGMSGD (Change Message Description) command. You can display a message's default reply using the DSPMSGD command. You can also use WRKMSGD (Work with Message Descriptions) to manage message descriptions. The default reply is used under the following circumstances:
• when you use the system reply list and the list entry's reply for the message is *DFT
• when you have changed the delivery mode of the receiving message queue to *DFT, using the CHGMSGQ (Change Message Queue) command
No messages are put in a message queue when the queue is in *DFT delivery mode; informational messages are ignored. Messages will be logged, however, in the system history log (QHST). You can easily set up an unattended environment for your computer to use every night by having your system operator execute the following command daily when signing off:
CHGMSGQ MSGQ(QSYSOPR) DLVRY(*DFT)

Your system will then use default replies instead of sending messages to an absent system operator. This technique may prevent your overnight batch processing from hanging up because of an unexpected error condition. You should be careful, however, to ensure the suitability of the default replies for any messages that might be sent to the queue. You might also consider including the CHGMSGQ command within key CL programs, such as unattended backup procedures or program installation procedures, for which default replies may be appropriate. Another good use for default replies is to have one message queue handle all printer messages. By defining default replies to these messages and placing that queue in *DFT delivery mode, you can have the system automatically respond to forms loading and alignment messages.
Chapter 9 - Print Files and Job Logs There is certainly nothing mysterious about printing on your AS/400; however, you must understand a few basic concepts about print files to make printing operations run more smoothly. In this chapter, I cover two items concerning print files: modifying attributes of print files and handling a specific type of print file -- the system-generated job log. This basic understanding of how to define print files and job logs and of the functions they provide will increase your power to customize your system by controlling output. These tips are especially helpful if you have migrated from the S/36 or equipment other than the S/38.
How Do You Make It Print Like This? The AS/400 does support direct printing (i.e., output directly to the printer, which ties up a workstation or job while the printer device completes the task); however, almost 100 percent of the time you will use OS/400 print files to format and direct output. IBM ships the system with many print files, such as QSYSPRT, which the system uses when you compile a CL program; QSUPPRT, which the system uses when you print a listing from the source file; and QQRYPRT, which the system uses when you run a query. These print files have predefined attributes that control such features as lines per inch (LPI), characters per inch (CPI), form size, overflow line number, and output queue. In addition to the print files IBM provides, you can create two types of print files within your applications. The first type uses the CRTPRTF (Create Print File) command to define a print file that has no external definition (i.e., the print file has a set of defined attributes from the CRTPRTF command but only one record format). Any program using this type of print file must contain output specifications that describe the fields, positions, and edit codes used for printing. The second type of print file is externally described: When you use the CRTPRTF command, you specify a source member that describes the various record formats your program will use for printing. (For specifications you can
make in DDS, refer to IBM's Data Management Guide (SC41-9658).) Whether you create an externally described print file or a print file that must be used with programs that internally describe the printing, you define certain print file attributes (e.g., those controlling LPI, CPI, and form size) as part of the print file object definition. Let's examine a problem that often occurs when an AS/400 installation is complete. All the IBM-supplied print files are predefined for use with paper that is 11 inches long. If you have been using paper that is shorter (e.g., the 14 1/2-by-8 1/2-inch size) and generate output (using DSPLIB OUTPUT(*PRINT) or a QUERY/400 report) with a system-supplied print file, the system will print the report through the page perforations. On your previous system, the overflow worked just right, but you weren't around when someone set the system up. So how do you instruct the AS/400 to print correctly on the short, wide paper? First, you need to find out what the default values for printing are. To do so, you type in the DSPFD (Display File Description) command for the print file QSYSPRT:
DSPFD QSYSPRT

When you execute that command, you see the display represented in Figure 9.1. Notice the page size parameter, PAGESIZE(66 132); the LPI parameter, LPI(6); and the overflow parameter, OVRFLW(60). These default parameters combine to determine the number of inches (i.e., 11) the system considers to be a single page on the system-supplied objects. But in this example, your paper is only 8 1/2 inches long, so you need to modify the form size and overflow of each print file (including all system-supplied print files and those you create yourself) that generates reports on this short-stock paper. You can accomplish this task by identifying each print file that needs to be modified and executing the following command for each:
CHGPRTF FILE(library_name/file_name) PAGESIZE(51 132) OVRFLW(45)
If you need to change all print files on the system, you can execute the same command, but place the value *ALL in the parameter FILE:
CHGPRTF FILE(*ALL/*ALL) PAGESIZE(51 132) OVRFLW(45)

Another approach is to change the LPI parameter to match a valid number of lines per inch for the configured printer and then calculate the new form size and overflow parameters based on the new LPI you specified. The page size can vary from one form type to the next, but you can easily compensate for differences by modifying the appropriate print files. Remember that changing the LPI, the page length, and the overflow line number does not require programming changes for programs that let the system check for overflow status (i.e., you do not need to have program logic count lines to control page breaks). Such programs use the new attributes of the print file at the next execution.

Once you have set up the page size you want and determined how a given job will print, you can start thinking about controlling when that job will print. The two parameters you can use to ensure that spooled data is printed at the time you designate are SCHEDULE and HOLD. The SCHEDULE parameter specifies when to make the spooled output file available to a writer for printing. If the system finds the *IMMED value for SCHEDULE, the file is available for a writer to begin printing the data as soon as the records arrive in the spooled file. This approach can be advantageous for short print items, such as invoices, receipts, or other output that is printed quickly. However, when you generate long reports, allocating the writer as soon as data is available can tie up a single writer for a long time. Entering a *FILEEND value for SCHEDULE specifies that the spooled output file is available to the writer as soon as the print file is closed in the program. Selecting this value can be useful for long reports you want available for printing only after the entire report is generated. The *JOBEND value for SCHEDULE makes the spooled output file available only after the entire job (not just a program) is completed. One benefit of selecting this value is that you can ensure that all reports one job generates will be available at the same time and therefore will be printed in succession (unless the operator intervenes).
The HOLD parameter works the way the name sounds. Selecting a value of *YES specifies that when the system generates spooled output for a print file, the output file stays on the output queue with a *HLD status until an operator releases the file to a writer. Selecting the *NO value for HOLD specifies that the system should not hold the spooled print file on the output queue and should make the output available to a writer at the time the SCHEDULE parameter indicates. For example, when a program generates a spooled file with the attributes of SCHEDULE(*FILEEND) and HOLD(*NO), the spooled file is available to the writer as soon as the file is closed. As with the PAGESIZE and OVRFLW parameters, you can modify the SCHEDULE and HOLD parameters for print files by using the CHGPRTF command. Remember that you can also override these parameters at execution time using the CL OVRPRTF (Override with Print File) command. You can also change some print file attributes at print time using the CHGSPLFA (Change Spool File Attributes) command or option 2 on the Work with Output Queue display. You should examine the various attributes associated with the CRTPRTF (Create Print File), CHGPRTF, and OVRPRTF commands to see whether or not you need to make other changes to customize your printed output needs. For further reading on these parameters, see the discussion of the CRTPRTF command in IBM's Programming: Control Language Reference (SC41-0030).
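A short sketch of both techniques, assuming a hypothetical print file MYLIB/ORDRPT: CHGPRTF changes the file's stored attributes for every future job, while OVRPRTF overrides them only for the job in which it runs.

CHGPRTF FILE(MYLIB/ORDRPT) SCHEDULE(*FILEEND) HOLD(*YES)
OVRPRTF FILE(ORDRPT) SCHEDULE(*JOBEND) HOLD(*NO)

Continuing the short-paper example, the same CHGPRTF approach could also set LPI(8) with PAGESIZE(68 132) and an overflow a few lines above the bottom, such as OVRFLW(62), because 8 1/2 inches at 8 lines per inch yields 68 lines per page.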
Where Have All the Job Logs Gone? After you have your print files under control, the next step in customizing your system can prick a nasty thorn in the flesh of AS/400 newcomers: learning how to manage all those job logs the system generates as jobs are completed. A job log is a record of the job execution and contains informational, completion, diagnostic, and other messages. The reason these potentially useful job logs can be a pain is that the AS/400 generates a job log for each completed job on the system. But fortunately, you can manage job logs. The three methods for job-log management are controlling where the printed output for the job log is directed, deciding whether to generate a printed job log for jobs that are completed normally or only for jobs that are completed abnormally, and determining how much information to include in the job logs. Controlling where the printed output is directed. When your system is shipped, it is set up so that every job (interactive sessions as well as batch) generates a job log that records the job's activities and that can vary in content according to the particular job description. You can use the DSPJOB (Display Job) or the DSPJOBLOG (Display Job Log) command to view a job log as the system creates it during the job's execution. When a job is completed, the system spools the job log to the system printer unless you change print file QUSRSYS/QPJOBLOG (the print file the system uses to generate job logs) to redirect spool files to another output queue where they can stay for review or printing. You can elect to redirect this job log print file in one of two ways. The most popular method is to utilize the OS/400-supplied Operational Assistant, which will not only redirect your job logs to a single output queue, but also perform automatic cleanup of old job logs based on a number of retention days, which you supply. You can access the system cleanup option panel from the Operational Assistant main menu (type 'GO ASSIST'), from the SETUP menu (type 'GO SETUP'), or directly by typing 'GO CLEANUP,' which will present you with the Cleanup Menu panel that you see in Figure 9.2. Before starting cleanup, you need to define the appropriate cleanup options by selecting option 1, 'Change cleanup options.' Figure 9.3 presents the Change Cleanup Options panel, where you can enter the retention parameters for several automated cleanup functions as well as determine at what time you want the system to perform cleanup each day. You can find a complete discussion of this panel and automated cleanup in Chapter 12, 'AS/400 Disk Storage Cleanup.' For now, my only point is this: The first time you activate the automated cleanup function by typing a 'Y' in the 'Allow automatic cleanup' option on this panel (see Figure 9.3), OS/400 changes the job log print file so that all job logs are directed to the system-supplied output queue QEZJOBLOG. Even if you do not start the actual cleanup process, or if you elect to stop the cleanup function at a later date, the job logs will continue to accumulate in output queue QEZJOBLOG. The second method for redirecting job logs is to manually create an output queue called QJOBLOGS, JOBLOGS, or QPJOBLOG using the CRTOUTQ (Create Output Queue) command. After creating an output queue to hold the job logs, you can use the CHGPRTF command (with OUTQ identifying the output queue you created for this purpose) by typing
CHGPRTF FILE(QPJOBLOG) OUTQ(QUSRSYS/output_queue_name)

Now the job logs will be redirected to the specified output queue. You might also want to specify HOLD(*YES) to place the spool files on hold in your new output queue. However, if no printer is assigned to that queue, those spool files will not be printed. The job logs can now remain in that queue until you print or delete them. When you think about managing job logs, you should remember that if you let job logs accumulate, they can reduce the system's performance efficiency because of the overhead for each job on the system. If a job log exists, the system is maintaining information concerning that job. Therefore it is important either to utilize the automated cleanup options available in OS/400's Operational Assistant or to manually use the CLROUTQ (Clear Output Queue) command regularly to clear all the job logs from an output queue.

Deciding whether or not to generate a printed job log for jobs that are completed normally. Another concern related to the overhead involved with job logs is how to control their content (size) and reduce the number of them the system generates. The job description you use for job initiation is the object that controls the creation and contents of the job log. This job description has a parameter with the keyword LOG, which has three elements -- the message level and the message severity, both of which control the number of messages the system writes to a job log; and the message text level, which controls the level (i.e., amount) of message text written to the job log when the first two values create an error message.

Before discussing all three parameters, I should define the term 'message severity.' Every message generated on the AS/400 has an associated 'severity,' which you can think of as its priority. Messages that are absolutely essential to the system's operation (e.g., inquiry messages that must be answered) have a severity of 99. Messages that are informational (e.g., messages that tell you a function is in progress) have a severity of 00. (For a detailed description of severity codes, you can refer to IBM's Programming: Control Language Reference, Volume 1, Appendix A, 'Expanded Parameter Descriptions.') The first parameter, message level, specifies one of the following five logging levels (note that a high-level message is one sent to the program message queue of the program that received the request or commands being logged from a CL program):
0 -- No data is logged.

1 -- The only information logged is any message sent to the job's external message queue with a severity greater than or equal to the message severity specified in this LOG parameter.

2 -- In addition to the information logged at level 1 above, the following is logged:
• Any requests or commands logged from a CL program that cause the system to issue a message with a severity level that exceeds or is equal to that specified in the LOG parameter.
• All messages associated with a request or commands being logged from a CL program and that result in a high-level message with a severity greater than or equal to the message severity specified in the LOG parameter.

3 -- The same as level 2, with the additional logging of any requests or commands being logged from a CL program:
• All requests or commands being logged from a CL program.
• All messages associated with a request or commands being logged from a CL program and that result in a high-level message with a severity greater than or equal to the message severity specified.

4 -- The following information is logged:
• All requests or commands logged from a CL program and all messages with a severity greater than or equal to the severity specified, including trace messages.
The second element of the LOG parameter, message severity, determines which messages will be logged and which will be ignored. Messages with a severity greater than or equal to the one specified in this parameter will be logged in the job log according to the logging level specified in the previous parameter.
With the third element of the LOG parameter, the message text level, a value of *MSG specifies that the system write only first-level message text to the job log. A value of *SECLVL specifies that the system write both the message and help text of the error message to the job log. By setting the message text level value to *NOLIST, you ensure that any job initiated using that value in the job description does not generate a job log if the job is completed normally. Jobs that are completed abnormally will generate a job log with both message and help text present. Eliminating job logs for jobs that are completed normally can greatly reduce the number of job logs written into the output queue. Determining how much information to include in the job logs. You can cause any interactive or batch job initiated with QDFTJOBD to withhold spooling of a job log if the job terminates normally. You simply create your user profiles with the default -- i.e., QDFTJOBD (Default Job Description) -- for the parameter JOBD (Job Description) and enter the command
CHGJOBD JOBD(QDFTJOBD) LOG(*SAME *SAME *NOLIST)

Is this approach wise? Interactive jobs almost always end normally. Therefore, changing the job description for such interactive sessions is effective. Do you need the information in those job logs? If you understand how your workstation sessions run (e.g., which menus are used and which programs called), you probably do not need the information from sessions that end normally. You might need the information when errors occur, but you can generally re-create the errors at a workstation. You can rest assured with this approach that jobs ending abnormally will still generate a job log and provide helpful diagnostic information. Note that for interactive jobs, the LOG parameter on the SIGNOFF command overrides the value you specify on the job description. For instance, if on the job description you enter the value of *NOLIST in the LOG parameter and use the SIGNOFF LOG(*LIST) command to sign off from the interactive job, the system will generate a job log.

For batch jobs, the question of eliminating job logs is more complex than it is for interactive jobs. It is often helpful to have job logs from batch jobs that end normally as well as those that end abnormally, so someone can re-create events chronologically. When many types of batch jobs (e.g., nightly routines) run unattended, job log information can be useful. Remember, the job description controls job log generation, so you can use particular job descriptions when you want the system to generate a job log regardless of how the job ends. The job description includes the parameter LOGCLPGM (Log CL Program Commands). This parameter affects the job log in that a value of *YES instructs the system to write to the job log any loggable CL commands (which can happen only if you specify LOG(*JOB) or LOG(*YES) as an attribute of the CL program being executed). A value of *NO specifies that commands in a CL program are not logged to the job log.

A basic understanding of AS/400 print files will help you effectively and efficiently operate your system. Handling job logs is a simple, but essential, part of managing system resources. When you neglect to control the number of job logs on the system, the system is forced to maintain information for an excessive number of jobs, which can negatively affect system performance. And job logs are a valuable information source when a job fails to perform. Customize your system to handle job logs and other print files to optimize your operations.
Chapter 10 - Understanding Output Queues Printing. It's one of the most common things any computer does, and it's relatively easy with the AS/400. What complicates this basic task is that the AS/400 provides many functions you can tailor for your printing needs. For example, you can use multiple printers to handle various types of forms. You can use printers that exist anywhere in your configuration -- whether the printers are attached to local or remote machines or even to PCs on a LAN. You can let users view, hold, release, or cancel their own output; or you can design your system so their output simply prints on a printer in their area without any operator intervention except to change and align the forms. The cornerstone for all this capability is the AS/400 output queue. Understanding how to create and use output queues can help you master AS/400 print operations.
What Is an Output Queue? An output queue is an object containing a list of spooled files that you can display on a workstation or write to a printer device. (You can also use output queues to write spooled output to a diskette device, but this chapter does not cover that function.) The AS/400 object type identifier for the output queue is *OUTQ. Figure 10.1a shows the AS/400 display you get on a workstation when you enter the WRKOUTQ (Work with Output Queue) command for the output queue QPRINT
WRKOUTQ QPRINT

As the figure shows, the Work with Output Queue display lists each spooled file that exists on the queue you specify. For each spooled file, the display also shows the spooled file name, the user of the job that created the spooled file, the user data identifier, the status of that spooled file on the queue, the number of pages in the spooled file, the number of copies requested, the form type, and that spooled file's output priority (which is defined in the job that generates the spooled file). You can use function key F11=View 2 to view additional information (e.g., job name and number) about each spooled file entry. The status of a spooled file can be any of the following:
OPN The spooled file is being written and cannot be printed at this time (i.e., the SCHEDULE parameter of the print file is *FILEEND or *JOBEND).
CLO The file is spooled but unavailable for printing (i.e., the SCHEDULE parameter's value for the print file is *JOBEND).
HLD The file is spooled and on hold in the output queue. You can use option 6 to release the spooled file for printing.
RDY The file is spooled and waiting to be printed when the writer is available. You can use option 3 to hold the spooled file.
SAV The spooled file has been printed and is now saved in the output queue. (The spooled file attribute SAVE has a value of *YES. In contrast, a spooled file with SAVE(*NO) will be removed from the queue after printing.)
WTR The spooled file is being printed. You can still use option 3 to hold the spooled file and stop the printing, and the spooled file will appear on the display as HLD.

I have mentioned two options for spooled files -- option 3, which holds spooled files, and option 6, which releases them. The panel in Figure 10.1a shows all available options. Figure 10.1b explains each option.
How To Create Output Queues Now that we've seen that output queues contain spooled files and let you perform actions on those spooled files, we can focus on creating output queues. The most common way output queues are created is through a printer device description. Yes, you read correctly! When you create a printer device description using the CRTDEVPRT (Create Device Description (Printer)) command or through autoconfiguration, the system automatically creates an output queue in library QUSRSYS by the same name as that assigned to that printer. This output queue is the default for that printer. In fact, the system places 'Default output queue for PRINTER_NAME' in the output queue's TEXT attribute.
An alternative method is to use the CRTOUTQ (Create Output Queue) command. The parameter values for this command determine attributes for the output queue. When you use the CRTOUTQ command, after entering the name of the output queue and of the library in which you want that queue to exist, you are presented with two categories of parameters -- the procedural ones (i.e., SEQ, JOBSEP, and TEXT) and those with security implications (i.e., DSPDTA, OPRCTL, AUTCHK, and AUT). For a look at some of the parameters you can use, see the CRTOUTQ panel in Figure 10.2.
The first of the procedural parameters, SEQ, controls the order of the spooled files on the output queue. You can choose values of either *FIFO (first in, first out) or *JOBNBR. If you select *FIFO, the system places new spooled files on the queue following all other entries already on the queue that have the same output priority as the new spooled files (the job description you use during job execution determines the output priority). Using *FIFO can be tricky because the following changes to an output queue entry cause the system to reshuffle the queue's contents and place the spooled file behind all others of equal priority:
• A change of output priority when you use the CHGJOB (Change Job) or CHGSPLFA (Change Spooled File Attributes) command;
• A change in status from HLD, CLO, or OPN to RDY;
• A change in status from RDY back to HLD, CLO, or OPN.
The other possible value for the SEQ parameter -- *JOBNBR -- specifies that the system sort queue entries according to their priorities, using the date and time the job that created the spooled file entered the system. I recommend using *JOBNBR instead of *FIFO, because with *JOBNBR you don't have to worry about changes to an output queue entry affecting the order of the queue's contents. The next procedural parameter is JOBSEP (job separator). You can specify a value from 0 through 9 to indicate the number of job separators (i.e., pages) the system should place at the beginning of each job's output. The job separator contains the job name, the job user's name, the job number, and the date and time the job is run. This information can help in identifying jobs. If you'd rather not use a lot of paper, you can lose the job separator by selecting a value of 0. Or you can enter *MSG for this value, and each time the end of a print job is reached, the system will send a message to the message queue for the writer. Don't confuse the JOBSEP parameter with the FILESEP (file separator) parameter, which is an attribute of print files. When creating or changing print files, you can specify a value for the FILESEP parameter to control the number of file separators at the beginning of each spooled file. The information on the file separators is similar to that printed on the job separator but includes information about the particular spooled file. When do you need the file separator, the job separator, or both? You need file separators to help operators separate the various printed reports within a single job. You need job separators to help separate the printed output of various jobs and to quickly identify the end of one report and the beginning of the next. However, if you program a header page for all your reports, job separators are probably wasteful. Another concern is that for output queues that handle only a specific type of form, such as invoices, a separator wastes an expensive form. In reality, a person looking for a printed report usually pays no attention to separator pages but looks at the first page of the report to identify the contents and destination of the report. And as you can imagine, a combination of file separators and job separators could quickly launch a major paper recycling campaign. Understand, I am not saying these separators have no function. I am saying you should think about how helpful the separators are and explicitly choose the number you need.
The security-related CRTOUTQ command parameters help control user access to particular output queues and particular spooled data. To appreciate the importance of controlling access, remember that you can use output queues not only for printing spooled files but also for displaying them. What good is it to prevent people from watching as payroll checks are printed, if they can simply display the spooled file in the output queue?

The DSPDTA (display data) parameter specifies what kind of access to the output queue is allowed for users who have *READ authority. A value of *YES says that any user with *READ access to the output queue can display, copy, or send the data of any file on the queue. A value of *NO specifies that users with *READ authority to the output queue can display, copy, or send the output data only of their own spooled files unless they have some other special authority. (Special authorities that provide additional function are *SPLCTL and *JOBCTL.)

The OPRCTL (operator control) parameter specifies whether or not a user who has *JOBCTL special authority can manage or control the files on an output queue. The values are *YES, which allows control of the queue and provides the ability to change queue entries, or *NO, which blocks this control for users with the *JOBCTL special authority. One problem you might face relating to security is how to allow users to start, change, and end writers without having to grant them *JOBCTL special authority, which also grants a user additional job-related authorities that might not be desirable (e.g., the ability to control any job on the system). An alternative is to write a program to perform such writer functions. You can specify that the program adopt the authority of its owner, and you would make sure that the owner has *JOBCTL special authority. During program execution, the current user adopts the special and object-specific authorities of the owner. When the program ends, the user has not adopted *JOBCTL authority and thus cannot take advantage of a security hole. If the user does not have *JOBCTL special authority or does not adopt this special authority, (s)he must have a minimum of *CHANGE authority to the output queue and *USE authority to the printer device.

The AUTCHK (authority check) parameter specifies whether the commands that check the requester's authority to the output queue should check for ownership authority (*OWNER) or for just data authority (*DTAAUT). When the value is *OWNER, the requester must have ownership authority to the output queue to pass the output queue authorization test. When the value is *DTAAUT, the requester must have *READ, *ADD, and *DELETE authority to the output queue.

Finally, the AUT parameter specifies the initial level of authority allowed for *PUBLIC users. You can modify this level of authority by using the EDTOBJAUT (Edit Object Authority), GRTOBJAUT (Grant Object Authority), or RVKOBJAUT (Revoke Object Authority) command. As you can see, creating output queues requires more than just selecting a name and pressing Enter. Given some appropriate attention, output queues can provide a proper level of procedural (e.g., finding print files and establishing the order of print files) and security (e.g., who can see what data) support.
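Pulling these parameters together, here is a hedged example of creating a secured queue for payroll output; the library, queue name, and authority choices are illustrative, not prescriptive:

CRTOUTQ OUTQ(PAYLIB/PAYCHECKS) SEQ(*JOBNBR) JOBSEP(0) +
        DSPDTA(*NO) OPRCTL(*NO) AUTCHK(*OWNER) AUT(*EXCLUDE) +
        TEXT('Payroll output - owner controlled')

Run under the payroll supervisor's profile, this command leaves the supervisor as owner, so the AUTCHK(*OWNER) test and the *EXCLUDE public authority keep everyone else away from the spooled files until the owner explicitly grants access.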
Who Should Create Output Queues? Who should create output queues? Although this seems like a simple question, it is important for two reasons: First, the owner can modify the output queue attributes as well as grant/revoke authorities to the output queue, which means the owner controls who can view or work with spooled files on that queue. Second, the AUTCHK parameter checks the ownership of the output queue as part of the authorization test when the output queue is accessed. So ownership is a key to your ability to secure output queues. Here are a few suggestions. The system operator should be responsible for creating and controlling output queues that hold data considered public or nonsecure. With this ownership and the various authority parameters on the CRTOUTQ command, you can create an environment that lets users control their own print files and print on various printers in their area of work. For secure data (e.g., payroll, human resources, financial statements), the department supervisor profile (or a similar one) should own the output queue. The person who owns the output queue is responsible for maintaining the security of the output queue and can even explicitly deny access to DP personnel.
How Spooled Files Get on the Queue It is very important to understand that all spooled output generated on the AS/400 uses a print file. Whether you enter the DSPLIB (Display Library) command using the OUTPUT(*PRINT) parameter to direct your output to a
report, create and execute an AS/400 query, or write a report-generating program, you are going to use a print file to generate that output. A print file is the means to spool output to a file that can be stored on a queue and printed as needed. Also, a print file determines the attributes printed output will have. This means you can create a variety of print files on the system to accommodate various form requirements. Another essential fact to understand about spooling on the AS/400 is that normally all printed output is placed on an output queue to be printed. As mentioned in the previous chapter, the AS/400 is capable of bypassing the spool process to perform direct printing; but this is normally avoided because of performance and work management problems when implementing direct printing. With that said, we can examine the spooling process more closely. When a job generates a spooled file, that file is placed on an output queue. The output queue is determined by one of two methods -- if the print file has a specifically defined output queue or is overridden to a specific output queue, the output from that print file is placed on that specific queue; if the print file does not specifically direct the spool file, it is placed on the output queue currently defined as the output queue for that particular job. Figure 10.3 illustrates how one job can place spooled files on different output queues. The job first spools the nightly corporate A/R report to an output queue at the corporate office. Then the program creates a separate A/R report for each branch office and places the report on the appropriate output queue.
How Spooled Files Are Printed from the Queue So how do the spooled files get printed from the queue? The answer is no secret. You must start (assign) a writer to an output queue. You make spooled files available to the writer by releasing the spooled file, using option 6. You then use the STRPRTWTR (Start Printer Writer) command. The OUTQ parameter on that command determines the output queue to be read by that printer. When the writer is started to a specific output queue and you use the WRKOUTQ command for that specific output queue, the letters WTR appear in the Status field at the top of the Work with Output Queue display to indicate that a writer is assigned to print available entries in that queue. You can start a writer for any output queue (only one writer per output queue and only one output queue per writer). You don't have to worry about the name of the writer matching the name of the queue. For instance, to start printing the spooled files in output queue QPRINT, you can execute the STRPRTWTR command
STRPRTWTR DEV(printer_name) OUTQ(QPRINT)
(Messages for file control are sent to the message queue defined in the printer's device description unless you also specify the MSGQ parameter.)
When you IPL your system, the program QSTRUP controls whether or not the writers on the system are started. When QSTRUP starts the writers, each printer's device description determines both its output queue and message queue. You can modify QSTRUP to start all writers, to start specific writers, or to control the output queues by using the STRPRTWTR command. After a writer is started, you can redirect the writer to another output queue by using the CHGWTR (Change Writer) command or by ending the writer and restarting it for a different output queue. To list the writers on your system and the output queues they are started to, type the WRKOUTQ command and press Enter. You will see a display similar to the one in Figure 10.4. You can also use the WRKWTR (Work with Writer) command by typing WRKWTR and pressing Enter to get a display like the one in Figure 10.5.
It is important to understand that the output queue and the printer are independent objects, so output queues can exist with no printer assigned and can have entries. The Operational Assistant (OA) product illustrates some implications of this fact. OA lets you create two output queues (i.e., QUSRSYS/QEZJOBLOG and QUSRSYS/QEZDEBUG) to store job logs and problem-related output, respectively. These output queues are not default queues for any printers. Entries are stored in these queues, and the people who manage the system can decide to print, view, move, or delete them.
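For example, if the writer started for printer PRT01 needs to pick up a different queue for a while, a single CHGWTR does the job (the writer and queue names here are illustrative):
CHGWTR WTR(PRT01) OUTQ(OPSLIB/NIGHTRPTS)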
A Different View of Spooled Files The WRKOUTQ command allows you to work with all spooled files on a particular output queue. Another helpful command is the WRKSPLF (Work with Spooled Files) command. This command allows you to work with all spooled files generated by your job, even if those spooled files are on multiple output queues. Figure 10.6 represents the WRKSPLF command output for someone who works at the 'basic' OS/400 assistance level (one's assistance level is determined first at the user profile level by the ASTLVL parameter, then at the command level based on the last use of the command or what the user enters when prompting the ASTLVL parameter on the command). Notice that one spooled file is assigned to the printer 'CONTES3' while the other spooled files are 'unassigned.' They are definitely on an output queue; but since no printer is currently started for any of those output queues, the files are listed as 'unassigned.' This basic assistance level hides some of the technical details of spooled files and output queues unless you request more information by selecting option 8 'attributes' to display the spooled file detail information.
Figure 10.7 represents the WRKSPLF command output for someone who works at the 'intermediate' level (there is no 'advanced' assistance level for this command, so those at the advanced assistance level will also see this same panel). Now you can clearly see which output queue each spooled file is assigned to, the number of pages, the status, and the user who created the spooled file. You cannot see other spooled files on those same output queues since this WRKSPLF command works only with the current user's spooled files. You then have two methods for working with spooled files. You will find that you use both in your daily operations, but that the WRKOUTQ command is the more useful of the two for system operations, since you can see more than one job's spooled files.
How Output Queues Should Be Organized The organization of your output queues should be as simple as possible. To start, you can let the system create the default output queues for each printer you create. Of course, you may want to modify ownership and some output queue attributes. At this point, you can send output to an output queue and there will be a printer assigned to print from that queue. How can you use output queues effectively? Each installation must discover its own answer, but I can give you a few ideas. If your installation generates relatively few reports, having one output queue per available printer is the most efficient way to use output queues. Installations that generate large volumes of printed output need to control when and where these reports might be printed. For example, a staff of programmers might share a single printer. If you spool all compiled programs to the same queue and make them available to the writer, things could jam up fast; and important reports might get delayed behind compile listings being printed just because they were spooled to a queue with a writer. A better solution is to create an output queue for each programmer. Each programmer can then use a job description to route printed output to his or her own queue. When a programmer decides to print a spooled file, he or she moves that file to the output queue with the shared writer active. This means that the only reports printed are those specifically wanted. Also, you can better schedule printing of a large number of reports.
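A minimal sketch of that setup for one programmer might look like the following; all of the object names are hypothetical:
CRTOUTQ OUTQ(DEVLIB/JSMITH) TEXT('J. Smith compile listings')
CRTJOBD JOBD(DEVLIB/JSMITHJD) OUTQ(DEVLIB/JSMITH)
CHGUSRPRF USRPRF(JSMITH) JOBD(DEVLIB/JSMITHJD)
Jobs run under that profile then spool by default to the programmer's own queue; when a listing really needs paper, the programmer moves it (for example, with CHGSPLFA) to the queue that has the active writer.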
What about the operations department? Is it wise to have one output queue (e.g., QPRINT or PRT01) to hold all the spooled files that nightly, daily, and monthly jobs generate? You should probably spend a few minutes planning for a better implementation. I recommend you do not assign any specific output to PRT01. You should create specific output queues to hold specific types of spooled files. For instance, if you have a nightly job that generates sales, billing, and posting reports, you might consider having either one or three output queues to hold those specific files. When the operations staff is ready to print the spooled files in an output queue, they can use the CHGWTR command to make the writer available to that output queue. Another method is to move the spooled files into an output queue with a printer already available. This method lets you browse the queue to determine whether or not the reports were generated and lets you print these files at your convenience. For some end users, you may want to make the output queue invisible. You can direct requested printed output to an output queue with an available writer in the work area of the end user who made the request. Long reports should be generated and printed only at night. The only things the user should have to do are change or add paper and answer a few messages.
What a mountain of information! And I've only discussed a few concepts for managing output queues. But this information should be enough to get you started and on your way to mastering output queues.
Chapter 11 - The V2R2 Output Queue Monitor In applications that must handle spooled files, you may need a way to determine when spooled files arrive on an output queue. For instance, your application may need to automatically transfer any spooled file that arrives on a particular local output queue to a user on a remote system. Or perhaps you want to automatically distribute copies of a particular spooled file to users in the network directory. You may even want to provide a simple function that transfers all spooled files from one output queue to another while one of your printers is being repaired. In any case, you must find a way to monitor an output queue for new entries.
The Old Solution If you are running pre-V2R2 OS/400, you can write a program that uses the following tried-and-true approach:
• Wake up periodically and perform a WRKOUTQ (Work with Output Queue) command specifying OUTPUT(*PRINT)
• Copy the output to a database file
• Read the database file and look for spooled file entries
• Determine whether an entry is new on the queue (you must be creative here)
• Perform the appropriate action for any new spooled files
Another option is to use the CVTOUTQ tool from the QUSRTOOL library or the version offered in Chapter 24, 'CL: You're Stylin' Now!' Both of these utilities convert the entries on an output queue to a database file, which you can then read and search for new spooled file entries. If you simply want to take a snapshot of all the entries on an output queue at any given time, you can do so easily with the approach outlined above or with either of the CVTOUTQ tools. Such a capability is useful when you want to perform a function against some or all of the spooled files on a queue and then delete those spooled files before taking the next snapshot. However, all these methods lack one fundamental ability that some applications require: the ability to easily identify new spooled file entries as they arrive on the output queue.
A Better Solution With V2R2, you can easily determine when a new spooled file arrives on an output queue. The V2R2 versions of the CRTOUTQ (Create Output Queue) and CHGOUTQ (Change Output Queue) commands let you associate a data queue with an output queue. When you do, and a spooled file becomes ready (a 'RDY' status) on the output queue, OS/400 will send an entry to the associated data queue. The entry identifies the new spooled file, so your program can monitor the data queue and take appropriate action whenever a new spooled file appears.
A spooled file is always in one of several statuses on an output queue (e.g., RDY = ready, HLD = held). We are interested in the 'RDY' or 'ready' status. The 'ready' status signifies that a spooled file is ready to print. When a spooled file arrives on the output queue and is in the 'RDY' status, OS/400 sends an entry to the attached data queue (if one is attached). If you then hold that spooled file entry and again release the entry, another data queue entry is sent to the data queue. Each time a spooled file becomes ready to print on the output queue, an entry is sent to the data queue. Figure 11.1 shows the prompt screen for the CHGOUTQ (Change Output Queue) command. For the DTAQ keyword, a value of *NONE indicates that no data queue is associated with the output queue. If you enter the name of a data queue, OS/400 will send an entry to that data queue when a spooled file arrives on the associated output queue. The only requirement for entering a data queue name is that the data queue exist. The value *SAME for the DTAQ parameter indicates no change to the existing parameter value.
Figure 11.2 shows the prompt screen for the CRTDTAQ (Create Data Queue) command. A data queue associated with an output queue must have a MAXLEN value of at least 128. You can specify a longer MAXLEN, but the data queue entry that describes the spooled file will occupy only the first 128 positions. After you create the data queue and use the CHGOUTQ command to associate the data queue with an output queue, OS/400 will create a data queue entry for every spooled file that arrives on that output queue until you again execute the CHGOUTQ command and specify DTAQ(*NONE) to stop the function. Figure 11.3 represents the field layout of the spooled file data queue entry as documented in the Guide to Programming and Printing (SC41-8194). You can use a data queue defined longer than 128 bytes, but not shorter, because each spooled file entry occupies 128 bytes.
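Putting the two commands together, with hypothetical object names, the association looks like this:
CRTDTAQ DTAQ(QGPL/NIGHTOUTQ) MAXLEN(128)
CHGOUTQ OUTQ(NIGHTLIB/NIGHTOUTQ) DTAQ(QGPL/NIGHTOUTQ)
From that point on, every spooled file that reaches RDY status on NIGHTOUTQ produces a 128-byte entry on the data queue; running CHGOUTQ again with DTAQ(*NONE) turns the feature back off.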
The STRTFROUTQ Utility One way you could use this new feature is to automatically transfer spooled files arriving on one output queue to another output queue. Such a utility is useful when a printer breaks and you want to reroute the broken printer's output to another printer. Because a printer can have only one output queue, you can't simply have another printer print the broken printer's output queue as well as its own. Formerly, an operator would have had to monitor the output queue and manually transfer the spooled files to another output queue. Users waiting for reports (especially if they have to walk to a remote printer to get them) don't like having to wait for the operator to transfer the spooled files or having to transfer the files themselves. Figure 11.4 shows the source for the STRTFROUTQ command, a utility that incorporates the V2R2 data queue feature to automatically transfer spooled files from one output queue to another. Besides transferring spooled files, this simple, useful utility illustrates the use of the data queue capability.
To use STRTFROUTQ, you enter both a source and a target output queue. The source output queue is the one the program will monitor for new arrivals. The target output queue is the one to which the spooled files will be transferred.
Figure 11.5 (41 KB) is the source for command processing program (CPP) STRTFROTQC. This program is the workhorse that actually identifies and transfers the spooled files. STRTFROTQC first checks that both the source and target output queues exist. If either does not, the program sends a message to the program queue and then ends, which causes the error message to be sent to the calling program. (Because you would normally be running this job in batch, the message would then be forwarded to the external queue -- the system operator.) When both the source and target output queues exist, the program associates a data queue with the source output queue. If a data queue with the same name as the source output queue already exists in library QGPL, the program uses it. If such a data queue does not exist, the program creates one. I chose to put the data queue in library QGPL because all AS/400s have a library named QGPL, but you can use any other available library instead. After making sure the data queue exists, the program uses the CHGOUTQ command to associate the data queue with the source output queue. At this point, the program enters 'polling' mode. At B in Figure 11.5, the program executes the RCVDTAQE command, a front end I wrote for the QRCVDTAQ API (for the code for my front ends to the data queue APIs, see 'A Data Queue Interface Facelift' — 151 KB). There is no equivalent OS/400 command. The four parameters listed at B are required; five optional parameters also exist for RCVDTAQE, but we don't need them here. The required parameters are
• DTAQ, the qualified data queue name (20 alphanumeric)
• DTALEN, the length of the data queue entry (5,0 decimal)
• DATA, the data queue entry (i.e., the data) (n alphanumeric; length as defined in previous parameter)
• WAIT, how long the program should wait for an entry to arrive on the data queue (1,0 decimal; negative for a never-ending wait, n for number of seconds to wait, or 0 for no wait at all)
After receiving the data for an entry, STRTFROTQC extracts the needed fields. The first field it extracts is the &end_flag field. This field, which is used later to end the program, is not part of the OS/400-supplied spooled file data queue entry. I'll explain the use and significance of this field in a moment. The values for &job, &user, &jobnbr, and &splf are all extracted from the data queue contents and transferred to character variables using the CHGVAR (Change Variable) command. Because the spooled file number is stored in binary, the CHGVAR command that extracts it uses the V2R2 %BIN or %BINARY function (D) to extract the value into a decimal field. Once the field values are extracted, the program executes the CHGSPLFA (Change Spooled File Attributes) command to move the spooled file identified in the data queue entry to the target output queue. Now back to that &end_flag field. After you execute the STRTFROUTQ command, the job will wait indefinitely for new data queue entries because STRTFROTQC assigned variable &wait a negative value (A). You could use the ENDJOB (End Job) command to end the job, but this solution is messy: It doesn't clean up the data queue or the associated output queue. When you use data queues, you must be a careful housekeeper. Data queue storage accumulates constantly and is not freed until you delete the data queue. A more elegant ending to such an elegant solution is the ENDTFROUTQ command and its CPP, ENDTFROTQC, shown in Figure 11.6 and Figure 11.7 (20 KB), respectively. When you are ready to end the STRTFROUTQ job, just enter the ENDTFROUTQ command and specify the name of the source output queue for the SOUTQ parameter. The CPP then sends a special data queue entry to the associated data queue; this entry has the value *TFREND in the first seven positions. Program STRTFROTQC checks each received data queue entry for the value *TFREND (C in Figure 11.5). When it detects this value, the program ends gracefully after deleting the data queue and disassociating the output queue so that no more data queue entries are created (E).
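The full CPP isn't reproduced here, but the core polling loop looks roughly like the following CL fragment. It is a sketch only: it calls the QRCVDTAQ API directly rather than the RCVDTAQE front end, the data queue and target queue names are placeholders, it omits the *TFREND shutdown logic, and the substring offsets follow the commonly documented spooled file entry layout -- verify them against Figure 11.3 before relying on them.
PGM
   DCL &DTAQ    *CHAR 10 VALUE('NIGHTOUTQ')   /* data queue attached to the source OUTQ */
   DCL &DTAQLIB *CHAR 10 VALUE('QGPL')
   DCL &FLDLEN  *DEC (5 0)                     /* length of data received               */
   DCL &DATA    *CHAR 128                      /* the 128-byte spooled file entry       */
   DCL &WAIT    *DEC (5 0) VALUE(-1)           /* negative = wait forever               */
   DCL &JOB     *CHAR 10
   DCL &USER    *CHAR 10
   DCL &JOBNBR  *CHAR 6
   DCL &SPLF    *CHAR 10
   DCL &SPLNBR  *DEC (5 0)
LOOP:  CALL PGM(QRCVDTAQ) PARM(&DTAQ &DTAQLIB &FLDLEN &DATA &WAIT)
   /* field offsets assumed from the *SPOOL entry layout (Figure 11.3) */
   CHGVAR &JOB    %SST(&DATA 13 10)            /* job name             */
   CHGVAR &USER   %SST(&DATA 23 10)            /* user name            */
   CHGVAR &JOBNBR %SST(&DATA 33 6)             /* job number           */
   CHGVAR &SPLF   %SST(&DATA 39 10)            /* spooled file name    */
   CHGVAR &SPLNBR %BIN(&DATA 49 4)             /* spooled file number  */
   CHGSPLFA FILE(&SPLF) JOB(&JOBNBR/&USER/&JOB) SPLNBR(&SPLNBR) OUTQ(QGPL/TARGETQ)
   MONMSG MSGID(CPF0000)                       /* e.g., file already moved or deleted   */
   GOTO CMDLBL(LOOP)
ENDPGM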
If you collect utility programs, you will want to have the STRTFROUTQ and ENDTFROUTQ utilities in your toolkit. By taking advantage of a little-known OS/400 function, these commands make spooled file management a little easier and more efficient.
Chapter 12 - AS/400 Disk Storage Cleanup OS/400 is a sophisticated operating system that tracks almost everything that happens on the system. This tracking is good, but it results in a messy by-product of system-supplied database files, journal receivers, and message queues. Users add to the clutter with old messages, unused documents, out-of-date records, and unprinted spool files. If you do nothing about this disorder, it will eventually strangle your system. But you can implement a few simple automated and manual procedures to keep your disk storage free of unwanted debris.
Automatic Cleanup Procedures In August 1990, IBM introduced Operational Assistant (OA) as part of the operating system. Today's OA functions include automatic cleanup of some of the daily messes the AS/400 makes. OA's automatic cleanup is a good place to start when you're trying to clean up your AS/400's act. To access the OA Cleanup Tasks menu (Figure 12.1), you can type GO CLEANUP or select option 11 ('Customize your system, users, and devices') and then option 2 ('Cleanup tasks'), both from the OA main menu. You can use this menu to start and stop automatic cleanup and to change cleanup parameters. Option 1, 'Change cleanup options,' gives you the Change Cleanup Options display (Figure 12.2). (To bypass these menus, just prompt and execute the CHGCLNUP (Change Cleanup) command.) Note that you must have *ALLOBJ, *SECADM, and *JOBCTL authorities to change cleanup options. If option 1 does not appear on the Cleanup Tasks menu, you do not have the proper authorities.
Using the Change Cleanup Options screen, you can enable the automatic cleanup function and specify that cleanup should be run either at a specific time each day or as part of any scheduled system power-off. Specify *YES for the ALWCLNUP parameter to tell the system that you want to enable automatic cleanup. For STRTIME, you can enter a specific time (e.g., 23:00) for the cleanup to start, or you can enter *SCDPWROFF to tell the system to run cleanup during a system power-off that you've scheduled using OA's power scheduling function (the cleanup will not be run if you power off using the PWRDWNSYS (Power Down System) command or force a power-off using the control panel). Returning to the Cleanup Tasks menu, execute option 2, 'Start cleanup at scheduled time,' and your AS/400 will execute the cleanup each day at the specified time. Although it is ideal to run cleanup procedures when the system is relatively free of other tasks, it is not a requirement; and OA's cleanup will not conflict with application programs other than competing for CPU cycles.
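The menu options simply front-end CL commands, so you can also do this from a command line. A sketch, assuming the ALWCLNUP and STRTIME keywords match the prompt fields shown in Figure 12.2 and that STRCLNUP is the command behind menu option 2 (worth verifying on your release):
CHGCLNUP ALWCLNUP(*YES) STRTIME(*SCDPWROFF)
STRCLNUP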
The other parameters on the Change Cleanup Options screen let you control which objects the procedure will attempt to clean up. Each parameter allows a value of either *KEEP, which tells the system not to clean up the specified objects, or a number from 1 to 366 that indicates the number of days the objects or entries are allowed to stay on the system before the cleanup procedure removes them. The table in Figure 12.3 lists the cleanup options and the objects that they automatically clean up. Look closely at the list of objects cleaned up by the 'Job logs and other system output' option. When you activate this option, the system places all job logs into output queue QUSRSYS/QEZJOBLOG and all dumps (e.g., system and program dumps) into output queue QUSRSYS/QEZDEBUG. The cleanup procedure removes from these output queues any spool files that remain on the system beyond the maximum number of days. OS/400 uses a variety of database files and journals to manage operating system functions (e.g., job accounting, performance adjustment, SNADS, the problem log). Regular, hands-off cleanup of these journals and logs is the single most beneficial function of the automatic cleanup procedures; without this automatic cleanup, you have to locate the files and journals and write your own procedures to clean them up. This, along with the possibility that IBM could change or add to these objects in a future release of OS/400, makes this cleanup option the most helpful. For OfficeVision/400 users, the 'OfficeVision/400 calendar items' option is an effective way to manage the size of several OfficeVision production objects. This option cleans up old calendar items and reorganizes key database files to help maintain peak performance. If you ever want to stop the automatic daily cleanup, just select option 4, 'End cleanup,' to stop all automatic cleanup until you restart it using option 2.
Manual Cleanup Procedures OA's automatic cleanup won't do everything for you. Figure 12.4 lists cleanup tasks you must handle manually. By 'manually,' I mean you must manually execute commands that clear entries or reorganize files, or you must write a set of automated cleanup tools that you can run periodically or along with OA's daily cleanup operations.
Save security audit journal receivers. If you activate the security audit journaling process, the receiver associated with QAUDJRN (the security audit journal) will grow continuously as long as it's attached to QAUDJRN. In fact, if you select all possible auditing values, this receiver will grow rapidly. As with all journal receivers, you are responsible for receiver maintenance. Here are my recommendations. First, do not place audit journal receivers into library QSYS (QAUDJRN itself must be in QSYS, but receivers can be in any library and in any auxiliary storage pool). Place them in a library (e.g., one called AUDLIB) that you can save and maintain separately. Each week, use the CHGJRN (Change Journal) command to detach the old receiver from QAUDJRN and attach a new one. Make sure your regular backup procedure saves the security journal receivers (only detached receivers are fully saved). If you specify 'System journals and system logs,' OA's automated cleanup operation deletes old security audit journal receivers that are no longer attached to the journal. Your backup strategy should include provisions for retaining several months of security journal receivers in case you need to track down a security problem. Do an IPL regularly. Perform an IPL regularly (e.g., weekly or bimonthly). An IPL causes the system to delete temporary libraries, compress work control blocks, and free up unused addresses. The result is that more disk storage becomes available, and performance improves. During an IPL, the system also closes job logs and opens new ones. This housekeeping especially benefits system-supplied jobs (e.g., QSYSWRK, QSYSARB, QSPLMAINT), whose job logs can grow quite large between IPLs. After IPL, system jobs require less time to write to the end of the job log, giving performance a boost. The
more active your system, the more frequently you need to IPL -- on very active systems, you should IPL at least once a week. Reclaim spool file storage. Like the S/38, the AS/400 has an operating-system-managed database file that contains a member for every spool file (e.g., job log, user report, Print key output) on the system. When you or the system creates a spool file, OS/400 uses an empty member of the spool file database (which is maintained in library QSPL) if one is available; otherwise, OS/400 creates a member. Whenever a spool file is deleted or printed, the operating system clears that file's database member, readying it for reuse. But even empty database members occupy a significant amount of space. If you create many spool files, this database can grow like Jack's beanstalk (I have seen QSPL grow to 150 MB). Again like the S/38, the AS/400 checks all empty QSPL database members at every IPL and deletes those that have been on the system for seven or more IPLs. But since V1R3 of OS/400, the AS/400 has provided two additional methods of cleaning up these empty database members. The first method is to use system value QRCLSPLSTG, which lets you limit the number of days an empty member remains on the system. Valid values include whole numbers from 1 to 366; the default is 8 days. When an empty member reaches the specified limit, the system deletes the member. *NONE is also a valid value, but it is impractical because it causes the system to generate a new database member for each spool file you create, thus overburdening the system and hurting performance. A value of *NOMAX tells the system to ignore automatic spool storage cleanup. The second new housecleaning method for spool files is to execute the RCLSPLSTG (Reclaim Spool Storage) command. If you want to control spool file cleanup yourself rather than have the system do it, you can enter a value of *NOMAX for system value QRCLSPLSTG and then execute the RCLSPLSTG command whenever necessary. Reclaim storage. You should use the RCLSTG (Reclaim Storage) command periodically to find damaged or lost objects and to ensure that all auxiliary storage is either used properly or available for use. Unexpected power failures, device failures, or other abnormal job endings can create unusual conditions in storage, such as damaged objects, objects with no owners, or even objects that exist in no library (i.e., the library name is absent). During a reclaim of storage, the system puts any damaged and lost objects it encounters into the recovery library, QRCL. After storage is reclaimed, you should look in QRCL, move any objects you want to keep to another library, and delete any remaining objects. Also, normal operations use a portion of auxiliary storage for permanent and temporary addresses. The RCLSTG command recovers and recycles addresses that the system used but no longer needs. You should run RCLSTG every six months or whenever you encounter messages about damaged objects or authority problems with objects. You also should monitor the permanent and temporary addresses the system uses by executing the WRKSYSSTS (Work with System Status) command. When WRKSYSSTS shows that permanent and temporary addresses exceed 20 percent of the available addresses, execute the RCLSTG command. Keep in mind that you can execute RCLSTG only when the AS/400 is in restricted state (i.e., all subsystems must be ended, leaving only the console active). 
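Because of the restricted-state requirement, a reclaim is typically run from the console along these lines (subsystem handling and timing are site-specific, so treat this as a sketch):
ENDSBS SBS(*ALL) OPTION(*IMMED)
RCLSTG
After the command completes, look in library QRCL for recovered objects, then restart your subsystems or IPL.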
You can also use OA's disk analysis reports, which list the space taken up by damaged objects, objects without owners, and objects without libraries, to determine when you need to do a RCLSTG. For more information about the OS/400 RCLSTG function, see the Basic Backup and Recovery Guide (SC41-0036-01). Remove unused licensed software. Another way to reclaim disk storage is to remove unused licensed program products (e.g., product demos, old third-party products you no longer use, and IBM products such as the OS/400 migration aids, once you're done with them). After saving libraries and objects you no longer need, delete the products you no longer need (you can use the GO LICPGM command to access the IBM licensed products menu). Clean up user output queues. What about user-created spooled output? OA's cleanup addresses job logs and certain service and program dump output. But when users create spool files, these files also stay on the system until the user prints or deletes them. You need to either monitor user-created output queues or have users monitor their own. One tool some AS/400 customers find helpful is DLTOLDSPLF, a utility in library QUSRTOOL that finds and moves or deletes all spool files older than a specified number of days.
Reset message queue sizes. User-created messages can also add to the clutter on the AS/400. As messages accumulate, message queues grow to accommodate them; but queues don't become smaller as messages are removed. Although OA's automatic cleanup clears old messages from user and workstation message queues, it doesn't reset the message queue size. To reset the queue size, you must use the CLRMSGQ (Clear Message Queue) command to completely clear the message queue. Again, you can perform this task manually for specific message queues, or you can automate the process by writing a program. Clear save files. If you frequently use save files for ad hoc or regular backups, you may want to define a manual or automated procedure to periodically clear those save files and reclaim that storage. After you save a save file's data to tape or diskette, clear the file by executing the CLRSAVF (Clear Save File) command. Manage journal receivers. If you use journaling on your system, you need to manage the journals you create. As with the security audit journal receivers, detach and save receivers as part of your normal backup and recovery strategy. Then you can delete receivers you no longer need. For more information about journaling and managing journals and receivers, refer to the Programming: Backup and Recovery Guide (SC41-8079). Delete old and unused objects. Old and unused objects of various kinds can accumulate on your system, unnecessarily using up storage and degrading performance. You should evaluate objects that are not used regularly to determine whether or not they should remain on the system. Remember to check development and test libraries as well as production libraries. Since V1R3, the description of each object on the system includes a 'last used' date and time stamp, as well as a 'last used' days counter. The object description also contains the 'last changed' date and time as well as the 'last saved' date and time. Beginning with V2R2, you can use the Disk Space Tasks menu (Figure 12.5) to collect information about and analyze disk space utilization. You can call this menu directly by typing GO DISKTASKS, or you can access it through the main OA menu. As you can see, the menu options let you collect and print disk space information as well as actually work with libraries, folders, and objects. When you select option 1 to collect disk space information, you'll see the prompt in Figure 12.6. You can collect disk space information at a specified date and time by selecting option 1. Selecting 2 or 3 tells the system to collect information at the specified interval. Whichever option you choose, the system collects information about objects (e.g., database files, folders (including shared folders), programs, commands) and stores it in file QUSRSYS/QAEZDISK. You can then select option 2 on the Disk Space Tasks menu to print reports that analyze disk space usage by library, folder, owner, or specific object. Or you can print a disk information system summary report. Because the data is collected in a database file, you can also perform ad hoc interactive SQL queries, use Query/400, or write high-level language programs to get the information you need. Purge and reorganize physical files. An active database environment can contribute to the AS/400's sloppy habits. One problem is files in which records accumulate forever. You should examine your database to determine whether any files fit this description and then design a procedure to handle the 'death' of active records. 
In some situations, you can simply delete records that are no longer needed. In other situations, you might want to archive records before you delete them. In either case, you certainly won't want to delete or move records manually; instead, look for a public-domain or vendor-supplied file edit utility or tool. Deleting records does not increase your disk space, however. Deleted records continue to occupy disk space until you execute a RGZPFM (Reorganize Physical File Member) command. You could write a custom report to search for files with a high percentage of deleted records and then manually reorganize those files. Or you could go one step further and write a custom utility that would search for those files and automatically reorganize them using the RGZPFM OS/400 command. Clean up OfficeVision/400 objects. OfficeVision/400 can devour disk space unless you clean up after it religiously. Encourage OfficeVision/400 users to police their own documents and mail items and to delete items they no longer need. You can use the QRYDOCLIB (Query Document Library) command as a reporting tool to monitor document and folder maintenance. You might also want to limit the auxiliary storage available to each user by using the MAXSTG parameter on each user profile.
Figure 12.7 lists the OfficeVision/400 database files you should reorganize regularly (every little bit helps with the OfficeVision performance hog!). You will probably want to write a CL program that reorganizes these files and run that program when OfficeVision is not in use.
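Such a program is little more than a list of RGZPFM (Reorganize Physical File Member) commands. The file names below are placeholders; substitute the actual OfficeVision/400 files listed in Figure 12.7:
PGM
   RGZPFM FILE(QUSRSYS/OVFILE1)   /* placeholder - substitute a file from Figure 12.7 */
   MONMSG MSGID(CPF0000)          /* ignore errors (e.g., file in use) and continue   */
   RGZPFM FILE(QUSRSYS/OVFILE2)
   MONMSG MSGID(CPF0000)
ENDPGM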
Enhancing Your Manual Procedures You can handle many of the manual tasks I've mentioned by using the QEZUSRCLNP job to incorporate your own cleanup programs and commands into OA's automatic cleanup function. QEZUSRCLNP is essentially an empty template that gives you a place to add your own cleanup code. Every time OA's automatic cleanup function is run, it calls QEZUSRCLNP and executes your code. To add your enhancements to QEZUSRCLNP, first use the RTVCLSRC (Retrieve CL Source) command to retrieve the source statements for QEZUSRCLNP (Figure 12.8) from library QSYS. Then insert your cleanup commands or calls to your cleanup programs into the QEZUSRCLNP source. Be sure to add your statements after the SNDPGMMSG (Send Program Message) command for message CPI1E91 to ensure that, after your cleanup job has ended, the system sends a completion message to the system operator message queue. Finally, compile your copy of QEZUSRCLNP into a library that appears before QSYS on the system library list. (You can modify the system library list by editing the QSYSLIBL system value.) I caution you against replacing the system-supplied version of the program by compiling your copy of QEZUSRCLNP into QSYS. By using a different library, you can preserve the original program and avoid losing your modified program the next time you load a new release of the operating system. In OA's automated cleanup function, the AS/400 gives you the services of a maid to solve some simple cleanup issues. Use the function. But your cleanup shouldn't stop there. You also need to develop and implement procedures to maintain system-supplied and user-defined objects, such as spool and save files.
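Returning to QEZUSRCLNP for a moment, the mechanics of retrieving and recompiling it look roughly like the following; MYLIB and CLNLIB are stand-ins for your own source and program libraries:
RTVCLSRC PGM(QSYS/QEZUSRCLNP) SRCFILE(MYLIB/QCLSRC) SRCMBR(QEZUSRCLNP)
/* edit the member, adding your commands after the SNDPGMMSG for message CPI1E91 */
CRTCLPGM PGM(CLNLIB/QEZUSRCLNP) SRCFILE(MYLIB/QCLSRC) SRCMBR(QEZUSRCLNP)
CLNLIB must appear ahead of QSYS in the system library list (system value QSYSLIBL) for OA to find your version instead of the original.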
Chapter 13 - All Aboard the OS/400 Job Scheduler! The job scheduling function, new with V2R2, lets you schedule jobs to run at dates and times you choose without performing any add-on programming. There are two V2R2 additions that let you control job scheduling:
• new parameters on the SBMJOB command
• the new job schedule object
The job schedule function was made possible by enhancing the operating system with QJOBSCD, a new system job that is started automatically when you IPL the system. This job monitors scheduled job requirements, then submits and releases scheduled jobs at the appropriate date and time.
Arriving on Time The SBMJOB command, of course, places a job on a job queue for batch processing, apart from an interactive workstation session. Starting with V2R2, the new SCDDATE and SCDTIME parameters let you specify a date and time for the job to be run. This scheduling method is a one-time shot; you use it for a job that you want to run only once, at a later date and/or time. If you want a job to run more than once, you'll have to remember to submit it each time (or use the job schedule object, as I discuss later). When you use the new parameters to indicate a schedule date and/or time, the SBMJOB command places the job on a job queue in a scheduled state (SCD) until the date and time you specified; then the system releases the job on the job queue and processes it just like any other submitted job. If you specify HOLD(*YES) on the SBMJOB command, at the appointed time the job's status on the queue will change from scheduled/held (SCD HLD) to held (HLD). You can then release the job when you choose. The default value for the SCDDATE and SCDTIME parameters is *CURRENT, which indicates that you want to submit the job immediately; so if you don't specify a value for these parameters, the SBMJOB command works just as it always has. Otherwise, you'll usually specify an exact date (in the same format as the job's date) and time for the job to run. There are, however, other possible special values that you may find useful for the SCDDATE parameter.
If you indicate SCDDATE(*MONTHSTR), the job will run at the scheduled time on the first day of the month. SCDDATE(*MONTHEND) will run the job on the last day of the month. (No more '30 days hath September...' or counting on your fingers!) Or you can specify SCDDATE(*MON) or *TUE, *WED, *THU, *FRI, *SAT, or *SUN to run the job on the specified day of the week. During which month, on which Monday, and so on, will your job be run? That depends. For example, if today is the first day of the month and you specify SCDDATE(*MONTHSTR) and the current time is earlier than the time in the SCDTIME parameter... it'll run today. Otherwise, it'll wait until next month. Similar logic applies for other SCDDATE and SCDTIME possibilities. If you remove a scheduled job from a job queue, the job will not run, even when the scheduled time and date occur. You can remove a job from the queue either by using the CLRJOBQ (Clear Job Queue) command or by using the WRKJOBQ (Work with Job Queue) command and ending the job. Holding a job queue that includes a scheduled job can delay execution of the job, but it will not prevent the job from running when you release the job queue, even if the scheduled time has passed.
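For example, a one-shot month-end job might be submitted like this; the program and job names are invented, and the time is written as hhmmss:
SBMJOB CMD(CALL PGM(APLIB/MONTHEND)) JOB(MONTHEND) SCDDATE(*MONTHEND) SCDTIME(230000)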
Running on a Strict Schedule In addition to enhancing the SBMJOB command, V2R2 introduces a new type of AS/400 object, the job schedule, with a system identifier of *JOBSCD. (Sorry, Canadians and Brits, IBM didn't pick *JOBSHD.) The job schedule is a timetable that contains descriptive entries for jobs to be executed at a specific date, time, and/or frequency. It is most useful for jobs that you want to run repeatedly according to a set schedule. If a job is on the job schedule, you need not remember to submit it for every execution; the operating system takes care of that chore. The job schedule function is documented in the Work Management Guide (SC41-8078). One job schedule exists on the system: object QDFTJOBSCD in library QUSRSYS. Although its name indicates that this object is the default job schedule, it is the only one. The operating system offers no commands to create, change, or delete your own customized job schedules... yet. You can manipulate the entries in the job schedule using the following new commands:
• ADDJOBSCDE (Add Job Schedule Entry)
• CHGJOBSCDE (Change Job Schedule Entry)
• HLDJOBSCDE (Hold Job Schedule Entry)
• RLSJOBSCDE (Release Job Schedule Entry)
• RMVJOBSCDE (Remove Job Schedule Entry)
• WRKJOBSCDE (Work with Job Schedule Entries)
Figure 13.1 shows a sample list display that appears when you run the WRKJOBSCDE command. When you select option 5 (Display details) for an entry, you get a display such as that in Figure 13.2. This example shows the details of a job my system runs every weekday morning at 3:30.
Each job schedule entry is made up of many components that define the job to be run and describe the environment in which it will run. Figure 13.3 describes those components and lists the parameter keywords the job-scheduling CL commands use. With V2R3, you can print a list of your job schedule entries by entering the
WRKJOBSCDE command, followed by a space and OUTPUT(*PRINT). For detailed information on each job schedule entry on the list, follow the WRKJOBSCDE command with PRTFMT(*FULL). OS/400 gives each job schedule entry a sequence number to identify it uniquely. You usually refer to an entry by its job name, but if there are multiple entries with the same job name, you also have to specify the sequence number to correctly refer to the entry. For example, in Figure 13.1, there are three entries named VKEMBOSS. Displaying the details for each, however, would show that they each have a unique sequence number.
The frequency component (FRQ) of a schedule entry may seem confusing at first. It's obvious that you can schedule a job to run *ONCE, *WEEKLY, or *MONTHLY; but what if you want to schedule a daily job? In that case, you need to use an additional schedule entry element, the scheduled day (SCDDAY). To run a job every day, specify FRQ(*WEEKLY) and SCDDAY(*ALL). You can also run the job only on weekdays, using FRQ(*WEEKLY) and SCDDAY(*MON *TUE *WED *THU *FRI). Just Thursdays? That's easy: FRQ(*WEEKLY) and SCDDAY(*THU).
The scheduled date component (SCDDATE) of a schedule entry tells the system a specific date to run the job. If you use the SCDDAY parameter, you cannot use the SCDDATE parameter; the two don't make sense together. The combination of FRQ(*MONTHLY) and SCDDATE(*MONTHEND) will run a job on the last day of each month, regardless of how many days each month has. The relative day of the month parameter (RELDAYMON) gives the job schedule even more flexibility. For instance, if you want to run a job only on the first Tuesday of each month, you indicate values for three parameters: FRQ(*MONTHLY) SCDDAY(*TUE) RELDAYMON(1).
Sometimes your computer can't run a job at the scheduled time; for example, your AS/400 may be powered off or in the restricted state at the time the job is to be submitted. In the recovery action component (RCYACN) of the schedule entry, you can tell the computer to take one of three actions. RCYACN(*SBMRLS) submits the job to be run as soon as possible. RCYACN(*SBMHLD) submits the job, but holds it until you explicitly release it for processing. RCYACN(*NOSBM) is the 'Snooze, you lose' option; the job scheduler will not attempt to submit the job after the scheduled time passes. Notice that this feature applies only to jobs scheduled from the job schedule, not to those you submit with SBMJOB.
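Two hedged examples of schedule entries, using made-up job and program names, times written as hhmmss, and the assumption that SCDDATE(*NONE) must be specified when SCDDAY is used (check the command prompt on your release):
ADDJOBSCDE JOB(DAILYSAVE) CMD(CALL PGM(BKPLIB/DAILYSAVE)) FRQ(*WEEKLY) SCDDATE(*NONE) SCDDAY(*MON *TUE *WED *THU *FRI) SCDTIME(033000)
ADDJOBSCDE JOB(BOARDRPT) CMD(CALL PGM(RPTLIB/BOARDRPT)) FRQ(*MONTHLY) SCDDATE(*NONE) SCDDAY(*TUE) RELDAYMON(1) SCDTIME(060000) RCYACN(*SBMRLS)
The first entry runs a backup program every weekday at 3:30 a.m.; the second runs a report on the first Tuesday of each month and, if the system is down at that moment, submits it as soon as possible afterward.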
Two Trains on the Same Track When I was setting up job schedule entries for my system, I discovered that many of the entries I made were similar. I found myself wanting to copy a job schedule entry to save myself from the drudgery of retyping long, error-prone command strings. Because the job schedule commands don't offer such a function, I decided to write a command that does. My command, CRTDUPSCDE (Create a Duplicate Job Schedule Entry), is easy to use. You simply supply the command with the job name of the existing job schedule entry you want to copy from and a name you want to give the copy: CRTDUPSCDE FROMJOB(job-name) NEWNAME(new-name) The NEWNAME parameter defaults to *FROMJOB, indicating that the new entry should have the same name as the original; the system will give the entry a unique sequence number.
Figure 13.4 provides the code for the CRTDUPSCDE command. Figure 13.5 is the command processing program (CPP). CRTDUPSCDE uses the IBM-supplied program QWCLSCDE, a new API that lists job schedule entries in a user space. (See A in Figure 13.5.) After retrieving the 'from' job schedule entry (which you specified in the FROMJOB parameter), the program breaks the output from the API down into the parameter values that describe the entry; then it uses the same values in the ADDJOBSCDE command to create a new entry based on the existing one. After that, it's a simple matter to use the CHGJOBSCDE command to make any minor changes the new entry needs. (You can find documentation for QWCLSCDE and the user space layouts used in CRTDUPSCDE in the System Programmer's Interface Reference (SC21-8223).) The command also uses two user space APIs: QUSCRTUS and QUSRTVUS.
In addition to being easy to use, CRTDUPSCDE is very basic. To conserve space, I didn't include some features that you might want to add. For example, I used very basic error trapping instead of error-message-handling routines. Also, the command retrieves only the first instance of a schedule entry with the name you choose, even though the job schedule could contain multiple entries of the same name. If you have multiple same-name entries and you want to retrieve one other than the first, you'll need to add the code to loop through the data structure that returns the name. Finally, my command doesn't duplicate the OMITDATE values from the original. Doing so would require adding array-handling techniques to the CL program, which isn't worth the effort to me because I hardly ever use this parameter. I encourage you to experiment with enhancing this command to suit your own needs.
Derailment Dangers A few cautionary comments are in order before we finish our exploration of the new OS/400 job schedule object. There are a few situations that I ran into when I was implementing the function and that the Work Management Guide doesn't adequately cover. It is important to know that a job submitted by the job schedule will not retain the contents of the LDA from the job that originally added it to the job schedule. In my tests of the new function, I was never able to run the scheduled job with anything other than a blank LDA. When you submit a job with the SBMJOB command, however, the system passes a copy of the LDA to the submitted job. It's a common practice to store variable processing values in the LDA as a handy means of communicating between jobs or between programs within a job. If your application depends upon specific values in the LDA, you may want to schedule jobs using the SBMJOB command instead of creating a job schedule entry. I've discovered an alternate technique, however, that still lets me take advantage of a job schedule entry for recurring jobs that need the LDA. When I add the job schedule entry, I also create a unique data area that contains the proper values in the proper locations, according to the specifications in the submitted program. It's then a simple matter to make a minor change to the submitted program so that the program either uses the new data area instead of the LDA or retrieves the new data area and copies it to the LDA using the RTVDTAARA (Retrieve Data Area) and/or CHGDTAARA (Change Data Area) commands. This new data area should be a permanent object on the system as long as the dependent job schedule entry exists. SBMJOB has another benefit that a job scheduling entry does not offer. When you use SBMJOB to schedule a job, the system defaults to using an initial library list that is identical to the library list currently in use by the submitting job. The job schedule entry, on the other hand, depends upon the library list in its JOBD component. If you've gotten out of the old S/38 habit of creating unique job descriptions primarily to handle unique library lists, you'll need to resurrect this technique to describe the library list for job schedule entries. It's also noteworthy that, just like the railroad, the job scheduling function may not always run on time, no matter whether you use SBMJOB or the job schedule object. Although you can schedule a job to the second, the load on your system determines when the job actually runs. The system submits a job schedule entry to a job queue or releases a scheduled job already on a job queue approximately on time -- usually within a few seconds. But if there are many jobs waiting on the job queue ahead of the scheduled job, it will simply have to wait its turn. If it's
critical that a job run at a specific time, you can help by ensuring that the job's priority (parameter JOBPTY) puts it ahead of other jobs on the queue; but the job may still have to wait for an available activity slot before it can begin. And as I mentioned earlier, if your system is down or in a restricted state at the appointed time, the job schedule may not submit the job at all. Changing your system's date or time can also affect your scheduled jobs. If you move the date or time system values backward, the effect is fairly straightforward: The system will not reschedule any job schedule entries that were run within the repeated time. For example, if at three o'clock you change your system's time back to one o'clock, the job you had scheduled to run at two o'clock won't repeat itself. The system stores a 'next submission' date and time for each entry, which it updates each time the job schedule submits a job. Changing the system's date or time forward, however, can be tricky. If the change causes the system to skip over a time when you had a job scheduled, the job schedule's action depends upon whether or not the system is in restricted state when you make the change. If the system was not restricted, any missed job schedule entries are submitted immediately (only one occurrence of each missed entry is submitted even if, for example, you've scheduled a job to run daily and moved the system date ahead two days). If the system is in restricted state when you change the date or time system values, the system refers to the RCYACN attributes of the missed job schedule entries to determine whether or not to submit the jobs when you bring the system out of its restricted state. The job scheduling function in V2R2 does not offer job completion dependencies, regardless of which method you use. For example, if you use the job schedule to run a daily transaction posting, then a daily closing, you cannot condition the closing job to be run only if the posting job goes through to a successful completion. Some third-party scheduling functions offer this capability. Without a third-party product, if you need to schedule jobs with such a completion requirement, your best bet is probably to incorporate the entire procedure into a single CL program with appropriate escape routes defined in case one or more functions fail.
Chapter 14 - Keeping Up With the Past For many of you, AS/400 job processing is new, or at least different. There can be multiple subsystems, job queues, output queues, and messages flying all over the place at once. You can sign on to the system and submit several batch jobs for processing immediately, or you can submit jobs to be run at night. At the same time, the system operator can run jobs and monitor their progress, and users at various remote sites can sign on to the system. With so much going on, you might wonder how you can possibly manage and audit such activity. One valuable AS/400 tool at your fingertips is the history log, which contains information about the operation of the system and system status. The history log tracks high-level activities such as the start and completion of jobs, device status changes, system operator messages and replies, attempted security violations, and other security-related events. It records this information in the form of messages, which are stored in files created by the system. You can learn a lot from history -- even your system's history. By maintaining an accurate history log, you can monitor specific system activities and reconstruct events to aid problem determination and debugging efforts. Please note that history logs are different from job logs. Whereas job logs record the sequential events of a job, the history log records certain operational and status messages pertaining to all the jobs on a system. You can review the history log to find a particular point of interest, and then reference a job log to investigate further.
System Message Show and Tell You can display the contents of the history log on the AS/400 by executing the DSPLOG (Display Log) command
DSPLOG LOG(QHST)
The resulting display resembles the screen in Figure 14.1. The DSPLOG command lets you look at the contents of the history log as you would messages in a message queue. Because system events such as job completions, invalid sign-on attempts, and line failures are listed as messages in file QHST, you can place the cursor on a particular message and press the Help key to display second-level help text for the message.
The DSPLOG command has several parameters that provide flexibility when inquiring into the history log. To prompt for parameters, type in DSPLOG and press F4. The system displays the screen shown in Figure 14.2. The parameters for the DSPLOG command are as follows:
LOG The system refers to the history log as 'QHST.' QHST provides many of the functions the QSRV and QCHG logs provide on the S/36.
PERIOD You can enter a specific time period or take the defaults for the beginning and ending period. Notice that the default for 'Beginning time' is the earliest available time and the default for 'Beginning date' is the current date. To look at previous days, you must supply a value. Enter values as six-digit numbers (i.e., time as hhmmss and date as mmddyy).
OUTPUT You are probably familiar with this parameter. The value * results in output to the screen, and *PRINT results in a printed spooled file.
JOB You use the JOB parameter to search for a specific job or set of jobs. You can enter just the job name, in which case the system might find several jobs with the same name that ran during a given period of time. Or you can enter the specific job name, user name, and job number to retrieve the history information for a particular job.
MSGID Like the JOB parameter, this parameter helps narrow your search. You can specify one message or multiple messages. By specifying '00' as the last two digits of the message ID, you can retrieve related sets of messages. For example, if you enter the message ID CPF2200, the system retrieves all messages from CPF2200 to CPF2299 (these are all security-related messages).
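Putting a few of these parameters together -- the dates below are arbitrary, written in the six-digit mmddyy format described above -- you could print every security-related message logged on a single day:
DSPLOG LOG(QHST) PERIOD((000000 113094) (235959 113094)) MSGID(CPF2200) OUTPUT(*PRINT)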
History Log Housekeeping The history log consists of a message queue and system files that store history messages. The files belong to library QSYS and are named in the form QHSTyydddn. The yyddd stands for the Julian date on which the log was created, and n represents a sequence character appended to the Julian date (0 through 9 or A through Z). The text description maintained by the system contains the beginning and ending date and time for the messages contained in that file, which is helpful for tracking activities that occurred during a particular time period. You can use the DSPOBJD (Display Object Description) command to display a list of history files. The command
DSPOBJD OBJ(QSYS/QHST*) OBJTYPE(*FILE)
results in a display similar to the one shown in Figure 14.3. The system creates a new file each time the existing file reaches its maximum size limit, which the system value QHSTLOGSIZ controls. Because the system itself does not automatically delete files, it is important to develop a strategy for deleting the log files (to save disk space) and for using the data before you delete the files.
You should maintain enough recent history on disk to be able to easily inquire into the log to resolve problems. The best way to manage history logs on your system is to take advantage of the automatic cleanup capabilities of Operational Assistant (OA). The OA category 'System Journals and System Logs' lets you specify the number of days of information to keep in the history log. OA then deletes log files older than the specified number of days. (For more information about Operational Assistant, see IBM's AS/400 System Operations: Operational Assistant Administrator's Guide (SC41-8082).) Keep in mind that OA does not provide a strategy for archiving the history logs to media that you can easily retrieve. If you activate OA cleanup procedures, make sure that once each month you make a save copy of the QHST files. If you are remiss in performing this save, OA will still delete the log files. If you elect not to use this automatic cleanup that OA offers, you can do the following:
• On the first day of each month, save all QHST files in library QSYS to tape (a sketch of this save step follows the list). It's probably wise to use the same set of tapes and save to the next sequence number. For quick reference, record on the tape label the names of the beginning and ending log files.
• You can use the DLTQHST utility (from the QUSRTOOL library) to delete old history files. View the existing log files on the system and delete any that are more than 30 days old. (Hint: Remember that the text description contains the beginning and ending date and time to help you determine the age of the file.)
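A minimal sketch of the monthly save step, assuming a tape device named TAP01 (the deletions would then be handled by DLTQHST or by individual DLTF commands against the old files):

SAVOBJ OBJ(QHST*) LIB(QSYS) +
       OBJTYPE(*FILE) DEV(TAP01) +
       ENDOPT(*REWIND)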
To determine how much history log information to keep, you should consider the disk space required to store the information and schedule your file saves accordingly. In most cases, it is a good idea to keep 30 days of on-line history, although large installations with heavy history log activity may need to save and delete objects every 15 days.
Inside Information Careful review of history logs can alert you to unusual system activity. If, for example, the message 'Password from device DSP23 not correct for user QSECOFR' appears frequently in the log, you might be prompted to find out who uses DSP23 and why (s)he is trying to sign on with the system security officer profile. Or you might notice the message 'Receiver ACG0239 in JRNLIB never fully saved (I C).' The second-level help text would tell you which program was attempting to delete the journal receiver. If these events are brought to your attention, you might be able to prevent the loss of important information. Maintaining a history log lets you reconstruct events that have taken place on the system. In reviewing its history log, one company discovered that a programmer had planted a system virus. A history log can also alert you to less serious occurrences (e.g., a specific sequence of jobs was not performed exactly as planned). Or you can use it to review all completion messages to find out how many jobs are executed on your system each day or which job ended abnormally. As you monitor the history log (preferably every day), you will soon start to recognize the messages that are most beneficial to you. The history log is a management tool that lets you quickly analyze system activities. It provides a certain amount of security auditing and lets you determine whether and when specific jobs were executed and how they terminated. Using and maintaining a history log is not difficult and could prove to be time well spent.
Note: The security journaling capabilities that OS/400 offers using the audit journal QAUDJRN provide additional event-monitoring capabilities specifically related to security. This new journal is capable of monitoring for the security-related events recorded in the QHST as well as additional events that QHST does not record. For more information concerning QAUDJRN, see the AS/400 Security Reference (SC41-8083).
Chapter 15 - Backup Basics
The most valuable component of any computer system isn’t the hardware or software that runs the computer but, rather, the data that resides on the system. If a system failure or disaster occurs, you can replace the computer hardware and software that runs your business. Your company’s data, however, is irreplaceable. For this reason, it’s critical to have a good backup and recovery strategy. Companies go out of business when their data can’t be recovered.
What should you be backing up? The simple answer to this question is that you should back up everything. A basic rule of backup and recovery is that if you don’t save it, it doesn’t get restored. However, you may have some noncritical data (e.g., test data) on your system that doesn’t need to be restored and can be omitted from your backup.
When and how often do you need to back up? Ideally, saving your entire system every night is the simplest and safest backup strategy. This approach also gives you the simplest and safest strategy for recovery. Realistically, though, when and how you run your backup, as well as what you back up, depend on the size of your backup window — the amount of time your system can be unavailable to users while you perform a backup. To simplify recovery, you need to back up when your system is at a known point and your data isn’t changing. When you design a backup strategy, you need to balance the time it takes to save your data with the value of the data you might lose and the amount of time it may take to recover. Always keep your recovery strategy in mind as you design your backup strategy. If your system is so critical to your business that you don’t have a manageable backup window, you probably can’t afford an unscheduled outage either. If this is your situation, you should seriously evaluate the availability options of the iSeries, including dual systems. For more information about these options, see “Availability Options.”
Designing and Implementing a Backup Strategy
You should design your backup strategy based on the size of your backup window. At the same time you design your backup strategy, you should also design your recovery strategy to ensure that your backup strategy meets your system recovery needs. The final step in designing a backup strategy is to test a full system recovery. This is the only way to verify that you’ve designed a good backup strategy that will meet your system recovery needs. Your business may depend on your ability to recover your system. You should test your recovery strategy at your recovery services provider’s location.
When designing your backup and recovery strategy, think of it as a puzzle: The fewer pieces you have in the puzzle, the more quickly you can put the pieces of the puzzle together. The fewer pieces needed in your backup strategy, the more quickly you can recover the pieces. Your backup strategy will typically be one of three types:
• Simple — You have a large backup window, such as an 8- to 12-hour block of time available daily with no system activity.
• Medium — You have a medium backup window, such as a 4- to 6-hour block of time available daily with no system activity.
• Complex — You have a short backup window, with little or no time of system inactivity.
A simple way to ensure you have a good backup of your system is to use the options provided on menu SAVE ( Figure 15.1), which you can reach by typing Go Save on a command line. This command presents you with additional menus that make it easy either to back up your entire system or to split your entire system backup into two parts: system data and user data. In the following discussion of backup strategies, the menu options I refer to are from menu SAVE.
Implementing a Simple Backup Strategy
The simplest backup strategy is to save everything daily whenever there is no system activity. You can use SAVE menu option 21 (Entire system) to completely back up your system (with the exception of queue entries such as spooled files). You should also consider using this option to back up the entire system after installing a new release, applying PTFs, or installing a new licensed program product. As an alternative, you can use SAVE menu option 22 (System data only) to save just the system data after applying PTFs or installing a new licensed program product. Option 21 offers the significant advantage that you can schedule the backup to run unattended (with no operator intervention). Keep in mind that unattended save operations require you to have a tape device capable of holding all your data. (For more information about backup media, see “Preparing and Managing Your Backup Media.”)
Even if you don’t have enough time or enough tape-device capability to perform an unattended save using option 21, you can still implement a simple backup strategy:
Daily backup: Back up only user data that changes frequently.
Weekly backup: Back up the entire system.
A simple backup strategy may also involve SAVE menu option 23 (All user data). This option saves user data that can change frequently. You can also schedule option 23 to run without operator intervention. If your system has a long period of inactivity on weekends, your backup strategy might look like this:
Friday night: Entire system (option 21)
Monday night: All user data (option 23)
Tuesday night: All user data (option 23)
Wednesday night: All user data (option 23)
Thursday night: All user data (option 23)
Friday night: Entire system (option 21)
Implementing a Medium Backup Strategy
You may not have a large enough backup window to implement a simple backup strategy. For example, you may have large batch jobs that take a long time to run at night or a considerable amount of data that takes a long time to back up. If this is your situation, you’ll need to implement a backup and recovery strategy of medium complexity. When developing a medium backup strategy, keep in mind that the more often your data changes, the more often you need to back it up. You’ll therefore need to evaluate in detail how often your data changes. Several methods are available to you in developing a medium backup strategy:
• saving changed objects
• journaling objects and saving the journal receivers
• saving groups of user libraries, folders, or directories
You can use one or a combination of these methods. Saving changed objects. Several commands let you save only the data that has changed since your last save operation or since a particular date and time. You can use the SavChgObj (Save Changed Objects) command to save only those objects that have changed since a library or group of libraries was last saved or since a particular date and time. This approach can be useful if you have a system environment in which program objects and data files exist in the same library. Typically, data files change very frequently, while program objects change infrequently. Using the SavChgObj command, you can save just the data files that have changed. The SavDLO (Save Document Library Objects) command lets you save documents and folders that have changed since the last save or since a particular date and time. You can use SavDLO to save changed documents and folders in all your user auxiliary storage pools (ASPs) or in a specific user ASP.
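As a rough illustration, a nightly changed-data save for this kind of environment might look something like the following; the library name AppData and the device name Tap01 are assumptions, not names from the book:

SavChgObj Obj(*All) +
          Lib(AppData) +
          Dev(Tap01) +
          RefDate(*SavLib)
SavDLO    DLO(*Chg) +
          Flr(*Any) +
          Dev(Tap01)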
You can use the Sav (Save) command to save only those objects in directories that have changed since the last save or since a particular date or time. You can also choose to save only your changed data, using a combination of the SavChgObj, SavDLO, and Sav commands, if the batch workload on your system is heavier on specific days of the week. For example:

Day/time          Batch workload   Save operation
Friday night      Light            Entire system (option 21)
Monday night      Heavy            Changed data only*
Tuesday night     Light            All user data (option 23)
Wednesday night   Heavy            Changed data only*
Thursday night    Heavy            Changed data only*
Friday night      Light            Entire system (option 21)
* Use a combination of the SavChgObj, SavDLO, and Sav commands.
Journaling objects and saving the journal receivers. If your save operations take too long because your files are large, saving changed objects may not help in your system environment. For instance, if you have a file member with 100,000 records and one record changes, the SavChgObj command saves the entire file member. In this environment, journaling your database files and saving the journal receivers regularly may be a better solution. However, keep in mind that this approach will make your recovery more complex. When you journal a database file, the system writes a copy of every changed record to a journal receiver. When you save a journal receiver, you’re saving only the changed records in the file, not the entire file. If you journal your database files and have a batch workload that varies, your backup strategy might look like this:

Day/time          Batch workload   Save operation
Friday night      Light            Entire system (option 21)
Monday night      Heavy            Journal receivers only
Tuesday night     Light            All user data (option 23)
Wednesday night   Heavy            Journal receivers only
Thursday night    Heavy            Journal receivers only
Friday night      Light            Entire system (option 21)
To take full advantage of journaling protection, you should detach and save the journal receivers regularly. The frequency with which you save the journal receivers depends on the number of journaled changes that occur on your system. Saving the journal receivers several times during the day may be appropriate for your system environment. The way in which you save journal receivers depends on whether they reside in a library with other objects. Depending on your environment, you’ll use either the SavLib (Save Library) command or the SavObj (Save Object) command. It’s best to keep your journal receivers isolated from other objects so that your save/restore functions are simpler. Be aware that you must save a new member of a database file before you can apply journal entries to the file. If your applications regularly add new file members, you should consider using the SavChgObj strategy either by itself or in combination with journaling. Saving groups of user libraries, folders, or directories. Many applications are set up with data files and program objects in different libraries. This design simplifies your backup and recovery procedures. Data files change frequently, and, on most systems, program objects change infrequently. If your system environment is set up like this, you may want to save only the libraries with data files on a daily basis. You can also save, on a daily basis, groups of folders and directories that change frequently.
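Returning to the journal-receiver approach described above, the detach-and-save step might look something like this sketch; the names JrnLib, AppJrn, AppRcv*, and Tap01 are assumptions:

ChgJrn Jrn(JrnLib/AppJrn) JrnRcv(*Gen)   /* detach the current receiver, attach a new one */
SavObj Obj(AppRcv*) +
       Lib(JrnLib) +
       ObjType(*JrnRcv) +
       Dev(Tap01)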
Implementing a Complex Backup Strategy
If you have a very short backup window that requires a complex strategy for backup and for recovery, you can use some of the same techniques described for a medium backup strategy, but with a greater level of detail. For example, you may need to save specific critical files at specific times of the day or week.
Several other methods are available to you in developing a complex backup strategy. You can use one or a combination of these methods:
• save data concurrently using multiple tape devices
• save data in parallel using multiple tape devices
• use the save-while-active process
Before you use any of these methods, you must have a complete backup of your entire system. Saving data concurrently using multiple tape devices. You can reduce the amount of time your system is unavailable by performing save operations on more than one tape device at a time. For example, you can save libraries to one tape device, folders to another tape device, and directories to a third tape device. Or you can save different sets of libraries, objects, folders, or directories to different tape devices. Later, I provide more information about saving data concurrently using multiple tape devices. Saving data in parallel using multiple tape devices. Starting with V4R4, you can perform a parallel save using multiple tape devices. A parallel save is intended for very large objects or libraries. With this method, the system “spreads” the data in the object or library across multiple tape devices. (This function is implemented with IBM’s Backup, Recovery and Media Services product; for more information about it, see “Backup, Recovery and Media Services (BRMS) Overview” [Chapter 16].) Save-While-Active. The save-while-active process can significantly reduce the amount of time your system is unavailable during a backup. If you choose to use save-while-active, make sure you understand the process and monitor for any synchronization checkpoints before making your objects available for use. I provide more details about save-while-active later.
An Alternative Backup Strategy
Another option available to help implement your backup strategy is the Backup, Recovery and Media Services licensed program product. BRMS is IBM’s strategic OS/400 backup and recovery product on the iSeries and AS/400. BRMS is a comprehensive tool for managing the backup, archiving, and recovery environment for one or more servers in a site or across a network in which data exchange by tape is required. For more information about using BRMS to implement your backup strategy, see “Backup, Recovery and Media Services (BRMS) Overview.” [Chapter 16]
The Inner Workings of Menu SAVE
Menu SAVE contains many options for saving your data, but four are primary:
• 20 — Define save system and user data defaults
• 21 — Entire system
• 22 — System data only
• 23 — All user data
You can use these menu options to back up your system. Or, if your installation requires a more complex backup strategy, you can use OS/400’s save commands in a CL program to customize your backup. To help you make your decision, as well as to provide skeleton code that you can use as a guideline for your own backup programs, this section provides a look at some of the inner workings of these primary save options. For detailed instructions and a checklist on using these options, refer to OS/400 Backup and Recovery (SC41-5304). Figure 15.2 illustrates the save commands and the SAVE menu options you can use to save the parts of the system and the entire system.
Entire System (Option 21)
SAVE menu Option 21 lets you perform a complete backup of all the data on your system, with the exception of backing up spooled files (I cover spooled file backup later). This option puts the system into a restricted state. This means no users can access your system while the backup is running. It’s best to run this option overnight for a small system or during the weekend for a larger system. Option 21 runs program QMNSave. The following CL program extract represents the significant processing that option 21 performs:
EndSbs Sbs(*All) Option(*Immed)
ChgMsgQ MsgQ(QSysOpr) Dlvry(*Break or *Notify)
SavSys
SavLib Lib(*NonSys) AccPth(*Yes)
SavDLO DLO(*All) Flr(*Any)
Sav Dev('/QSYS.LIB/TapeDeviceName.DEVD') +
    Obj(('/*') ('/QSYS.LIB' *Omit) ('/QDLS' *Omit)) +
    UpdHst(*Yes)
StrSbs SbsD(ControllingSubsystem)
Note: The Sav command omits the QSys.Lib file system because the SavSys (Save System) command and the SavLib Lib(*NonSys) command save QSys.Lib. The Sav command also omits the QDLS file system because the SavDLO command saves QDLS.
System Data Only (Option 22)
Option 22 saves only your system data. It does not save any user data. You should run this option (or option 21) after applying PTFs or installing a new licensed program product. Like option 21, option 22 puts the system into a restricted state. Option 22 runs program QSRSavI. The following program extract represents the significant processing that option 22 performs:
EndSbs Sbs(*All) Option(*Immed)
ChgMsgQ MsgQ(QSysOpr) Dlvry(*Break or *Notify)
SavSys
SavLib Lib(*IBM) AccPth(*Yes)
Sav Dev('/QSYS.LIB/TapeDeviceName.DEVD') +
    Obj(('/QIBM/ProdData') ('/QOpenSys/QIBM/ProdData')) +
    UpdHst(*Yes)
StrSbs SbsD(ControllingSubsystem)
All User Data (Option 23)
Option 23 saves all user data, including files, user-written programs, and all other user data on the system. This option also saves user profiles, security data, and configuration data. Like options 21 and 22, option 23 places the system in restricted state. Option 23 runs program QSRSavU. The following program extract represents the significant processing that option 23 performs:
EndSbs Sbs(*All) Option(*Immed)
ChgMsgQ MsgQ(QSysOpr) Dlvry(*Break or *Notify)
SavSecDta
SavCfg
SavLib Lib(*AllUsr) AccPth(*Yes)
SavDLO DLO(*All) Flr(*Any)
Sav Dev('/QSYS.LIB/TapeDeviceName.DEVD') +
    Obj(('/*') ('/QSYS.LIB' *Omit) ('/QDLS' *Omit) +
        ('/QIBM/ProdData' *Omit) ('/QOpenSys/QIBM/ProdData' *Omit)) +
    UpdHst(*Yes)
StrSbs SbsD(ControllingSubsystem)
Note: The Sav command omits the QSys.Lib file system because the SavSys command, the SavSecDta (Save Security Data) command, and the SavCfg (Save Configuration) command save QSys.Lib. The Sav command also omits the QDLS file system because the SavDLO command saves QDLS. In addition, the Sav command executed by option 23 omits the /QIBM and /QOpenSys/QIBM directories because these directories contain IBM-supplied objects.
Setting Save Option Defaults When you save information using option 21, 22, or 23, you can specify default values for some of the commands used by the save process. Figure 15.3 shows the Specify Command Defaults panel values used by these options. You can use SAVE menu option 20 (Define save system and user data defaults) to change the default values displayed on this panel for menu options 21, 22, and 23. Changing the defaults simplifies the task of setting up your backups. To change the defaults, you must have *Change authority to both library QUsrSys and the QSRDflts data area in QUsrSys. When you select option 20, the system displays the default parameter values for options 21, 22, and 23. The first time you use option 20, the system displays the IBM-supplied default parameter values. You can change any or all of the parameter values to meet your needs. For example, you can specify additional tape devices or change the message queue delivery default. The system saves the new default values in data area QSRDflts in library QUsrSys for future use (the system creates QSRDflts only after you change the IBM-supplied default values). Once you’ve defined new default values, you no longer need to worry about which, if any, options to change on subsequent backups. You can simply review the new default options and then press Enter to start the backup using the new default parameters. If you have multiple, distributed systems with the same save parameters on each system, option 20 offers an additional benefit: You can simply define your default parameters using option 20 on one system and then save data area QSRDflts in library QUsrSys, distribute the saved data area to the other systems, and restore it.
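For the multiple-system scenario just described, the save-and-distribute step might look something like this sketch (Tap01 is an assumed device name):

SavObj Obj(QSRDflts) Lib(QUsrSys) ObjType(*DtaAra) Dev(Tap01)     /* on the system where you defined the defaults */
RstObj Obj(QSRDflts) SavLib(QUsrSys) ObjType(*DtaAra) Dev(Tap01)  /* on each target system */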
Printing System Information When you perform save operations using option 21, 22, or 23 from menu SAVE, you can optionally request a series of reports with system information that can be useful during system recovery. The Specify Command Defaults panel presented by these options provides a prompt for printing system information. You can also use command PrtSysInf (Print System Information) to print the system information. This information is especially useful if you can’t use your SavSys media to recover and must use your distribution media. Printing the system information requires *AllObj, *IOSysCfg, and *JobCtl authority and produces many spooled file listings. You probably don’t need to print the information every time you perform a backup. However, you should print it whenever important information about your system changes. The following lists and reports are generated when you print the system information (the respective CL commands are noted in parentheses):
• a library backup list with information about each library in the system, including which backup schedules include the library and when the library was last backed up (DspBckupL *Lib)
• a folder backup list with the same information for all folders in the system (DspBckupL *Flr)
• a list of all system values (DspSysVal)
• a list of network attributes (DspNetA)
• a list of edit descriptions (DspEdtD)
• a list of PTF details (DspPTF)
• a list of reply list entries (WrkRpyLE)
• a report of access-path relationships (DspRcyAP)
• a list of service attributes (DspSvrA)
• a list of network server storage spaces (DspNwSStg)
• a report showing the power on/off schedule (DspPwrScd)
• a list of hardware features on your system (DspHdwRsc)
• a list of distribution queues (DspDstSrv)
• a list of all subsystems (DspSbsD)
• a list of the IBM software licenses installed on your machine (DspSfwRsc)
• a list of journal object descriptions for all journals (DspObjD)
• a report showing journal attributes for all journals (WrkJrnA)
• a report showing cleanup operations (ChgClnup)
• a list of all user profiles (DspUsrPrf)
• a report of all job descriptions (DspJobD)
Saving Data Concurrently Using Multiple Tape Devices
As I mentioned earlier, one way to reduce the amount of time required for a complex backup strategy is to perform save operations to multiple tape devices at once. You can save data concurrently using multiple tape devices by saving libraries to one tape device, folders to another tape device, and directories to a third tape device. Or, you can save different sets of libraries, objects, folders, or directories to different tape devices.
Concurrent Saves of Libraries and Objects
You can run multiple save commands concurrently against multiple libraries. When you run multiple save commands, the system processes the request in several stages that overlap, improving save performance. To perform concurrent save operations to different tape devices, you can use the OmitLib (Omit library) parameter with generic naming. For example:
SavLib Lib(*AllUsr) +
       Dev(FirstTapeDevice) +
       OmitLib(A* B* $* #* @* ... L*)
SavLib Lib(*AllUsr) +
       Dev(SecondTapeDevice) +
       OmitLib(M* N* ... Z*)

You can also save a single library concurrently to multiple tape devices by using the SavObj or SavChgObj command. This technique lets you issue multiple save operations using multiple tape devices to save objects from one large library. For example, you can save generic objects from one large library to one tape device and concurrently issue another SavObj command against the same library to save a different set of generic objects to another tape device. You can use generic naming on the Obj (Object) parameter while performing concurrent SavChgObj operations to multiple tape devices against a single library. For example:

SavChgObj Obj(A* B* C* $* #* ... L*) +
          Dev(FirstTapeDevice) +
          Lib(LibraryName)
SavChgObj Obj(M* N* O* ... Z*) +
          Dev(SecondTapeDevice) +
          Lib(LibraryName)
Concurrent Saves of DLOs (Folders)
You can run multiple SavDLO commands concurrently for DLO objects that reside in the same ASP. This technique allows concurrent saves of DLOs to multiple tape devices.
You can use the command’s Flr (Folder) parameter with generic naming to perform concurrent save operations to different tape devices. For example:
SavDLO DLO(*All) +
       Flr(DEPT*) +
       Dev(FirstTapeDevice) +
       OmitFlr(DEPT2*)
SavDLO DLO(*All) +
       Flr(DEPT2*) +
       Dev(SecondTapeDevice)
In this example, the system saves to the first tape device all folders starting with DEPT except those that start with DEPT2. Folders that start with DEPT2 are saved to the second tape device. Note: Parameter OmitFlr is allowed only when you specify DLO(*All) or DLO(*Chg).
Concurrent Saves of Objects in Directories
You can also run multiple Sav commands concurrently against objects in directories. This technique allows concurrent saves of objects in directories to multiple tape devices. You can use the Sav command’s Obj (Object) parameter with generic naming to perform concurrent save operations to different tape devices. For example:
Sav Dev('/QSYS.LIB/FirstTapeDevice.DEVD') +
    Obj(('/DIRA*')) +
    UpdHst(*Yes)
Sav Dev('/QSYS.LIB/SecondTapeDevice.DEVD') +
    Obj(('/DIRB*')) +
    UpdHst(*Yes)
Save-While-Active
To either reduce or eliminate the amount of time your system is unavailable for use during a backup (your backup outage), you can use the save-while-active process on particular save operations along with your other backup and recovery procedures. Save-while-active lets you use the system during part or all of the backup process. In contrast, other save operations permit either no access or only read access to objects during the backup.
How Does Save-While-Active Work? OS/400 objects consist of units of storage called pages. When you use save-while-active to save an object, the system creates two images of the pages of the object. The first image contains the updates to the object with which normal system activity works. The second image is a “snapshot” of the object as it exists at a single point in time called a checkpoint. The save-while-active job uses this image — called the checkpoint image — to save the object. When an application makes changes to an object during a save-while-active job, the system uses one image of the object’s pages to make the changes and, at the same time, uses the other image to save the object to tape. The system locks objects as it obtains the checkpoint images, and you can’t change objects during the checkpoint processing. After the system has obtained the checkpoint images, applications can once again change the objects. The image that the system saves doesn’t include any changes made during the save-while-active job. The image on the tape is an image of the object as it existed when the system reached the checkpoint. Rather than maintain two complete images of the object being saved, the system maintains two images only for the pages of the objects that are being changed as the save is performed.
Synchronization. When you back up more than one object using the save-while-active process, you must choose when the objects will reach a checkpoint in relationship to each other — a concept called synchronization. There are three kinds of synchronization:
• With full synchronization, the checkpoints for all the objects occur at the same time, during a time period in which no changes can occur to the objects. It’s strongly recommended that you use full synchronization, even when you’re saving objects in only one library.
• With library synchronization, the checkpoints for all the objects in a library occur at the same time.
• With system-defined synchronization, the system decides when the checkpoints for the objects occur. The checkpoints may occur at different times, resulting in a more complex recovery procedure.
How you use save-while-active in your backup strategy depends on whether you choose to reduce or eliminate the time your system is unavailable during a backup. Reducing the backup outage is much simpler and more common than eliminating it. It’s also the recommended way to use save-while-active. When you use save-while-active to reduce your backup outage, your system recovery process is exactly the same as if you performed a standard backup operation. Also, using save-while-active this way doesn’t require you to implement journaling or commitment control. To use save-while-active to reduce your backup outage, you can end any applications that change objects or end the subsystems in which these applications are run. After the system reaches a checkpoint for those objects, you can restart the applications. One save-while-active option lets you have the system send a message notification when it completes the checkpoint processing. Once you know checkpoint processing is completed, it’s safe to start your applications or subsystems again. Using save-while-active this way can significantly reduce your backup outage. Typically, when you choose to reduce your backup outage with save-while-active, the time during which your system is unavailable for use ranges anywhere from 10 minutes to 60 minutes. It’s highly recommended that you use save-while-active to reduce your backup outage unless you absolutely cannot have your system unavailable for this time frame. You should use save-while-active to eliminate your backup outage only if you have absolutely no tolerance for any backup outage. You should use this approach only to back up objects that you’re protecting with journaling or commitment control. When you use save-while-active to eliminate your backup outage, you don’t end the applications that modify the objects or end the subsystems in which the applications are run. However, this method affects the performance and response time of your applications. Keep in mind that eliminating your backup outage with save-while-active requires much more complex recovery procedures. You’ll need to include these procedures in your disaster recovery plans.
Save Commands That Support the Save-While-Active Option
The following save commands support the save-while-active option:

Command     Function
SavLib      Save library
SavObj      Save object
SavChgObj   Save changed objects
SavDLO      Save document library objects
Sav         Save objects in directories

The following parameters are available on the save commands for the save-while-active process (a sample command using them follows the list):

SavAct (Save-while-active) — You must decide whether you're going to use full synchronization, library synchronization, or system-defined synchronization. It's highly recommended that you use full synchronization in most cases.
SavActWait (Save active wait time) — You can specify the maximum number of seconds that the save-while-active operation will wait to allocate an object during checkpoint processing.
SavActMsgQ (Save active message queue) — You can specify whether the system sends you a message when it reaches a checkpoint.
SavActOpt (Save-while-active options) — This parameter has values that are specific to the Sav command.
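For instance, a reduced-outage save of a single library with full synchronization and checkpoint notification might be coded roughly as follows; the library and device names are assumptions:

SavLib Lib(AppData) +
       Dev(Tap01) +
       SavAct(*SyncLib) +
       SavActWait(120) +
       SavActMsgQ(QSysOpr)

Once the checkpoint-reached message arrives on message queue QSysOpr, it should be safe to restart the applications that use the library.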
For complete details about using the save-while-active process to either reduce or eliminate your backup outage, visit IBM’s iSeries Information Center at http://publib.boulder.ibm.com/pubs/html/as400/infocenter.htm.
Backing Up Spooled Files
When you save an output queue, its description is saved but not its contents (the spooled files). With a combination of spooled file APIs, user space APIs, and list APIs, you can back up spooled files, including their associated advanced function attributes (if any). The spooled file APIs perform the real work of backing up spooled files. These APIs include
• QUSLSpl (List Spooled Files)
• QUSRSplA (Retrieve Spooled File Attributes)
• QSpOpnSp (Open Spooled File)
• QSpCrtSp (Create Spooled File)
• QSpGetSp (Get Spooled File Data)
• QSpPutSp (Put Spooled File Data)
• QSpCloSp (Close Spooled File)
These APIs let you copy spooled file information to a user space for save purposes and copy the information back from the user space to a spooled file. Once you’ve copied spooled file information to user spaces, you can save the user spaces. For more information about these APIs, see System API Reference (SC41-5801). One common misconception is that you can use the CpySplF (Copy Spooled File) command to back up spooled files. This command does let you copy information from a spooled file to a database file, but you shouldn’t rely on this method for spooled file backup. CpySplF copies only textual data and not advanced function attributes such as graphics and variable fonts. CpySplF also does nothing to preserve print attributes such as spacing. IBM does offer support for saving and restoring spooled files in its BRMS product. BRMS maintains all the advanced function attributes associated with the spooled files. For more information about BRMS, see “Backup, Recovery and Media Services (BRMS) Overview.” [Chapter 16]
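To make the CpySplF limitation concrete, the command below copies only the printable text of one spooled file into an existing database file; every name here is hypothetical, and graphics, variable fonts, and most print attributes would be lost:

CpySplF File(QSysPrt) +
        ToFile(MyLib/SplfCopy) +
        Job(123456/AppUser/NightJob) +
        SplNbr(*Last) +
        ToMbr(*First)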
Recovering Your System
Although the iSeries is very stable and disasters are rare, there are times when some type of recovery may be necessary. The extent of recovery required and the processes you follow will vary greatly depending on the nature of your failure. The sheer number of possible failures precludes a one-size-fits-all answer to recovery. Instead, you must examine the details of your failure and recover accordingly. To help determine the best way to recover your system, you should refer to “Selecting the Right Recovery Strategy” in OS/400 Backup and Recovery, which categorizes failures and their associated recovery processes and provides checklists of recovery steps. Before beginning your recovery, be sure to do the following:
• If you have to back up and recover because of some system problem, make sure you understand how the problem occurred so you can choose the correct recovery procedures.
• Plan your recovery.
• Make a copy of the OS/400 Backup and Recovery checklist you’re using, and check off each step as you complete it. Keep the checklist for future reference. If you need help later, this record will be invaluable.
• If your problem requires hardware or software service, make sure you understand exactly what the service representative does. Don’t be afraid to ask questions.
Starting with V4R5, the OS/400 Backup and Recovery manual includes a new appendix called “Recovering your AS/400 system,” which provides step-by-step instructions for completely recovering your entire system to the same system (i.e., restoring to a system with the same serial number). You can use these steps only if you saved your entire system using either option 21 from menu SAVE or the equivalent SavSys, SavLib, SavDLO, and Sav commands. Continue to use the checklist titled “Recovering your entire system after a complete system loss (Checklist 17)” in Chapter 3 of OS/400 Backup and Recovery to completely recover your system in any of the following situations:
• Your system has logical partitions.
• Your system uses the Alternate Installation Device Setup feature that you can define through Dedicated Service Tools (DST) for a manual IPL from tape.
• Your system has mounted user-defined file systems before the save.
• You’re recovering to a different system (a system with a different serial number).
One piece of advice warrants repeating: Test as many of the procedures in your recovery plan as you possibly can before disaster strikes. If any surprises await you, it’s far better to uncover them in a test situation than during a disaster.
This article is excerpted from the book Starter Kit for the IBM iSeries and AS/400 by Gary Guthrie and Wayne Madden (29th Street Press, 2001). For more information about the book, see http://www.iseriesnetwork.com/str/books/uniquebook2.cfm?NextBook=187.
Debbie Saugen is the technical owner of iSeries 400 and AS/400 Backup and Recovery in IBM’s Rochester, Minnesota, Development Lab. She is also a senior recovery specialist with IBM Business Continuity and Recovery Services. Debbie enjoys sharing her knowledge by speaking at COMMON, iSeries 400 and AS/400e Technical Conferences, and Business Continuity and Recovery Services Conferences and writing for various iSeries and AS/400e magazines and Web sites.
Availability Options
Availability options are a complement to a backup strategy, not a replacement. These options can significantly reduce the time it takes you to recover after a failure. In some cases, availability options can prevent the need for recovery. To justify the cost of using availability options, you need to understand the following:
• the value of the data on your system
• the cost of a scheduled or unscheduled outage
• your availability requirements
The following availability options can complement your backup strategy:
• journal management
• access-path protection
• auxiliary storage pools
• device parity protection
• mirrored protection
• dual systems
• clustered systems
You should compare these options and decide which are best suited to your business needs. For details about availability options, their benefits versus costs, and how to implement them, refer to IBM's iSeries Information Center at http://publib.boulder.ibm.com/pubs/html/as400/infocenter.htm. We'll look more closely at each availability option in a moment, but first, it's helpful to be acquainted with the following terms, which are often used in discussing system availability:
• An outage is a period of time during which the system is unavailable to users. During a scheduled outage, you deliberately make your system unavailable to users. You might use a scheduled outage to run batch work, back up your system, or apply PTFs. An unscheduled outage is usually caused by a failure of some type.
• High availability means that the system has no unscheduled outages.
• In continuous operations, the system has no scheduled outages.
• Continuous availability means that the system has neither scheduled nor unscheduled outages.
Journal Management for Backup and Recovery
You can use journal management (often referred to as journaling a file or an access path) to recover the changes to database files (or other objects) that have occurred since your last complete backup. You use a journal to define which files and access paths you want to protect. A journal receiver contains the entries (called journal entries) that the system adds when events occur that are journaled, such as changes to database files, changes to other journaled objects, or security-related events. You can use the remote journal function to set up journals and journal receivers on a remote iSeries system. These journals and journal receivers are associated with journals and journal receivers on the source system. The remote journal function lets you replicate journal entries from the source system to the remote system.
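As a simple, hedged sketch of setting up journaling for one database file (the library, journal, receiver, and file names are assumptions):

CrtJrnRcv JrnRcv(JrnLib/AppRcv0001)                      /* create the first receiver */
CrtJrn    Jrn(JrnLib/AppJrn) JrnRcv(JrnLib/AppRcv0001)   /* create the journal and attach the receiver */
StrJrnPF  File(AppData/Orders) Jrn(JrnLib/AppJrn) Images(*Both)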
Access-Path Protection
An access path describes the order in which the records in a database file are processed. Because different programs may need to access the file’s records in different sequences, a file can have multiple access paths. Access paths in use at the time of a system failure are at risk of corruption. If access paths become corrupted, the system must rebuild them before you can use the files again. This can be a very time-consuming process. You should consider an access-path protection plan to limit the time required to recover corrupted access paths. The system offers two methods of access-path protection:
• system-managed access-path protection (SMAPP)
• explicit journaling of access paths
You can use these methods independently or together. By using journal management to record changes to access paths, you can greatly reduce the amount of time it takes to recover access paths should doing so become necessary. Using journal entries, the system can recover access paths without the need for a complete rebuild. This can result in considerable time savings. With SMAPP, you can let the system determine which access paths to protect. The system makes this determination based on access-path target recovery times that you specify. SMAPP provides a simple way to reduce recovery time after a system failure, managing the required environment for you. You can use explicit journaling, even when using SMAPP, to ensure that certain access paths critical to your business are protected. The system evaluates the protected and unprotected access paths to develop its strategy for meeting your access-path recovery targets.
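If a particular access path is critical, explicit journaling can be started with a command along these lines; this assumes the Orders file from the earlier sketch is already journaled, and all names remain assumptions:

StrJrnAP File(AppData/Orders) Jrn(JrnLib/AppJrn)   /* journal the file's access paths for fast recovery */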
Auxiliary Storage Pools
Your system may have many disk units attached to it for auxiliary storage of your data that, to your system, look like a single unit of storage. When the system writes data to disk, it spreads the data across all of these units. You can divide your disk units into logical subsets known as auxiliary storage pools (ASPs), which don't necessarily correspond to the physical arrangement of disks. You can then assign objects to particular ASPs, isolating them on particular disk units. When the system now writes to these objects, it spreads the information across only the units within the ASP.
ASPs provide a recovery advantage if the system experiences a disk unit failure that results in data loss. In such a case, recovery is required only for the objects in the ASP containing the failed disk unit. System objects and user objects in other ASPs are protected from the disk failure. In addition to the protection that isolating objects to particular ASPs provides, the use of ASPs provides a certain level of flexibility. When you assign the disk units on your system to more than one ASP, each ASP can have different strategies for availability, backup and recovery, and performance.
Device Parity Protection Device parity protection is a hardware availability function that protects against data loss due to disk unit failure or damage to a disk. To protect data, the disk controller or input/output processor (IOP) calculates and saves a parity value for each bit of data. The disk controller or IOP computes the parity value from the data at the same location on each of the other disk units in the device parity set. When a disk failure occurs, the data can be reconstructed by using the parity value and the values of the bits in the same locations on the other disks. The system continues to run while the data is being reconstructed. The overall goal of device parity protection is to provide high availability and to protect data as inexpensively as possible. If possible, you should protect all the disk units on your system with either device parity protection or mirrored protection (covered next). In many cases, your system remains operational during repairs. Device parity protection is designed to prevent system failure and to speed the recovery process for certain types of failures, not as a substitute for a good backup and recovery strategy. Device parity protection doesn’t protect you if you have a site disaster or user error. It also doesn’t protect against system outages caused by failures in other disk-related hardware (e.g., disk controllers, disk I/O).
Mirrored Protection
Mirrored protection is a software availability function that protects against data loss due to failure or damage to a disk-related component. The system protects your data by maintaining two copies of the data on two separate disk units. When a disk-related component fails, the system continues to operate without interruption, using the mirrored copy of the data until repairs are complete on the failed component. When you start mirrored protection or add disk units to an ASP that has mirrored protection, the system creates mirrored pairs using disk units that have identical capacities. The goal is to protect as many disk-related components as possible. To provide maximum hardware redundancy and protection, the system tries to pair disk units from different controllers, IOPs, and buses. Different levels of mirrored protection are possible, depending on the duplicated hardware. For instance, you can duplicate
• disk units
• disk controllers
• disk IOPs
• a bus
If a duplicate exists for the failing component and attached hardware components, the system remains available during the failure. Remote mirroring support lets you have one mirrored unit within a mirrored pair at the local site and the second mirrored unit at a remote site. For some systems, standard DASD mirroring will remain the best choice; for others, remote DASD mirroring provides important additional capabilities.
Dual Systems
System installations with very high availability requirements use a dual-systems approach, in which two systems maintain some or all data. If the primary system fails, the secondary system can take over critical application programs. The most common way to maintain data on the secondary system is through journaling. The primary system transmits journal entries to the secondary system, where a user-written program uses them to update files and other journaled objects in order to replicate the application environments of the primary system. Users sometimes implement this by transmitting journal entries at the application layer. The remote journal function improves on this technique by transmitting journal entries to a duplicate journal receiver on the secondary system at the licensed internal code layer. Several software packages are available from independent software vendors to support dual systems.
Clustered Systems
A cluster is a collection or group of one or more systems that work together as a single system. The cluster is identified by name and consists of one or more cluster nodes. Clustering lets you efficiently group your systems together to create an environment that approaches 100 percent availability.
Preparing and Managing Your Backup Media
OS/400’s save commands support different types of devices (including save file, tape, diskette, and optical). For a backup strategy, you should always back up to a tape device. Choose a tape device and tape media that has the performance capabilities and density capacity that will meet your backup window and any requirements you have for running an unattended backup. Preparing and managing your tape media is an important part of your backup operations. You need to be able to easily locate the correct media to perform a successful system recovery. You’ll need to use sets of tapes and implement a rotation schedule. An important part of a good backup strategy is to have more than one set of backup media. When you perform a system recovery, you may need to go back to an older set of tape media if your most recent set is damaged or if you discover a programming error that has affected data on your most recent backup media. At a minimum, you should rotate three sets of media, as follows:

Backup     Media set
Backup 1   Set 1
Backup 2   Set 2
Backup 3   Set 3
Backup 4   Set 1
Backup 5   Set 2
Backup 6   Set 3
...        ...
You may find that the easiest method is to have a different set of media for each day of the week. This strategy makes it easier for the operator to know which set to mount for backup.
Cleaning Your Tape Devices
It’s important to clean your tape devices regularly. The read-write heads can collect dust and other material that can cause errors when reading or writing to tape media. If you’re using new tapes, it’s especially important to clean the device because new tapes tend to collect more material on the read-write heads. For specific recommendations, refer to your tape drive’s manual.
Preparing Your Tapes for Use
To prepare tape media for use, you’ll need to use the InzTap (Initialize Tape) command. (Some tapes come pre-initialized.) When you initialize tapes, you’re required to give each tape a new-volume identifier (using the InzTap command’s NewVol parameter) and a density (Density parameter). The new-volume identifier identifies the tape as a standard-labeled tape that can be used by the system for backups. The density specifies the format in which to write the data on the tape based on the tape device you’re using. You can use the special value *DevType to easily specify that the format be based on the type of tape device being used. When initializing new tapes, you should also specify Check(*No); otherwise, the system tries to read labels from the volume on the specified tape device until the tape completely rewinds. Here’s a sample command to initialize a new tape volume:
InzTap Dev(Tap01) +
       NewVol(A23001) +
       Check(*No) +
       Density(*DevType)

Tip: It’s important to initialize each tape only once in its lifetime and give each tape volume a different volume identifier so tape-volume error statistics can be tracked.
Naming and Labeling Your Tapes
Initializing each tape volume with a volume identifier helps ensure that your operators load the correct tape for the backup. It’s a good idea to choose volume-identifier names that help identify tape-volume contents and the volume set to which each tape belongs. The following table illustrates how you might initialize your tape volumes and label them externally in a simple backup strategy. Each label has a prefix that indicates the day of the week (A for Monday, B for Tuesday, and so on), the backup operation (option number from menu SAVE), and the media set with which the tape volume is associated.

Volume Naming — Part of a Simple Backup Strategy
Volume name   External label
B23001        Tuesday-Menu SAVE, option 23-Media set 1
B23002        Tuesday-Menu SAVE, option 23-Media set 2
B23003        Tuesday-Menu SAVE, option 23-Media set 3
E21001        Friday-Menu SAVE, option 21-Media set 1
E21002        Friday-Menu SAVE, option 21-Media set 2
E21003        Friday-Menu SAVE, option 21-Media set 3

Volume names and labels for a medium backup strategy might look like this:

Volume Naming — Part of a Medium Backup Strategy
Volume name   External label
E21001        Friday-Menu SAVE, option 21-Media set 1
E21002        Friday-Menu SAVE, option 21-Media set 2
AJR001        Monday-Save journal receivers-Media set 1
AJR002        Monday-Save journal receivers-Media set 2
ASC001        Monday-Save changed

Nothing is simpler, but not everyone can afford the outage that this type of save requires. BRMS is an effective solution in backing up only what's really required. BRMS also lets you easily schedule a backup that includes a SavSys (Save System) operation, which isn't so easy using just OS/400. In addition to these capabilities, BRMS offers step-by-step recovery information, printed after backups are complete. Recovery no longer consists of operators clenching the desk with white knuckles at 4:00 a.m., trying desperately to recover the system in time for the users who'll arrive at 8:00 a.m., without any idea what's going on or how long the process will take. With native OS/400 commands, the only feedback that recovery personnel get is the occasional change to the message line on line 25 of the screen as the recovery takes place. BRMS changes this with full and detailed feedback during the recovery process — with an auto-refresh screen, updated as each library is restored. Following are some of the features that contribute to the robustness of BRMS:
• Data archive — Data archive is important for organizations that must keep large volumes of history data yet don't require rapid access to this information. BRMS can archive data from DASD to tape and track information about objects that have been archived. Locating data in the archives is easy, and the restore can be triggered from a work-with screen.
• Dynamic data retrieval — Dynamic retrieval for database files, document library objects, and stream files is possible with BRMS. Once archived with BRMS, these objects can be automatically restored upon access within user applications. No changes are required to user applications to initiate the restore.
• Media management — In a large single- or multisystem environment, control and management of tape media is critical. BRMS allows cataloging of an entire tape inventory and manages the media as they move from location to location. This comprehensive inventory-management system provides many reports that operators can use as instructions.
• Parallel save and restore — BRMS supports parallel save and restore, reducing the backup and recovery times of very large objects and libraries by 'spreading' data across multiple tape drives. This method is in contrast to concurrent save and restore, in which the user must manage the splitting of data. With parallel save and restore, operations end at approximately the same time for all tape drives.
• Lotus Notes Servers backup — BRMS supports backup of online Lotus Notes Servers, including Domino and Quickplace Lotus Notes Servers.
• Flexible backup options — You can define different backup scenarios and execute the ones appropriate for particular circumstances.
• Spooled file backup — Unlike OS/400 save and restore functions, BRMS provides support for backing up spooled files. Spooled file backup is important to a complete backup, and BRMS lets you tailor spooled file backup to meet your needs.
• Storage alternatives — You can save to a tape device, a Media Library device, a save file, or a Tivoli Storage Manager server (previously known as an ADSM server).
It is these features, and more, that make BRMS a popular solution for many installations. Later, we'll take a closer look at some of these capabilities.
Getting Started with BRMS
BRMS brings with it a few new save/restore concepts as well as some new terminology. For instance, you'll find repeated references to the following terms when working with BRMS:
• media — a tape cartridge or save file that will hold the objects being backed up
• media identifier — a name given to a physical piece of media
• media class — a logical grouping of media with similar physical and/or logical characteristics (e.g., density)
• policy — a set of commonly used defaults (e.g., device, media class) that determine how BRMS performs its backup
• backup control group — a grouping of items (e.g., libraries, objects, stream files) to back up
You're probably thinking that 'media' and 'media identifier' aren't such new terms. True, but most people don't think of save files as media, and media identifier is typically thought to mean volume identifier. Policies and backup control groups are concepts central to BRMS in that they govern the backup process. IBM provides default values in several policies and control groups. You can use these defaults or define your own for use in your save/restore operations. Policies are templates for managing backups and media management operations. They act as a control point for defining operating characteristics. The standard BRMS package provides the following policies:
• System Policy — The System Policy is conceptually similar to system values. It contains general defaults for many BRMS operations.
• Backup Policy — The Backup Policy determines how the system performs backups. It contains defaults for backup operations.
• Recovery Policy — The Recovery Policy defines how the system typically performs recovery operations.
• Media Policies — Media Policies control media-related functionality. For instance, they determine where BRMS finds tapes needed for a backup.
• Move Policies — Move Policies define the way media moves through storage locations from creation time through expiration.
In pre-V5R1 releases of OS/400, BRMS is shipped with two default backup control groups, *SysGrp (system group) and *BkuGrp (backup group). The *SysGrp control group backs up all system data, and the *BkuGrp control group backs up all user data. You can back up your entire system using these two control groups, but doing so requires two backup commands, one for each group. To back up your entire system using a single control group, you can create a new backup control group that includes the following BRMS special values as backup items:

Seq    Backup items
 10    *SavSys
 20    *IBM
 30    *AllUsr
 40    *AllDLO
 50    *Link
The time required to back up the system using this full backup control group is less than that required to use a combination of the *SysGrp and *BkuGrp backup control groups. The *SysGrp control group contains the special value *SavSys, which saves the licensed internal code, OS/400, user profiles, security data, and configuration data. The *BkuGrp control group contains the special values *SavSecDta and *SavCfg, which also save user profiles, security data, and configuration data. If you use the two control groups *SysGrp and *BkuGrp, you save the user profiles, security data, and configuration data twice. This redundancy in saved data contributes to the additional backup time when using control groups *SysGrp and *BkuGrp. Starting with V5R1, BRMS includes a new, full-system default backup control group, *System, that combines the function of groups *SysGrp and *BkuGrp. Note that none of the full backup control groups discussed so far saves spooled files. If spooled files are critical to your business, you'll need to create a backup list of your spooled files to be included in your full backup control group (more about how to do this later).
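If you create such a group, you run it with the StrBkuBRM (Start Backup using BRM) command, naming the control group you defined. A minimal sketch, assuming the V5R1 *System group (substitute your own group name on earlier releases):

STRBKUBRM CTLGRP(*SYSTEM)   /* *SYSTEM assumes V5R1 or later; use your own group name otherwise */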
Saving Data in Parallel with BRMS As I mentioned, BRMS supports parallel save/restore function. This support is intended for use with large objects and libraries. Its goal is to reduce backup and recovery times by evenly dividing data across multiple tape drives. You typically define parallel resources when you work with backup control groups. You specify both a maximum number of resources (devices) and a minimum number of resources to be used during the backup. For example, you could specify 32 for maximum resources and 15 for minimum resources. When the backup is submitted, the system checks for available tape resources. If it can't find 32 available tape devices, the backup will be run with the minimum of 15. It's not a requirement that the number of devices used for the backup be used on the restore. However, to reduce the number of tape mounts, it's best to use the same number of tape devices on the restore. Starting with V5R1, the special values *AllProd, *AllTest, *AllUsr, *ASP01-*ASP99, and *IBM are supported on BRMS parallel saves, with the objects being 'spread' at the library level. Restores for objects saved in parallel with these special values are still done in a serial mode.
Online Backup of Lotus Notes Servers with BRMS In today's working environment, users demand 24x7 access to their mail and other Lotus Notes databases, yet it's also critical that user data be backed up frequently and in a timely way. BRMS Online Lotus Notes Servers Backup support meets these critical needs. With this support, you can save Lotus Notes databases while they're in use, without requiring users to exit the system. Prior save-while-active support required ending applications to reach a checkpoint or the use of commitment control or journaling. Another alternative was to invest in an additional server, replicate the server
data, and perform the backup from the second server. Online Lotus Notes Servers Backup with BRMS avoids these requirements. Installation of BRMS automatically configures control groups and policies that help you perform online backup of your Lotus Notes Servers. The Online Lotus Notes Servers Backup process allows the collection of two backups into one entity. BRMS and Domino or Quickplace accomplish this using a BRMS concept called a package. The package is identified by the PkgID (Package identifier) parameter on the SavBRM (Save Object using BRM) command. Domino or Quickplace will back up the databases while they are online and in use. When the backup is completed, a secondary file is backed up and associated with the first backup using the package concept. The secondary file contains all the changes that occurred during the online backup, such as transaction logs or journaling information. When you need to recover a Lotus Notes Server database that was backed up using BRMS Online Backup, BRMS calls Domino or Quickplace through recovery exits that let Domino or Quickplace apply any changes from the secondary file backup to the database that was just restored. This recovery process maintains the integrity of the data.
Restricted-State Saves Using BRMS You can use the console monitor function of BRMS to schedule unattended restricted-state saves. This support is meaningful because with OS/400 save functions, restricted-state saves must be run interactively from a display in the controlling subsystem. BRMS's support means you can run an unattended SavSys operation to save the OS/400 licensed internal code and operating system (or other functions you want to run in a restricted state). You simply specify the special value *SavSys on the StrBkuBRM (Start Backup using BRM) command or within a BRMS control group to perform a SavSys. You can temporarily interrupt the console-monitoring function to enter OS/400 commands and then return the console to a monitored state. Console monitoring lets users submit the SavSys job to the job scheduler instead of running the save interactively. You can use the Submit to batch parameter on the StrBkuBRM command to enter *Console as a value, thereby performing your saves in batch mode. Thus, you don't have to be nearby when the system save is processed. However, you must issue this command from the system console because BRMS runs the job in subsystem QCtl. If you try to start the console monitor from your own workstation, BRMS sends a message indicating that you're not in a correct environment to start the console monitor.
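Putting the pieces together, an unattended system save submitted to the console monitor might look like this. This is a sketch that assumes SBMJOB is the 'Submit to batch' parameter mentioned above and that the named control group includes *SavSys:

STRBKUBRM CTLGRP(*SYSGRP) SBMJOB(*CONSOLE)   /* runs under the console monitor in subsystem QCtl */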
Backing Up Spooled Files with BRMS With BRMS, you can create a backup list that specifies the output queues you want to save. You can then specify this backup list on your backup control groups. You create a spooled file backup list using command WrkLBRM (Work with Lists using BRM). You simply add a list, specifying
• *Bku for the Use field
• a value for the List name (e.g., SaveSplF)
• *Spl for the Type field
When you press Enter, the Add Spooled File List panel (Figure 16.1) is displayed. (The figure shows the panel after backup information has been entered.)
Including Spooled File Entries in a Backup List Now, you can update the backup list by adding the output queues you want to save. Within a spooled file list, you can save multiple output queues by selecting multiple sequence numbers. When you add an output queue to the list, you can filter the spooled files to save by specifying values for spooled file name, job name, user name, or user data. For example, if you want to save only spooled files that belong to user A, you can specify user A's name in the User field. Generic names are also allowed.
The sample setup in Figure 16.1 saves output queue Prt01 in library QUsrSys. If you leave the Outq field at its default value *All, BRMS saves all spooled files from all output queues in library QUsrSys. To exclude an output queue, you can use the *Exc value. Once you set up your backup list, you can add it to your daily, weekly, or monthly backup control group as a backup item with a list type of *Spl. Note that BRMS doesn't support incremental saves of spooled files. If you specify an incremental save for a list type of *Spl, all spooled files in the list are saved. BRMS doesn't automatically clear the output queues after the spooled files are successfully saved. After you've successfully saved your spooled files, you can use the WrkSplFBRM (Work with Spooled Files using BRM) command to display the status of your saves. The WrkSplFBRM panel displays your spooled files in the order in which they were created on the system.
Restoring Spooled Files Saved Using BRMS BRMS doesn't automatically restore spooled files when you restore your user data during a system recovery. To restore saved spooled files, use the WrkSplFBRM command and select option 7 (Restore spooled file) on the resulting screen. From the Select Recovery Items panel that appears, you can specify the spooled files you want to restore. By default, BRMS restores spooled file data in the output queue from which the data was saved. If necessary, you can change any of the BRMS recovery defaults by pressing F9 on the Select Recovery Items screen. During the save and restore operations, BRMS retains spooled file attributes, names, user names, user data fields, and, in most cases, job names. During the restore operation, OS/400 assigns new job numbers, system dates, and times; the original dates and times aren't restored. Be aware that BRMS saves spooled files as a single folder, with multiple documents (spooled members) within the folder. During the restore, BRMS searches the tape label for the folder and restores all the documents. If your spooled file save happens to span multiple tape volumes, you'll be prompted to load the first tape to read the label information before restoring the documents on the subsequent tapes. To help with recovery, consider saving your spooled files on a separate tape using the *Load exit in a control group, or split your spooled file saves so you use only one tape at a time.
The BRMS Operations Navigator Interface With V5R1, BRMS has an Operations Navigator (OpsNav) interface that makes setting up and managing your backup and recovery strategy even easier. Using wizards, you can simplify the common operations you need to perform, such as creating a backup policy, adding tape media to BRMS, preparing the tapes for use, adding items to a backup policy, and restoring backed-up items. If you're currently using BRMS, you may not find all the functionality in OpsNav that you have with the greenscreen version. However, watch for additional features in future releases of BRMS Operations Navigator. You may still want to use the graphical interface to perform some of the basic operations. If so, you'll need to be aware of some differences between the green-screen and the OpsNav interfaces.
Terminology Differences
The OpsNav version of BRMS uses some different terminology than the green-screen BRMS. Here are some key terms:

• Backup history — Information about each of the objects backed up using BRMS. The backup history includes any items backed up using a backup policy. In the green-screen interface, the equivalent term is media information.
• Backup policy — Defaults that control what data is backed up, how it is backed up, and where it is backed up. In the green-screen interface, a combination of a backup control group and a media policy would make up a backup policy. Also, there is no system policy in the OpsNav interface; all information needed to perform a backup is included in the backup policy.
• Media pool — A group of media with similar density and capacity characteristics. In the green-screen interface, this is known as a media class.
Functional Differences As of this writing, the current version of BRMS Operations Navigator lets you
• run policies shipped with BRMS
• view the backup history
• view the backup and recovery log
• create and run a backup policy
• back up individual items
• restore individual items
• schedule items to be backed up and restored
• print a system recovery report
• customize user access to BRMS functions and components
• run BRMS maintenance activities
• add, display, and manage tape media
Some functions unavailable in the current release of BRMS Operations Navigator but included in the green-screen interface include
• move policies
• tape library support
• backup to save files
• backup of spooled files
• parallel backup
• networked systems support
• advanced functions, such as hierarchical storage management (HSM)
• BRMS Application Client for Tivoli Storage Manager
Backup and Recovery with BRMS OpsNav BRMS Operations Navigator is actually a plug-in to OpsNav. A plug-in is a program that's created separately from OpsNav but, when installed, looks and behaves like the rest of the graphical user interface of OpsNav.
Backup Policies One ease-of-use advantage offered by BRMS OpsNav is that you can create backup policies to control your backups. A backup policy is a group of defaults that controls what data is backed up, how it is backed up, and where it is backed up. Once you've defined your backup policies, you can run your backup at any time or schedule your backup to run whenever it fits into your backup window. Three backup policies come with BRMS:
• *System — backs up the entire system
• *SysGrp — backs up all system data
• *BkuGrp — backs up all user data
If you have a simple backup strategy, you can implement your strategy using these three backup policies. If you have a medium or complex strategy, you create your own backup policies. When you back up your data using a BRMS backup policy, information about each backed-up item is stored in the backup history. This information includes the item name, the type of backup, the date of the backup, and the
volume on which the item is backed up. You can specify the level of detail you want to track for each item in the properties for the policy. You can then restore items by selecting them from the backup history. You also use the backup history information for system recoveries.
Creating a BRMS Backup Policy
You can use the New Backup Policy wizard in OpsNav to create a new BRMS backup policy. To access the wizard:

1. Expand Backup, Recovery and Media Services.
2. Right-click Backup policies, and select New policy.
The wizard gives you the following options for creating your backup policies:

• Back up all system and user data — Enables you to do a full system backup of IBM-supplied data and all user data (spooled files are not included in this backup)
• Back up all user data — Enables you to back up the data that belongs to users on your system, such as user profiles, security data, configuration data, user libraries, folders, and objects in directories
• Back up Lotus server data online — Enables you to perform an online backup of Lotus Domino and Quickplace servers
• Back up a customized set of objects — Enables you to choose the items you want to back up
After creating a backup policy, you can choose to run the backup policy immediately or schedule it to run later. If you want to change the policy later, you can do so by editing the properties of the policy. Many customization options that aren't available in the New Backup Policy wizard are available in the properties of the policy. To access the policy properties, right-click the policy and select Properties.
Backing Up Individual Items In addition to using backup policies to back up your data, you can choose to back up individual files, libraries, or folders using the OpsNav hierarchy. You can also choose to back up just security or configuration data. Using OpsNav, simply right-click the item you want to back up and select Backup.
Restoring Individual Items If a file becomes corrupted or accidentally deleted, you may need to restore individual items on your system. If you use backup policies to back up items on your system, you can restore those items from the backup history. When you restore an item from the backup history, you can view details about the item, such as when it was backed up and how large it is. If there are several versions of the item in the backup history, you can select which version of the item you want to restore. You can also restore items that you backed up without using a backup policy. However, for these items, you don't have the benefit of using the backup history to make your selection. Fortunately, you can use the OpsNav Restore wizard to restore individual items on your system, whether they were backed up with a backup policy or not. To access the wizard in OpsNav, right-click Backup, Recovery and Media Services and select Restore.
Scheduling Unattended Backup and Restore Operations Earlier, you saw how to schedule unattended restricted-state saves using the console monitor and the StrBkuBRM command. Of course, you can also schedule non-restricted-state save and restore operations.
In addition, you can use OpsNav to schedule your backup. To do so, you simply use the OpsNav New Policy wizard to create and schedule a backup. If you need to schedule an existing backup policy, you can do so by rightclicking its entry under Backup Policies in OpsNav and selecting Schedule. If the save operation requires a restricted-state system, you need only follow the console monitor instructions presented by OpsNav when you schedule the backup. Tip: When you schedule a backup policy to be run, remember that only the items scheduled to be backed up on the day you run the policy will be backed up. For example, say you have a backup policy that includes the library MyLib. In the policy properties, you schedule MyLib for backup every Thursday. If you schedule the policy to run on Thursday, the system backs up MyLib. However, if you schedule the same policy to run on any other day, the system does not back up MyLib. You can also schedule restore operations in much the same manner as backup operations using OpsNav. Restore operations, however, are scheduled less often than backup operations.
System Recovery Report BRMS produces a complete system recovery report that guides you through an entire system recovery. The report lets you know exactly which tape volumes are needed to recover your system. When recovering your entire system, you should use the report in conjunction with OS/400 Backup and Recovery (SC41-5304). Keep the recovery report with your tape volumes in a secure and safe off-site location.
BRMS Security Functions BRMS provides security functions via the Functional Usage Model, which lets you customize access to selected BRMS functions and functional components by user. You must use the OpsNav interface to access the Functional Usage Model feature. You can let certain users use specific functions and components while letting others use and change specific functions and components. You can grant various types of functional usage to all users or to specified users only. Each BRMS function, functional component, and specific backup and media management item (e.g., policy, control group) has two levels of authority access:
• Access or No Access — At the first level of authority access using the Functional Usage Model, a user either has access to a BRMS function or component or has no access to it. If a user has access, he or she can use and view the function or component. With this basic level of access, a user can process a specific item (e.g., a library, a control group) in a backup operation but can't change the item.
• Specific Change or No Change — The second level of authority access lets a user change a specific function, component, or item. For example, to change a backup list, a user must have access to the specific backup list. Similarly, to change a media policy, a user must have access to the specific media policy.
The Functional Usage Model provides lists of existing items (e.g., control groups, backup lists, media and move policies) for which you can grant specific access. With the Functional Usage Model, you can give a user both types of access (so the user can both use and change a particular function, component, or item) or only one type of access (e.g., access to use but not to change a particular function, component, or item).
Security Options for BRMS Functions, Components, and Items In the backup area, the following usage levels are available:
• Basic Backup Activities — Users with Basic Backup Activities access can use and view the backup policy, control groups, and backup lists. With use access, these users can also process backups by using backup control groups (i.e., using the StrBkuBRM command) or by saving libraries, objects, or folders (SavLibBRM, SavObjBRM, or SavFlrLBRM). A user without Basic Backup Activities access can't see backup menu options or command parameter options.
• Backup Policy — Users with Backup Policy access can change the backup policy (in addition to using and viewing it). Users without access to the backup policy cannot change it.
• Backup Control Groups — Users with Backup Control Groups access can change specific backup control groups (in addition to using and viewing them). A user can find a list of his or her existing backup control groups under the backup control groups heading in OpsNav. You can grant a user access to any number of specific control groups. Users without access to the backup control groups cannot change them.
• Backup Lists — Users with Backup Lists access can change specific backup lists (in addition to using and viewing them). A user can find a list of his or her existing backup lists under the backup lists heading in OpsNav. You can grant a user access to any number of specific backup lists. Users without access to a backup list cannot change it.
In the recovery area, the following usage levels are available:
• Basic Recovery Activities — Users with Basic Recovery Activities access can use and view the recovery policy. They can also use the WrkMedIBRM (Work with Media Information using BRM) command to process basic recoveries, command RstObjBRM (Restore Object using BRM), and command RstLibBRM (Restore Library using BRM). Users without Basic Recovery Activities access can't see recovery menu options or command parameter options.
• Recovery Policy — Users with Recovery Policy access can change the recovery policy (in addition to using and viewing it). Users without access to the recovery policy can't change it.
In the area of media management, the following usage levels are available:
• Basic Media Activities — Users with Basic Media Activities access can perform basic media-related tasks, such as using and adding media to BRMS. Users with this access can also use and view (but not change) media policies and media classes. Users without Basic Media Activities access can't see related menu options or command parameter options.
• Advanced Media Activities — Users with Advanced Media Activities access can perform media-related tasks such as expiring, removing, and initializing media.
• Media Policies — Users with Media Policies access can change specific media policies (in addition to using and viewing them). A user can find a list of his or her existing media policies under the media policies heading in OpsNav. You can grant a user access to any number of media policies. Users without access to a media policy cannot change it.
• Media Classes — Users with Media Classes access can change specific media classes (in addition to using and viewing them). A user can find a list of his or her existing media classes under the media classes heading in OpsNav. You can grant a user access to any number of media classes. Users without access to a media class cannot change it.
• Media Information — Users with Media Information access can change media information with command WrkMedIBRM (Work with Media Information).
• Basic Movement Activities — Users with Basic Movement Activities access can manually process or display MovMedBRM (Move Media using BRM) commands, but they can't change them.
• Move Verification — Users with Move Verification access can perform move verification tasks.
• Move Policies — Users with Move Policies access can change specific move policies (in addition to using and viewing them). A user can find a list of his or her existing move policies under the move policies heading in OpsNav. You can grant a user access to any number of move policies. Users without access to a move policy cannot change it.
In the system area, the following usage options are available:
• Basic System-related Activities — Users with Basic System-related Activities access can use and view device panels and commands. They can also view and display auxiliary storage pool (ASP) panels and commands. Users with this access level can also use and view the system policy.
• Devices — Users with Devices access can change device-related information. Users without this access can't change device-related information.
• Auxiliary Storage Pools — Users with ASP access can change information about BRMS ASP management.
• Maintenance — Users with Maintenance access can schedule and run maintenance operations.
• System Policy — Users with System Policy access can change system policy parameters.
• Log — Users with Log access can remove log entries. Any user can display log information, but only those with Log access can remove log entries.
• Initialize — Users with Initialize access can use the InzBRM (Initialize BRM) command.
Media Management
BRMS makes media management simple by maintaining an inventory of your tape media. It keeps track of what data is backed up on which tape and which tapes have available space. When you run a backup, BRMS selects the tape to use from the available pool of tapes. BRMS prevents a user from accidentally writing over active files or using the wrong tape. Before you can use any tape media with BRMS, you need to add it to the BRMS inventory and initialize it. You can do this using OpsNav's Add media wizard (under Media, right-click Tape Volumes and select Add). You can also use the green-screen BRMS command AddMedBRM (Add Media to BRM). Once you've added tape media to the BRMS inventory, you can view the media based on criteria you specify, such as the volume name, status, media pool, or expiration date. This gives you the capability to manually expire a tape and make it available for use in the BRMS media inventory. To filter which media you see in the list, under Media, right-click Tape Volumes and select Include. To view information about a particular tape volume or perform an action on that volume, right-click the volume and select the action you want to perform from the menu.
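For example, adding a new cartridge to the BRMS inventory from the green screen might look something like this; the volume ID and media class shown are hypothetical, and the exact parameters of AddMedBRM can vary by release:

ADDMEDBRM VOL(T00001) MEDCLS(QIC1000)   /* hypothetical volume ID and media class */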
BRMS Housekeeping You should perform a little BRMS housekeeping on a daily basis. The BRMS maintenance operation automatically performs BRMS cleanup on your system, updates backup information, and runs reports. BRMS maintenance performs these functions:
• expires media
• removes media information
• removes migration information (180 days old)
• removes log entries (from beginning entry to within 90 days of current date)
• runs cleanup
• retrieves volume statistics
• audits system media
• changes journal receivers
• prints expired media report
• prints version report
• prints media information
• prints recovery reports
You can run BRMS maintenance using OpsNav (right-click Backup, Recovery and Media Services and select Run Maintenance) or using BRMS command StrMntBRM (Start Maintenance for BRM).
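Because maintenance should run regularly, many shops simply put StrMntBRM on the standard OS/400 job scheduler. A sketch, with an arbitrary job name and run time:

ADDJOBSCDE JOB(BRMSMAINT) CMD(STRMNTBRM) FRQ(*WEEKLY) SCDDAY(*ALL) SCDTIME(2300)   /* job name and time are arbitrary */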
Check It Out As you can see, BRMS provides some powerful features for simplifying and managing many aspects of iSeries backup and recovery. Keep in mind that BRMS isn't a replacement for your backup and recovery strategy; rather, it's a tool that can help you implement and carry out such a strategy. There's a lot more to BRMS than what's been covered here. For the complete details, see Backup, Recovery and Media Services (SC41-5345), as well as the BRMS home page (http://www.as400.ibm.com/service/brms.htm) and IBM's iSeries Information Center (http://publib.boulder.ibm.com/pubs/html/as400/infocenter.htm).
Chapter 17 - Defining a Subsystem

We've all found ourselves lost at some time or other. It's not that we're dumb. We've simply gone to an unfamiliar place without having the proper orientation. You may have experienced a similar feeling of discomfort the first few times you signed on to your AS/400. Perhaps you submitted a job and then wondered, 'How do I find that job?' or 'Where did it go?' Although I'm sure you have progressed beyond these initial stages of bewilderment, you may still need a good introduction to the concepts of work management on the AS/400.
Work management on the AS/400 refers to the set of objects that define jobs and how the system processes those jobs. With a good understanding of work management concepts, you can easily perform such tasks as finding a job on the system, solving problems, improving performance, or controlling job priorities. I can't imagine anyone operating an AS/400 in a production environment without having basic work management skills to facilitate problem solving and operations. Let me illustrate two situations in which work management could enhance system operations. Perhaps you are plagued with end users who complain that the system takes too long to complete short jobs. You investigate and discover that, indeed, the system is processing short jobs slowly because they spend too much time in the job queue behind long-running end-user batch jobs, operator-submitted batch jobs, and even program compiles. You could tell your operators not to submit jobs, or you could have your programmers compile interactively, but those approaches would be impractical and unnecessary. The answer lies in understanding the work management concepts of multiple subsystems and multiple job queues. Perhaps when your 'power users' and programmers share a subsystem, excessive peaks and valleys in performance occur due to the heavy interaction of these users. Perhaps you want to use separate storage pools (i.e., memory pools) based on user profiles so that you can place your power users in one pool, your programmers in another, and everyone else in a third pool, thereby creating consistent performance for each user group. You could do this if you knew the work management concepts of memory management. Learning work management skills means learning how to maximize system resources. My goal for this and the next two chapters is to teach you the basic skills you need to effectively and creatively manage all the work processed on your AS/400.
Getting Oriented Just as a road map gives you the information you need to find your way in an unfamiliar city, Figure 17.1 (401 KB - yes, 401! Might want to go have some coffee while you wait for it to download.) serves as a guide to understanding work management. It shows the basic work management objects and how they relate to one another. The objects designated by a 1 represent jobs that enter the system, the objects designated by a 2 represent parts of the subsystem description, and the objects designated by a 3 represent additional job environment attributes (e.g., class, job description, and user profile) that affect the way a job interacts with the system. You will notice that all the paths in Figure 17.1 lead to one destination -- the subsystem. In the Roman Empire all roads led to Rome. On the AS/400, all jobs must process in a subsystem. So what better place to start our study of work management than with the subsystem?
Defining a Subsystem A subsystem, defined by a subsystem description, is where the system brings together the resources needed to process work. As shown in Figure 17.2, the subsystem description contains seven parts that fall into three categories. Let me briefly introduce you to these components of the subsystem description.
• Subsystem attributes provide the general definition of the subsystem and control its main storage allocations. The general definition includes the subsystem name, description, and the maximum number of jobs allowed in the subsystem. Storage pool definitions are the most significant subsystem attributes. A subsystem's storage pool definition determines how the subsystem uses main storage for processing work. The storage pool definition lets a subsystem either share an existing pool of main storage (e.g., *BASE and *INTERACT) with other subsystems, establish a private pool of main storage, or both. The storage pool definition also lets you establish the activity level -- the maximum number of jobs that can be active at the same time in a particular storage pool.
• Work entries define how jobs enter the subsystem and how the subsystem processes that work. They consist of autostart job entries, workstation entries, job queue entries, communications entries, and prestart job entries.
  • Autostart job entries let you predefine any jobs you want the system to start automatically when it starts the subsystem.
  • Workstation entries define which workstations the subsystem will use to receive work. You can use a workstation entry to initiate an interactive job when a user signs on to the system or when a user transfers an interactive job from another subsystem. You can create workstation entries for specific workstation names (e.g., DSP10 and OH0123), for generic names (e.g., DSP*, DP*, and OH*), or by the type of workstations (e.g., 5251, 3476, and 3477).
  • Job queue entries define the specific job queues from which to receive work. A job queue, which submits jobs to the subsystem for processing, can only be allocated by one active subsystem. A single subsystem, however, can allocate multiple job queues, prioritize them, and specify for each a maximum number of active jobs.
  • Communications entries define the communications device associated with a remote location name from which you can receive a communications evoke request.
  • Prestart job entries define jobs that start on a local system before a remote system sends a communications request. When a communications evoke request requires the program running in the prestart job, the request attaches to that prestart job, thereby eliminating all overhead associated with initiating a job and program.
• Routing entries identify which programs to call to control routing steps that will execute in the subsystem for a given job. Routing entries also define in which storage pool the job will be processed and which basic execution attributes (defined in a job class object associated with a routing entry) the job will use for processing.
All these components of the subsystem description determine how the system uses resources to process jobs within a subsystem. I will expand upon my discussion of work entries in Chapter 18 and my discussion of routing entries in Chapter 19. Now that we've covered some basic terms, let's take a closer look at subsystem attributes and how subsystems can use main storage for processing work.
Main Storage and Subsystem Pool Definitions When the AS/400 is shipped, all of main storage resides in two system pools: the machine pool (*MACHINE) and the base pool (*BASE). You must define the machine pool to support your system hardware; the amount of main storage you allocate to the machine pool is hardware-dependent and varies with each AS/400. For more information about calculating the required machine pool size, see Chapter 2 and IBM's AS/400 Programming: Work Management Guide (SC41-8078). The base pool is the main storage that remains after you reserve the machine pool. You can designate *BASE as a shared pool for all subsystems to use to process work, or you can divide it into smaller pools of shared and private main storage. A shared pool is an allocation of main storage where multiple subsystems can process work. *MACHINE and *BASE are both examples of shared pools. Other shared storage pools that you can define include *INTERACT (for interactive jobs), *SPOOL (for printers), and *SHRPOOL1 to *SHRPOOL10 (for pools that you can define for your own purposes). You can control shared pool sizes by using the CHGSHRPOOL (Change Shared Storage Pool) or WRKSHRPOOL (Work with Shared Storage Pools) commands. Figure 17.3 shows a WRKSHRPOOL screen, on which you can modify the pool size or activity level simply by changing the entries. The AS/400's default controlling subsystem (QBASE) and the default spooling subsystem (QSPL) are configured to take advantage of shared pools. QBASE uses the *BASE pool and the *INTERACT pool, while QSPL uses *BASE and *SPOOL. To see what pools a subsystem is using, you use the DSPSBSD (Display Subsystem Description) command. For instance, when you execute the command
DSPSBSD QBASE OUTPUT(*PRINT)

you will find the following pool definitions for QBASE listed (if the defaults have not been changed):

QBASE    ((1 *BASE) (2 *INTERACT))
Parentheses group together two definitions, each of which can contain two distinct parts (the subsystem pool number and size). In this example of the QBASE pool definitions, the (1 *BASE) represents the subsystem pool number 1 and a size of *BASE, meaning that the system will use all of *BASE as a shared pool. A third part of the pool definition, the activity level, doesn't appear for *BASE because system value QBASACTLVL maintains the activity level. The second pool definition for QBASE is (2 *INTERACT). Because you can use the CHGSHRPOOL or WRKSHRPOOL commands to modify the activity level for shared pool *INTERACT, the activity level is not listed as part of the subsystem description, nor is it specified when you use the CRTSBSD or CHGSBSD commands. Be careful not to confuse subsystem pool numbering with system pool numbering. The AS/400's two predefined system pools, *MACHINE and *BASE, are defined as system pool number 1, and system pool number 2, respectively. Pool numbering within a subsystem is unique to that subsystem, and only the routing entries in that subsystem use it to determine which pool jobs will use, based on the routing data associated with each job. As subsystems define new storage pools (shared or private) in addition to the two predefined system pools, the system simply assigns the next available system pool number to use as a reference on the WRKSYSSTS display. For example, with the above pools for QBASE and the following pools for QSPL,
QSPL     ((1 *BASE) (2 *SPOOL))
the system pool numbering might correspond to the subsystem pool numbering as shown in Figure 17.4.
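Because *INTERACT is a shared pool, its size and activity level are maintained outside the subsystem description, with the CHGSHRPOOL or WRKSHRPOOL commands. A quick sketch of adjusting them; the size (in kilobytes) and activity level shown are placeholders, not recommendations:

CHGSHRPOOL POOL(*INTERACT) SIZE(8000) ACTLVL(20)   /* placeholder size and activity level */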
A private pool is a specific allocation of main storage reserved for one subsystem. It's common to use a private pool when the system uses the controlling subsystem QCTL instead of QBASE. If you change your controlling subsystem to QCTL, the system startup program starts several subsystems (i.e., QINTER, QBATCH, QCMN, and QSPL) at IPL that are designed to support specific types of work. Although using QBASE as the controlling subsystem lets you divide main storage into separate pools, using QCTL is inherently easier to manage and administer in terms of controlling the number of jobs and performance tuning. IBM ships the following pool definitions for the multiple subsystem approach:
QCTL     ((1 *BASE))
QINTER   ((1 *BASE) (2 *INTERACT))
QBATCH   ((1 *BASE))
QCMN     ((1 *BASE))
QSPL     ((1 *BASE) (2 *SPOOL))

As you can see, the initial configuration of these subsystems is like the initial configuration of subsystem QBASE, in that shared pools reserve areas of main storage for specific types of jobs. However, pool sharing does not provide optimum performance in a diverse operations environment where various types of work process simultaneously. In such cases, subsystems with private pools may be necessary to improve performance. Look at the pool definitions in Figure 17.5, in which two interactive subsystems (QINTER and QPGMR) provide private pools for both end users and programmers. Both QINTER and QPGMR define specific amounts of main storage to be allocated to the subsystem instead of sharing the *INTERACT pool. Also, both storage definitions require a specific activity level, whereas shared pool activity levels are maintained as part of the shared pool definitions (using the CHGSHRPOOL or WRKSHRPOOL commands). The private pool configuration in this example, with private main storage and private activity levels, prevents unwanted contention for resources between end users and programmers. Figure 17.5 also demonstrates how you can use multiple batch subsystems. Three batch subsystems (QBATCH, DAYQ, and QPGMRB, respectively) provide for daytime and nighttime processing of operator-submitted batch jobs, daytime end-user processing of short jobs, and program compiles. A separate communications subsystem, QCMN, is configured to handle any communications requests, and QSPL handles spooling.
The decision about whether to use shared pools or private pools should depend on the storage capacity of your system. On one hand, because shared pools ensure efficient use of main storage by letting more than one subsystem share a storage pool, it's wise to use shared pools if you have a system with limited main storage. On the other hand, private pools provide a reserved pool of main storage and activity levels that are constantly available to a subsystem without contention from any other subsystem. They are easy to manage when dealing with multiple subsystems. Therefore, private pools are a wise choice for a system with ample main storage.
Starting a Subsystem A subsystem definition is only that -- a definition. To start a subsystem, you use the STRSBS (Start Subsystem) command. Figure 17.6 outlines the steps your system takes to activate a subsystem after you execute a STRSBS command. First, it uses the storage pool definition to allocate main storage for job processing. Next, it uses the workstation entries to allocate workstation devices and present the workstation sign-on displays. If the system finds communications entries, it uses them to allocate the named devices. The system then allocates job queues so that when the subsystem completes the start-up process, the subsystem can receive jobs from the job queues. Next, it starts any defined prestart or autostart jobs. When the system has completed all these steps, the subsystem is finally ready to begin processing work. Now that I've introduced you to subsystems, look over IBM's AS/400 Programming: Work Management Guide and make a sketch of your system's main storage pool configuration to see how your subsystems work. Chapter 18 examines work entries and where jobs come from, and Chapter 19 discusses routing and where jobs go. When we're done with all that, you'll find yourself on Easy Street -- with the skills you need to implement a multiple subsystem work environment.
Chapter 18 - Where Jobs Come From

One of OS/400's most elegant features is the concept of a 'job,' a unit of work with a tidy package of attributes that lets you easily identify and track a job throughout your system. The AS/400 defines this unit of work with a job name, a user profile associated with the job, and a computer-assigned job number; it is these three attributes that make a job unique. For example, when a user signs on to a workstation, the resulting job might be known to the system as

Job name . . . . :   DSP10      (Workstation ID)
User profile . . :   WMADDEN
Job number . . . :   003459

Any transaction OS/400 completes is associated with an active job executing on the system. But where do these jobs come from? A job can be initiated when you sign on to the system from a workstation, when you submit a batch job, when your system receives a communications evoke request from another system, when you submit a prestart job, or when you create autostart job entries that the system automatically executes when it starts the associated subsystem. Understanding how jobs get started on the system is crucial to grasping AS/400 work management concepts. So let's continue Chapter 17's look at the subsystem description by focusing on work entries, the part of the description that defines how jobs gain access to the subsystem for processing.
Types of Work Entries There are five types of work entries: workstation, job queue, communications, prestart job, and autostart job. The easiest to understand is the workstation entry, which describes how a user gains access to a particular subsystem (for interactive jobs) using a workstation. To define a workstation entry, you use the ADDWSE (Add Work Station Entry) command. A subsystem can have as many workstation entries as you need, all of which have the following attributes:
• WRKSTNTYPE (workstation type) or WRKSTN (workstation name)
• JOBD (job description name)
• MAXACT (maximum number of active workstations)
• AT (when to allocate workstation)
When defining a workstation entry, you can use either the WRKSTNTYPE or WRKSTN attribute to specify which workstations the system should allocate. For instance, if you want to allocate all workstations, you specify WRKSTNTYPE(*ALL) in the workstation entry. This entry tells the system to allocate all workstations, regardless of the type (e.g., 5251, 5291, 3476, or 3477). Or you can use the WRKSTNTYPE attribute in one or more workstation entries to tell the system to allocate a specific type of workstation (e.g., WRKSTNTYPE(3477)). You can also define workstation entries using the WRKSTN attribute to specify that the system allocate workstations by name. You can enter either a specific name or a generic name. For example, an entry defining WRKSTN(DSP01) tells the subsystem to allocate device DSP01. The generic entry, WRKSTN(OHIO*), tells the subsystem to let any workstation whose name begins with 'OHIO' establish an interactive job. You must specify a value for either the WRKSTNTYPE parameter or the WRKSTN parameter. In addition, you cannot mix WRKSTNTYPE and WRKSTN entries in the same subsystem. If you do, the subsystem recognizes only the entries that define workstations by the WRKSTN attribute and ignores any entries using the WRKSTNTYPE attribute.

The JOBD workstation entry attribute specifies the job description for the workstation entry. You can give this attribute a value of *USRPRF (the default) to tell the system to use the job description named in the user profile of the person who signs on to the workstation. Or you can specify a value of *SBSD to tell the system to use the job description of the subsystem. You can also use a qualified name of an existing job description. For security reasons, it's wise to use the default value *USRPRF for the JOBD attribute so that a user profile is required to sign on to the workstation. If you use the value *SBSD or a job description name and there is a valid user profile associated with the job description via the USER attribute, any user can simply press Enter and sign on to the subsystem. In such a situation, the user then assumes the user ID associated with the default job description named on the workstation entry. There may be times when you want to define a workstation entry so that one user profile is always used when someone accesses the system via a particular workstation (e.g., if you wanted to disseminate public information at a courthouse, mall, or school). In such cases, be sure to construct such configurations so that only certain workstation entries have a job description that provides this type of access.

The workstation entry's MAXACT attribute determines the maximum number of workstations allowed in the subsystem at one time. When this limit is reached, the subsystem must de-allocate one workstation before it can allocate another. The value that you should normally use for this attribute is the default, *NOMAX, because you typically control (i.e., you physically limit) the number of devices. In fact, supplying a number for this attribute could cause confusion if one day the limit is reached and some poor soul has to figure out why certain workstations aren't functioning. It could take days to find this seldom-used attribute and change the value.

The AT attribute tells the system when to allocate the workstation. The default value, AT(*SIGNON), tells the system to allocate the workstation (i.e., initiate a sign-on screen at the workstation) when the subsystem is started.
AT(*ENTER) tells the system to let jobs enter the subsystem only via the TFRJOB (Transfer Job) command. (To transfer a job into an interactive subsystem, a job queue and a subsystem description job queue entry must exist.) Now you're acquainted with the workstation entry attributes, but how can you use workstation entries? Let's say you want to process all your interactive jobs in subsystem QINTER. When you look at the default workstation entries for QINTER, you see the following:
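On most systems the IBM-supplied entries amount to something like the following two definitions, shown here as the equivalent ADDWSE commands. Treat this as a sketch; the exact defaults can vary by release, so verify them on your own system with DSPSBSD QINTER:

ADDWSE SBSD(QINTER) WRKSTNTYPE(*ALL)  JOBD(*USRPRF) AT(*SIGNON)   /* typical default; verify with DSPSBSD */
ADDWSE SBSD(QINTER) WRKSTNTYPE(*CONS) JOBD(*USRPRF) AT(*ENTER)    /* console entry; typical default, verify with DSPSBSD */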
The first set of values tells the system to allocate all workstations to subsystem QINTER when the subsystem is started. The second set of values tells the system to let the console transfer into the subsystem, but not to allocate the device. What about a multiple subsystem environment for interactive jobs? Let's say you want to configure three subsystems: one for programmers (PGMRS), one for local end-user workstations (LOCAL), and one for remote end-user workstations (REMOTE). How can you make sure the system allocates the workstations to the correct subsystem? Perhaps you're thinking you can create individual workstation entries for each device. You can, but such a method would be a nightmare to maintain, and it would require you to end the subsystem each time you added a new device. Likewise, it would be impractical to use the WRKSTNTYPE attribute, because defining types does not necessarily define specific locations for certain workstations. So you have only two good options for ensuring that the correct subsystem allocates the devices. One is to name your various workstations so you can use generic WRKSTN values in the workstation entry. For example, you can allocate programmers' workstations to the proper subsystem by first giving them names like PGMR01 or PGMR02 and then creating a workstation entry that specifies WRKSTN(PGMR*). You might preface all local end-user workstation names with ADMN and LOC and then create workstation entries in the local subsystem using WRKSTN(ADMN*) and WRKSTN(LOC*). For the remote subsystem, you could continue to create workstation entries using generic names like the ones described above, or simply specify WRKSTNTYPE(*ALL), which would cause the subsystem to allocate the remaining workstations. However, you will need to read on to learn how subsystems allocate workstations to ensure that those workstations in the programmer and local subsystems are allocated properly. Your second option for ensuring that the correct subsystem allocates the devices is to use routing entries to reroute workstation jobs to the correct subsystem (I will explain how to do this in the next chapter).
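To make the generic-name approach concrete, here is a sketch of the kind of entries you might add. The subsystem names come from the example above, while the QGPL library and the ADMN*/LOC* generics are assumptions for illustration:

ADDWSE SBSD(QGPL/PGMRS)  WRKSTN(PGMR*)    JOBD(*USRPRF) AT(*SIGNON)
ADDWSE SBSD(QGPL/LOCAL)  WRKSTN(ADMN*)    JOBD(*USRPRF) AT(*SIGNON)   /* assumed naming convention */
ADDWSE SBSD(QGPL/LOCAL)  WRKSTN(LOC*)     JOBD(*USRPRF) AT(*SIGNON)   /* assumed naming convention */
ADDWSE SBSD(QGPL/REMOTE) WRKSTNTYPE(*ALL) JOBD(*USRPRF) AT(*SIGNON)

As the next section explains, you would start REMOTE (the *ALL entry) first and then PGMRS and LOCAL, so the named workstations end up allocated to their proper subsystems.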
Conflicting Workstation Entries Can workstation entries in different subsystems conflict with each other? You bet they can! Consider what happens when two different subsystems have workstation entries that allocate the same device. If AT(*SIGNON) is specified in the workstation entry, the first subsystem will allocate the device, and the device will show a sign-on display. When the system starts another subsystem with a workstation entry that applies to that same device (with AT(*SIGNON) specified), the subsystem will try to allocate it. If no user is signed on to the workstation, the second subsystem will allocate the device. This arrangement isn't all bad. In fact, you can make it work for you. Imagine that you want to establish an interactive environment for two subsystems: QINTER (for all end-user workstations) and QPGMR (for all programmer workstations). You supply WRKSTNTYPE(*ALL) for subsystem QINTER and WRKSTN(PGMR*) for subsystem QPGMR. To ensure that each workstation is allocated to the proper subsystem, you should start QINTER first. Consequently, the system will allocate all workstations to QINTER. After a brief delay, start QPGMR, which will then allocate (from QINTER) only the workstations whose names begin with 'PGMR'. Every workstation has its rightful place by simply using the system to do the work. What about you? Can you see how your configuration is set up to let interactive jobs process? Take a few minutes to examine the workstation entries in your system's subsystems. You can use the DSPSBSD (Display Subsystem Description) command to display the work entries that are part of the subsystem description.
Job Queue Entries Job queue entries control job initiation on your system and define how batch jobs enter the subsystem for processing. To submit jobs for processing, you must assign one or more job queues to a subsystem. A job queue entry associates a job queue with a subsystem. The attributes of a job queue entry are as follows:
• JOBQ (job queue name)
• MAXACT (maximum number of active jobs from this job queue)
• SEQNBR (sequence number used to determine order of selection among all job queues)
• MAXPTYn (maximum number of active jobs with this priority)
The JOBQ attribute, which is required, defines the name of the job queue you are attaching to the subsystem. The subsystem will search this job queue to receive jobs for processing. You can name only one job queue for a job queue entry, but you can define multiple job queue entries for a subsystem. The MAXACT attribute defines the maximum number of jobs that can be active in the subsystem from the job queue named in this entry. This attribute controls only the maximum number of jobs allowed into the subsystem from the job queue. The default for MAXACT is 1, which lets only one job at a time from this job queue process in the subsystem. The MAXACT (yes, same name) attribute of the subsystem description controls the maximum number of jobs in the subsystem from all entries (e.g., job queue and communications entries). You can use the SEQNBR attribute to sequence multiple job queue entries associated with the subsystem. The subsystem searches each job queue in the order specified by the SEQNBR attribute of each job queue entry. The default for this attribute is 10, which you can use to define only one subsystem job queue entry; however, when defining multiple job queue entries, you should determine the appropriate sequence numbers desired to prioritize the job queues. The MAXPTYn attribute is similar to the MAXACT attribute except that MAXPTYn controls the number of active jobs from a job queue that have the same priority (e.g., MAXPTY1 defines the maximum for jobs with priority 1, MAXPTY2 defines the maximum number for jobs with priority 2). The default for MAXPTY1 through MAXPTY9 is *NOMAX. To illustrate how job queue entries work together to create a proper batch environment, Figure 18.1 shows a scheme that includes three subsystems: DAYSBS, NIGHTSBS, and BATCHSBS. DAYSBS processes daytime, short-running end-user batch jobs. NIGHTSBS processes nighttime, long-running end-user batch jobs. BATCHSBS processes operator-submitted requests and program compiles. To create the batch work environment in Figure 18.1, you first create the subsystems using the following CRTSBSD (Create Subsystem Description) commands:
CRTSBSD SBSD(QGPL/DAYSBS)   POOL((1 *BASE) (2 400 1))  MAXACT(1)
CRTSBSD SBSD(QGPL/NIGHTSBS) POOL((1 *BASE) (2 2000 2)) MAXACT(2)
CRTSBSD SBSD(QGPL/BATCHSBS) POOL((1 *BASE) (2 1500 3)) MAXACT(3)

Notice that each subsystem has an established maximum number of active jobs (MAXACT(n)). The maximum limit matches the activity level specified in the subsystem pool definition so that each active job is assigned an activity level without having to wait for one. The next step is to create the appropriate job queues with the following CRTJOBQ (Create Job Queue) commands:
CRTJOBQ JOBQ(QGPL/DAYQ)
CRTJOBQ JOBQ(QGPL/NIGHTQ)
CRTJOBQ JOBQ(QGPL/PGMQ)
CRTJOBQ JOBQ(QGPL/BATCHQ)
Then, add the job queue entries to associate the job queues with the subsystems:
ADDJOBQE SBSD(DAYSBS)   JOBQ(DAYQ)   MAXACT(*NOMAX) SEQNBR(10)
ADDJOBQE SBSD(NIGHTSBS) JOBQ(NIGHTQ) MAXACT(*NOMAX) SEQNBR(10)
ADDJOBQE SBSD(BATCHSBS) JOBQ(PGMQ)   MAXACT(1)      SEQNBR(10)
ADDJOBQE SBSD(BATCHSBS) JOBQ(BATCHQ) MAXACT(2)      SEQNBR(20)
Now let's walk through this batch work environment. Subsystem DAYSBS is a simple configuration that lets one job queue feed jobs into the subsystem. Because the MAXACT attribute value of DAYSBS is 1, only one job filters into the subsystem at a time, despite the fact that you specified the attribute MAXACT(*NOMAX) for the DAYQ job queue entry. Later, you can change the subsystem pool size and activity level, along with the MAXACT subsystem attribute, to let more jobs from the job queue process without having to re-create the job queue entry to modify MAXACT. The configuration of NIGHTSBS is similar to the configuration of DAYSBS, except that it lets two jobs process at the same time. This subsystem is inactive during the day and starts at night via the STRSBS (Start Subsystem) command. When a subsystem is inactive, no job queues are allocated and no jobs are processed. Therefore, application programs can send batch jobs to the NIGHTQ job queue, where they wait to process at night. When NIGHTSBS starts, the system allocates job queue NIGHTQ and jobs can be processed. To show you how job queues can work together to feed into one subsystem, I configured the BATCHSBS subsystem with two job queue entries. Notice that BATCHSBS supports a maximum of three jobs (MAXACT(3)). Job queue entry PGMQ lets one job from that queue be active (MAXACT(1)), while job queue entry BATCHQ lets two jobs be active (MAXACT(2)). As with workstation entries, job queue entries can conflict if you define the same job queue as an entry for more than one subsystem. When a subsystem starts, the job queues defined in the job queue entries are allocated. And when a job queue is allocated to an active subsystem, that job queue cannot be allocated to another subsystem until the first subsystem ends. In other words, first come, first served... or first come, first queued!
Communications Entries After you establish a workstation and a physical connection between remote sites, you need a communications entry, which enables the subsystem to process the program start request. If there are no communications entries, the system rejects any program start request. There's no real pizazz to this entry; you simply need it to link the remote system with your subsystem. A communications entry has the following attributes:
• DEV (name or type of communications device)
• RMTLOCNAME (remote location name)
• JOBD (job description name)
• DFTUSR (default user profile name)
• MODE (mode description name)
• MAXACT (maximum number of jobs active with this entry)
The DEV attribute specifies the particular device (e.g., COMMDEV or REMSYS) or device type (e.g., *APPC) needed for communications. The RMTLOCNAME attribute specifies the remote location name you define when you use the CRTDEVxxxx command to create the communications device. There is no default for the DEV or the RMTLOCNAME attribute. As with the WRKSTNTYPE and WRKSTN attributes, you must specify one or the other, but not both. The next two attributes, JOBD and DFTUSR, are crucial. JOBD specifies the job description to associate with this entry. As you do with the workstation entry, you should use the default value *USRPRF to ensure that a user profile is used and that the system uses the job description associated with the user making the program start request. As with workstation entries, using a specific job description can cause a security problem if that job description names a default user. DFTUSR defines the default user for the communications entry. You should specify *NONE for this attribute to ensure that any program start request supplies a valid user profile and password. The MODE attribute defines specific communications boundaries and variables. For more information about the MODE attribute, see the CRTMODD (Create Mode Description) command description in IBM's AS/400 Programming: Control Language Reference (SC41-0030).
The MAXACT attribute defines the maximum number of program start requests that can be active at any time in the subsystem for this communications entry. You can add a communications entry by using the ADDCMNE (Add Communications Entry) command, as in the following example:
ADDCMNE SBSD(COMMSBS) RMTLOCNAME(NEWYORK) JOBD(*USRPRF) DFTUSR(*NONE) MODE(*ANY) MAXACT(*NOMAX)
If you are communicating already and you want to know what entries are configured, use the DSPSBSD (Display Subsystem Description) command to find out.
Prestart Job Entries The prestart job entry goes hand-in-hand with the communications entry, telling the subsystem which program to start when the subsystem itself is started. The program does not execute -- the system simply performs all the opens and initializes the job named in the prestart job entry and then waits for a program start request for that particular program. When the system receives a program start request, it starts a job by using the prestart program that is ready and waiting, thus saving valuable time in program initialization. The prestart job entry is the only work entry that defines an actual program and job class to be used. (Other jobs get their initial routing program from the routing data entries that are part of the subsystem description.) The two key attributes of the prestart job entry are PGM and JOBD. The PGM attribute specifies the program to use and the JOBD attribute specifies the job description to be used. To add a prestart job entry, use an ADDPJE (Add Prestart Job Entry) command similar to the following:
ADDPJE SBSD(COMMSBS) PGM(OEPGM) JOBD(OEJOBD)

Then, when the communications entry receives a program start request (an EVOKE) and processes the request, it compares the program named in the evoke to the prestart job program defined. In this case, if the evoked program is also OEPGM, the system has no need to start a new job because the prestart job is already started.
Autostart Job Entry An autostart job entry specifies the job to be executed when the subsystem starts. For instance, if you want to print a particular history report each time the system is IPLed, you can add the following autostart job entry to the controlling subsystem description:
ADDAJE SBSD(sbs_name) JOB(HISTORY) JOBD(MYLIB/HISTJOBD)

The JOB and JOBD attributes are the only ones the autostart job entry defines, which means that the job description must use the request data or routing data to execute a command or a program. In the example above, HISTJOBD would have the correct RQSDTA (Request Data) attribute to call the program that generates the history report (e.g., RQSDTA('call histpgm')). The job HISTORY, defined in the autostart job entry, starts each time the associated subsystem starts, ensuring that the job runs whether or not anyone remembers to submit it.

OS/400 uses an autostart job entry to assist the IPL process. When you examine either the QBASE or QCTL subsystem description (using the DSPSBSD command), you will find that an autostart job entry exists to submit the QSTRUPJD job using the job description QSYS/QSTRUPJD. This job description uses the request data to call a program used in the IPL process.
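The book doesn't show the source for HISTJOBD, but a job description along these lines would do the trick; HISTPGM is the hypothetical report program named in the example above:

/* HISTPGM is a placeholder for the history report program */
CRTJOBD JOBD(MYLIB/HISTJOBD) RQSDTA('CALL HISTPGM') TEXT('Autostart job for history report')

Because the autostart job entry supplies only a job name and a job description, the request data in the job description is what actually runs the report.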
Where Jobs Go Now we've seen where jobs come from on the AS/400 -- but where do they go? I'll address that question in the next chapter when we look at how routing entries provide the final gateway to subsystem processing. One reminder. If you decide to create or modify the system-supplied work management objects such as subsystem descriptions and job queues, you should place the new objects in a user-defined library. When you are
ready to start using your new objects, you can change the system startup program QSYS/QSTRUP to use your new objects for establishing your work environment (to change the system startup program, you modify the CL source and recompile the program). By having your new objects in your own library, you can easily document any changes.
Chapter 19 - Demystifying Routing So far, I have explained how jobs are defined and started on the AS/400. We've seen that jobs are processed in a subsystem, which is where the system combines all the resources needed to process work. And we've seen how work entries control how jobs gain access to the subsystem. Now we need to talk about routing, which determines how jobs are processed after they reach the subsystem. I am constantly surprised by the number of AS/400 programmers who have never fully examined routing. In fact, it's almost as though routing is some secret whose meaning is known by only a few. In this chapter, I concentrate on subsystem routing entries to prove to you, once and for all, that you have nothing to fear! The AS/400 uses routing to determine where jobs go. To understand routing, it might help to think of street signs, which control the flow of traffic from one place to another. The AS/400 uses the following routing concepts to process each and every job:
• Routing data -- A character string, up to 80 characters long, that determines the routing entry the subsystem will use to establish the routing step.
• Routing entry -- A subsystem description entry, which you create, that determines the program and job class the subsystem will use to establish a routing step.
• Routing step -- The processing that starts when the routing program executes.
To execute in a subsystem, AS/400 jobs must have routing data. Routing data determines which routing entry the subsystem will use. For most jobs, routing data is defined by either the RTGDTA (Routing Data) parameter of the job description associated with the job or by the RTGDTA parameter of the SBMJOB (Submit Job) command. Now let's look at each of these job types to see how routing data is defined for each.
Routing Data for Interactive Jobs Users gain access to a given subsystem for interactive jobs via workstations, defined by workstation entries. The key to determining routing data for an interactive job is the JOBD (Job Description) parameter of the workstation entry that the subsystem uses to allocate the workstation being used. If the value for the JOBD parameter is *USRPRF, the routing data defined on the job description associated with the user profile is used as the routing data for the interactive job. If the value of the JOBD parameter of the workstation entry is *SBSD (which instructs the system to use the job description that has the same name as the subsystem description) or an actual job description name, the routing data of the specified job description will be used as the routing data for the interactive job. Let me give you a couple examples. Let's say you create a user profile using the CRTUSRPRF (Create User Profile) command and do not enter a specific job description. The system uses QDFTJOBD (the default job description) for that user profile. Executing DSPJOBD QDFTJOBD reveals that the RTGDTA attribute has a value of QCMDI. When a user signs on to a workstation that uses a subsystem workstation entry where *USRPRF is defined as the JOBD attribute, the routing data for that interactive job would be the routing data defined on the job description associated with the user profile; in this case, the JOBD would be QDFTJOBD, and the routing data would be QCMDI.
Now look at Figure 19.1, in which the workstation entry defines SPJOBD as the job description. Instead of using the job description associated with the user profile, the subsystem uses the SPJOBD job description to establish job attributes, including the RTGDTA value of SPECIAL.
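Figure 19.1's SPJOBD job description might be created with a command like the one below (a sketch only -- the library MYLIB is a placeholder):

/* Routing data SPECIAL matches a routing entry compare value */
CRTJOBD JOBD(MYLIB/SPJOBD) RTGDTA(SPECIAL)

Any interactive job started through a workstation entry that names SPJOBD then enters the subsystem with routing data SPECIAL, where a routing entry with that compare value can catch it.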
Routing Data for Batch Jobs Establishing routing data for a batch job is simple; you use the RTGDTA parameter of the SBMJOB (Submit Job) command. The RTGDTA parameter on this command has four possible values:
• *JOBD -- the routing data of the job description.
• *RQSDTA -- the value specified in the RQSDTA (Request Data) parameter on the SBMJOB command. (Because the request data represents the actual command or program to process, specifying *RQSDTA is practical only if specific routing entries have been established in a subsystem to start specific routing steps based on the command or program being executed by a job.)
• QCMDB -- the default routing data used by the IBM-supplied subsystems QBASE or QBATCH to route batch jobs to the CL processor QCMD (more on this later).
• routing-data -- up to 80 characters of user-defined routing data.
Keeping these values in mind, let's look at a SBMJOB command. To submit a batch job that sends the operator the message 'hi,' you would enter the command
SBMJOB JOB(MESSAGE) CMD(SNDMSG MSG('hi') TOMSGQ(QSYSOPR))

This batch job would use the routing data QCMDB. How do I know that? Because, as I stated above, the value QCMDB is the default. If you submit a job using the SBMJOB command without modifying the default value for the RTGDTA parameter, the routing data is always QCMDB -- as long as this default has not been changed via the CHGCMDDFT (Change Command Default) command. Now examine the following SBMJOB command:
SBMJOB JOB(PRIORITY) CMD(CALL user-pgm) RTGDTA('high-priority')
In this example, a routing data character string ('high-priority') is defined. By now you are probably wondering just how modifying the routing data might change the way a job is processed. We'll get to that in a minute.

Figure 19.2 provides an overview of how the routing data for a batch job is established. A user submits a job via the SBMJOB command. The RTGDTA parameter of the SBMJOB command determines the routing data, and the resulting job (012345/USER_XX/job_name) is submitted to process in a subsystem. We can pick any of the four possible values for the RTGDTA attribute on the SBMJOB command and follow the path to see how that value eventually determines the routing data for the submitted batch job. If you specify RTGDTA(*JOBD), the system examines the JOBD parameter of the SBMJOB command and then uses either the user profile's job description or the actual job description named in the parameter. If you define the RTGDTA parameter as *RQSDTA, the job uses the value specified in the RQSDTA (Request Data) parameter of the SBMJOB command as the routing data. Finally, if you define the RTGDTA parameter as QCMDB or any user-defined routing data, that value becomes the routing data for the job.
Routing Data for Autostart, Communications, and Prestart Jobs As you may recall from Chapter 18, an autostart job entry in the subsystem description consists of just two attributes: the job name and the specific job description to be used for processing. The routing data of a particular job description is the only source for the routing data of an autostart job. For communications jobs (communications evoke requests), the subsystem builds the routing data from the program start request, which always has the value PGMEVOKE starting in position 29, immediately followed by the desired program name. The routing data is not taken from a permanent object on the AS/400, but is instead derived from the program start request that the communications entry in the subsystem receives and processes. Prestart jobs use no routing data. The prestart job entry attribute, PGM, specifies the program to start in the subsystem. The processing of this program is the routing step for that job.
The Importance of Routing Data
When a job enters a subsystem, the subsystem looks for routing data that matches the compare value in one or more routing entries of the subsystem description -- similar to the way you would check your written directions to see which highway exit to take. The subsystem seeks a match to determine which program to use to establish the routing step for that job. Routing entries, typically defined when you create a subsystem, are defined as part of the subsystem description via the ADDRTGE (Add Routing Entry) command. Before we take a closer look at the various attributes of a routing entry, let me explain how routing entries relate to routing data.

Figure 19.3 shows how the subsystem uses routing data for an interactive job. When USER_XX signs on to workstation DSP01, the interactive job is started, and the routing data (QCMDI) is established. When the job enters the subsystem, the system compares the routing data in the job to the routing data of each routing entry until it finds a match. (The search is based on the starting position specified in the routing entry and the literal specified as the compare value.) In Figure 19.3, the compare value for the first routing entry (SEQNBR(10)) and the routing data for job 012345/USER_XX/DSP01 are the same. Because the system has found a match, it executes the program defined in the routing entry (QCMD in library QSYS) to establish the routing step for the job in the subsystem. In addition to establishing the routing step, the routing entry also provides the job with specific runtime attributes based on the job class specified. In this case, the specified class is QINTER. Jobs that require routing data (all but prestart jobs) follow this same procedure when being started in the subsystem.

Now that you have the feel of how this process works, let's talk about routing entries and associated job classes. In Chapter 18, I said that routing entries identify which programs to call, define which storage pool the job will be processed in, and specify the execution attributes the job will use for processing. As shown in Figure 19.3, a routing entry consists of a number of attributes: sequence number, compare value, starting position, program, class, maximum active, and pool ID. Each attribute is defined when you use the ADDRTGE command to add a routing entry to a subsystem description. It's important that you understand these attributes and how you can use them to create the routing entries you need for your subsystems.

The sequence number is simply a basic numbering device that determines the order in which routing entries will be compared against routing data to find a match. When assigning a sequence number, you need to remember two rules. First, always use the compare value *ANY with SEQNBR(9999) so it will be used only when no other match can be found. (Notice that routing entry SEQNBR(9999) in Figure 19.3 has a compare value of *ANY.) Second, when using similar compare values, use the sequence numbers to order the values from most to least specific. For example, you would arrange the values PGMR, PGMRS, and PGMRS1 this way:
Sequence Number   Compare Value
10                'PGMRS1'
20                'PGMRS'
30                'PGMR'
Placing the least specific value (PGMR) first would cause a match to occur even when the intended value (e.g., PGMRS1) is more specific.
The compare value and starting position attributes work together to search a job's routing data for a match. For example, if the compare value ROUTE is specified with a starting position of 5, the system searches the job's routing data starting in position 5 for the value ROUTE. The compare value can be any characters you want (up to 80). The important thing is to use a compare value that matches some routing data that identifies a particular job or job type. Why go to this trouble? Because you can use this matching routing entry to determine a lot about the way a job is processed on the system (e.g., subsystem storage pool, run priority, and time slice).

The PGM attribute determines what program is called to establish the routing step for the job being processed. Remember, a routing step simply starts the program named in the routing entry. Normally, this program is QCMD (the IBM CL processor), but it can be any program. When QCMD is the routing program, it waits for a request message to process. For an interactive job, the request message would be the initial program or menu request; for a batch job, it would be the request data (i.e., the command or program to execute). If the routing program is a user-defined program, the program simply executes. The routing entry program is the first program executed in the routing step. The routing entry can be used to make sure that a specific program is executed when certain routing data is found, regardless of the initial program or specific request data for a job. Later in this chapter, I explain how this might be beneficial to you.
Runtime Attributes The CLASS (job class) is an important performance-related object that defines the run priority of the job, as well as the time slice for a job. (The time slice is the length of time, in CPU milliseconds, a job will process before being bumped from the activity level to wait while another job executes a time slice.) A routing entry establishes a job's run priority and time slice much the way speed limit or yield signs control the flow of traffic. For more information on these performance-related attributes of the CLASS object, see IBM's AS/400 Programming: Work Management Guide (SC41-8078). In Figure 19.3, all the routing entries use class QINTER, which is defined to represent the run priority and time slice typical for an interactive job. Because you would not want to process a batch job using these same values, the system also has an IBM-supplied class, called QBATCH, that defines attributes more typical for batch job processing. If you look at the subsystem description for QBASE or QBATCH, you will find the following routing entry:
Sequence Number   Compare Value   Program     Class
10                'QCMDB'         QSYS/QCMD   QBATCH
This entry uses program QCMD and directs the system to use class QBATCH to define the runtime attributes for jobs having routing data QCMDB. To route jobs with the correct routing program and job class, the system-supplied routing data for the default batch job description QBATCH is QCMDB. You can use different classes to create the right performance mix.

MAXACT determines the maximum number of active jobs that can use a particular routing entry. You will rarely need to change this attribute's default (*NOMAX).

The last routing entry attribute is the POOLID (subsystem storage pool ID). As I explained in Chapter 17, the subsystem definition includes the specific storage pools the subsystem will use. These storage pools are numbered in the subsystem, and these numbers are used only within that particular subsystem description; they do not match the numbering scheme of the system pools. The routing entry attribute POOLID tells the system which subsystem storage pool to use for processing this job. Look at the following pool definition and abbreviated routing entry:
Pool Definition: ((1 *BASE) (2 10000 20))

Sequence Number   Compare Value   Pool ID
10                'QCMDI'         1
This routing entry tells the system to use subsystem pool number 1 (*BASE). Considering that 10,000 KB of storage is set aside in pool number 2, this routing entry is probably incorrectly specifying pool number 1. Beginners
commonly make the mistake of leaving the default value in the routing entry definition when creating their own subsystems and defining their own routing entries. Just remember to compare the pool definition with the routing entry definition to ensure that the correct subsystem pool is being used.
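Pulling these attributes together, a routing entry for the pool definition above might be added with a command like this (a sketch only; MYSBS is a placeholder subsystem name):

/* POOLID(2) points at the 10,000-KB pool, not pool 1 (*BASE) */
ADDRTGE SBSD(QGPL/MYSBS) SEQNBR(10) CMPVAL(QCMDI 1) PGM(QSYS/QCMD) CLS(QGPL/QINTER) MAXACT(*NOMAX) POOLID(2)

The CMPVAL parameter carries both the compare value and the starting position, and the POOLID value here matches pool number 2 in the subsystem's pool definition.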
Is There More Than One Way to Get There? So far, we've discussed how routing data is created, how routing entries are established to search for that routing data, and how routing entries establish a routing step for a job and control specific runtime attributes of a job. Now for one more hurdle... A job can have more than one routing step. But why would you want it to? One reason might be to use a new class to change the runtime attributes of the job. After a job is started, you can reroute it using the RRTJOB (Reroute Job) command or transfer it to another subsystem using the TFRJOB (Transfer Job) command. Both commands have the RTGDTA parameter, which lets you modify the job's current routing data to establish a new routing step. Suppose you issue the following command during the execution of a job:
RRTJOB RTGDTA('FASTER') RQSDTA(*NONE)

Your job would be rerouted in the same subsystem but use the value FASTER as the value to be compared in the routing entries.
Do-It-Yourself Routing To reinforce your understanding of routing and tie together some of the facts you've learned about work management, consider the following example. Let's say you want to place programmers, OfficeVision/400 (OV/400) users, and general end users in certain subsystems based on their locations or functions. You need to do more than just separate the workstations; you need to separate the users, no matter what workstation they are using at the time. Figures 19.4a through 19.4f describe the objects and attributes needed to define such an environment. Figure 19.4a lists three job descriptions that have distinct routing data. User-defined INTERJOBD has QINTER as the routing data. OFFICEJOBD and PGMRJOBD have QOFFICE and QPGMR specified, respectively, as their routing data. (Note that the routing data need not match the job description name.) To enable users to work in separate subsystems, you first need to create or modify their user profiles and supply the appropriate job description based on the subsystem in which each user should work. In our example, general end users would have INTERJOBD, OV/400 users would have OFFICEJOBD, and programmers would have the job description PGMRJOBD.
Next, you must build subsystem descriptions that use the routing entries associated with the job descriptions. Figure 19.4b shows some sample subsystem definitions. All three subsystems use the WRKSTNTYPE (workstation type) entry with the value *ALL. However, only the workstation entry in QINTER uses the AT(*SIGNON) entry to tell the subsystem to allocate the workstations. This means that subsystem QINTER allocates all workstations and QOFFICE and QPGMR (both with AT(*ENTER)) only allocate workstations as jobs are transferred into those subsystems. Also, notice that each workstation entry defines JOBD(*USRPRF) so that the routing data from the job descriptions of the user profiles will be the routing data for the job. After a user signs on to a workstation in subsystem QINTER, the routing entries do all the work. The first routing entry looks for the compare value QOFFICE. When it finds QOFFICE, program QOFFICE in library SYSLIB is called to establish the routing step. In Figure 19.4c, program QOFFICE simply executes the TFRJOB command to transfer this particular job into subsystem QOFFICE. However, if you look carefully at Figure 19.4c, you will see that the TFRJOB command also modifies the routing data to become QCMDI, so that when the job enters subsystem QOFFICE, routing data QCMDI matches the corresponding routing entry and uses
program QCMD and class QOFFICE. If an error occurs on the TFRJOB command, the MONMSG CPF0000 EXEC(RRTJOB RTGDTA(QCMDI)) command reroutes the job in the current subsystem. Figure 19.4d shows how class QOFFICE might be created to provide the performance differences needed for OV/400 users. Look again at Figure 19.4b. The next routing entry in the QINTER subsystem looks for compare value QPGMR. When it finds QPGMR, it calls program QPGMR (Figure 19.4e) to transfer the job into subsystem QPGMR. Routing data QCMDI calls program QCMD and then processes the initial program or menu of the user profile. The same is true for routing data *ANY. In our example, subsystems QOFFICE and QPGMR use similar routing entries to make sure each job enters the correct subsystem. Notice that each subsystem has a routing entry that searches for QINTER. If this compare value is found, program QINTER (Figure 19.4f) is called to transfer the job into subsystem QINTER. As intimidating as they may at first appear, routing entries are really plain and simple. Basically, you can use them to intercept jobs as they enter the subsystem and then control the jobs using various run-time variables. I strongly recommend that you take the time to learn how your system uses routing entries. Start by studying subsystem descriptions to learn what each routing entry controls. Once you understand them, you will find that you can use routing entries as solutions to numerous work management problems.
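Figure 19.4c isn't reproduced here, but based on the description above, the QOFFICE transfer program might look something like this minimal CL sketch (the job queue name QGPL/QOFFICE is an assumption):

PGM
  /* Move the job to subsystem QOFFICE and change its routing data */
  TFRJOB JOBQ(QGPL/QOFFICE) RTGDTA(QCMDI)
  /* If the transfer fails, reroute the job in the current subsystem */
  MONMSG MSGID(CPF0000) EXEC(RRTJOB RTGDTA(QCMDI))
ENDPGM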
Chapter 20 - File Structures Getting a handle on AS/400 file types can be puzzling. If you count the various types of files the AS/400 supports, how many do you get? The answer is five. And 10. The AS/400 supports five types of files — database files, source files, device files, DDM files, and save files. So if you count types, you get five. However, if you count the file subtypes — all the objects designated as OBJTYPE(*FILE) — you get 10. Still puzzled? Figure 20.1 lists the five file types that exist on the AS/400, as well as the 10 subtypes and the specific CRTxxxF (Create xxx File) commands used to create them. Each file type (and subtype) contains unique characteristics that provide unique functions on the AS/400. In this chapter, I look at the various types of files and describe the way each file type functions.
Structure Fundamentals
If there is any one AS/400 concept that is the key to unlocking a basic understanding of application development, it is the concept of AS/400 file structure. It's not that the concept is difficult to grasp; it's just that there are quite a few facts to digest. So let's start by looking at how files are described. On the AS/400, all files are described at four levels (Figure 20.2).

First is the object-level description. A file is an AS/400 object whose object type is *FILE. The AS/400 maintains the same object description information for a file (e.g., its library and size) as it does for any other object on the system. You can look at the object-level information with the DSPOBJD (Display Object Description) command.

The second level of description the system maintains for *FILE objects is a file-level description. The file description is created along with the file when you execute a CRTxxxF command. It describes the attributes or characteristics of a particular file and is embedded within the file itself. You can display or print a file description with the DSPFD (Display File Description) command. The file subtype is one of the attributes maintained as part of the file description. This allows OS/400 to present the correct format for the description when using the DSPFD command. It also lets OS/400 determine which commands can operate on which types of files. For instance, the DLTF (Delete File) command works for any type of file on the system, but the ADDPFM (Add Physical File Member) command works only for physical files. OS/400 uses the description of the file to maintain and enforce each file's object identity.

The third level of descriptive information the system maintains for files is the record-level description. This level describes the record format, or formats if there is more than one, that exist in the file. A record format describes a set of fields that make a record. If the fourth level of description — field descriptions — is not used when creating the file, the record format is described by a specific record length. All files have at least one record format, and logical files can have multiple record formats (we'll cover this topic in a future chapter). Applications perform I/O by using specific record formats. An application can further break the record format into fields by either explicitly defining those fields within the application or by working with the external field definitions if they are defined for a record format. While there are the DSPOBJD and DSPFD commands, there is no Display Record Description command; you use the DSPFD command and the DSPFFD (Display File Field Description) command to display or print the record-level information.

The final level of descriptive information the system maintains for files is the field-level description. Field descriptions do not exist for all types of files; tape files, diskette files, DDM files, and save files have no field descriptions because they have no fields. (In the case of DDM files, the field descriptions of the target system file are used.) For the remaining files — physical, logical, source, display, printer, and ICF — a description of each field and field attribute is maintained. You can use the DSPFFD command to display or print the field-level descriptions for a file.
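As a quick illustration, the four levels of description for a hypothetical physical file CUSTMAST in library APPLIB could be examined with these commands (the file and library names are placeholders):

/* APPLIB/CUSTMAST is a placeholder file name */
DSPOBJD OBJ(APPLIB/CUSTMAST) OBJTYPE(*FILE)
DSPFD FILE(APPLIB/CUSTMAST)
DSPFD FILE(APPLIB/CUSTMAST) TYPE(*RCDFMT)
DSPFFD FILE(APPLIB/CUSTMAST)

The first command shows the object-level description, the second the file-level description, the third the record format names, and the fourth the field-level descriptions.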
Data Members: A Challenge
Now that you know how files are described, you need a challenge! We now need to consider a particular organizational element that applies only to database and source files, the two types of files that actually contain records of data. You may be saying, 'Wait, you don't have to tell us that. Each file is described (as discussed), and each file has records, right?' I wish it were that simple, but on the AS/400 there is an additional element of file organization, the data member, that has caused even the best application programmers to cry in anguish, just as Martin Luther did, until they discover the truth. Now that I have your attention (and you're trying to remember just who Martin Luther was — look under Church History: The Reformation), I will impart the truth to you and save you any future anguish.

Examine Figure 20.3, which introduces you to the concept of the file data member. You traditionally think of a file containing a set of records, and usually an AS/400 database file has a description and a data member that contains all the records that exist in that database file. If you create a physical file using the CRTPF (Create Physical File) command and take the defaults for member name and maximum number of members, which are MBR(*FILE) and MAXMBRS(1), respectively, you will create a file that contains only one data member, and the name of that member will be the same name as the file itself. So far, so good.

Now comes the tricky part. Believe it or not, AS/400 database and source files can have no data members. If you create a physical file and specify MBR(*NONE), the file will be created without any data member for records. If you try to add records to that file, the system will issue an error stating that no data member exists. You would have to use the ADDPFM command to create a data member in the file before you could add records to the file.

At the other end of the scale is the fact that you can have multiple data members in a file. A source file offers a good example. Figure 20.4 represents the way a source file is organized. Each source member is a different data member in the file. When you create a new source member, you are actually creating another data member in this physical source file. Whether you are using PDM (Programming Development Manager) or SEU (Source Entry Utility), by specifying the name of the source member you want to work with, you are instructing the software to override the file to use that particular member for record retrieval.

Consider another example — a user application that views both current and historical data by year. Each year represents a unique set of records. This type of application might use a database file to store each year's records in separate data members, using the year itself to construct the name of the data member. Figure 20.5 represents how this application might use a single physical file to store these records. As you can see, each year has a unique data member, and each member has a varying number of records. All members have the same description in terms of record format and fields, but each member contains unique data. The applications that access this data must use the OVRDBF (Override with Database File) command to open the correct data member for record retrieval.
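A minimal sketch of how the yearly-member application just described might create and access its members follows; SALESLIB, SALEHIST, the member names, and program SALESRPT are all placeholder names, and the DDS source is assumed to be in QDDSSRC:

/* All object names here are placeholders */
CRTPF FILE(SALESLIB/SALEHIST) SRCFILE(SALESLIB/QDDSSRC) MBR(*NONE) MAXMBRS(*NOMAX)
ADDPFM FILE(SALESLIB/SALEHIST) MBR(Y1998)
ADDPFM FILE(SALESLIB/SALEHIST) MBR(Y1999)
OVRDBF FILE(SALEHIST) TOFILE(SALESLIB/SALEHIST) MBR(Y1999)
CALL PGM(SALESLIB/SALESRPT)

The OVRDBF command ensures that when the report program opens SALEHIST, it reads the Y1999 member rather than the first member in the file.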
Wow! No database members... one database member... multiple database members... Why? That's a fair question. Using multiple data members provides a unique way to handle data that uses the same record format and same field descriptions and yet must be maintained separately for business reasons. One set of software can be written to support the effort, but the data can be maintained, even saved, separately. Having sorted through the structure of AS/400 files and dealt with data members, let's look specifically at the types of files and how they are used.
Database Files
Database files are AS/400 objects that actually contain data or provide access to data. Two types of files are considered database files — physical files and logical files. A physical file, denoted as TYPE(*FILE) and ATTR(PF), has file-, record-, and field-level descriptions and can be created with or without using externally described source specifications. Physical files — so called because they contain your actual data (e.g., customer records) — can have only one record format. The data entered into the physical file is assigned a relative record number based on arrival sequence. As I indicated earlier, database files can have multiple data members, and special program considerations must be implemented to ensure that applications work with the correct data members. You can view the data that exists in a specific data member of a file using the DSPPFM (Display Physical File Member) command.

A logical file, denoted as TYPE(*FILE) and ATTR(LF), is created in conjunction with physical files to determine how data will be presented to the requester. For those of you coming from an S/36, the nearest kin to a logical file is an index or alternate index. Logical files contain no data but instead are used to specify key fields, select/omit logic, field selection, or field manipulation. The key fields specify the access paths to use for accessing the actual data records that reside in physical files. Logical files must be externally described using DDS and can be used only in conjunction with externally described physical files.
Source Files
A source file, like QRPGSRC where RPG source members are maintained, is simply a customized form of a physical file; and as such, source files are denoted as TYPE(*FILE) and ATTR(PF). (Note: If you work with objects using PDM, physical data files and physical source files are distinguished by two specific attributes — PF-DTA and PF-SRC.) All source files created using the CRTSRCPF (Create Source Physical File) command have the same record format and thus the same fields. When you use the CRTSRCPF command, the system creates a physical file that allows multiple data members. Each program's source is one physical file member. When you edit a particular source member, you are simply editing a specific data member in the file.
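For example, a source file for CL members might be created like this (DEVLIB is a placeholder library; 92 is the traditional record length for source files):

/* DEVLIB is a placeholder library */
CRTSRCPF FILE(DEVLIB/QCLSRC) RCDLEN(92) TEXT('CL source members')

Each member you then create with PDM or SEU becomes another data member in DEVLIB/QCLSRC.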
Device Files
Device files contain no actual data. They are files whose descriptions provide information about how an application is to use particular devices. The device file must contain information valid for the device type the application is accessing. The types of device files are display, printer, tape, diskette, and ICF.

Display files, denoted by the system as TYPE(*FILE) and ATTR(DSPF), provide specific information relating to how an application can interact with a workstation. While a display file contains no data, the display file does contain various record formats that represent the screens the application will present to the workstation. Each specific record format can be viewed and maintained using IBM's Screen Design Aid (SDA), which is part of the Application Development Tools licensed program product. Interactive high-level language (HLL) programs include the workstation display file as one of the files to be used in the application. The HLL program writes a display file record format to the screen to present the end user with formatted data and then reads that format from the screen when the end user presses Enter or another appropriate function key. Whereas I/O to a database file accesses disk storage, I/O to a display file accesses a workstation.

Printer files, denoted by the system as TYPE(*FILE) and ATTR(PRTF), provide specific information relating to how an application can spool data for output to a writer. The print file can be created with a maximum record length specified and one format to be used with an HLL program and program-described printing, or the print file can be created from external source statements that define the formats to be used for printing. Like display files, the print files themselves contain no data and therefore have no data member associated with them. When an application
program performs output operations to a print file, the output becomes spooled data that can be printed on a writer device.

Tape files, denoted by the system as TYPE(*FILE) and ATTR(TAPF), provide specific information relating to how an application can read or write data using tape media. The description of the tape file contains information such as the device name for tape read/write operations, the specific tape volume requested (if a specific volume is desired), the density of the tape to be processed, the record and block length to be used, and other essential information relating to tape processing. Without the use of a tape file, HLL programs cannot access the tape media devices.

Diskette files, denoted by the system as TYPE(*FILE) and ATTR(DKTF), are identical to tape files except that these files support diskette devices. Diskette files have attributes that describe the volume to be used and the record and block length.

ICF (Intersystem Communications Function) files, denoted by the system as TYPE(*FILE) and ATTR(ICFF), provide specific attributes to describe the physical communications device used for application peer-to-peer communications programming. When a local application wants to communicate with an application on a remote system, the local application turns to the ICF file for information regarding the physical device to use for those communications. The ICF file also contains record formats used to read and write data from and to the device and the peer program.
DDM Files DDM (Distributed Data Management) files, denoted by the system as TYPE(*FILE) and ATTR(DDMF), are objects that represent files that exist on a remote system. For instance, if your customer file exists on a remote system, you can create a DDM file on the local system that specifically points to that customer file on the remote system. DDM files provide you with an interface that lets you access the remote file just as you would if it were on your local system. You can compile programs using the file, read records, write records, and update records while the system handles the communications. Figure 20.6 represents a typical DDM file implementation.
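A DDM file for the remote customer file described here might be created with something like the following (the library, file, and remote location names are placeholders):

/* Placeholder names; CHICAGO is the remote location of the target system */
CRTDDMF FILE(LOCALLIB/CUSTOMER) RMTFILE(PRODLIB/CUSTOMER) RMTLOCNAME(CHICAGO)

Local programs can then open LOCALLIB/CUSTOMER as though it were a local file while DDM handles the communications with the remote system.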
Save Files
Save files, denoted by the system as TYPE(*FILE) and ATTR(SAVF), are a special form of file designed specifically to handle save/restore data. You cannot determine the file-, record-, and field-level descriptions for a save file. The system creates a specific description used for all save files to make them compatible with save/restore operations. Save files can be used to receive the output from a save operation and then be used as input for a restore operation. This works just like performing save/restore operations with tape or diskette, except that the saved data is maintained on disk, which speeds the save/restore process because I/O to a disk file is faster than I/O to a tape or diskette device. Save file data also can be transmitted electronically or transported via a sneaker network or overnight courier network to another system and then restored.

We have briefly looked at the various types of files that exist on the AS/400. Understanding these objects is critical to effective application development and maintenance on the AS/400. One excellent source for further reading is IBM's Programming: Data
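To round out the save file discussion above, here is a minimal sketch of the save-to-disk flow just described (BACKUPLIB, APPSAVE, and APPLIB are placeholder names):

/* Placeholder names throughout */
CRTSAVF FILE(BACKUPLIB/APPSAVE)
SAVLIB LIB(APPLIB) DEV(*SAVF) SAVF(BACKUPLIB/APPSAVE)
RSTLIB SAVLIB(APPLIB) DEV(*SAVF) SAVF(BACKUPLIB/APPSAVE)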
Chapter 21 - So You Think You Understand File Overrides 'Try using OvrScope(*Job).'
How many times have you heard this advice when a file override wasn't working as intended? Changing your application to use a job-level override may produce the intended results, but doing so is a bit like replacing a car's engine because it has a fouled spark plug. Actually, with a fully functional new engine, the car will always run right again. A job-level override, on the other hand, may or may not produce the desired results, depending on your application's design. And even if the application works today, an ill-advised job-level override coupled with modifications may introduce application problems in the future. If you're considering skipping this article because you believe you already understand file overrides, think again! I know many programmers, some excellent, who sincerely believe they understand this powerful feature of OS/400 — after all, they've been using overrides in their applications for years. However, I've yet to find anyone who does fully understand overrides. So, read on, surprise yourself, and learn once and for all how the system processes file overrides. Then put this knowledge to work to get the most out of overrides in your applications.
Anatomy of Jobs
Before examining file overrides closely, you need to be familiar with the parts of a job's anatomy integral to the function of overrides. The call stack and activation groups both play a key role in determining the effect overrides have in your applications.

Jobs typically consist of a chain of active programs, with one program calling another. The call stack is simply an ordered list of these active programs. When a job starts, the system routes it to the beginning program to execute and designates that program as the first entry in the call stack. If the program then calls another program, the system assigns the newly called program to the second call stack entry. This process can continue, with the second program calling a third, the third calling a fourth, and so on, each time adding the new program to the end of the call stack. The call stack therefore reflects the depth of program calls. Consider the following call stack:

ProgramA
ProgramB
ProgramC
ProgramD

You can see four active programs in this call stack. In this example, the system called ProgramA as its first program when the job started. ProgramA then called ProgramB, which in turn called ProgramC. Last, ProgramC called ProgramD. Because these are nested program calls, each program is at a different layer in the call stack. These layers are known as call levels. In the example, ProgramA is at call level 1, indicating the fact that it is the first program called when the job started. ProgramB, ProgramC, and ProgramD are at call levels 2, 3, and 4, respectively.

As programs end, the system removes them from the call stack, reducing the number of call levels. For instance, when ProgramD ends, the system removes it from the call stack, and the job then consists of only three call levels. If ProgramC then ends, the job consists of only two call levels, with ProgramA and ProgramB making up the call stack. This process continues until ProgramA ends, at which time the job ends.

So far, you've seen that when one program calls another, the system creates a new, higher call level at which the called program runs. The called program then begins execution, and when it ends, the system removes it from the call stack, returning control to the calling program at the previous call level. That's the simple version, but there's a little more to the picture. First, it's possible for one program to pass control to another program without the newly invoked program running at a higher call level. For instance, with CL's TfrCtl (Transfer Control) command, the system replaces (in the call stack) the program issuing the command with the program to which control is to be transferred. Not only does this action result in the invoked program running at the same call level as the invoking program, but the invoking program is also completely removed from the chain of programs making up the call stack. Hence, control can't be returned to the program that issued the TfrCtl command. Instead, when the newly invoked program ends, control returns to the program at the immediately preceding call level.
You may recall that earlier I said that as programs end, the system removes them from the call stack. In reality, when a program ends, the system removes from the call stack not only the ending program but also any program at a call level higher than that of the ending program. You might be thinking about our example and scratching your head, wondering, 'How can ProgramB end before ProgramC?' Consider the fact that ProgramD can send an escape message to ProgramB's program message queue. This event results in the system returning control to ProgramB's error handler. This return of control to ProgramB results in the system removing from the call stack all programs at a call level higher than ProgramB — namely, ProgramC and ProgramD. ProgramB's design then determines whether it is removed from the call stack. If it handles the exception, ProgramB is not removed from the call stack; instead, processing continues in ProgramB.

You should also note that under normal circumstances, the call stack begins with several system programs before any user-written programs appear. In fact, system programs will likely appear throughout your call stack. This point is important only to demonstrate that the call stack isn't simply a representation of user-written programs as they are called.

In addition to an understanding of a job's call levels, you need a basic familiarity with activation groups to comprehend file overrides. You're probably familiar with the fact that a job is a structure with its own allocated system resources, such as open data paths (ODPs) and storage for program variables. These resources are available to programs executed within that job but are not available to other jobs. Activation groups, introduced with the Integrated Language Environment (ILE), are a further division of jobs into smaller substructures. As is the case with jobs, activation groups consist of private system resources, such as ODPs and storage for program variables. An activation group's allocated resources are available only to program objects that are assigned to, and running in, that particular activation group within the job. You assign ILE program objects to an activation group when you create the program objects. Then, when you execute these programs, the system creates the activation group (or groups) to which the programs are assigned.

A job can consist of multiple activation groups, none of which can access the resources unique to the other activation groups within the job. For example, although multiple activation groups within a job may open the same file, each activation group can maintain its own private ODP. In such a case, programs assigned to the same activation group can use the ODP, but programs assigned to a different activation group don't have access to the same ODP. A complete discussion of activation groups could span volumes. For now, it's sufficient simply to note that activation groups exist, that they are substructures of a job, and that they can contain their own set of resources not available to other activation groups within the job.
Override Rules
The rules governing the effect overrides have on your applications fall into three primary areas: the override scope, overrides to the same file, and the order in which the system processes overrides. After examining the details of each of these areas, we'll look at a few miscellaneous rules.

Scoping an Override
An override's scope determines the range of influence the override will have on your applications. You can scope an override to the following levels:

• Call level — A call-level override exists at the call level of the program that issues the override, unless the override is issued using a call to program QCmdExc; in that case, the call level is that of the program that called QCmdExc. A call-level override remains in effect from the time it is issued until the system replaces or deletes it or until the call level in which the override was issued ends.

• Activation group level — An activation-group-level override applies to all programs running in the activation group associated with the issuing program, regardless of the call level in which the override is issued; only the most recently issued activation-group-level override is in effect. An activation-group-level override remains in effect from the time the override is issued until the system replaces it, deletes it, or deletes the activation group. These rules apply only when the override is issued from an activation group other than the default activation group. Activation-group-level overrides issued from the default activation group are scoped to call-level overrides.

• Job level — A job-level override applies to all programs running in the job, regardless of the activation group or call level in which the override is issued. Only the most recently issued job-level override is in effect. A job-level override remains in effect from the time it is issued until the system replaces or deletes it or until the job in which the override was issued ends.
You specify an override's scope when you issue the override, by using the override command's OvrScope (Override scope) parameter. Figure 1 depicts an ILE application's view of a job's structure, along with the manner in which you can specify overrides. First, notice that two activation groups, the default activation group and a named activation group, make up the job. All jobs have as part of their structure the default activation group and can optionally have one or more named activation groups.

Original Program Model (OPM) programs can run only in the default activation group. Figure 1 shows two OPM programs, Program1 and Program2, both running in the default activation group. Because OPM programs can't be assigned to a named activation group, jobs that run only OPM programs consist solely of the default activation group. ILE program objects, on the other hand, can run in either the default activation group or a named activation group, depending on how you assign the program objects to activation groups. If any of a job's program objects are assigned to a named activation group, the job will have as part of its structure that named activation group. In fact, if the job's program objects are assigned to different named activation groups, the job will have each different named activation group as part of its structure. Figure 1 shows five ILE programs: Program3 and Program4 are both running in the default activation group, and Program5, Program6, and Program7 are running in a named activation group.

The figure not only depicts the types of program objects that can run in the default activation group and in a named activation group; it also shows the valid levels to which you can scope overrides. Programs running in the default activation group, whether OPM or ILE, can issue overrides scoped to the job level or to the call level. ILE programs running in a named activation group can scope overrides not only to these two levels but to the activation group level as well. Figure 1 portrays each of these possibilities.
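A minimal sketch of the three scope choices on an override command, using the printer file Report from the examples that follow (Copies(2) is just an arbitrary attribute for illustration):

/* Copies(2) is an arbitrary attribute chosen for illustration */
OvrPrtF File(Report) Copies(2) OvrScope(*CallLvl)
OvrPrtF File(Report) Copies(2) OvrScope(*ActGrpDfn)
OvrPrtF File(Report) Copies(2) OvrScope(*Job)

The first form lasts only for the issuing call level, the second (the command default) is scoped to the issuing program's activation group, and the third applies to the entire job.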
Overriding the Same File Multiple Times One feature of call-level overrides is the ability to combine multiple overrides for the same file so that each of the different overridden attributes applies. Consider the following program fragments, which issue the OvrPrtF (Override with Printer File) command: ProgramA:
OvrPrtF  File(Report) OutQ(Sales01) +
           OvrScope(*CallLvl)
Call     Pgm(ProgramB)
ProgramB:
OvrPrtF  File(Report) Copies(3) +
           OvrScope(*CallLvl)
Call     Pgm(PrintPgm)
When program PrintPgm opens and spools printer file Report, the overrides from both programs are combined, resulting in the spooled file being placed in output queue Sales01 with three copies set to be printed. Now, consider the following program fragment: ProgramC:
OvrPrtF  File(Report) OutQ(Sales01) +
           OvrScope(*CallLvl)
OvrPrtF  File(Report) Copies(3) +
           OvrScope(*CallLvl)
Call     Pgm(PrintPgm)
What do you think happens? You might expect this program to be functionally equivalent to the two previous programs, but it isn't. Within a single call level, only the most recent override is in effect. In other words, the most recent override replaces the previous override in effect. In the case of ProgramC, the Copies(3) override is in effect, but the OutQ(Sales01) override is not. This feature provides a convenient way to replace an override within a single call level without the need to first delete the previous override. It's also fun to show programmers ProgramA and ProgramB, explain that things worked flawlessly, and then ask them to help you figure out why things didn't work right after you changed the application to look like ProgramC! When they finally figure out that only the most recent override within a program is in effect, show them your latest modification — ProgramA:
OvrPrtF  File(Report) OutQ(Sales01) +
           OvrScope(*CallLvl)
TfrCtl   Pgm(ProgramB)
ProgramB:
OvrPrtF  File(Report) Copies(3) +
           OvrScope(*CallLvl)
Call     Pgm(PrintPgm)
— and watch them go berserk again! This latest change is identical to the first iteration of ProgramA and ProgramB, except that rather than issue a Call to ProgramB from ProgramA, you use the TfrCtl command to invoke ProgramB. Remember, TfrCtl doesn't start a new call level. ProgramB will simply replace ProgramA on the call stack, thereby running at the same call level as ProgramA. Because the call level doesn't change, the overrides aren't combined. You may need to point out to the programmers that they didn't really figure it out at all when they determined that only the most recent override within a program is in effect. The rule is: Only the most recent override within a call level is in effect.
The Order of Applying Overrides
You've seen the rules concerning the applicability of overrides. In the course of a job, many overrides may be issued. In fact, as you've seen, many may be issued for a single file. When many overrides are issued for a single file, the system constructs a single override from the overridden attributes in effect from all the overrides. This type of override is called a merged override. Merged overrides aren't simply the result of accumulating the different overridden file attributes, though. The system must also modify, or replace, applicable attributes that have been overridden multiple times and remove overrides when an applicable request to delete overrides is issued. To determine the merged override, the system follows a distinct set of rules that govern the order in which overrides are processed. The system processes the overrides for a file when it opens the file and uses the following sequence to check and apply overrides:

1. call-level overrides up to and including the call level of the oldest procedure in the activation group containing the file open (beginning with the call level that opens the file and progressing in decreasing call-level sequence)
2. the most recent activation-group-level overrides for the activation group containing the file open
3. call-level overrides lower than the call level of the oldest procedure in the activation group containing the file open (beginning with the call level immediately preceding the call level of the oldest procedure in the activation group containing the file open and progressing in decreasing call-level sequence)
4. the most recent job-level overrides
This ordering of overrides can get tricky! It is without a doubt the least-understood aspect of file overrides and the source of considerable confusion and errors. To aid your understanding, let's look at an example. Figure 2A shows a job with 10 call levels, programs in the default activation group and in two named activation groups (AG1 and AG2), and overrides within each call level and each activation group. Before we look at how the system processes these overrides, see whether you can determine the file that ProgramJ at call level 10 will open, as well as the attribute values that will be in effect due to the job's overrides. In fact, try the exercise twice, the first time without referring to the ordering rules.
Figure 2B reveals the results of the job's overrides. Did you arrive at these results in either of your tries? Let's walk, step by step, through the process of determining the overrides in effect for this example.

Step 1 — call-level overrides up to and including the call level of the oldest procedure in the activation group containing the file open

Checking call level 10 shows that the system opens file Report1 in activation group AG1. The oldest procedure in activation group AG1 appears at call level 2. Therefore, in step 1, the system processes call-level overrides beginning with call level 10 and working up the call stack through call level 2. When the system processes call level 2, step 1 is complete.

a. There is no call-level override for file Report1 at call level 10.
b. There is no call-level override for file Report1 at call level 9.
c. There is no call-level override for file Report1 at call level 8.
d. There is no call-level override for file Report1 at call level 7.
e. Call level 6 contains a call-level override for file Report1. The Copies attribute for file Report1 is overridden to 7.
   Active overrides at this point: Copies(7)
f. Call level 5 shows an activation-group-level override, but the program is running in the default activation group. Remember, activation-group-level overrides issued from the default activation group are scoped to call-level overrides. Therefore, the system processes this override as a call-level override. The CPI attribute for file Report1 is overridden to 13.3, and the previous Copies attribute value is replaced with this latest value of 6.
   Active overrides at this point: CPI(13.3) Copies(6)
g. There is no call-level override for file Report1 at call level 4.
h. Call level 3 contains a call-level override for file Report1. The LPI attribute for file Report1 is overridden to 9, and the previous Copies attribute value is replaced with this latest value of 4.
   Active overrides at this point: LPI(9) CPI(13.3) Copies(4)
i. There is no call-level override for file Report1 at call level 2.
Step 1 is now complete. Call level 2 contains the oldest procedure in activation group AG1 (the activation group containing the file open).

Step 2 — the most recent activation-group-level overrides for the activation group containing the file open

The system now checks for the most recently issued activation-group-level override within activation group AG1, where file Report1 was opened.

a. There is no activation-group-level override for file Report1 at call level 10.
b. There is no activation-group-level override for file Report1 in activation group AG1 at call level 9. The activation-group-level override in call level 9 is in activation group AG2 and is therefore not applicable.
c. Call level 8 contains an activation-group-level override in activation group AG1 for file Report1. The FormFeed attribute for file Report1 is overridden to *Cut, the previous LPI attribute value is replaced with this latest value of 12, and the previous Copies attribute value is replaced with this latest value of 9.
   Active overrides at this point: LPI(12) CPI(13.3) FormFeed(*Cut) Copies(9)

Step 2 is now complete. The system discontinues searching for activation-group-level overrides because this is the most recently issued activation-group-level override in activation group AG1.

Step 3 — call-level overrides lower than the call level of the oldest procedure in the activation group containing the file open

Remember, call level 2 is the call level of the oldest procedure in activation group AG1. The system begins processing call-level overrides at the call level preceding call level 2. In this case, there is only one call level lower than call level 2.
a. Call level 1 contains a call-level override for file Report1. The OutQ attribute for Report1 is overridden to Prt01, and the previous Copies attribute value is replaced with this latest value of 2.
   Active overrides at this point: LPI(12) CPI(13.3) FormFeed(*Cut) OutQ(Prt01) Copies(2)

Step 3 is now complete. The call stack has been processed through call level 1.

Step 4 — the most recent job-level overrides

The system finishes processing overrides by checking for the most recently issued job-level override for file Report1.

a. There is no job-level override for file Report1 at call level 10.
b. There is no job-level override for file Report1 at call level 9.
c. There is no job-level override for file Report1 at call level 8.
d. Call level 7 contains a job-level override for file Report1. Notice that the program runs in activation group AG2 rather than AG1. Job-level overrides can come from any activation group. The previous Copies attribute value is replaced with this latest value of 8.
   Active overrides at this point: LPI(12) CPI(13.3) FormFeed(*Cut) OutQ(Prt01) Copies(8)

Step 4 is now complete. The system discontinues searching for job-level overrides because this is the most recently issued job-level override. This completes the application of overrides. The final merged override that will be applied in call level 10 is
LPI(12) CPI(13.3) FormFeed(*Cut) OutQ(Prt01) Copies(8)

All other attribute values come from the file description for printer file Report1. It's easy to see how this process could be confusing and lead to the introduction of errors in applications!

Now, let's make the process even more confusing! In the previous example, our HLL program (ProgramJ) opened file Report1, and no programs issued an override to the file name. What do you think happens when you override the file name to a different file using the ToFile parameter on the OvrPrtF command? Once the system issues an override that changes the file, it searches for overrides to the new file, not the original. Let's look at a slightly modified version of our example. Figure 2C contains the new programs. Only two of the original programs have been changed in this new example. In ProgramC at call level 3, the ToFile parameter has been added to the OvrPrtF command, changing the file to be opened from Report1 to Report2. And ProgramB at call level 2 now overrides printer file Report2 rather than Report1. Figure 2D shows the results of the overrides. Again, let's step through the process of determining the overrides in effect for this example.

Step 1 — call-level overrides up to and including the call level of the oldest procedure in the activation group containing the file open

Checking call level 10 shows that the system opens file Report1 in activation group AG1. The oldest procedure in activation group AG1 appears at call level 2. Therefore, in step 1, the system processes call-level overrides beginning with call level 10 and working up the call stack through call level 2. When the system processes call level 2, step 1 is complete.

a. There is no call-level override for file Report1 at call level 10.
b. There is no call-level override for file Report1 at call level 9.
c. There is no call-level override for file Report1 at call level 8.
d. There is no call-level override for file Report1 at call level 7.
e. Call level 6 contains a call-level override for file Report1. The Copies attribute for file Report1 is overridden to 7.
   Active overrides at this point: Copies(7)
f. Call level 5 shows an activation-group-level override, but the program is running in the default activation group. Again, activation-group-level overrides issued from the default activation group are scoped to call-level overrides. Therefore, the system processes this override as a call-level override. The CPI attribute for file Report1 is overridden to 13.3, and the previous Copies attribute value is replaced with this latest value of 6.
   Active overrides at this point: CPI(13.3) Copies(6)
g. There is no call-level override for file Report1 at call level 4.
h. Call level 3 contains a call-level override for file Report1. The LPI attribute for file Report1 is overridden to 9, and the previous Copies attribute value is replaced with this latest value of 4. Notice that the printer file has also been overridden to Report2. This is especially noteworthy because the system will now begin searching for overrides to file Report2 rather than file Report1.
   Active overrides at this point: ToFile(Report2) LPI(9) CPI(13.3) Copies(4)
i. There is no call-level override for file Report2 at call level 2.
Step 1 is now complete. Call level 2 contains the oldest procedure in activation group AG1 (the activation group containing the file open).

Step 2 — the most recent activation-group-level overrides for the activation group containing the file open

The system now checks for the most recently issued activation-group-level override within activation group AG1, where file Report1 (actually Report2 now) was opened.

a. There is no activation-group-level override for file Report2 at call level 10.
b. There is no activation-group-level override for file Report2 in activation group AG1 at call level 9. The activation-group-level override in call level 9 is in activation group AG2 and is therefore not applicable.
c. There is no activation-group-level override for file Report2 at call level 8.
d. There is no activation-group-level override for file Report2 at call level 7.
e. There is no activation-group-level override for file Report2 at call level 6.
f. There is no activation-group-level override for file Report2 at call level 5.
g. There is no activation-group-level override for file Report2 at call level 4.
h. There is no activation-group-level override for file Report2 at call level 3.
i. Call level 2 contains an activation-group-level override in activation group AG1 for file Report2. The FormType attribute for file Report2 is overridden to FormB, the previous LPI attribute value is replaced with this latest value of 7.5, and the previous Copies attribute value is replaced with this latest value of 3.
   Active overrides at this point: ToFile(Report2) LPI(7.5) CPI(13.3) FormType(FormB) Copies(3)

Step 2 is now complete. The system discontinues searching for activation-group-level overrides because this is the most recently issued activation-group-level override in activation group AG1.

Step 3 — call-level overrides lower than the call level of the oldest procedure in the activation group containing the file open

Again, call level 2 is the call level of the oldest procedure in activation group AG1. The system begins processing call-level overrides at the call level preceding call level 2 (i.e., call level 1).

a. There is no call-level override for file Report2 at call level 1.
Step 3 is now complete. The call stack has been processed through call level 1.
Step 4 — the most recent job-level overrides

The system finishes processing overrides by checking for the most recently issued job-level override for file Report2.

a. There is no job-level override for file Report2 at call level 10.
b. There is no job-level override for file Report2 at call level 9.
c. There is no job-level override for file Report2 at call level 8.
d. There is no job-level override for file Report2 at call level 7.
e. There is no job-level override for file Report2 at call level 6.
f. There is no job-level override for file Report2 at call level 5.
g. There is no job-level override for file Report2 at call level 4.
h. There is no job-level override for file Report2 at call level 3.
i. There is no job-level override for file Report2 at call level 2.
j. There is no job-level override for file Report2 at call level 1.
Step 4 is now complete. There are no job-level overrides for file Report2. This completes the application of overrides. The final merged override that will be applied to printer file Report2 in call level 10 is
LPI(7.5) CPI(13.3) FormType(FormB) Copies(3)

All other attribute values come from the file description for printer file Report2.
Protecting an Override

In some cases, you may want to protect an override from the effect of other overrides to the same file. In other words, you want to ensure that an override issued in a program is the override that will be applied when you open the overridden file. You can protect an override from being changed by overrides from lower call levels, the activation group level, and the job level by specifying Secure(*Yes) on the override command.

Figure 3 shows excerpts from two programs, ProgramA and ProgramB, running in the default activation group and with call-level overrides only. ProgramA simply issues an override to set the output queue attribute value for printer file Report1 and then calls ProgramB. ProgramB in turn calls two HLL programs, HLLPrtPgm1 and HLLPrtPgm2, both of which function to print report Report1. Before the call to each of these programs, ProgramB issues an override to file Report1 to change the output queue attribute value.

When you call ProgramA, the system first issues a call-level override that sets Report1's output queue attribute to value Prt01. Next, ProgramA calls ProgramB, thereby creating a new call level. ProgramB begins by issuing a call-level override, setting Report1's output queue attribute value to Prt02. Notice that the OvrPrtF command specifies the Secure parameter with a value of *Yes. ProgramB then calls HLL program HLLPrtPgm1 to open and print Report1. Because this call-level OvrPrtF command specifies Secure(*Yes), the system does not apply call-level overrides from lower call levels — namely, the override in ProgramA that sets the output queue attribute value to Prt01. HLLPrtPgm1 therefore places the report in output queue Prt02.

ProgramB continues with yet another call-level override, setting Report1's output queue attribute value to Prt03. Because this override occurs at the same call level as the first override in ProgramB, the system replaces the call level's override. However, this new override doesn't specify Secure(*Yes). Therefore, the system uses the call-level override from call level 1. This override changes the output queue attribute value from Prt03 to Prt01. ProgramB finally calls HLLPrtPgm2 to open and spool Report1 to output queue Prt01. These two overrides in ProgramB clearly demonstrate the behavioral difference between an unsecured and a secured override.
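To make the secured-override behavior easier to picture, here is a minimal sketch along the lines of the Figure 3 programs just described (the figure itself isn't reproduced here, so treat the exact statements as an illustration rather than the book's code):

ProgramA:
  OvrPrtF  File(Report1) OutQ(Prt01)
  Call     Pgm(ProgramB)

ProgramB:
  OvrPrtF  File(Report1) OutQ(Prt02) Secure(*Yes)
  Call     Pgm(HLLPrtPgm1)   /* spools to Prt02; the Prt01 override is blocked */
  OvrPrtF  File(Report1) OutQ(Prt03)
  Call     Pgm(HLLPrtPgm2)   /* spools to Prt01; this override isn't secured   */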
Explicitly Removing an Override

The system automatically removes overrides at certain times, such as when a call level ends, when an activation group ends, and when the job ends. However, you may want to remove the effect of an override at some other time. The DltOvr (Delete Override) command makes this possible, letting you explicitly remove overrides. With this command, you can delete overrides at the call level, the activation group level, or the job level as follows:
Call level:
  DltOvr File(File1) OvrScope(*)

Activation group level:
  DltOvr File(File2) OvrScope(*ActGrpDfn)

Job level:
  DltOvr File(File3) OvrScope(*Job)

Value *ActGrpDfn is the default value for the DltOvr command's OvrScope (Override scope) parameter. If you don't specify parameter OvrScope on the DltOvr command, this value is used. The command's File parameter also supports special value *All, letting you extend the reach of the DltOvr command. This option gives you a convenient way to remove overrides for several files with a single command.
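For example, combining the two special values removes every file override issued at the job level with one command (a quick sketch, not taken from the book's figures):

  DltOvr File(*All) OvrScope(*Job)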
Miscellanea

I've covered quite a bit of ground with these rules of overriding files. In addition to the rules you've already seen, I'd like to introduce you to a few tidbits you might find useful. You've probably grown accustomed to the way a CL program lets you know when you've coded something erroneously — the program crashes with an exception! However, specify a valid, yet wrong, file name on an override, and the system gives you no warning you've done so. This seemingly odd behavior is easily explained. Consider the following code:
OvrPrtF  File(Report1) OutQ(Prt01)
Call     Pgm(HLLPrtPgm)
However, HLLPrtPgm opens file Report2, not Report1. The system happily spools Report2 without any regard to the override. Although this is clearly a mistake in that you've specified the wrong file name in the OvrPrtF command, the system has no way of knowing this. The system can't know your intentions. Remember, this override could be used somewhere else in the job, perhaps even in a different call level.

The second tidbit involves a unique override capability that exists with the OvrPrtF command. OvrPrtF's File parameter supports special value *PrtF, letting you extend the reach of an override to all printer files (within the override scoping rules, of course). All rules concerning the application of overrides still apply. Special value *PrtF simply gives you a way to include multiple files with a single override command.

Also, you may recall an earlier reference to program QCmdExc and how its use affects the scope of an override. This program's primary purpose is to serve as a vehicle that lets HLL programs execute system commands. You can therefore use QCmdExc from within an HLL program to issue a file override. Remember that when you issue an override using this method, the call level is that of the process that invoked QCmdExc. You should note that override commands may or may not affect system commands. For more information about overrides and system commands, see 'Overrides and System Commands.'
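As a sketch of the QCmdExc interface mentioned above, the program takes two parameters: the command string and its length as a packed-decimal (15 5) value. The call is shown here in CL form purely to illustrate the interface (the file and output queue names are placeholders); in practice you would make the equivalent call from your HLL program:

  Call Pgm(QCmdExc) Parm('OvrPrtF File(Report1) OutQ(Prt01)' 33)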
Important Additional Override Information

With the major considerations of file overrides covered, let's now take a brief look at some additional override information of note.

Overriding the Scope of Open Files

At times, you'll want to share a file's ODP among programs in your application. For instance, when you use the OpnQryF (Open Query File) command, you must share the ODP created by OpnQryF or your application won't use it. To share the ODP, you specify Share(*Yes) on the OvrDbF (Override with Database File) command. You can also explicitly control the scope of open files (ODPs) using the OpnScope (Open scope) parameter on the OvrDbF command. You can override the open scope to the activation group level and the job level.
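For instance, a typical OpnQryF sequence looks something like this minimal sketch (the file, field, and program names are placeholders, not taken from the book's figures):

  OvrDbF   File(OrdHdr) Share(*Yes)
  OpnQryF  File((OrdHdr)) QrySlt('CUSNUM *EQ 123456')
  Call     Pgm(OrderRpt)     /* reads the records through the shared ODP */
  CloF     OpnId(OrdHdr)
  DltOvr   File(OrdHdr)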
Non-File Overrides

In addition to file overrides, the system provides support for overriding message files and program device entries used in communications applications. You can override the message file used by programs by using the OvrMsgF (Override with Message File) command. However, the rules for applying overrides with OvrMsgF are quite different from those with other override commands. You can override only the name of the message file used, not the attributes. During the course of normal operations, the system frequently sends various types of messages to various types of message queues. OvrMsgF provides a way for you to specify that when sending a message for a particular message ID, the system should first check the message file specified in the OvrMsgF for the identified message. If the message is found, the system sends the message using the information from this message file. If the message isn't found, the system sends the message using the information from the original message file.

Using the OvrICFDevE (Override ICF Program Device Entry) command, you can issue overrides for program device entries. Overrides for program device entries let you override attributes of the Intersystem Communications Function (ICF) file that provides the link between your programs and the remote systems or devices with which your program communicates.

Overrides and Multithreaded Jobs

The system provides limited support for overrides in multithreaded jobs. Some restrictions apply to the provided support. The system supports the following override commands:

• OvrDbF — You can issue this command from the initial thread of a multithreaded job. Only overrides scoped to the job level or an activation group level affect open operations performed in a secondary thread.
• OvrPrtF — You can issue this command from the initial thread of a multithreaded job. As with OvrDbF, only overrides scoped to the job level or an activation group level affect open operations performed in a secondary thread.
• OvrMsgF — You can issue this command from the initial thread of a multithreaded job. This command affects only message file references in the initial thread. Message file references performed in secondary threads are not affected.
• DltOvr — You can issue this command from the initial thread of a multithreaded job.
The system ignores any other override commands in multithreaded jobs.

File Redirection

You can use overrides to redirect input or output to a file of a different type. For instance, you may have an application that writes directly to tape using a tape file. If at some time you'd like to print the information that's written to tape, you can use an override to accomplish your task. When you redirect data to a different file type, you use the override appropriate for the new target file. In the case of our example, you would override from the tape file to a printer file using the OvrPrtF command. I mention file redirection so that you know it's a possibility. Of course, many restrictions apply when using file redirection, so if you decide you'd like to use the technique, refer to the documentation. IBM's File Management provides more information about file redirection. You can find this manual on the Internet at IBM's iSeries Information Center (http://publib.boulder.ibm.com/pubs/html/as400/infocenter.htm).
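As a rough sketch of the idea (the file and program names here are placeholders, and the restrictions described in the File Management documentation still apply): if an application writes through a tape file called TapeOut, an override such as the following redirects that output to a printer file instead:

  OvrPrtF  File(TapeOut) ToFile(QSysPrt)
  Call     Pgm(WriteRpt)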
Chapter 22 - Logical Files

For many years, IBM sold the S/38 on the premise that it was the 'logical choice.' Yes, that play on words was corny, but true. One of the S/38's strongest selling points was the relational database implementation provided by logical files, and the AS/400 has inherited that feature. Logical files on the AS/400 provide the flexibility needed to build a database for an interactive multiuser environment. As I said in the last chapter, there are two kinds of database files: physical files and logical files. Physical files contain data; logical files do not. Logical files control how data in physical files is presented, most commonly using
key fields (whose counterpart on the S/36 is the alternate index) so that data can be retrieved in key-field sequence. However, the use of key fields is not the only function logical files provide. Let me introduce you to the following basic concepts about logical files:
• record format definition/physical file selection
• key fields
• select/omit logic
• multiple logical file members
Record Format Definition/Physical File Selection

To define a logical file, you must select the record formats to be used and the physical files to be referenced. You can use the record format found in the physical file, or you can define a new record format. If you use the physical file record format, every field in that record format is accessible through the logical file. If you create a new record format, you must specify which fields will exist in the logical file. A logical file field must either reference a field in the physical file record format or be derived by using concatenation or substring functions.

Because the logical file does not contain any data, it must know which physical file to access for the requested data. You use the DDS PFILE keyword to select the physical file referenced by the logical file record format. You specify the physical file in the PFILE keyword as a qualified name (i.e., library_name/file_name) or as the file name alone.

Figure 22.1a lists the DDS for physical file HREMFP, and Figure 22.1b shows the DDS for logical file HREMFL1. Notice that the logical file references the physical file's record format (HREMFR). Consequently, every field in the physical file will be presented in logical file HREMFL1. Also notice that the PFILE keyword in Figure 22.1b references physical file HREMFP. In Figure 22.1c, logical file HREMFL2 defines a record format not found in PFILE-referenced HREMFP. Therefore, this logical file must define each physical file field it will use. A logical file can thus be a projection of the physical file -- that is, contain only selected physical file fields. Notice that fields EMEMP#, EMSSN#, and EMPAYR all appear in the physical file but are not included in file HREMFL2.
Key Fields

Let's look at Figures 22.1b and 22.1c again to see how key fields are used. File HREMFL1 identifies field EMEMP# as a key field (in DDS, key fields are identified by a K in position 17 and the name of the field in positions 19 through 28). When you access this logical file by key, the records will be presented in employee number sequence. The logical file simply defines an access path for the access sequence -- it does not physically sort the records. The UNIQUE keyword in this source member tells the system to require a unique value for EMEMP# for each record in the file, thus establishing EMEMP# as the primary key to physical file HREMFP. Should the logical file be deleted, records could be added to the physical file with a non-unique key, giving rise to a question that has been debated over the years: Is it better to use a keyed physical file or a keyed logical file to establish a file's primary key?

You could specify EMEMP# as the key in the DDS for physical file HREMFP and enforce it as the primary key using the UNIQUE keyword. Making the primary key a part of the physical file has a distinct advantage: The primary key is always enforced because the physical file cannot be deleted without deleting the data. Even if all dependent logical files were deleted, the primary key would be enforced. However, placing the key in the physical file also has a disadvantage. Should the access path for a physical file data member be damaged (a rare, but possible, occurrence), the damaged access path prevents access to the data. Your only recourse in that case would be to delete the member and restore it from a backup. Another minor inconvenience is that any time you want to process the file in arrival sequence (e.g., to maximize retrieval performance), you must use the OVRDBF (Override with Database File) command or specify arrival sequence in your high-level language program.

Placing the primary key in a logical file, as I did in Figure 22.1b, ensures that access path damage results only in the need to recompile the logical file -- the physical file remains intact. This method also means that you can access the physical file in arrival sequence. As I mentioned earlier, the negative effect is that deleting the logical file results in leaving the physical file without a primary key.
Let me make a few comments concerning the issue of where to place the primary key. Access path maintenance is costly; when records are updated, the system must determine whether any key fields have been modified, requiring the access path to be updated. The overhead for this operation is relatively small in an interactive environment where changes are made randomly based on business demands. However, for files where batch purges or updates result in many access path updates, the overhead can be quite detrimental to performance. With that in mind, here are some suggestions.
• For work files, which are frequently cleared and reloaded, create the physical file with no keys, and place the primary and alternate keys in logical files. Then delete the logical files (access paths) before you clear and reload the file. The update will be much faster with no access path maintenance to perform. After the update, rebuild or restore the logical files. (A sketch of this sequence follows the list.)
• The same method works best for very large files. When you need to update the entire file, you can delete the logical files, perform the update, and then rebuild or restore the logical files.
• For files updated primarily through interactive maintenance programs, putting the key in the physical file poses no performance problems.
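Here is a minimal sketch of that delete/reload/rebuild sequence for a hypothetical work file (WorkPF) and its logical file (WorkLF1); the library, program, and source file names are placeholders:

  DltF    File(MyLib/WorkLF1)                          /* drop the access path     */
  ClrPfM  File(MyLib/WorkPF)                           /* clear the work file      */
  Call    Pgm(ReloadPgm)                               /* reload the physical file */
  CrtLF   File(MyLib/WorkLF1) SrcFile(MyLib/QDDSSRC)   /* rebuild the logical file */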
The UNIQUE keyword is also expensive in terms of system overhead, so you should use it only to maintain the primary key. Logical file HREMFL2 specifies three key fields -- EMLNAM (employee last name), EMFNAM (employee first name), and EMMINT (employee middle initial). The UNIQUE keyword is not used here because the primary key is the employee number and there is no advantage in requiring unique names (even if you could ensure that no two employees had the same name). A primary key protects the integrity of the file, while alternative keys provide additional views of the same data.
Select/Omit Logic

Another feature that logical files offer is the ability to select or omit records from the referenced physical file. You can use the keywords COMP, VALUES, and RANGE to provide select or omit statements when you build logical files. Figure 22.2 shows logical file HREMFL3. Field EMTRMD (employee termination date) is used with keyword COMP to compare values, forming a SELECT statement (notice the S in position 17). This DDS line tells the system to select records from the physical file in which field EMTRMD is equal to 0 (i.e., no termination date has been entered for that employee). Therefore, when you create logical file HREMFL3, OS/400 builds indexed entries in the logical file only for records in which employee termination date is equal to zero, thus omitting terminated employees (EMTRMD NE 0). When a program accesses the logical file, it reads only the selected records.

Before looking at some examples, I want to go over some of the basic rules for using select/omit statements.

1. You can use select/omit statements only if the logical file specifies key fields (the value *NONE in positions 19 through 23 satisfies the requirement for a key field) or if the logical file uses the DYNSLT keyword. (I'll go into more detail about this keyword later.)
2. To locate the field definitions for fields named on a select/omit statement, OS/400 first checks the field name specified in positions 19 through 28 in the record format definition and then checks fields specified as parameters on CONCAT (concatenate) or RENAME keywords. If the field name is found in more than one place, OS/400 uses the first occurrence of the field name.
3. Select/omit statements are specified by an S or an O in position 17. Multiple statements coded with an S or an O form an OR connective relationship. The first true statement is used for select/omit purposes.
4. You can follow a select/omit statement with other statements containing a blank in position 17. Such additional statements form an AND connective relationship with the initial select or omit statement. All related statements must be true before the record is selected or omitted.
5. You can specify both select and omit statements in the same file, but the following rules apply:
   a. If you specify both select and omit for a record format, OS/400 processes the statements only until one of the conditions is met. Thus, if a record satisfies the first statement or group of related statements, the record is processed without being tested against the subsequent select/omit statements.
   b. If you specify both select and omit, you can use the ALL keyword to specify whether records that do not meet any of the specified conditions should be selected or omitted.
   c. If you do not use the ALL keyword, the action taken for records not satisfying any of the conditions is the converse of the last statement specified. For example, if the last statement was an omit, the record is selected.

Now let's work through a few select/omit examples to see how some of these rules apply. Consider the statements in Figure 22.3. Based on rule 3, OS/400 selects any record in which employee termination date equals 0 or employee type equals H (i.e., hourly). Both statements have an S coded in position 17, representing an OR connective relationship. Contrast the statements in Figure 22.3 with the statements in Figure 22.4. Notice that the second statement in Figure 22.4 does not have an S or an O in position 17. According to rule 4, the second statement is related to the previous statement by an AND connective relationship. Therefore, both comparisons must be true for a record to be selected, so all current hourly employees will be selected.

To keep it interesting, let's change the statements to appear as they do in Figure 22.5. At first glance, you might think this combination of select and omit would provide the same result as the statements in Figure 22.4. However, it doesn't -- for two reasons. As rule 5a explains, the order of the statements is significant. In Figure 22.5, the first statement determines whether employee type equals H. If it does, the record is selected and the second test is not performed, thus allowing records for terminated hourly employees to be selected. The second reason the statements in Figures 22.4 and 22.5 produce different results is the absence of the ALL keyword, which specifies how to handle records that do not meet either condition. According to rule 5c, records that do not meet either comparison are selected because the system performs the converse of the last statement listed (e.g., the omit statement). Figure 22.6 shows the correct way to select records for current hourly employees using both select and omit statements. The ALL keyword in the last statement tells the system to omit records that don't meet the conditions specified by the first two statements. In general, however, it is best to use only one type of statement (either select or omit) when you define a logical file. By limiting your definitions this way, you will avoid introducing errors that result when the rules governing the use of select and omit are violated.

Select/omit statements give you dynamic selection capabilities via the DDS DYNSLT keyword. DYNSLT lets you defer the select/omit process until a program requests input from the logical file. When the program reads the file, OS/400 presents only the records that meet the select/omit criteria. Figure 22.7 shows how to code the DYNSLT keyword. So now I guess you are wondering just how this differs from an example without the DYNSLT keyword. It differs in one significant way: performance. In the absence of the DYNSLT keyword, OS/400 builds indexed entries only for those records that meet the stated select/omit criteria. Access to the correct records is faster, but the overhead of maintaining the logical file is increased. When you use DYNSLT, all records in the physical file are indexed, and the select/omit logic is not performed until the file is accessed.
You only retrieve records that meet the select/omit criteria, but the process is dynamic. Because DYNSLT decreases the overhead associated with access path maintenance, it can improve performance in cases where that overhead is considerable. As a guideline, if you have a select/omit logical file that uses more than 75 percent of the records in the physical file member, the DYNSLT keyword can reduce the overhead required to maintain that logical file without significantly affecting the retrieval performance of the file, because most records will be selected anyway. If the logical file uses less than 75 percent of the records in the physical file member, you can usually maximize performance by omitting the DYNSLT keyword and letting the select/omit process occur when the file is created.
Multiple Logical File Members

The last basic concept you should understand is the way logical file members work. The CRTLF (Create Logical File) command has several parameters related to establishing the member or members that will exist in the logical file. These parameters are MBR (the logical file member name), DTAMBRS (the physical file data members upon which the logical file member is based), and MAXMBRS (the maximum number of data members the logical file can contain). The default values for these parameters are *FILE, *ALL, and 1, respectively.
Typically, a physical file has one data member. When you create a logical file to reference such a physical file, these default values instruct the system to create a logical file member with the same name as the logical file itself, base this logical file member on the single physical file data member, and specify that a maximum of one logical file member can exist in this file. When creating applications with multiple-data-member physical files, you often don't know precisely what physical and logical members you will eventually need. For example, for each user you might add members to a temporary work file for each session when the user signs on. Obviously, you (or, more accurately, your program) don't know in advance what members to create. In such a case, you would normally
• Create the physical file with no members:

  CRTPF FILE(TESTPF) MBR(*NONE)

• Create the logical file with no members:

  CRTLF FILE(TESTLF) MBR(*NONE)

• For every user that signs on, add a physical file member to the physical file:

  ADDPFM FILE(TESTPF) MBR(TESTMBR) TEXT('Test PF Data Member')

• For every physical file member, add a member to the logical file and specify the physical file member on which to base the logical member:

  ADDLFM FILE(TESTLF) MBR(TESTMBR) DTAMBRS((TESTPF TESTMBR)) +
         TEXT('Test LF Data Member')

When a logical file member references more than one physical file member, and your application finds duplicate records in the multiple members, the application processes those records in the order in which the members are specified on the DTAMBRS parameter. For instance, if the CRTLF command specifies

  CRTLF FILE(TESTLIB/TESTLF) MBR(ALLYEARS) +
        DTAMBRS((YRPF DT1988) (YRPF DT1989) (YRPF DT1990))

a program that processes logical file member ALLYEARS first reads the records in member DT1988, then in member DT1989, and finally in member DT1990.
Keys to the AS/400 Database

Understanding logical files will take you a long way toward creating effective database implementations on the AS/400. Since I have introduced the basic concepts only, I strongly recommend that you spend some time in the manuals to increase your knowledge about logical files. Start with the description of the CRTLF command in IBM's Programming: Control Language Reference (SC41-0030) and also refer to Chapter 3, 'Setting Up Logical Files,' in the AS/400 Database Guide (SC41-9659). As you master the methods presented, you will discover many ways in which logical files can enhance your applications.
Chapter 23 - File Sharing

As the father of two young children (ages 4 and 9), I have learned that to maintain peace in the house, my wife and I must either teach our children to share or buy two of everything. Those of you who can identify with this predicament know that in reality peace occurs only when you do a little of both -- sometimes you teach, and sometimes you buy. The AS/400 inherited a performance-related virtue from the S/38 that lets you 'teach' your programs to share file resources. I call it a performance-related virtue because the benefit of teaching your programs to share boosts performance for many applications. However, as is the case with children, there will be times when sharing doesn't provide any benefits and, in fact, is more trouble than it's worth. In this chapter, as we continue to examine files on
the AS/400, we will focus on the SHARE (Share Open Data Path) attribute and how you can use it effectively in your applications. You may already be familiar with the general concept of file sharing, a common feature for many operating systems that lets more than one program open the same file. When each program opens the file, a unique set of resources is established to prevent conflict between programs. This type of file sharing is automatic on the AS/400 unless you specifically prevent it by allocating a file for exclusive operations (using the ALCOBJ (Allocate Object) command). The SHARE attribute does not control this automatic function. On the AS/400, SHARE is a file attribute. It goes beyond normal file sharing to let programs within the same job share the open data path (ODP) established when the file was originally opened in the job. This means that programs share the file status information (i.e., the general and file-dependent I/O feedback areas), as well as the file pointer (i.e., a program's current record position in a file). As we further examine the SHARE attribute, you will see that this type of sharing enhances modular programming performance, but that you must manage it effectively to prevent conflicts between programs. The SHARE attribute is valid for database, source, device, distributed data management, and save files. You can establish the SHARE attribute or modify it for a file using any of the CRTxxxF (Create File), CHGxxxF (Change File), or OVRxxxF (Override with File) commands. The valid values are *YES and *NO. If SHARE(*NO) is specified for a file, each program operating on that file in the same job must establish a unique ODP.
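For example (a sketch with placeholder library and file names, shown here for a database file):

  CrtPF   File(MyLib/Test) SrcFile(MyLib/QDDSSRC) Share(*Yes)   /* create the file with a shared ODP  */
  ChgPF   File(MyLib/Test) Share(*No)                           /* change the attribute permanently   */
  OvrDbF  File(Test) Share(*Yes)                                /* or override it for the current job */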
Sharing Fundamentals

While sharing ODPs can be a window to enhancing performance, doing so can also generate programming errors if you try to share without understanding a few simple fundamentals. The first fundamental pertains to open options that programs establish. When a program opens a file, the options specified on the OPNDBF (Open Data Base File) command or by the high-level language definition of the file determine the open options. The open options are *INP (input only), *OUT (output only), and *ALL (input, output, update, and delete operations). These options are significant when you use shared ODPs. If you specify SHARE(*YES) for a file, the initial program's open of the file must use all the open options required for any subsequent programs in the same job. For example, if PGMA opens file TEST (specified with SHARE(*YES)) with the open option *INP (for input only), and then PGMB, which requires the open option *ALL (for an update or delete function), is called, PGMB will fail.

Besides sharing open options, programs also share the file pointer, a capability that is both powerful and problematic. Figure 23.1 displays the eight records that exist in file TEST. In Figures 23.2a and 23.2b are RPG programs TESTRPG1 and TESTRPG2, respectively, which alternately read a record in file TEST. After TESTRPG1 reads a record, it calls TESTRPG2, which then reads a record in file TEST. TESTRPG2 calls TESTRPG1, which reads another record, and so on. Both programs use print device file QPRINT to generate a list of the records read.
When the SHARE attribute for both file TEST and file QPRINT is SHARE(*NO), the output generated appears as displayed in Figure 23.3. Each program reads all eight records because each program uses a unique ODP. If you change file TEST or override it to specify SHARE(*YES), the programs generate the lists displayed in Figure 23.4. Each program reads only four records, because the programs share the same ODP. Finally, if you also change or override the attribute of file QPRINT to be SHARE(*YES), the output appears as shown in Figure 23.5. Both programs share print file QPRINT and, while each program reads only four records, the output is combined in a single output file.

One common misconception is that using SHARE(*YES) alters the way in which the database manager performs record locking -- a conclusion you could easily reach if you confuse record locking with file locking. It is true that when you specify SHARE(*YES), file locking is handled differently than when you specify SHARE(*NO); when you specify SHARE(*YES), the first open establishes the open options. Thus, if the first open of a file with SHARE(*YES) uses option *ALL, every program using that file obtains a SHRUPD (Shared Update) lock on that file. This lock occurs even when a particular program normally opens the file with *INP open options.
Record locking, on the other hand, is not controlled by the open options, but by the RPG compiler. The compiler determines which locks are needed for any input operations in the program and creates the object code to make them happen during program execution. Thus, programs perform record locking on files with SHARE(*YES) the same way they perform record locking on files with SHARE(*NO). Let me stress that this fact alone does not prevent the problems you must address when you write multiple programs that work with files having SHARE(*YES) in an on-line update environment. But record locking, in and of itself, is not a serious concern.

The real hazard is that because SHARE(*YES) lets programs share the file pointer, programs can easily become confused about which record is actually being retrieved, updated, or output if you fail to write the programs so they recognize and manage the shared pointer. The following example illustrates this potential problem. PGMA first reads file TEST for update purposes. Then PGMA calls PGMB, which also reads file TEST for update. If PGMB ends before performing the update, the file pointer remains positioned at the record read by PGMB. If PGMA then performs an update, PGMA updates the values of the current record variables (from the first read in PGMA) into the record PGMB read, because that is where the file pointer is currently positioned. While you would never purposely code this badly, you might accidentally cause the same problem in your application if you fit program modules together without considering the current value of the SHARE attribute on the files.

The moral of the story is this: When calling programs that use the same file, always reposition the file pointer after the called program ends, unless you are specifically coding to take advantage of file pointer positioning within those applications.
Sharing Examples

The most popular use of the SHARE attribute is to open files at the menu level when users frequently enter and exit applications on that menu. Figure 23.6 illustrates a simple order-entry menu with five options, each of which represents a program that uses one or more of the described files. If SHARE(*NO) is defined for each file, then each time one of these programs is called, an ODP is created for each program file. If users frequently switch between menu options, they experience delays each time a file is opened. The coding example in Figure 23.7 provides a solution to this problem. First, the OVRDBF (Override with Database File) command specifies SHARE(*YES) for each file identified. Then, OPNDBF opens each file with the maximum open options required for the various applications. The overhead required to open the files affects the menu program only. When users select an option on the menu, the respective program need not open the file, and thus the programs are initiated more quickly. Remember, however, to plan carefully when using SHARE to open files, keeping in mind the above-mentioned guidelines about placing the file pointer.

The SHARE attribute also comes in handy when you write applications that provide on-line inquiries into related files. Figure 23.8 outlines an order-entry program that opens several files and that lets the end user call a customer inquiry program or item master inquiry program to look up specific customers or items. Either program uses a file already opened by the initial program. By including the statements in Figure 23.9 in a CL program that calls the order-entry program, you can ensure that the ODP for these files is shared, reducing the time needed to access the two inquiry programs.

There is no doubt that SHARE is a powerful attribute. Unfortunately, the power it provides can introduce errors (specifically, the wrong selection of records due to file pointer position) unless you understand it and use it carefully. SHARE(*YES) can shorten program initiation steps and can let programs share vital I/O feedback information. If you're using batch programs that typically open files, process the records, and then remain idle until the next night, SHARE(*YES) will buy you nothing. But if you're considering highly modular programming designs, SHARE(*YES) is a must. For more information about SHARE, see IBM's Programming: Data Base Guide (SC41-9659) and Programming: Control Language Reference (SC41-0030).
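To make the menu-level technique concrete, here is a minimal sketch of the kind of CL that might precede the menu display; the file names are placeholders, and this is not the book's Figure 23.7:

  OvrDbF  File(CustMast)  Share(*Yes)
  OvrDbF  File(OrdHdr)    Share(*Yes)
  OvrDbF  File(OrdDtl)    Share(*Yes)
  OpnDbF  File(CustMast)  Option(*All)
  OpnDbF  File(OrdHdr)    Option(*All)
  OpnDbF  File(OrdDtl)    Option(*All)
  /* ...display the menu and call the option programs... */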
Chapter 24 - CL Programming: You're Stylin' Now!

The key to creating readable, maintainable code is establishing and adhering to a set of standards about how the code should look. Standards give your programs a consistent appearance -- a style -- and create a comfortable environment for the person reading and maintaining the code. They also boost productivity. Programmers with a consistent style don't think about how to arrange code; they simply follow clearly defined coding standards, which become like second nature through habit. And programmers reading such code can directly interpret the program's actions without the distraction of bad style. Good coding style transcends any one language. It's a matter of professionalism, of doing your work to the best of your abilities and with pride.
Although most CL programs are short and to the point, a consistent programming style is as essential to CL as it is to any other language. When I started writing CL, I used the prompter to enter values for command parameters. Today, I still use the prompter for more complex commands or to prompt for valid values when I'm not sure what to specify. The prompter produces a standard of sorts. Every command begins in column 14, labels are to the left of the commands, and the editor wraps the parameters onto continuation lines like a word processor wraps words when you've reached the margin. While using the prompter is convenient, code generated this way can be extremely difficult to read and maintain. Let's look at CL program CVTOUTQCL (Figure 24.1), which converts the entries of an output queue listing into a database file. Another application can then read the database file and individually process each spool file (e.g., copy the contents of the spool file to a database file for saving or downloading to a PC). Without a program such as CVTOUTQCL, you would have to jot down the name of each output queue entry and enter each name into the CPYSPLF (Copy Spool File) command or any other command you use to process the entry. Now compare the code in Figure 24.1 to the version of CVTOUTQCL shown in Figure 24.2. The programs' styles are dramatically different. Figure 24.1's code is crowded and difficult to read, primarily because of the CL prompter's default layout. In addition, this style lacks elements such as helpful spacing, code alignment, and comments that help you break the code down into logical, readable chunks. Figure 24.2's code is much more readable and comprehensible. An informative program header relates the program's purpose and basic functions. The program also features more attractive code alignment, spacing that divides the code into distinct sections, indentation for nested DO-ENDDO groups, and mnemonic variable names. Let's take a closer look at the elements responsible for Figure 24.2's clarity and some coding guidelines you can use to produce sharp CL code with a consistent appearance.
Write a descriptive program header. If the first source statement in your CL program is the PGM statement, something's missing. All programs, including CL programs, need an introduction. To create a stylish CL program, first write a program header that describes the program's purpose and basic function. Figure 24.2's program header provides the basic information a programmer needs to become familiar with the program's purpose and function. An accurate introduction helps programmers who come after you feel more comfortable as they debug or enhance your code. The program header begins with the program's name, followed by the author's name and the date created. An essential piece of the program header is the 'program type,' which identifies the type of code that follows. CL program types include the CPP (command processing program), the VCP (validity checking program), the CPO (command prompt override program), the MENU (menu program), and the PROMPT (prompter). You may use other categories or different names to describe the types of CL programs. But whatever you call it, you should identify the type of program you are writing and label it appropriately in the header. Another important part of the introduction is a description of what the program does. State the program's purpose concisely, and, in the program summary, outline the basic program functions to familiarize the programmer with how the program works. You should detail the summary only in terms of what happens and what events occur (e.g., building a file or copying records). A good program header also includes a revision summary, featuring a list of revisions, the dates they were made, and the names of those who made them. If you don't have a standard CL program header, create a template of one in a source member called CLHEADER (or some other obvious name) and copy the member into each CL program. Fill in the current information for each program, and remember to maintain the information as part of the quality control checks you perform on production code. While an up-to-date program header is valuable, an outdated one can be misleading and harmful.
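If you need a starting point, a header template might look something like the following sketch; the layout and labels are only one possibility, not the book's actual CLHEADER member:

  /*======================================================*/
  /*  Program . . :  CVTOUTQCL                             */
  /*  Author  . . :  (programmer name)                     */
  /*  Created . . :  (date)                                */
  /*  Type  . . . :  CPP (command processing program)      */
  /*  Purpose . . :  Convert output queue entries to a     */
  /*                 database file                         */
  /*  Revisions . :  (date / programmer / description)     */
  /*======================================================*/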
Format your programs to aid understanding. Determining where to start each statement is one of the most basic coding decisions you can make. If you're used to prompting each CL statement, your first inclination would be to begin each one in column 14. While you should use prompting when necessary to enter proper parameter values, the resulting alignment of commands, keywords, and values creates code that is difficult, at best, to read and maintain. Over the years, I've collected several guidelines about where to place code and comments within CL programs.
For starters, begin all comments in column 1, and make comment lines a standard length. Beginning comments in column 1 gives you the maximum number of columns to type the comment. And establishing a consistent comment line length (i.e., the number of spaces between the beginning /* and the closing */) makes the program look neat and orderly. Comments should also stand out in the source. In Figure 24.2, a blank line precedes and follows each comment line to make it more visible. Notice that comments describing a process are boxed in by lines of special characters (I use the = character). Nobody wants to read code in which comments outnumber program statements. But descriptive (not cryptic) comments that define and describe the program's basic sections and functions are helpful road signs.

A second guideline is to begin all label names in column 1 on a line with no other code (or at the appropriate nesting level, if located within an IF-THEN or IF-THEN-ELSE construct). Labels in CL programs serve as targets of GOTO statements. The AS/400 implementation of CL requires you to use GOTO statements to perform certain tasks that other languages can accomplish through a subroutine or a DO WHILE construct. (CL/free, a precompiler for CL that supports subroutines and other language enhancements, lets you create more-structured CL programs.) Because labels provide such a basic function, they should clearly reveal entry points into specific statements. Starting a label name in column 1 and placing it alone on the line helps separate it from subsequent code. Notice in Figure 24.2 how you can quickly scan down column 1 and locate the labels (e.g., GLOBAL_ERR, CLEAN_UP, RSND_BGN). However, notice the placement of labels RSND_RPT and RSND_END (at B and C). Instead of beginning these two labels in column 1, I indented them to the expected nesting level to promote comprehension of the overall process. The code following the indented labels remains indented to help the labels stand out and to make the IF-THEN construct more readable.

To offset command statements from comments and labels, start commands in column 3. Beginning commands in column 3 -- rather than the prompter's default of column 14 -- gives you much more room to enter keywords and values. It also gives you more room to arrange your code. The exception to this guideline concerns using the DO command as part of an IF-THEN or IF-THEN-ELSE construct. To help identify what code is executed in a DO group, I recommend that you indent the code in each DO group. A simple indented DO-ENDDO group might appear as follows:
IF ('condition') DO
   CL statement
   CL statement
ENDDO

A multilevel set of DO-ENDDO groups, including an ELSE statement, might appear like this:
IF ('condition') DO
   IF ('condition') DO
      CL statement
      CL statement
      IF ('condition') DO
         CL statement
      ENDDO
   ENDDO
   ELSE DO
      CL statement
      CL statement
   ENDDO
ENDDO

Notice that the IF and ENDDO statements -- and thus the logic -- are clearly visible.
Simplify and align command parameters. When you use the prompter to enter values for command parameters, Source Entry Utility (SEU) automatically places the selected keywords and values into the code. Several simple guidelines can greatly enhance the way commands, keywords, and values appear in your CL programs. First, omit the following common keywords when using the associated commands:
Command     Keyword
DCL         VAR, TYPE, LEN
CHGVAR      VAR, VALUE
IF          COND, THEN
ELSE        CMD
GOTO        CMDLBL
MONMSG      MSGID
The meanings of the parameter values are always obvious by position. Thus, the keywords just clutter up your code. The following statements omit unneeded keywords:
DCL    &outq    *CHAR  10
CHGVAR &outq    (%SST(&i_ql_outq 1 10))
IF     (&flag)  GOTO FINISH
GOTO   RSND_RPT

By starting commands in column 3 and following the indentation guidelines, you can type most commands on one line. But when you must continue the command to another line, you have several alternatives, as Figure 24.3 shows. The first alternative is to use the + continuation symbol, indent a couple of spaces on the next line, and continue entering command keywords and values. This is the simplest way to continue a command but the most difficult to read. The second alternative is to place as many keywords and values as possible on the first line and arrange the continuation lines so the additional keywords and values appear as columns under those on the first line. Although this option may be the easiest to read, creating the alignment is a major headache. The third alternative is simply to place each keyword and associated value on a separate line. This method is both simple to implement and easy to read. Thus, a second guideline is to place the entire command on one line when possible; otherwise, place the command and first keyword on the first line and each subsequent keyword on a separate line, using the + continuation symbol.

A third guideline is to align the command and its parameters in columns when you repeat the same single-line command statement. This rule of thumb applies when you have a group of statements involving the same command. The DCL statement is a good example. Normally, one or more groups of DCL statements appear at the beginning of each CL program to define variables the program uses. Figure 24.2 shows how placing the DCL statement and parameter values in columns creates more readable code. This alignment rule also applies to multiple CHGVAR (Change Variable) commands.

While you can apply the above rules to most commands, the IF command may require special alignment consideration. If the IF statement won't fit on a single line, use the DO-ENDDO construct. For example, the IF statement
IF (&fl2exist) CRTDUPOBJ OBJ(QACVTOTQ)                 +
                 FROMLIB(KWMLIB) OBJTYPE(*FILE)        +
                 TOLIB(&outlib) NEWOBJ(&outfile)
should be written
IF (&fl2exist) DO
   CRTDUPOBJ OBJ(QACVTOTQ)        +
               FROMLIB(KWMLIB)    +
               OBJTYPE(*FILE)     +
               TOLIB(&outlib)     +
               NEWOBJ(&outfile)
ENDDO
This construction implements guidelines discussed earlier and presents highly accessible code.
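To round out the alignment guideline, here is a small illustrative group of declarations laid out in columns. The variable names echo this chapter's examples; the exact column positions are a matter of shop standards, not a requirement:

DCL     &i_ql_outq    *CHAR    20
DCL     &i_ql_outf    *CHAR    20
DCL     &outq         *CHAR    10
DCL     &outqlib      *CHAR    10
DCL     &outfile      *CHAR    10
DCL     &outlib       *CHAR    10
DCL     &flag         *LGL      1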
Align, shorten, and simplify for neatness. One of the most common symptoms of poor CL style is a general overcrowding of code. Such code moves from one statement to the next without any thought to organization, spacing, or neatness. The result looks more like a blob of commands than a flowing stream of clear, orderly statements. To save yourself and others the eyestrain of trying to read a jumble of code, follow these suggestions for clean, crisp CL programs:

• Align all + continuation symbols so they stand out in the source code. In Figure 24.2, I've aligned all + continuation symbols in column 69. Not only does alignment give your programs a uniform appearance, but it also clearly identifies commands that are continued on several lines. I use the + symbol instead of the - for continuation because the + better controls the number of blanks that appear when continuing a string of characters. Both symbols include as part of the string blanks that immediately precede or follow the symbol on the same line. But when continuing a string onto the next line in your source, the + symbol ignores blanks that precede the first nonblank character in the next record; the - continuation symbol includes them.

• Use blank lines liberally to make code more accessible. Spacing between blocks of code and between comment lines and code can really help programmers identify sections of code, distinguish one command from another, and generally 'get into' the program. Blank lines don't cost you processing time, so feel free to space, space, space.

• Use the shorthand symbols ||, |<, and |> instead of the corresponding *CAT, *TCAT, and *BCAT operators. Concatenation can be messy when you use a mixture of strings, variables, and the *CAT, *BCAT, and *TCAT operators. The shorthand symbols shorten and simplify expressions that use concatenation, and they clearly identify breaks between strings and variables.
Highlight variables with distinct, lowercase names. An essential part of CL style concerns how you use variables in your programs. I don't have any hard-and-fast rules, but I do have some suggestions. First, consider the names you assign variables. Give program parameters distinct names that identify them as parameters. In Figure 24.2, the two parameters processed by the CPP are &i_ql_outq and &i_ql_outf. The i_ in the names tells me both parameters are input-only (io_ would have indicated a return variable). The ql_ tells me the parameters' values are qualified names (i.e., they include the library name). When program CVTOUTQCL calls program CVTOUTQR, it uses parameter &io_rtncode (A in Figure 24.2). The io_ prefix indicates the parameter is both an input and an output variable, and the rest of the name tells me program CVTOUTQR will return a value to the calling program. A second guideline concerns variables that contain more than one value (e.g., a qualified name or the contents of a data area). You should extract the values into separate variables before using the values in your program. In Figure 24.2, input parameter &i_ql_outq is the qualified name of the output queue. Later in the program, you find the following two statements:
CHGVAR &outq     (%SST(&i_ql_outq  1 10))
CHGVAR &outqlib  (%SST(&i_ql_outq 11 10))
These two statements divide the qualified name into separate variables. The separate variables let you code a statement such as
CHKOBJ OBJ(&outqlib/&outq) OBJTYPE(*OUTQ)

instead of
CHKOBJ OBJ(%SST(&i_ql_outq 11 10)/                     +
           %SST(&i_ql_outq 1 10)) OBJTYPE(*OUTQ)
You should also define variables to represent frequently used literal values. For example, define values such as 'X', ' ' (a blank), and 0 as variables with obvious names (e.g., &x, &blank, and &zero), and then use the variable names in tests instead of repeatedly coding the constants as part of the test condition. This guideline lets you define all of a program's constants in one set of DCL statements, which you can easily create and maintain at the start of the program source. In addition, notice the difference between the following statements:
IF (&value = ' ') DO

IF (&value = &blank) DO
You can more easily digest the second statement because it explicitly tells you what value will result in execution of the DO statement. You may find that defining frequently used variables not only improves productivity, but also promotes consistency as programmers simply copy the variable DCL statements into new source members. A final guideline concerning variables is to type variable names in lowercase. The lowercase variable names contrast nicely with the uppercase commands/parameters. Although typing the names in lowercase may not be easy using SEU, the contrast in type will greatly improve the program's readability. Compare Figure 24.1 with Figure 24.2 again. Which code would you like to encounter the next time you examine a CL program for the first time? I hope you can use these guidelines to create a consistent CL style from which everyone in your shop can benefit. Remember: When you're trying to read a program you didn't write, appearance can be everything.
Sidebar: CL Coding Suggestions
Sidebar: Command, RPG program, and physical file associated with CL program CVTOUTQCL, shown in Figure 24.2.
Chapter 25 - CL Programming: The Classics Since the inception of CL on the S/38 in the early eighties, programmers have been collecting their favorite and most useful CL techniques and programs. Over time, some of these have become classics. In this chapter, we'll visit three timeless programs and five techniques essential to writing classic CL. The five techniques:
• Error/exception message handling
• String manipulation
• Outfile processing
• IF-THEN-ELSE and DO groups
• OPNQRYF (Open Query File) command processing
When I consider the CL programs I would label as classic, I find these techniques being employed to some degree. You may recognize the classic programs we'll visit as similar to something you have created. They provide functions almost always needed and welcomed by MIS personnel at any AS/400 installation. If you are new to the AS/400, I guarantee you will get excited about CL programming after you experience the power of these tools. And if you are an old hand at CL, you may have missed one of these classics. These programs are useful and the techniques valid on the S/38 as well, although some of the details will be different (e.g., the syntax of qualified object names and some outfile file and field names).
Classic Program #1: Changing Ownership If you ever face the problem of cleaning up ownership of objects on your system, you will find the CHGOBJOWN (Change Object Owner) command quite useful. You will also quickly discover that this command works for only one object at a time. Let's see . . . that means you must identify the objects that will have a new owner and then enter the CHGOBJOWN command for each of those objects. Or is there another way? When the solution includes the repetitious use of a CL command, you can almost always use a CL program to improve or automate that solution. To that end, try this first classic CL program, CHGOWNCPP. CHGOWNCPP demonstrates three of the fundamental CL programming techniques: message monitoring, string handling, and outfile processing. Let's take a quick look at how the program logic works and then examine how each technique is implemented. Program Logic. When you execute the command CHGOWN (Figure 25.1a), it invokes the command-processing program CHGOWNCPP (Figure 25.1b). A program-level message monitor traps any unexpected function check messages caused by unmonitored errors during program
execution. If it encounters an unexpected function check message, the MONMSG (Monitor Message) command directs the program to continue at the RSND_LOOP label. The CHKOBJ (Check Object) command verifies that the value in &NEWOWN is an actual user profile on the system. If the CHKOBJ command can't find the user profile on the system, a MONMSG command traps CPF9801. If this happens, an escape message is then sent to the calling program using the SNDPGMMSG command, and the CPP terminates. The DSPOBJD (Display Object Description) command generates the outfile QTEMP/CHGOWN based on the values for variables &OBJ and &OBJTYPE received from command CHGOWN. The program then processes the outfile until message CPF0864 ('End of file') is issued. For each record in the outfile, the CPP executes a CHGOBJOWN command to give ownership to the user profile specified in variable &NEWOWN. The variables &ODLBNM and &ODOBNM contain the object's library and object name, obtained from fields in the outfile file format QLIDOBJD. The value in variable &CUROWNAUT specifies whether the old owner's authority should be revoked or retained. When the CHGOBJOWN command is successful, the program sends a completion message to the calling program's message queue and reads the next record from the file. If the CHGOBJOWN command fails, the error message causes a function check, and the program-level message monitor passes control to the RSND_LOOP label. (Note: The CUROWNAUT parameter does not exist on the S/38 CHGOBJOWN command, so you would need to eliminate it, along with variable &CUROWNAUT in CHGOWNCPP.) After all records have been read, the next RCVF command generates error message CPF0864, and the command-level message monitor causes the program to branch to the FINISH label. The RSND_LOOP label is encountered only if an unexpected error occurs. This section of the program is a loop to receive the unexpected error messages and resend them to the calling program's message queue. The Technique Message Monitoring. The first fundamental technique we will examine is error/exception message handling. Monitoring for system messages within a CL program is a technique that both traps error/exception conditions and directs the execution of the program based on the error conditions detected. The CL MONMSG command provides this function. Program CHGOWNCPP uses both command-level and program-level message monitoring. A command-level message monitor lets you monitor for specific messages that might occur during the execution of a single command. For instance, in program CHGOWNCPP, MONMSG CPF9801 EXEC(DO) immediately follows the CHKOBJ command to monitor specifically for message CPF9801 ('Object not found'). If CPF9801 is issued as a result of the CHKOBJ command, the message monitor traps the message and invokes the EXEC portion of the MONMSG command -- in this instance, a DO command. Another example in the same program is the MONMSG command that comes immediately after the RCVF statement. If the RCVF command causes error message CPF0864, the message monitor traps the error and invokes the EXEC portion of that MONMSG -- in this instance, GOTO FINISH. What happens if an error occurs on a command and there is no command-level MONMSG to trap the error? If there is also no program-level MONMSG for that specific error message, the unexpected error causes function check message CPF9999, and if no program-level MONMSG for CPF9999 exists, the program ends in error. 
A program-level message monitor is a MONMSG command placed immediately after the last declare statement in a CL program. In our program example, there is a program-level MONMSG CPF9999 EXEC(GOTO RSND_LOOP). This MONMSG handles any unexpected error since all errors that are unmonitored at the command level eventually cause a function check. For instance, if the CHGOBJOWN command fails, an error message is issued that then generates function check message CPF9999. The program-level MONMSG traps this function check, and the EXEC command instructs the program to resume at label RSND_LOOP and process those error messages. For more information on monitoring messages, see IBM's AS/400 manual Programming: Control Language Programmer's Guide (SC41-8077), or Appendix E of the AS/400 manual Programming: Control Language Reference (SC41-0030).

String Handling. Another fundamental technique program CHGOWNCPP employs is string manipulation. The program demonstrates two forms of string handling -- substring manipulation and concatenation. The first is the
%SST (Substring) function. (%SST is a valid abbreviated form of the function %SUBSTRING -- both perform the same job.) The %SST function, which returns to the program a portion of a character string, has three arguments: the name of the variable containing the string, the starting position, and the number of characters in the string to extract. For instance, when the command CHGOWN passes the argument &OBJ to the CL program, the variable exists as a 20-character string containing the object name in positions 1 through 10 and the library name in positions 11 through 20. The CL program uses the %SST function in the CHGVAR (Change Variable) command (A in Figure 25.1b) to extract the library name and object name from the &OBJ variable into the &OBJNAM and &OBJLIB variables. The second form of string handling in this program is concatenation. The control language interface supports three distinct, built-in concatenation functions:
• *CAT (||): Concatenate -- concatenates two string variables end to end
• *TCAT (|<): Trim and concatenate -- concatenates two strings after trimming all blanks off the end of the first string
• *BCAT (|>): Blank insert and concatenate -- concatenates two strings after trimming all blanks off the end of the first string and then adding a single blank character to the end of the first string
To see how these functions work, let's apply them to these variables (where /b designates a blank):
&VAR1 *CHAR 10 VALUE('John/b/b/b/b/b/b')
&VAR2 *CHAR 10 VALUE('Doe/b/b/b/b/b/b/b')

The results of each operation are as follows:

&VAR1 || &VAR2 = John/b/b/b/b/b/bDoe
&VAR1 |< &VAR2 = JohnDoe
&VAR1 |> &VAR2 = John Doe

The SNDPGMMSG command (B in Figure 25.1b) uses concatenation to build a string for the MSGDTA (Message Data) parameter. Notice that you can use a combination of constants and program variables to construct a single string during execution. The only limitation is that variables used with concatenation functions must be character variables because they will be treated as strings for these functions. You must convert any numeric variables to character variables before you can use them in concatenation. If the variables &ODLBNM, &ODOBNM, and &NEWOWN in the SNDPGMMSG command contain the values MYLIB, MYPROGRAM, and USERNAME, respectively, the SNDPGMMSG statement generates the message 'Ownership of object MYLIB/MYPROGRAM granted to user USERNAME.'

Outfile Processing. The final fundamental technique demonstrated in program CHGOWNCPP is how to use an outfile. You can direct certain OS/400 commands to send output to a database file instead of to a display or printer. In this program, the DSPOBJD command generates the outfile QTEMP/CHGOWN. This file contains the full description of any objects selected. The file declared in the DCLF (Declare File) command is QADSPOBJ, the system-supplied file in library QSYS that serves as the externally defined model for the outfile generated by the DSPOBJD command. (Note: To get a list of the model outfiles provided by the system, you can execute the command 'DSPOBJD QSYS/QA* *FILE'.) Because file QADSPOBJ is declared in this program, the program will include the externally defined field descriptions when you compile it, allowing it to recognize and use those field names during execution. The next step in using an outfile in this program is creating the contents of the outfile using the DSPOBJD command. DSPOBJD uses the object name and type passed from command CHGOWN to create outfile QTEMP/CHGOWN. The outfile name is arbitrary, so I make a practice of giving an outfile the same name as the command or program that creates it. The program then executes the OVRDBF (Override with Database File) command to specify that the file QTEMP/CHGOWN is to be accessed whenever a reference is made to QADSPOBJ. This works because
QTEMP/CHGOWN is created with the same record format and fields as QADSPOBJ. Now when the program reads record format QLIDOBJD in file QADSPOBJ, the actual file it reads will be QTEMP/CHGOWN. These three fundamental CL techniques give you a good start in building your CL library, and the 'Change Owner of Object(s)' tool is definitely handy. You may have discovered the CHGLIBOWN (Change Library Owner) tool in library QUSRTOOL. This IBM-provided tool offers a similar function.
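The figures containing the actual CHGOWNCPP source aren't reproduced here, so the following is only a rough sketch of the skeleton the text describes -- the program-level and command-level MONMSGs, the %SST extraction, the DSPOBJD outfile, and the RCVF read loop. Parameter validation details and the resend loop are simplified, the ODOBTP field name is assumed to be QADSPOBJ's object-type field, and CPF9898 is just one common choice of message ID for impromptu messages:

PGM        PARM(&obj &objtype &newown &curownaut)

  DCLF       FILE(QADSPOBJ)

  DCL        &obj         *CHAR 20
  DCL        &objtype     *CHAR 10
  DCL        &newown      *CHAR 10
  DCL        &curownaut   *CHAR 10
  DCL        &objnam      *CHAR 10
  DCL        &objlib      *CHAR 10

  MONMSG     MSGID(CPF9999) EXEC(GOTO RSND_LOOP)  /* program level    */

  CHGVAR     &objnam  (%SST(&obj  1 10))          /* object name      */
  CHGVAR     &objlib  (%SST(&obj 11 10))          /* library name     */

  CHKOBJ     OBJ(&newown) OBJTYPE(*USRPRF)
  MONMSG     MSGID(CPF9801) EXEC(DO)              /* profile missing  */
     SNDPGMMSG  MSGID(CPF9898) MSGF(QCPFMSG)                        +
                MSGDTA('User profile' |> &newown |> 'not found')    +
                MSGTYPE(*ESCAPE)
  ENDDO

  DSPOBJD    OBJ(&objlib/&objnam) OBJTYPE(&objtype)                 +
             OUTPUT(*OUTFILE) OUTFILE(QTEMP/CHGOWN)
  OVRDBF     FILE(QADSPOBJ) TOFILE(QTEMP/CHGOWN)

READ_LOOP:
  RCVF
  MONMSG     MSGID(CPF0864) EXEC(GOTO FINISH)     /* end of file      */
  CHGOBJOWN  OBJ(&odlbnm/&odobnm) OBJTYPE(&odobtp)                  +
             NEWOWN(&newown) CUROWNAUT(&curownaut)
  SNDPGMMSG  MSGID(CPF9898) MSGF(QCPFMSG)                           +
             MSGDTA('Ownership of object' |> &odlbnm |< '/' |<      +
                    &odobnm |> 'granted to user' |> &newown)        +
             MSGTYPE(*COMP)
  GOTO       READ_LOOP

RSND_LOOP:                       /* real program resends trapped     */
  RETURN                         /* error messages to the caller     */

FINISH:
  RETURN
ENDPGM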
Classic Program #2: Delete Database Relationships The second utility is a real timesaver: the 'Delete Database Relationships' tool provided by command DLTDBR and CL program DLTDBRCPP. DLTDBR uses the same three fundamental techniques described above and adds a fourth: the IF-THEN clause. Let's take a quick look at the program logic and then discuss the IF-THEN technique.

Program Logic. When you execute command DLTDBR (Figure 25.2a), the command-processing program DLTDBRCPP (Figure 25.2b) is invoked. As in CHGOWNCPP, a program-level MONMSG handles unexpected errors. The DSPDBR (Display Database Relations) command generates an outfile based on the file you specify when you execute the command DLTDBR. The CPP then processes this outfile until message CPF0864 ('End of File') is issued. For each record in the outfile, the program performs two tests as decision mechanisms for program actions. Both tests check whether or not the record read is a reference to a physical file (&WHRTYP = &PFTYPE). If the file is not a physical file, the program takes no action for that record; it just reads the next record. The first test (A in Figure 25.2b) determines whether dependencies exist for this physical file. &WHNO represents the total number of dependencies. If &WHNO is equal to zero, there are no dependencies for this file, and the program sends a message (using the SNDPGMMSG command) to that effect. The second test (B) checks whether &WHNO is greater than zero. If it is, the record represents a dependent file, and you can delete the file name specified in variables &WHRELI (dependent file library) and &WHREFI (dependent file name) with the DLTF (Delete File) command. When the DLTF is successful, the program sends a completion message to the calling program's message queue. The GOTO RCD_LOOP command sends control to the RCD_LOOP label to read the next record. If the DLTF command fails, the error message causes a function check, and the program-level message monitor directs the program to resume at the RSND_LOOP label. After all records have been read, the RCVF command generates error message CPF0864, and the command-level message monitor causes the program to branch to the FINISH label, where the program ends. As with the first program, you will encounter the RSND_LOOP label only if an unexpected error occurs.

The Technique IF-THEN-ELSE and DO Groups. The IF-THEN clause lets you add decision support to your CL coding via the IF command, which has two parameters: COND (the conditional statement) and THEN (the action to be taken when the condition is satisfied). A simple IF-THEN statement would be
IF COND(&CODE = 'A') THEN(CHGVAR VAR(&CODE) VALUE('B'))

In this example, if the value of variable &CODE is A, the CHGVAR command changes that value to B. To create code that is easier to read and interpret, it is usually best to omit the use of the keywords COND and THEN. The above example is much clearer when written as
IF (&CODE = 'A') CHGVAR VAR(&CODE) VALUE('B')

Conditions can also take more complex forms, such as
IF ((&CODE = 'A' *OR &CODE = 'B') *AND      +
    (&NUMBER = 1)) GOTO CODEA
This example demonstrates several conditional tests. The *OR connective requires at least one of the alternatives -- (&CODE = 'A') or (&CODE = 'B') -- to be true to satisfy the first condition. The *AND connective then requires that (&NUMBER = 1) also be true before the THEN clause can be executed. If both conditions are met, the program executes the GOTO command. (For more information about how to use *AND and *OR connectives, see Chapter 2 of the AS/400 manual Programming: Control Language Programmer's Guide.) The ELSE command provides additional function to the IF command. Examine these statements:
IF (&CODE = 'A') CALL PGMA
ELSE CALL PGMB

The program executes the ELSE command if the preceding condition is false. You can also use the IF command to process a DO group. Examine the following statements:
IF (&CODE = 'A') DO
   CALL PGMA
   CALL PGMB
   CALL PGMC
ENDDO

If the condition in the IF command is true, the program executes the DO group until it encounters an ENDDO. The DO command also works with the ELSE command, as this example shows:
IF (&CODE = 'A') DO
   CALL PGMA
   CALL PGMB
ENDDO
ELSE DO
   CALL PGMD
   CALL PGME
   CALL PGMF
ENDDO

For more information about IF and ELSE commands, see the AS/400 manual Programming: Control Language Reference (SC41-0030) or the Programming: Control Language Programmer's Guide.
Classic Program #3: List Program-File References The last of the classic CL programs and fundamental techniques, the 'Display Program References' tool, brings us face-to-face with one of the most powerful influences on CL programming -- the one and only OPNQRYF (Open Query File) command. As this program demonstrates, this classic technique is one of the richest and most powerful tools available through CL. Let's take a quick look at the program logic for this tool, provided via the LSTPGMREF (List Program References) command and the LSTPRCPP CL program. Then we can take a close look at the OPNQRYF command. Program Logic. When you execute command LSTPGMREF (Figure 25.3a), the command-processing program LSTPRCPP (Figure 25.3b) is invoked. LSTPRCPP uses the DSPPGMREF (Display Program References) command to generate an outfile based on the value you entered for the PGM parameter. The outfile LSTPGMREF then contains information about the specified programs and the objects they reference. Notice that this program does not use the DCLF statement. There is no need to declare the file format because the program will not access the file directly. You will also notice that the program uses the OVRDBF command, but the SHARE(*YES) parameter has been added. Because a CL program cannot send output to a printer, LSTPRCPP must call a high-level language (HLL) program to print the output. The OVRDBF is required so the HLL program, which references file QADSPPGM, can find outfile QTEMP/LSTPGMREF. The override must specify SHARE(*YES) to ensure that the HLL program will use the Open Data Path (ODP) created by the OPNQRYF
command instead of creating a new ODP and ignoring the work the OPNQRYF has performed. Files used with OPNQRYF require SHARE(*YES). After the DSPPGMREF command is executed, file LSTPGMREF contains records for program-file references as well as program references to other types of objects. The next step is to build an OPNQRYF selection statement in variable &QRYSLT that selects only *FILE object-type references and optionally selects the particular files named in the FILE parameter. LSTPRCPP uses IF tests to construct the selection statement. Then the CPP determines the sequence of records desired (based on the value entered for the OPT parameter in the LSTPGMREF command) and uses the OPNQRYF command to select the records and create access paths that will allow the HLL program to read the records in the desired sequence. The CL program then calls HLL program LSTPRRPG to print the selected records (I haven't provided code here -- you will need to build your own version based on your desired output format). The outfile will appear to contain only the selected records, and they will appear to be sorted in the desired sequence.

The Technique The OPNQRYF Command. Without doubt, one of the more powerful commands available to CL programmers is the OPNQRYF command. OPNQRYF uses the same system database query interface SQL uses on the AS/400. The command provides many functions, including selecting records and establishing keyed access paths without using an actual logical file or DDS. These two basic functions are the bread-and-butter classic techniques demonstrated in program LSTPRCPP. Record selection is accomplished with OPNQRYF's QRYSLT parameter. If you know the exact record selection criteria when you write the program, filling in the QRYSLT parameter is easy, and the selection string will be compiled with the program. But the real strength of OPNQRYF's record selection capability is that you can construct the QRYSLT parameter at runtime to match the particular user requirements specified during execution. Program LSTPRCPP demonstrates both the compile-time and runtime capabilities of OPNQRYF. When you write program LSTPRCPP, the requirement to include only references to physical files is a given. Therefore, you can use the statement CHGVAR VAR(&QRYSLT) VALUE('WHOBJT = "F"') to initially provide a value for &QRYSLT to satisfy that requirement. The &FILE value is unknown until execution time, so the code must allow this selection criterion to be specified dynamically. First, what are the possible values for the FILE parameter on command LSTPGMREF?
• You may specify a value of *ALL. If you do, you should not add any selection criteria to the QRYSLT parameter. The &QRYSLT value would be

  'WHOBJT = "F"'

• You may specify a generic value, such as IC* or AP??F*. If you enter a generic value, the CL program must determine that &FILE contains a generic name and then use OPNQRYF's %WLDCRD (wildcard) function to build the appropriate QRYSLT selection criteria. The %WLDCRD function lets you select a group of similarly named objects by specifying an argument containing a wildcard (e.g., * or ?). For instance, if you wanted to select all files beginning with the characters IC, you would use the argument IC*. An example of the &QRYSLT variable for this generic selection would be

  'WHOBJT = "F" *AND WHFNAM = %WLDCRD("IC*")'

• You may specify an actual file name. If you do, the CL program must first determine that fact and then simply use the compare function in OPNQRYF to build the value for the QRYSLT parameter. An example for this &QRYSLT variable would be

  'WHOBJT = "F" *AND WHFNAM = "FILE_NAME"'

Examining the program, you will see that it performs a series of tests on the variable &FILE to determine how to build the QRYSLT parameter. If *ALL is the value for &FILE, all other IF tests are bypassed, and the program continues. If the program QCLSCAN finds the character * in the string &FILE, it uses the %WLDCRD function to build the appropriate QRYSLT parameter. If the program does not find *ALL and does not find a * in the name, the value of &FILE is assumed to represent an actual file name, and the program compares the value of &FILE to the
field WHFNAM for record selection. Obviously, the power of the QRYSLT parameter is in the hands of those who can successfully build the selection value based on execution-time selections. The second basic bread-and-butter technique is using OPNQRYF to build a key sequence without requiring additional DDS. Program LSTPRCPP tests the value of &OPT to determine whether the requester wants the records listed in *FILE (file library/file name) or *PGM (program library/program name) sequence. The appropriate OPNQRYF statement is executed based on the result of these tests (see A in Figure 25.3b). When &OPT is equal to &FILESEQ (which was declared with the value F), the OPNQRYF statement sequences the file using the field order of WHLNAM (file library), WHFNAM (file name), WHLIB (program library), WHPNAM (program name). When &OPT equals &PGMSEQ (declared with the value P), the key fields are in the order WHLIB, WHPNAM, WHLNAM, WHFNAM. No DDS is required. The HLL program called to process the opened file can provide internal level breaks based on the option selected. For more information concerning the use of the OPNQRYF command with database files, refer to IBM's Programming: Control Language Reference or Programming: Data Base Guide (SC41-9659). Classic CL programs and techniques are a part of the S/38 and now AS/400 heritage, but they're not simply oldies to be looked at and forgotten. Studying these programs and mastering these techniques will help you hone your skills and write some classic CL code of your own.
Chapter 26 - Processing Database Files with CL Once you've learned to write basic CL programs, you'll probably try to find more ways to use CL as part of your iSeries applications. In contrast to operations languages such as a mainframe's Job Control Language (JCL), which serves primarily to control steps, sorts, and parameters in a job stream, CL offers more. CL is more procedural, supports both database file (read-only) and display file (both read and write) processing, and lets you extend the operating-system command set with your own user-written commands. In this article, we examine one of those fundamental differences of CL: its ability to process database files. You'll learn how to declare a file, extract the field definitions from a file, read a file sequentially, and position a file by key to read a specific record. With this overview, you should be able to begin processing database files in your next CL program.
Why Use CL to Process Database Files? Before we talk about how to process database files in CL, let's address the question you're probably asking yourself: 'Why would I want to read records in CL instead of in an HLL program?' In most cases, you probably wouldn't. But sometimes, such as when you want to use data from a database file as a substitute value in a CL command, reading records in CL is a sensible programming solution. Say you want to perform a DspObjD (Display Object Description) command to an output file and then read the records from that output file and process each object using another CL command, such as DspObjAut (Display Object Authority) or MovObj (Move Object). Because executing a CL command is much easier and clearer from a CL program than from an HLL program, you'd probably prefer to write a single CL program that can handle the entire task. We'll show you just such a program a little later, after going over the basics of file processing in CL.
I DCLare! Perhaps the most crucial point in understanding how CL programs process database files is knowing when you need to declare a file in the program. The rule is simple: If your CL program uses the RcvF (Receive File) command to read a file, you must use the DclF (Declare File) command to declare that file to your program. DclF tells the compiler to retrieve the file and field descriptions during compilation and make the field definitions available to the program. The command has only one required parameter: the file name. To declare a file, you need only code in your program either
DclF File(YourFile)

or

DclF File(YourLib/YourFile)

When using the DclF command, you must remember three implementation rules. First, you can declare only one file - either a database file or a display file - in any CL program. This doesn't mean your program can't operate on other files - for example, using the CpyF (Copy File), OvrDbF (Override with Database File), or OpnQryF (Open Query File) command. It can. However, you can use the RcvF command to process only the file named in the DclF statement. Second, the DclF statement must come after the Pgm (Program) command in your program and must precede all executable commands (the Pgm and Dcl, or Declare CL Variable, commands are not executable). The third rule is that the declared file must exist when you compile the CL program. If you don't qualify the file name, the compiler must be able to find the file in the current library list during compilation.
Extracting Field Definitions

When you declare a file to a CL program, the program can access the fields associated with that file. Fields in a declared file automatically become available to the program as CL variables - there's no need to declare the variables separately. When the file is externally described, the compiler uses the external record-format definition associated with the file object to identify each field and its data type and length. Figure 1 shows the DDS for sample file TestPF. To declare this file in a program, you code

DclF TestPF

The system then makes the following variables available to the program:

Variable    Type
&Code       *Char 1
&Number     *Dec 5,0
&Field      *Char 30
Your program can then use these variables despite the fact that they're not explicitly declared. For instance, you could include in the program the statements
If      ((&Code *Eq 'A') *And     +
         (&Number *GT 10))        +
        ChgVar &Code ('B')
Notice that when you refer to the field in the program, you must prefix the field name with the ampersand character (&). All CL variables, including those implicitly defined using the DclF command and the file field definitions, require the & prefix when referenced in a program. What about program-described files - that is, files with no external data definition? Suppose you create the following file using the CrtPF (Create Physical File) command
CrtPF File(DiskFile) RcdLen(258)

and then you declare file DiskFile in your CL program. As it does with externally defined files, the CL compiler automatically provides access to program-described files. Because there's no externally defined record format, however, the compiler recognizes each record in the file as consisting of a single field. That field is always named &FileName, where FileName is the name of the file. Therefore, if you code
DclF DiskFile

your CL program recognizes one field, &DiskFile, with a length equal to DiskFile's record length. You can then extract the subfields with which you need to work. In CL, you extract the fields using the built-in function %Sst (or %Substring). The statements
ChgVar &Field1 (%Sst(&DiskFile  1 10))
ChgVar &Field2 (%Sst(&DiskFile 11 25))
ChgVar &Field3 (%Sst(&DiskFile 50  1))

extract three subfields from &DiskFile's single field. You'll need to remember two rules when using program-described files. First, you must extract the subfields every time you read a record from the file. Unlike RPG, CL has no global

When the MonMsg (Monitor Message) command traps this message, control skips to ReadEnd, thus ending the loop. Unlike HLLs, CL doesn't let you reposition the file for additional processing after the program receives an end-of-file message. Although you can execute an OvrDbF command containing a Position parameter after your program
receives an end-of-file message, any ensuing RcvF command simply elicits another end-of-file message. Two possible workarounds to this potential problem exist, but each has its restriction. You can use the first workaround if, and only if, you can ensure that the data in the file will remain static for the duration of the read cycles. The technique involves use of the RtvMbrD (Retrieve Member Description) command. Using this command's NbrCurRcd (CL variable for NBRCURRCD) parameter, you can retrieve into a program variable the number of records currently in the file. Then, in your loop to read records, you can use another variable to count the number of records read, comparing it with the number of records currently in the file. When the two numbers are equal, the program has read the last record in the file. Although the program has read the last record, the end-of-file condition is not yet set. The system sets this condition and issues the CPF0864 message indicating end-of-file only after attempting to read a record beyond the last record. Therefore, this technique gives you a way to avoid the end-of-file condition. You can then use the PosDbF (Position Database File) command to set the file cursor back to the beginning of the file. Simply specify *Start for the Position parameter, and you can read the file again! Remember, use this technique only when you can ensure that the data will in no way change while you're reading the file. The second circumvention is perhaps even trickier because it requires a little application design planning. Consider a simple CL program that does nothing more than perform a loop that reads all the records in a database file and exits when the end-of-file condition occurs (i.e., when the system issues message CPF0864). If you replace the statement
MonMsg (CPF0864) Exec(GoTo End)

with
MonMsg  (CPF0864) Exec(Do)
  If      (&Stop *Eq 'Y') GoTo End
  ChgVar  &Stop ('Y')
  TfrCtl  Pgm(YourPgm) Parm(&Stop)
EndDo
where YourPgm is the name of the program containing the command, the system starts the program over again, thereby reading the file again. Notice that with this technique, you must add code to the program to prevent an infinite loop. In addition to the changes shown above, the program should accept the &Stop parameter. Fail to add these groups of code, and each time the system detects end-of-file, the process restarts. You also must add code to ensure that only those portions of the code that you want to be executed are executed. When possible, if you need to read a database file multiple times, we advise you to construct your application in such a way that you can call multiple CL programs (or one program multiple times, as appropriate). Each of these programs (or instances of a program) can then process the file once. This approach is the clearest and least error-prone method.
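For completeness, here's a rough sketch of how the first (record-counting) workaround might look in practice. It reuses the TestPF/TestPFR names from this chapter's earlier example; the label names and the loop structure are only illustrative, and error handling is omitted:

Pgm
  DclF       File(TestPF)
  Dcl        &NbrRcds    *Dec (10 0)
  Dcl        &RcdsRead   *Dec (10 0) Value(0)

  RtvMbrD    File(TestPF) NbrCurRcd(&NbrRcds)    /* records now in the file */

ReadLoop:
  If         (&RcdsRead *GE &NbrRcds) Do         /* last record already read */
     PosDbF     OpnId(TestPF) Position(*Start)   /* reset for a second pass  */
     ChgVar     &RcdsRead  0
     /* ...a second pass over the file could start here... */
     GoTo       End
  EndDo
  RcvF       RcdFmt(TestPFR)
  ChgVar     &RcdsRead  (&RcdsRead + 1)
  /* ...process the record... */
  GoTo       ReadLoop

End:
  Return
EndPgm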
File Positioning One well-kept secret of CL file processing is that you can use it to retrieve records by key . . . sort of. The OvrDbF command's Position parameter lets you specify the position from which to start retrieving database file records. You can position the file to *Start or *End (you can also use the PosDbF command to position to *Start or *End), to a particular record using relative record number, or to a particular record using a key. To retrieve records by key, you supply four search values in the Position parameter: a key-search type, the number of key fields, the name of the record format that contains the key fields, and the key value. The key-search type determines where in the file the read-by-key begins by specifying which record the system is to read first. The key-search value specifies one of the following five key-search types:
• *KeyB (key-before) — The first record retrieved is the one that immediately precedes the record identified by the other Position parameter search values.

• *KeyBE (key-before or equal) — The first record retrieved is the one identified by the search values. If no record matches those values, the system retrieves the record that matches the largest previous value.

• *Key (key-equal) — The first record retrieved is the one identified by the search values. (If your CL program calls an HLL program that issues a read-previous operation, the called program will retrieve the preceding record.)

• *KeyAE (key-after or equal) — The first record retrieved is the one identified by the search values. If no record matches those values, the system retrieves the record with the next highest key value.

• *KeyA (key-after) — The first record retrieved is the one that immediately follows the record identified by the search values.
As a simple example, let's assume that file TestPF has one key field, Code, and contains the following records:

Code    Number    Field
A       1         Text in Record 1
B       100       Text in Record 2
C       50        Text in Record 3
E       27        Text in Record 4
The statements
OvrDbF Position(*Key 1 TestPFR 'B')
RcvF   RcdFmt(TestPFR)

specify that the record to be retrieved has one key field as defined in DDS record format TestPFR (Figure 1) and that the key field contains the value B. These statements will retrieve the second record (Code = B) from file TestPF. If the key-search type were *KeyB instead of *Key, the same statements would cause the RcvF command to retrieve the first record (Code = A). Key-search types *KeyBE, *KeyAE, and *KeyA would cause the RcvF statement to retrieve records 2 (Code = B), 2 (Code = B), and 3 (Code = C), respectively. Now let's suppose that the program contains these statements:
OvrDbF Position(&KeySearch 1 TestPFR 'D')
RcvF   RcdFmt(TestPFR)

Here's how each &KeySearch value affects the RcvF results:
• *KeyB — returns record 3 (Code = C)
• *KeyBE — returns record 3 (Code = C)
• *Key — causes an exception error because no match is found
• *KeyAE — returns record 4 (Code = E)
• *KeyA — returns record 4 (Code = E)
Using the Position parameter with a key consisting of more than one field gets tricky, especially when one of the key fields is a packed numeric field. You must code the key string to match the key's definition in the file, and if any key field is other than a character or signed-decimal field, you must code the key string in hexadecimal form. For example, suppose the key consists of two fields: a one-character field and a five-digit packed numeric field with two decimal positions. You must code the key value in the Position parameter as a hex string equal in length to the length of the two key fields together (i.e., 1 + 3; a packed 5,2 field occupies three positions). For instance, the value
Position(*Key 2 YourFormat X'C323519F')

tells the system to retrieve the record that contains values for the character and packed-numeric key fields of C and 235.19, respectively. As we've mentioned, a CL program can position the database file and then call an HLL program to process the records. For instance, the CL program can use OvrDbF's Position parameter to set the starting point in a file and then call an RPG program that issues a read or read-previous to start reading records at that position.
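For instance, a fragment like the following positions TestPF at the first record whose key is equal to or greater than 'C' and then calls an HLL program to read forward from there. ListRpg is a hypothetical program name used only for illustration:

OvrDbF  File(TestPF) Position(*KeyAE 1 TestPFR 'C')
Call    Pgm(ListRpg)        /* HLL program opens TestPF and reads from there */
DltOvr  File(TestPF)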
Having this capability doesn't necessarily mean you should use it, though. One of our fundamental rules of programming is this: Make your program explicit and its purpose clear. Thus, we avoid using the OvrDbF or PosDbF command to position a file before we process it with an HLL program when we can more explicitly and clearly position the file within the HLL program itself. There's just no good reason to hide the positioning function in a CL program that may not clearly belong with the program that actually reads the file. However, when you process a file in a CL program, positioning the file therein can simplify the solution.
What About Record Output? Just about the time you get the hang of reading database files, you suddenly realize that your CL program can't perform any other form of I/O with them. CL provides no direct support for updating, writing, or printing records in a database file. Some programmers use command StrQMQry (Start Query Management Query) to execute a query management query or use the RunSQLStm (Run SQL Statement) command to effect one of these operations from within CL. To use these techniques, you must first create the query management query or enter the SQL source statements to execute with RunSQLStm.
A Useful Example Now that you know how to process database files in a CL program, let's look at a practical example. Security administrators would likely find a program that prints the object authorities for selected objects in one or more libraries useful. Figure 3A shows the command definition for the PrtObjAut (Print Object Authorities) command, which does just that. Figure 3B shows PrtObjAut's command processing program (CPP), PrtObjAut1. Notice that the CPP declares file QADspObj in the DclF statement. This IBM-supplied file resides in library QSys and is a model for the output file that the DspObjD command creates. In other words, when you use DspObjD to create an output file, that output file is modeled on QADspObj's record format and associated fields. In the CPP, the DspObjD command creates output file ObjList, whose file description includes record format QLiDObjD and fields from the QADspObj file description. Because we declare file QADspObj in the program, that's the file we must process. (Remember: You can declare only one file in the program, and file ObjList did not exist at compile time.) The CPP uses the OvrDbF command to override QADspObj to newly created file ObjList in library QTemp. When the RcvF command reads record format QLiDObjD, the override causes the RcvF to read records from file ObjList. As it reads each record, the CL program substitutes data from the appropriate fields into the DspObjAut command and prints a separate authority report for each object represented in the file. We're sure you'll find uses for the CL techniques you've learned in this article. Processing database files in CL is a handy ability that, at times, may be just the solution you need.
Chapter 27 - CL Programs and Display Files In Chapter 26, I talked about processing database files using a CL program. I discussed declaring a file, extracting field definitions (both externally described and program-described), and processing database records. In this chapter, I want to examine how CL programs work with display files. CL is an appropriate choice for certain situations that require displays. For example, CL works well with display files for menus because CL is the language used to override files, modify a user's library list, submit jobs, and check authorities -- all common tasks in a menu environment. CL is also a popular choice for implementing a friendly interface at which users can enter parameters for commands or programs that print reports or execute inquiries. For example, a CL program can present an easily understood panel to prompt the user for a beginning and ending date; the program can then format and substitute those dates into a STRQMQRY (Start Query Management Query) command to produce a report covering a certain time period. When you want users to enter substitution values for use in an arcane command such as OPNQRYF (Open Query File), it is almost imperative that you let them enter selections in a format they understand (e.g., a prompt screen) and then build the command string in CL. It is much easier to build and execute complex CL commands in CL than it is in other languages, especially RPG/400 and COBOL/400.
CL Display File Basics As with a database file, you must use the DCLF (Declare File) command to tell your CL program which display file you want to work with (for more details about declaring a file, see Chapter 26). Declaring the file lets the compiler locate it and retrieve the field and format definitions. Figure 27.1 shows the DDS for a sample display file, USERMENUF, and Figure 27.2 shows part of a compiler listing for a CL program that declares USERMENUF. The default for DCLF's RCDFMT parameter, *ALL, tells the compiler to identify and retrieve the descriptions for all record formats in the file. Notice that the field and format definitions immediately follow the DCLF statement on the compiler listing. If your display file has many formats and you plan to use only one or a few of them, you can specify up to 50 different record formats in the RCDFMT parameter instead of using the *ALL default value. Doing so reduces the size of the compiled program object by eliminating unnecessary definitions. After you declare a display file, you can output record formats to a display device using the SNDF (Send File) command, read formats from the display device using the RCVF (Receive File) command, or perform both functions with the SNDRCVF (Send/Receive File) command. These commands parallel RPG/400's WRITE, READ, and EXFMT opcodes, respectively. For instance, to present a record format named PROMPT on the display, you could code your CL as
SNDF RCDFMT(PROMPT)
RCVF RCDFMT(PROMPT)

or as
SNDRCVF RCDFMT(PROMPT)

To send more than one format to the screen at once (e.g., a standard header format, a function key format, and an input-capable field), you use a combination of the SNDF and SNDRCVF commands as you would use a combination of WRITE and EXFMT in RPG/400:
SNDF    RCDFMT(HEADER)
SNDF    RCDFMT(FKEYS)
SNDRCVF RCDFMT(DETAIL)

Notice that the RCDFMT parameter value in each statement specifies the particular format for the operation. If there is only one format in the file, you can use RCDFMT's default value, *FILE, and then simply use the SNDF, RCVF, or SNDRCVF command without coding a parameter.
CL Display File Examples Let's look at an example of how to use CL with a display file for a menu and a prompt screen. Figure 27.3 shows a menu based on the DDS in Figure 27.1. From the DDS, you can see that record format MENU displays the list of menu options, and record formats MSGF and MSGCTL control the IBM-supplied message subfile function that sends messages to the program message queue. Record format PROMPT01 is a panel that lets the user enter selection values for Batch Report 1.
Figure 27.4 shows CL program USERMENU, the program driver for this menu. As you can see, USERMENU sets up work variable &pgmq, displays the menu, and then, depending on user input, either executes the code that corresponds to the menu option selected or exits the menu.
The sample menu's menu options, option field, and function key description are all part of the MENU record format on the DDS. To display these fields to the user and allow input, program USERMENU uses the SNDRCVF command (C in Figure 27.4). Should the user enter an invalid menu option, select an option (s)he is not authorized to, or encounter an error, the program displays the appropriate message at the bottom of the screen by displaying message subfile record format MSGCTL (D). (I discuss this record format in more detail in a moment.) Figure 27.5 shows a completion message at the bottom of the sample menu.
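Figure 27.4 itself isn't reproduced here, but the general shape of such a menu driver looks something like the sketch below. It assumes the display file and format names described in this chapter (USERMENUF, MENU, MSGCTL); the option field name &option, the F3 indicator &in03, and the option values are invented for illustration:

Pgm
  DclF       File(USERMENUF)
  Dcl        &pgmq     *Char 10

  ChgVar     &pgmq     'USERMENU'       /* program message queue to display */
  RmvMsg     Clear(*All)                /* clear leftover messages          */

Menu_Loop:
  SndRcvF    RcdFmt(MENU)               /* show the menu, wait for input    */
  If         (&in03) GoTo Exit          /* F3 = exit                        */

  If         (&option *Eq '1') Do
     /* ...call the program or submit the job for option 1... */
     GoTo       Menu_Loop
  EndDo
  If         (&option *Eq '2') Do
     /* ...option 2... */
     GoTo       Menu_Loop
  EndDo

  SndPgmMsg  MsgId(CPF9898) MsgF(QCPFMSG)                         +
             MsgDta('Option' |> &option |> 'is not valid')        +
             ToPgmQ(*Same) MsgType(*Diag)
  ChgVar     &in40   '1'                /* initialize the message subfile   */
  SndF       RcdFmt(MSGCTL)             /* redisplay messages               */
  GoTo       Menu_Loop

Exit:
  Return
EndPgm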
The message subfile is a special form of subfile whose definition includes some predefined variables and special keywords. The message subfile record format is format MSGSFL (B in Figure 27.1). The keyword SFLMSGRCD(23) tells the display file to display the messages in this subfile beginning on line 23 of the panel. You can specify any line number for this keyword that is valid for the panel you are displaying. The associated SFLMSGKEY keyword and the IBM-supplied variable MSGKEY support the task of retrieving a message from the program message queue associated with the SFLPGMQ keyword (i.e., the message queue named in variable PGMQ) and displaying the message in the form of a subfile. The CL program assigns the value USERMENU to variable &pgmq (A in Figure 27.4), thus specifying that the program message queue to be displayed is the one associated with program USERMENU. MSGCTL, the next record format in the DDS, uses the standard subfile keywords (e.g., SFLSIZ, SFLINZ, SFLDSP) along with the SFLPGMQ keyword. This record format establishes the message subfile for this display file with a SFLSIZ value of 10 and a SFLPAG value of 1. In other words, the message subfile will hold up to 10 messages and will display one message on each page. Because of the value of the SFLMSGRCD keyword in the MSGSFL format, the message will be displayed on line 23. You can alter the SFLMSGRCD and SFLPAG values to display as many messages as you like and have room for on the screen. If more than one page of messages exists, the user can scroll through the pages by pressing Page up and Page down.

You might be asking, 'What does program USERMENU have to do to fill the message subfile?' The answer: Absolutely nothing! This fact often confuses programmers new to message subfiles because they can't figure out how to load the subfile. You can think of the message subfile as simply a mechanism by which you can view the messages on the program message queue. By changing the value of variable &pgmq to USERMENU, I specified which program message queue to associate with the message subfile. That's all it takes.

Immediately after D in Figure 27.4, you can see that I change indicator 40 (variable &in40) to '1' (on) and then output format MSGCTL using the SNDF command. In the DDS, indicator 40 controls the SFLINZ and SFLEND keywords (C in Figure 27.1) to initialize the subfile before loading it and to display the appropriate + or blank to let the user know whether more subfile records exist beyond those currently displayed. (You can specify SFLEND(*MORE) if you prefer to have the message subfile use the 'More...' and 'Bottom' subfile controls after the last record, but be sure your screen has a blank line at the bottom so that these subfile controls can be displayed.) When the program outputs the MSGCTL format, the PGMQ and MSGKEY variables coded in the MSGSFL record format cause all messages to be retrieved from the program message queue and presented in the subfile. The user can move the cursor onto a message and press the Help key to get secondary text, when it is available, and can scroll through all the error messages in the subfile. At B in Figure 27.4, the RMVMSG command clears *ALL messages from the current program queue (i.e., queue USERMENU). Clearing the queue at the beginning of the program ensures that old messages from a previous invocation do not remain in the queue.
Figure 27.6 shows a prompt screen a user might receive to specify selections for a menu option that submits a report program to batch. The user keys the appropriate values and presses Enter to submit the report. If the program encounters an error when validating the values, the display file uses an error subfile to display the error message at the bottom of the screen, like the error message in Figure 27.7.
You use the ERRSFL keyword in the DDS (A in Figure 27.1) to indicate an error subfile. An error subfile provides a different function than a message subfile. The error subfile automatically presents any error messages generated as a result of DDS message or validity-checking keywords (e.g., ERRMSG, SFLMSG, CHECK, VALUES). The purpose of the error subfile is to group error messages generated by these keywords for a particular record format, not to view messages on the program message queue. (For more information about error subfiles, see the Data Description Specifications Reference, SC41-9620.)
Considerations The drawbacks to using CL for display file processing are CL's limited database file I/O capabilities and its lack of support for user-written subfiles. As I explained in Chapter 26, CL can only read database files. The fact that you cannot write or update database file records greatly reduces CL's usefulness in an interactive environment. The lack of support for user-written subfiles also limits its usefulness in applications that require user interaction. But in many common situations, CL's strengths more than offset these limitations. CL's command processing, message handling, and string manipulation capabilities make it a good choice for menus, prompt screens, and other nondatabase-related screen functions. While not always appropriate, for many basic interactive applications CL offers a simple alternative to a high-level language for display file processing. With this knowledge under your belt, you can choose the best and easiest language for applications that use display files.
Chapter 28 - OPNQRYF Fundamentals In this chapter, I give you the foundation you need to use the OPNQRYF (Open Query File) command, and then I leave you to discover the rewards as you apply this knowledge to your own applications. OPNQRYF's basic function is to open one or more database files and present records in response to a query request. Once opened, the resulting file or files appear to high-level language (HLL) programs as a single database file containing only the records that satisfy query selection criteria. In essence, OPNQRYF works as a filter that determines the way your programs see the file or files being opened. You can use the OPNQRYF command to perform a variety of database functions: joining records from more than one file, grouping records, performing aggregate calculations such as sum and average, selecting records before or after grouping, sorting records by one or more key fields, and calculating new fields using numeric or character string operations. One crucial point to remember when using OPNQRYF is that you must use the SHARE(*YES) file attribute for each file opened by the OPNQRYF command. When you specify SHARE(*YES), subsequent opens of the same file will share the original open data path and thus see the file as presented by the OPNQRYF process. If OPNQRYF opens a file using the SHARE(*NO) attribute, the next open of the file will not use the open data path created by the OPNQRYF command, but instead will perform another full open of the file. Don't assume the file description already has the SHARE(*YES) value when you use the OPNQRYF command. Instead, always use the OVRDBF (Override with Database File) command just before executing OPNQRYF to explicitly specify SHARE(*YES) for each file to be opened. Be aware that the OPNQRYF command ignores any
parameters on the OVRDBF command other than TOFILE, MBR, LVLCHK, WAITRCD, SEQONLY, INHWRT, and SHARE.
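To make the point concrete, here is a minimal sketch of the sequence just described. The library, file, and program names are placeholders, and the QRYSLT value is simply an example of a selection expression:

OVRDBF     FILE(MYFILE) TOFILE(MYLIB/MYFILE) SHARE(*YES)
OPNQRYF    FILE((MYLIB/MYFILE)) QRYSLT('DLTCDE = "D"')
/* The HLL program opens MYFILE and, because the open data path is shared, */
/* sees only the records the query selected                                */
CALL       PGM(MYLIB/MYRPT)
/* Clean up: close the query file and remove the override                  */
CLOF       OPNID(MYFILE)
DLTOVR     FILE(MYFILE)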
The Command Figure 28.1 shows the entire OPNQRYF command. OPNQRYF has five major groups of parameters (specifications for file, format, key field, join field, and mapped field) and a few extra parameters not in a group. Using the OPNQRYF command is easier once you master the parameter groups. There are some strong, but awkwardly structured, parallels between OPNQRYF parameters and specific SQL concepts. For instance, the file and format specifications parallel the more basic functions of the SQL SELECT and FROM statements; the query selection expression parallels SQL's WHERE statement; the key field specifications parallel SQL's ORDER BY statement; and the grouping field names expression parallels the GROUP BY statement. If you compare OPNQRYF to SQL (page 351), you'll see that the OPNQRYF command is basically a complicated SQL front end that offers a few extra parameters.
Start with a File and a Format For every query, there must be data -- and for data, there must be a file. OPNQRYF's file specifications parameters identify the file or files that contain the data. A simple OPNQRYF command might name a single file, like this:
OPNQRYF FILE(MYLIB/MYFILE) ... This partial command identifies MYLIB/MYFILE as the file to be queried. Notice that the FILE parameter in Figure 28.1 has three separate parameter elements: the qualified file name, data member, and record format. A specified file must be a physical or logical file, an SQL view, or a Distributed Data Management file. In the sample command above, I specify the qualified file name only and do not enter a specific value for the second and third elements of the FILE parameter. Therefore, the default values of *FIRST and *ONLY are used for the member and record format, respectively. You can select a particular data member to be queried by supplying a member name. The default value of *ONLY for record format tells the database manager to use the only record format in the file -- MYFILE, in our example. When you have more than one record format, you must use the record format element of the FILE parameter to name the particular record format to open. You can enter a plus sign in the '+ for more values' field and enter multiple file specifications to be dynamically joined (as opposed to creating a permanent join logical file on the system). When joining more than one record format, you must enter values in the join field specifications parameter (JFLD) to specify the field the database manager will use to perform the join. The FORMAT parameter specifies the format for records made available by the OPNQRYF command. The fields defined in this record format must come from the fields in the file(s) named in the FILE parameter or be defined in the MAPFLD parameter. When you use the default value of *FILE for the FORMAT parameter, the record format of the file defined in the FILE parameter is used for records selected. You cannot use FORMAT(*FILE) when the FILE parameter references more than one file, member, or record format. To return to our example, if you key
OPNQRYF FILE(MYLIB/MYFILE) ... the record format of file MYFILE would be used for the records presented by the OPNQRYF command. On the other hand, if you use the command
OVRDBF FILE(MYJOIN) TOFILE(MYLIB/MYFILE) SHARE(*YES) with this OPNQRYF command
OPNQRYF FILE(MYLIB/MYFILE) FORMAT(MYJOIN)
the database manager uses the record format for file MYJOIN. The FORMAT parameter can specify a qualified file name and a record format (e.g., (MYLIB/MYJOIN JOINR)), or it can simply name the file containing the format to be used (e.g., (MYJOIN)). Although you can select (via the QRYSLT parameter) any fields defined in the record format of the file named in the FILE parameter, OPNQRYF will make available only those fields defined by the record format named in the FORMAT parameter. In the previous example, the HLL program would open file MYJOIN, and the OVRDBF command would redirect the open to the queried file, MYLIB/MYFILE. The format for MYJOIN would present records from MYFILE. Later, in the discussion of field mapping, I'll explain why you might want to do this. Because this chapter is only an introduction to OPNQRYF, I won't talk any more about join files. Instead, let's focus on creating queries for single file record selection, sorting, mapping fields, and HLL processing.
Record Selection As I said earlier, the record selection portion of the OPNQRYF command parallels SQL's WHERE statement. The QRYSLT parameter provides record selection before record grouping occurs (record grouping is controlled by the GRPFLD parameter). The query selection expression can be up to 2,000 characters long, must be enclosed in apostrophes (because it comprises a character string for the command to evaluate), and can consist of one or more logical expressions connected by *AND or *OR. Each logical expression must use at least one field from the files being queried. The OPNQRYF command also offers built-in functions that you can include in your expressions (e.g., %SST, %RANGE, %VALUES, and %WILDCARD). This simple logical expression
QRYSLT('DLTCDE = "D"') instructs the database manager to select only records for which the field DLTCDE contains the constant value D. A more complex query might use the following expression:
QRYSLT('CSTNBR *EQ %RANGE(10000 49999) *AND +
        CURDUE *GT CRDLIM *AND CRDFLG *EQ "Y"')

In this example, CSTNBR (customer number), CURDUE (current due), and CRDLIM (credit limit) are numeric fields, and CRDFLG (credit flag) is a character field. The QRYSLT expression uses the %RANGE function to determine whether the CSTNBR field is in the range of 10000 to 49999 and then checks whether CURDUE is greater than the credit limit. Finally, it tests CRDFLG against the value Y. When all tests are true for a record in the file, that record is selected. You can minimize trips to the manual by remembering a few rules about the QRYSLT parameter. First, enclose all character constants in apostrophes or quotation marks (e.g., 'char-constant' or "char-constant"). For example, consider the following logical expression comparing a field to a character constant:
CRDFLG *EQ "Y"

If you want to substitute runtime CL variable &CODE for the constant, you would code the expression as:
'CRDFLG *EQ "' *CAT &CODE *CAT '"'

After substitution and concatenation, quotation marks enclose the value supplied by the &CODE variable, and the expression is valid. Second, differentiate between upper and lower case when specifying character variables. Character variables in the QRYSLT parameter are case-sensitive; in other words, you must either specify a 'Y' or a 'y' or provide for both possibilities. Numeric constants and variables cause undue anxiety for newcomers to the OPNQRYF command. Look again at this example:
QRYSLT('CSTNBR *EQ %RANGE(10000 49999) *AND +
        CURDUE *GT CRDLIM *AND CRDFLG *EQ "Y"')

Two of the logical expressions use numeric fields or constants. In the first expression
'CSTNBR *EQ %RANGE(10000 49999)' notice there are no apostrophes or quotation marks around the numeric constants. Although these numbers appear in a character string (the QRYSLT parameter), they must appear as numbers for the system to recognize and process them, which brings us to the third QRYSLT parameter rule: Don't enclose numeric or character variables in quotation marks if the value of a variable should be evaluated as numeric. The second logical expression
CURDUE *GT CRDLIM compares two fields defined in the record format or mapped fields. Again, there are no quotation marks around the names of these numeric fields. A dragon could rear its ugly head when you create a dynamic query selection in a CL or HLL program. Suppose you want to let the user enter the range of customer numbers to select from rather than hard-coding the range. To build a dynamic QRYSLT, you must use concatenation, and concatenation can only be performed on character fields. However, you would probably require the user to enter numeric values so you could ensure that all positions in the field are numeric. This means that the variables that define the range of customer numbers must be converted to characters before concatenation, but later they must appear as numbers in the QRYSLT parameter so they can be compared to the numeric CSTNBR field. Figure 28.2 shows one way to create the correct QRYSLT value. Suppose the user enters the numeric values at a prompt provided by display file USERDSP. First, you use the CHGVAR (Change Variable) command to move these numeric values into character variables &LOWCHR and &HIHCHR. You can use the character variables and concatenation to build the QRYSLT string in variable &QRYSLT. When the substitution is made, the numeric values appear without quotation marks, just as though the numbers were entered as constants. The GRPSLT parameter functions exactly like the QRYSLT parameter, except the selection is performed after records have been grouped. The same QRYSLT functions are available for the GRPSLT expression, and the same rules apply.
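Returning to the customer-number range example: Figure 28.2 isn't reproduced here, but a minimal sketch of the technique might look like the following. The variable names for the user-entered values are placeholders of my own; the point is simply that the numeric values are converted to character variables before concatenation and then appear without quotation marks in the finished QRYSLT string:

   DCL        VAR(&LOWNBR) TYPE(*DEC)  LEN(5 0)    /* values entered via USERDSP */
   DCL        VAR(&HIHNBR) TYPE(*DEC)  LEN(5 0)
   DCL        VAR(&LOWCHR) TYPE(*CHAR) LEN(5)
   DCL        VAR(&HIHCHR) TYPE(*CHAR) LEN(5)
   DCL        VAR(&QRYSLT) TYPE(*CHAR) LEN(100)
   ...
   /* Convert the numeric values to character so they can be concatenated */
   CHGVAR     VAR(&LOWCHR) VALUE(&LOWNBR)
   CHGVAR     VAR(&HIHCHR) VALUE(&HIHNBR)
   /* Build the selection string; the values appear without quotation marks, */
   /* so the query compares them to CSTNBR as numbers                        */
   CHGVAR     VAR(&QRYSLT) +
              VALUE('CSTNBR *EQ %RANGE(' *CAT &LOWCHR *BCAT &HIHCHR *CAT ')')
   OVRDBF     FILE(MYFILE) TOFILE(MYLIB/MYFILE) SHARE(*YES)
   OPNQRYF    FILE((MYLIB/MYFILE)) QRYSLT(&QRYSLT)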
Key Fields Besides selecting records, you can establish the order of the records OPNQRYF presents to your HLL program by entering one or more key fields in the key field specifications. The KEYFLD parameter consists of several elements. You must specify the field name, whether to sequence the field in ascending or descending order, whether or not to use absolute values for sequencing, and whether or not to enforce uniqueness. Let's look at a couple of examples. The following OPNQRYF command:
OPNQRYF FILE(MYLIB/MYFILE) QRYSLT('....') KEYFLD(CSTNBR) would cause the selected records to appear in ascending order by customer number because *ASCEND is the default for the key field order. The command
OPNQRYF FILE(MYLIB/MYFILE) QRYSLT('....') + KEYFLD((CURBAL *DESCEND) (CSTNBR)) would present the selected records in descending order by current balance and then in ascending order by customer number. Any key field you name in the KEYFLD parameter must exist in the record format referenced by the FORMAT parameter. The key fields specified in the KEYFLD parameter can be mapped from existing fields, so long as the referenced field definition exists in the referenced record format. The KEYFLD default value of *NONE tells the
database manager to present the selected records in any order. Entering the value *FILE tells the query to use the access path definition of the file named in the FILE parameter to order the records.
Mapping Virtual Fields One of the richer features of the OPNQRYF command is its support of field mapping. The mapped field specifications let you derive new fields (known as 'virtual' fields in relational database terms) from fields in the record format being queried. You can map fields using a variety of powerful built-in functions. For example, %SST returns a substring of the field argument, %DIGITS converts numbers to characters, and %XLATE performs character translation using a translation table. You can use the resulting fields to select records and to sequence the selected records. Look at the following OPNQRYF statement:
OPNQRYF FILE(INPDTL) FORMAT(DETAIL) QRYSLT('LINTOT *GT 10000')+ KEYFLD((CSTNBR) (INVDTE)) MAPFLD((LINTOT 'INVQTY * IPRICE')) Fields INVQTY (invoice item quantity) and IPRICE (invoice item price) exist in physical file INPDTL. Mapped field LINTOT (line total) exists in the DETAIL format, which is used as the format for the selected records. As each record is read from the INPDTL file, the calculation defined in the MAPFLD parameter ('INVQTY * IPRICE') is performed, and the value is placed in field LINTOT. The database manager then uses the value in LINTOT to determine whether to select or reject the record.
OPNQRYF Command Performance Whenever possible, the OPNQRYF command uses an existing access path for record selection and sequencing. In other words, if you select all customer numbers in a specific range and an access path exists for CSTNBR, the database manager will use that access path to perform the selection, thus enhancing the performance of the OPNQRYF command. However, if the system finds no access path it can use, it creates a temporary one; and creating an access path takes a long time at the machine level, especially if the file is large. Likewise, when you specify one or more key fields in your query, the database manager will use an existing access path if possible; otherwise, the database manager must create a temporary one, again degrading performance. Overall, the OPNQRYF command provides flexibility that is sometimes difficult to emulate using only HLL programming and the native database. However, OPNQRYF is a poor performer when many temporary access paths must be created to support the query request. You may also need to weigh flexibility against performance to decide which record-selection method is best for a particular application. To help you make a decision, you can use these guidelines:
• If the application is interactive, use OPNQRYF sparingly; and, unless the file is relatively small (i.e., fewer than 10,000 records), ensure that existing access paths support the selection and sequencing.
• If the application is a batch application run infrequently or only at night, you can use OPNQRYF without hesitation, especially if it eliminates the need for logical files used only to support those infrequent or night jobs.
• If the application runs frequently and in batch during normal business hours, use OPNQRYF when existing access paths support the selection and sequencing or when the files are relatively small. Use native database and HLL programming when the files are large (greater than 10,000 records) or when many (more than three or four) temporary access paths are required.
The next time a user requests a report that requires more than a few selections and whose records must be in four different sequences, use the OPNQRYF command to do the work and write one HLL program to do the reporting... But remember, to be on the safe side, run the report at night!
Sidebar: SQL Special Features
Chapter 29 - Teaching Programs to Talk 'Speak, program! Speak!' That's one way to try to get your program to talk (perhaps success is more likely if you reward good behavior with a treat). However, to avoid finding you actually barking orders, I want to introduce SNDUSRMSG (Send User Message), an OS/400 command you can use to 'train' your programs to communicate.
The SNDUSRMSG command exists for the sole purpose of communicating from program to user and includes the built-in ability to let the user talk back. In Chapter 7, I covered the commands you can use to send impromptu messages from one user to another: SNDMSG (Send Message), SNDBRKMSG (Send Break Message), and SNDNETMSG (Send Network Message). Programs can also use these commands to send an informational message to a user, but because these commands provide no means for the sending program to receive a user response, their use for communication between programs and users is limited. In contrast, the SNDUSRMSG command lets a CL program send a message to a user or a message queue and then receive a reply as a program variable.
Basic Training Figure 29.1 shows the SNDUSRMSG command screen. The message can be an impromptu message or one you've defined in a message file. To send an impromptu message, just type a message of up to 512 characters in the MSG parameter. To use a predefined message, enter a message ID in the MSGID parameter. The message you identify must exist in the message file named in the MSGF parameter. The MSGDTA parameter lets you specify values to take the place of substitution variables in a predefined message. For example, message CPF2105
(Object &1 in &2 type *&3 not found) has three data substitution variables: &1, &2, and &3. When you use the SNDUSRMSG command to send this message, you can also send a MSGDTA string that contains the substitution values for these variables. If you supply these values in the MSGDTA string:
'CSTMAST   ARLIB     FILE   '

the message appears as
Object CSTMAST in ARLIB type *FILE not found If you do not supply any MSGDTA values, the original message is sent without values (e.g., Object in type * not found). The character string specified in the MSGDTA parameter is valid only for messages that have data substitution variables. It is important that the character string you supply is the correct length and that each substitution variable is positioned properly within that string. The previous example assumes that the message is expecting three variables (&1, &2, and &3) and that the expected length of each variable is 10, 10, and 7, respectively, making the entire MSGDTA string 27 characters long. How do I know that? Because each system-defined message has a message description that includes detailed information about substitution variables, and I used the DSPMSGD (Display Message Description) command to get this information. Every AS/400 is shipped with QCPFMSG (a message file for OS/400 messages) and several other message files that support particular products. You can also create your own message files and message IDs that your applications can use to communicate with users or other programs. For more information about creating and using messages, see the AS/400 Control Language Reference (SC41-0030) and the AS/400 Control Language Programmer's Guide (SC41-8077). The next parameter on the SNDUSRMSG command is VALUES, which lets you specify the value or values that will be accepted as the response to your message, if one is requested. When you specify MSGTYPE(*INQ) and a CL variable in the MSGRPY parameter (discussed later), the system automatically supplies a prompt for a response when it displays the message. The system then verifies the response against the valid values listed in the VALUES parameter. If the user enters an invalid value, the system displays a message saying that the reply was not valid and resends the inquiry message. To make sure the user knows what values are valid, you should list the valid values as part of your inquiry message. In the DFT parameter, you can supply a default reply to be used for an inquiry message when the message queue that receives the message is in the *DFT delivery mode or when an unanswered message is deleted from the message queue. The default value in the SNDUSRMSG command overrides defaults specified in the message description of predefined messages. The system uses the default value when the message is sent to a message
queue that is in the *DFT delivery mode, when the message is inadvertently removed from a message queue without a reply, or when a system reply list entry is used that specifies the *DFT reply. Oddly enough, this value need not match any of the supplied values in the VALUES parameter. This oddity presents some subtle problems for programmers. If the system supplies a default value not listed in the VALUES parameter, it is accepted. However, if a user types the default value as a reply, and the default is not listed in the VALUES parameter, the system will notify the user that the reply was invalid. To avoid such a mess, I strongly recommend that you use only valid values (those listed in the VALUES parameter) when you supply a default value. The MSGTYPE parameter lets you specify whether the message you are sending is an *INFO (informational, the default) or *INQ (inquiry) message. Both kinds appear on the destination message queue as text, but an inquiry message also supplies a response line and waits for a reply. The TOMSGQ parameter names the message queue that will receive the message. You can enter the name of any message queue on the local system, or you can use one of the following special values:
• * -- instructs the system to send the message to the external message queue (*EXT) if the job is interactive or to message queue QSYS/QSYSOPR if the program is being executed in batch.
• *SYSOPR -- tells the system to send the message to the system operator message queue, QSYS/QSYSOPR.
• *EXT -- instructs the system to send the message to the job's external message queue. Inquiry messages to batch jobs will automatically be answered with the default value, or with a null value (*N) if no default is specified. Keep in mind that although messages can be up to 512 characters long for first-level text, only the first 76 characters will be displayed when messages are sent to *EXT.
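Pulling the MSGID, MSGDTA, and TOMSGQ pieces together, a command like the following (a sketch, not a line from any figure) would send the CPF2105 example above to the system operator:

SNDUSRMSG  MSGID(CPF2105) MSGF(QSYS/QCPFMSG) +
           MSGDTA('CSTMAST   ARLIB     FILE   ') +
           MSGTYPE(*INFO) TOMSGQ(*SYSOPR)

The MSGDTA string carries the three substitution values in positions 1-10, 11-20, and 21-27, as described above.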
The TOUSR parameter is similar to TOMSGQ but lets you specify the recipient by user profile instead of by message queue. You can enter the recipient's user profile, specify *SYSOPR to send the message to the system operator at message queue QSYS/QSYSOPR, or enter *REQUESTER to send the message to the current user profile for an interactive job or to the system operator message queue for a batch job. One problem emerges when using the SNDUSRMSG command to communicate with a user from the batch job environment. In the interactive environment, both the TOUSR and TOMSGQ parameters supply values that let you communicate easily with the external user of the job. In the batch environment, the only values provided for TOUSR and TOMSGQ direct messages to the system operator as the external user. There are no parameters to communicate with the user who submitted the job. The CL code in Figure 29.2 solves this problem. When you submit a job, the MSGQ parameter on the SBMJOB (Submit Job) command tells the system where to send a job completion message. You can retrieve this value using the RTVJOBA (Retrieve Job Attributes) command and the SBMMSGQ and SBMMSGQLIB return variables. The program in Figure 29.2 uses the RTVJOBA command to retrieve the name of the message queue and tests variable &type to determine whether the current job is a batch job (&type = '0'). If so, SNDUSRMSG can send the message to the message queue defined by the &sbmmsgq and &sbmmsgqlib variables. If the job is interactive, the SNDUSRMSG command can simply direct the message to the external user by specifying TOUSR(*REQUESTER). You can use the MSGRPY parameter to specify a CL character variable (up to 132 characters long) to receive the reply to an inquiry message. Make sure that the length of the variable is at least as long as the expected length of the reply; if the reply is too short, it will be padded with blanks to the right, but if the reply exceeds the length of the variable, it will be truncated. The first result causes no problem, whereas a truncated reply may cause an unexpected glitch in your program. An inquiry message reply must be a character (alphanumeric) reply. If your application requires the retrieval of a numeric value, it is best to use DDS and a CL or high-level language (HLL) program to prompt the user for a reply. This approach ensures that validity checking is performed for numeric values. Alas, the SNDUSRMSG command also exhibits another oddity: If you don't specify a MSGRPY variable but do specify MSGTYPE(*INQ), the command causes the job to wait for a reply from the message queue but doesn't retrieve the reply into your program. The last parameter on the SNDUSRMSG command is TRNTBL, which lets you specify a translation table to process the response automatically. The default translation table is QSYSTRNTBL, which translates lowercase
characters (X'81' through X'A9') to uppercase characters. Therefore, you can check only for uppercase replies (e.g., Y or N) rather than having to code painstakingly for all lowercase and uppercase possibilities (e.g., Y, y, N, n).
Putting the Command to Work Figure 29.3 shows how the SNDUSRMSG command might be implemented in a CL program. Notice that SNDUSRMSG is first used for an inquiry message. The message is sent to *REQUESTER to make sure the entire message text is displayed on the queue. The job determines whether or not the daily report has already been run for that day and, if it has, prompts the user to verify that the report should indeed be run again. The program explicitly checks for a reply of Y or N and takes appropriate action. Some people might argue that this is overcoding, because if you specified VALUES('Y' 'N') and you check for Y first, you can assume that N is the only other possibility. Although you can make assumptions, it is best if all the logical tests are explicit and obvious to the person who maintains the program. Also notice that the SNDUSRMSG command is used again in Figure 29.3 to send informational messages that let the user know which action the program has completed (the completion of the task or the cancellation of the request to process the daily report, depending on the user reply). You will find that supplying informational program-to-user messages will endear you to your users and help you avoid headaches (e.g., multiple submissions of the same job because the user wasn't sure the first job submission worked).
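Figure 29.3 itself isn't included here, but a stripped-down sketch in the same spirit shows the pattern. The message text, the report program name (DAILYRPT), and the reply variable are placeholders of my own:

   DCL        VAR(&REPLY) TYPE(*CHAR) LEN(1)
   ...
   SNDUSRMSG  MSG('Daily report already ran today. Run it again? (Y or N)') +
              VALUES('Y' 'N') DFT('N') MSGTYPE(*INQ) +
              TOUSR(*REQUESTER) MSGRPY(&REPLY)
   IF         COND(&REPLY *EQ 'Y') THEN(DO)
      SBMJOB     CMD(CALL PGM(DAILYRPT)) JOB(DAILYRPT)
      SNDUSRMSG  MSG('Daily report submitted. You will receive a message when it completes.') +
                 MSGTYPE(*INFO) TOUSR(*REQUESTER)
   ENDDO
   IF         COND(&REPLY *EQ 'N') THEN(DO)
      SNDUSRMSG  MSG('Daily report request canceled.') MSGTYPE(*INFO) TOUSR(*REQUESTER)
   ENDDO

Both replies are tested explicitly, as recommended above, and each outcome is confirmed with an informational message.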
Knowing When To Speak As shown in the CL program example, using SNDUSRMSG to prompt the user for a simple reply makes good use of the command's capabilities. This function is somewhat different from prompting for data when you submit a job. I don't recommend using the SNDUSRMSG command to retrieve data for program execution (e.g., branch number, order number range, date range), because SNDUSRMSG offers minimal validity checking and is not as user-friendly as a DDS-coded display file prompt can be. Instead, you should create prompts for data as display files (using DDS) and process them with either a CL or HLL program. In a nutshell, the SNDUSRMSG command is best suited to sending an informational message to the user to relate useful information (e.g., 'Your job has been submitted. You will receive a message when your job is complete.') or to sending an inquiry message that lets the user choose further program action. The SNDUSRMSG command can teach your programs to talk, but the vocabulary associated with this command is specific to these two tasks. Now that you know how to train your program to talk to users, you can save the biscuits for the family pooch. The next challenge: teaching your programs to communicate with each other! You'll be able to master that after I explain how to use the SNDPGMMSG (Send Program Message) command, which lets you send messages from program to program with information such as detected program errors and requirements for continued processing. Who says you can't teach an old dog new tricks?
Chapter 30 - Just Between Us Programs In Chapter 7, I explained that you can use the SNDMSG (Send Message), SNDBRKMSG (Send Break Message), or SNDNETMSG (Send Network Message) command to communicate with someone else on your AS/400. In Chapter 29, I showed how to use one of these commands or the SNDUSRMSG (Send User Message) command to have a program send a message to a user. But when you want to establish communications between programs, none of these commands will do the job; you need the SNDPGMMSG (Send Program Message) and RCVMSG (Receive Message) commands. Now I want to introduce the SNDPGMMSG command (see Chapter 31 for a discussion of the RCVMSG command). Program messages are normally used for one of two reasons: to send error messages to the calling program (so it knows when a function has not been completed) or to communicate the status or successful completion of a process to the calling program. In this chapter, you'll learn how a job stores messages, how to have one program send a message to another, what types of messages a program can send, and what actions they can require a job to perform. But first, you need to understand the importance of job message queues.
Job Message Queues All messages on the AS/400 must be sent to and received from a message queue. User-to-user and program-to-user messages are exchanged primarily via nonprogram message queues (i.e., either a workstation or a user message queue). OS/400 creates a nonprogram message queue when a workstation device or a user profile is created. You can also use the CRTMSGQ (Create Message Queue) command to create nonprogram message queues. For example, you might want to create a message queue for communication between programs that aren't part of the same job. Or you might want to create a central message queue to handle all print messages. Both users and programs can send messages to and receive them from nonprogram message queues. Although programs can use nonprogram message queues to communicate to other programs, OS/400 provides a more convenient means of communication between programs in the same job. For each job on the system, OS/400 automatically creates a job message queue that consists of an external message queue (*EXT, through which a program communicates with the job's user) and a program message queue (for each program invocation in that job). Figure 30.1 illustrates a sample job message queue. OS/400 creates an external message queue when a job is initialized and deletes the queue when the job ends. OS/400 also creates a program message queue when a program is invoked and deletes it when the program ends (before removing the program from the job invocation stack). The job message queue becomes the basis for the job log produced when a job is completed. The job log includes all messages from the job message queue, as well as other essential job information. (For more information about job logs, see 'Understanding Job Logs'.)
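Returning to nonprogram queues for a moment: creating and using one takes nothing more than a fragment like the following (the queue and library names are placeholders; RCVMSG is covered in Chapter 31):

DCL        VAR(&MSG) TYPE(*CHAR) LEN(512)
...
CRTMSGQ    MSGQ(MYLIB/APPMSGQ) TEXT('Messages exchanged between jobs')
/* One job leaves a message on the queue ...                    */
SNDMSG     MSG('Nightly extract finished') TOMSGQ(MYLIB/APPMSGQ)
/* ... and a program in another job picks it up later           */
RCVMSG     MSGQ(MYLIB/APPMSGQ) WAIT(60) MSG(&MSG)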
The SNDPGMMSG Command Figure 30.2 shows the parameters associated with the SNDPGMMSG command prior to OS/400 V2R3. Because of the introduction of ILE (Integrated Language Environment) support in OS/400 V2R3, the SNDPGMMSG command now includes some additional parameter elements that address specific ILE requirements (see the section 'ILE-Induced Changes'). You can use the SNDPGMMSG command in a CL program to send a program message to a nonprogram or program message queue. You can enter an impromptu message (up to 512 characters long) on the MSG parameter, or you can use the MSGID, MSGF, and MSGDTA parameters to send a predefined message. (To review predefined messages, see Chapter 29, 'Teaching Programs to Talk.') The TOPGMQ parameter is unique to the SNDPGMMSG command and identifies the program queue to which a message will be sent. TOPGMQ consists of two values: relationship and program. The first value specifies the relationship between the target program and the sending program. For this value, you can specify *PRV (indicating the message is to go to the target program's caller or requester), *SAME (the message is to be sent to the target program itself), or *EXT (the message is to go to the target job's external message queue). The second value specifies the target program and can be either the name of a program within the sending program's job or the special value *, which tells OS/400 to use the sending program's name. The default value for the TOMSGQ parameter, *TOPGMQ, tells the system to refer to the TOPGMQ parameter to determine the destination of the message. Let's look at the job message queues shown in Figure 30.1. Assuming that PGM_D is the active program, let's suppose PGM_D executes the following SNDPGMMSG command:
SNDPGMMSG MSG('Test message') TOMSGQ(*SYSOPR) +
          MSGTYPE(*INFO)

PGM_D would send the message 'Test message' to the system operator's workstation message queue because the value *SYSOPR was specified for the TOMSGQ parameter. In the following SNDPGMMSG command,
SNDPGMMSG MSG('Test message') TOMSGQ(*TOPGMQ) + TOPGMQ(*SAME *) MSGTYPE(*INFO) the parameter TOMSGQ(*TOPGMQ) tells OS/400 to use the TOPGMQ parameter to determine the message destination. Because TOPGMQ specifies *SAME for the relationship and * for the target program, the system sends the message 'Test message' to program message queue PGM_D. Now consider the command
SNDPGMMSG MSG('Test message') + TOPGMQ(*PRV *) MSGTYPE(*INFO) In this case, the message is sent to program message queue PGM_C, because PGM_C is PGM_D's calling program (*PRV). (Notice that this time I chose not to specify the TOMSGQ parameter, but to let it default to *TOPGMQ.) As on the SNDUSRMSG command, SNDPGMMSG's TOUSR parameter lets your program send a message to a particular user profile. You can specify *ALLACT for the TOUSR parameter to send a message to each active user's message queue. Although this value provides an easy way for a program to send a message to all active users, it does not guarantee that users immediately see the message. Each user message queue processes the message based on the DLVRY attribute specified for the message queue. ILE-Induced Changes In OS/400 V2R3, the SNDPGMMSG TOPGMQ parameter is expanded to include ILE (Integrated Language Environment) support. Figure 30.3 presents the new TOPGMQ parameter structure, which contains two elements. The first element, 'relationship,' works just the same as in V2R2. The second element is now called 'Call stack entry identifier' and is expanded to multiple fields that help identify the exact program message queue to receive the message. The first entry field is the 'Call stack entry' field and is similar to the V2R2 implementation. This field represents the name of the program or procedure message queue. If this entry is a procedure name, the name can be a maximum of 256 characters. The system will begin searching for this procedure in the most recently called program or procedure. If more qualifications are needed to correctly identify the procedure message queue, you can use the next two items, 'Module name' and 'Bound program name,' to specifically point to the exact procedure message queue. The module identifies the module into which the procedure was compiled. The bound program name identifies the program name into which this procedure was bound.
Note that, when using the new SNDPGMMSG command, it is in this third item that you can enter the single value '*EXT' to tell OS/400 to send the message to the external message queue of the current job. Prior to V2R3, you entered this special value in the 'relationship' parameter element of the SNDPGMMSG command.
Message Types The next parameter on the SNDPGMMSG command is MSGTYPE. You can use six types of messages in addition to the informational and inquiry message types that you can create with the SNDMSG and SNDUSRMSG commands. Figure 30.4 lists the message types and describes the limitations (message content and destination) and normal uses of each. Each message type has a distinct purpose and communicates specific kinds of information to other programs or to the job's user. You can send an informational message (*INFO) to any user, workstation, or program message queue. Because inquiry messages (*INQ) expect a reply, you can send an inquiry message only to a nonprogram message queue (i.e., a user or workstation message queue) or to the current job's external message queue.
A completion message (*COMP) is usually sent to inform the calling program that the requested work is complete. It can be sent to a program message queue or to the job's external message queue. Diagnostic messages (*DIAG) are sent to program or external message queues to describe errors detected during program execution. Typically, escape messages follow diagnostic messages, telling the calling program that diagnostic messages are present and that the requested function has failed. You can send a request message (*RQS) to any message queue as a command request. You must use an impromptu message on the MSG parameter to send the request. (For more information about request messages and request message processing, see Chapter 8 of the Control Language Programmer's Guide, SC41-8077). An escape message (*ESCAPE) specifically identifies the error that caused the sending program to fail. An escape message can be sent only to a program message queue, and the escape message terminates the sending program, returning control to the calling program. MSGTYPE(*ESCAPE) cannot be specified if the MSG parameter is specified -- in other words, all escape messages must be predefined. Status messages (*STATUS) describe the status of the work that the sending program performs. When a program sends a status message to an interactive job's external message queue, the message is displayed on the workstation screen, processing continues, and the sending program does not require a response. When a status message is sent to a program message queue, the message functions as a warning message. If the program receiving the message monitors for this message (using the MONMSG (Monitor Message) command, which I will discuss in Chapter 31), the message functions as an escape message by terminating the sending program. If the program receiving the status message does not monitor for that message, the system immediately returns control to the sending program. OS/400 uses notify messages (*NOTIFY) to describe a condition in the sending program that requires a correction or a reply. If the notify message is sent to an interactive job's external message queue, the message acts like an inquiry message and waits for a reply, which the sending program can then receive. When a notify message is sent to a program message queue, the message functions as a warning. If the program receiving the notify message monitors for it, the message causes the sending program to end, and control returns to the receiving program. If the receiving program doesn't monitor for the message, or if the message is sent to a batch job's external message queue, the default reply for that message is sent, and control returns to the sending program. You can either define the default reply in the message description or specify it on the system reply list.
The Receiving End The next parameter on the SNDPGMMSG command is RPYMSGQ, which lets you specify the program or nonprogram message queue to which the reply should go. The only valid values are *PGMQ, which specifies that the reply is to go to the sending program's message queue, or a qualified nonprogram message queue name. You can receive or remove a specific message by using a key value to identify that message. The KEYVAR parameter specifies the CL return variable containing the message key value of the message sent by the SNDPGMMSG command. To understand how key variables work, examine the following CL statement:
SNDPGMMSG MSG('Test message') TOPGMQ(*PRV *) +
          MSGTYPE(*INFO) +
          KEYVAR(&MSGKEY)

The SNDPGMMSG command places the message on the calling program's message queue, and OS/400 assigns to that message a unique message identifier that is returned in the &MSGKEY variable. In the example
RMVMSG PGMQ(*PRV *) +
       MSGKEY(&MSGKEY) CLEAR(*BYKEY)

the RMVMSG (Remove Message) command uses the &MSGKEY value to remove the correct message from the queue. The return variable must be defined as TYPE(*CHAR) and LEN(4).
Program Message Uses Now that you're acquainted with SNDPGMMSG parameters, let's look at a few examples that demonstrate how to use this command. The following is a sample diagnostic message:
SNDPGMMSG MSGID(CPF9898) +
          MSGF(QSYS/QCPFMSG) +
          MSGDTA('Output queue' |> +
                 &outqlib |< +
                 '/' || +
                 &outq |> +
                 'not found') +
          TOPGMQ(*PRV) MSGTYPE(*DIAG)

In this example, I have concatenated constants (e.g., output queue and /) and two variables (&outqlib and &outq) to construct the diagnostic message 'Output queue &outqlib/&outq not found.' The current program sends this message to the calling program, which can receive it from the program message queue after control returns to the calling program. As I mentioned in my discussion of the MSGTYPE parameter, you must supply a valid message ID for the MSGID keyword when you send certain message types (to review which types require a message ID, see Figure 30.4). Because this means you cannot simply use the MSG parameter to construct text for these message types, OS/400 provides a special message ID, CPF9898, to handle this particular requirement. The message text for CPF9898 -- '&1.' -- means that substitution variable &1 will supply the message text, which you can construct using the MSGDTA parameter. Notice that the message text in the preceding example is constructed in the MSGDTA parameter. When the program sends the message, the MSGDTA text becomes the message through substitution into the &1 data variable. (For a more complete explanation of message variables, see Chapter 29, 'Teaching Programs to Talk;' the Programming: Control Language Reference, SC41-0030; and the Programming: Control Language Programmer's Guide, SC41-8077.) The next example is an escape message that might follow such a diagnostic message:
SNDPGMMSG MSGID(CPF9898) +
          MSGF(QCPFMSG) +
          MSGDTA('Operations ended in error.' |> +
                 'See previously listed messages') +
          TOPGMQ(*PRV) +
          MSGTYPE(*ESCAPE)

OS/400 uses an escape message to terminate a program when it encounters an error. When a program sends an escape message, the sending program is immediately terminated, and control returns to the calling program. In the following example, the current program sends a completion message to the calling program to confirm the successful completion of a task.
SNDPGMMSG MSGID(CPF9898) +
          MSGF(QCPFMSG) +
          MSGDTA('Copy of spooled files is complete') +
          TOPGMQ(*PRV) +
          MSGTYPE(*COMP)

The following sample status message goes to the job's external message queue and tells the job's external user what progress the job is making.
SNDPGMMSG MSGID(CPF9898) +
          MSGF(QCPFMSG) +
          MSGDTA('Copy of spooled files in progress') +
          TOPGMQ(*EXT) +
          MSGTYPE(*STATUS)

When you send a status message to an interactive job's external message queue, OS/400 displays the message on the screen until another program message replaces it or until the message line on the display is cleared. Although you may be ready to send messages to another program, you have only half the picture. In Chapter 31, you will learn how programs receive and manipulate messages, and I'll give you some sample code that contains helpful messaging techniques.
Chapter 31 - Hello, Any Messages? On the AS/400, sending and receiving program messages functions much like phone mail. Within a job, each program, as well as each job, has its own 'mailbox.' One program within the job can leave a message for another program or for the job; each program or job can 'listen' to messages in its mailbox; and programs can remove old messages from the mailbox. In Chapter 30, I explained how programs can send messages to other program message queues or to the job's external message queue. In this chapter, we look at the 'listening' side of the equation -- the RCVMSG (Receive Message) and MONMSG (Monitor Message) commands.
Receiving the Right Message You can use the RCVMSG command in a CL program to receive a message from a message queue and copy the message contents and attributes into CL variables. Why would you want to do this? You may want to look for a particular message in a message queue to trigger an event on your system. Or you may want to look for messages that would normally require an operator reply, and instead, have your program supply the reply. Or you may want to log specific messages received at a message queue. Whatever the reason, the place to begin is the RCVMSG command. Figure 31.1 lists the RCVMSG command parameters. The first six parameters -- PGMQ (program queue), MSGQ (message queue), MSGTYPE (message type), MSGKEY (message key), WAIT (time to wait), and RMV (remove message) -- determine which message your program will receive from which message queue and how your program processes a message. Figure 31.2 illustrates a job message queue comprised of the job's external message queue and five program message queues. For our purposes, each message queue contains one message. Let's suppose that PGM_D is the active program and that it issues the following command: RCVMSG
Because no specific parameter values are provided, OS/400 would use the following default values for the first six parameters:
RCVMSG PGMQ(*SAME *) +
       MSGQ(*PGMQ) +
       MSGTYPE(*ANY) +
       MSGKEY(*NONE) +
       WAIT(0) +
       RMV(*YES)
The PGMQ parameter of the pre-V2R3 RCVMSG command, which consists of two values -- relationship and program -- lets you receive a message from any program queue active within the same job or from the job's external message queue (see Figure 31.3). The first value specifies the relationship between the program named in the second value and the receiving program. You can specify one of three values:
• *PRV to indicate the program is to receive the message from the message queue of the program that called the program named in the second value
• *SAME to indicate the program is to receive the message from the message queue of the named program
• *EXT to indicate the program is to receive the message from the job's external message queue
The value for the second element of the PGMQ parameter can be either the name of a program within the current program's job or the special value *, which tells OS/400 to use the current program's name. In the example above, because the PGMQ value is (*SAME *), PGM_D would receive a message from the PGM_D message queue. According to Figure 31.2, there is only one message to receive -- 'First message on PGM_D queue.' In our example, the value MSGTYPE(*ANY), combined with the value MSGKEY(*NONE), instructs the program to receive the first message of any message type found on the queue regardless of the key value (for more information about the MSGTYPE and MSGKEY parameters see 'RCVMSG and the MSGTYPE and MSGKEY Parameters,'). The value WAIT(0) in the example tells the program to wait 0 seconds for a message to arrive on the message queue. You can use the WAIT parameter to specify a length of time in seconds (0 to 9999) that RCVMSG will wait for the arrival of a message. (You can also specify *MAX, which means the program will wait indefinitely to receive a message.) If RCVMSG finds a message immediately, or before the number of seconds specified in the WAIT value elapses, RCVMSG receives the message. If RCVMSG finds no message on the queue during the WAIT period, it returns blanks or zeroed values for any return variables. The last parameter value in the sample command, RMV(*YES), tells the program to delete the message from the queue after processing the command. You can use RMV(*NO) to instruct OS/400 to leave the message on the queue after RCVMSG receives the message. OS/400 then marks the message as an 'old' message on the queue. A program can receive an old message again only by using the specific message key value to receive the message or by using the value *FIRST, *LAST, *NEXT, or *PRV for the MSGTYPE parameter. Note on the V2R3 RCVMSG Command Parameter Changes As with the SNDPGMMSG command, the RCVMSG command parameters also changed in V2R3 to accommodate ILE. For more information about the new parameter items you see for the PGMQ parameter in Figure 31.3, see Chapter 30. Because this book is at an introductory level and many of you will not use ILE until ILE RPG and/or ILE COBOL is available, I will not discuss these parameter changes in detail.
Receiving the Right Values All the remaining RCVMSG parameters listed in Figure 31.1 provide CL return variables to hold copies of the actual message data or message attributes. You normally use the RCVMSG command to retrieve the actual message text or attributes to evaluate that message and then take appropriate actions. For example, the following command:
RCVMSG MSGQ(MYMSGQ) +
       MSGTYPE(*COMP) +
       RMV(*NO) +
       MSG(&MSG) +
       MSGDTA(&MSGDTA) +
       MSGID(&MSGID) +
       SENDER(&SENDER)

retrieves the actual message text, the message data, the message identifier, and the message sender data into the return variables &MSG, &MSGDTA, &MSGID, and &SENDER, respectively. After RCVMSG is processed, the program can use these return variables. The program may receive messages looking for a particular message identifier. In this particular example, the current program might be looking for a particular completion message on a nonprogram message queue (MYMSGQ) to determine whether or not a job has completed before starting another job. Notice the SENDER parameter used in this example. When you create a return variable for the SENDER parameter, the variable must be at least 80 characters long and will return the following information:

Positions 1 through 26 identify the sending job:
   1-10  = job name
   11-20 = user name
   21-26 = job number
Positions 27 through 42 identify the sending program:
   27-38 = program name
   39-42 = statement number
Positions 43 through 55 provide the date and time stamp of the message:
   43-49 = date (Cyymmdd)
   50-55 = time (hhmmss)
Positions 56 through 69 identify the receiving program (when the message is sent to a program message queue):
   56-65 = program name
   66-69 = statement number
Positions 70 through 80 are reserved for future use.

The SENDER return variable can be extremely helpful when processing messages. For example, during the execution of certain programs, it is helpful to know the name of the calling program without having to pass this information as a parameter or hardcoding the name of the program into the current program. You can use the technique in Figure 31.4 to retrieve that information. The current program sends a message to the calling program. The current program then immediately uses RCVMSG to receive that message from the *PRV message queue. Positions 56 through 65 of the &SENDER return value contain the name of the program that received the original message; thus, you have the name of the calling program. Another RCVMSG command parameter that you will use frequently is RTNTYPE (return message type). When you use RCVMSG to receive messages with MSGTYPE(*ANY), your program can use a return variable to capture and interrogate the message type value. For instance, in the following command:
RCVMSG PGMQ(*SAME *) +
       MSGTYPE(*ANY) +
       MSG(&MSG) +
       RTNTYPE(&RTNTYPE)

the variable &RTNTYPE returns a code that provides the type of the message that RCVMSG is receiving. The possible codes that are returned are:

01 Completion
02 Diagnostic
04 Information
05 Inquiry
08 Request
10 Request with prompting
14 Notify
15 Escape
21 Reply, not checked for validity
22 Reply, checked for validity
23 Reply, message default used
24 Reply, system default used
25 Reply, from system reply list

As you can see, IBM did not choose to return the 'word' values (e.g., *ESCAPE, *DIAG, *NOTIFY) that are used with the MSGTYPE parameter on the SNDPGMMSG (Send Program Message) command but instead chose to use codes. However, when you write a CL program that must test the RTNTYPE return variable, you should avoid writing code that appears something like
IF (&rtntype = '02') DO ... ENDDO ELSE IF (&rtntype = '15') DO ... ENDDO Instead, to make your CL program easier to read and maintain, you should include a standard list of variables, such as the CL code listed in Figure 31.5, in the program. Then, you can change the code above to appear as
IF (&rtntype = &diag) DO ... ENDDO ELSE IF (&rtntype = &escape) DO ... ENDDO
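Figure 31.5 isn't reproduced here; the idea is simply a set of declarations along these lines (the names &diag, &escape, and the rest are placeholders I'm using for illustration -- the figure's own names may differ):

DCL        VAR(&COMP)   TYPE(*CHAR) LEN(2) VALUE('01')   /* completion    */
DCL        VAR(&DIAG)   TYPE(*CHAR) LEN(2) VALUE('02')   /* diagnostic    */
DCL        VAR(&INFO)   TYPE(*CHAR) LEN(2) VALUE('04')   /* informational */
DCL        VAR(&INQ)    TYPE(*CHAR) LEN(2) VALUE('05')   /* inquiry       */
DCL        VAR(&RQS)    TYPE(*CHAR) LEN(2) VALUE('08')   /* request       */
DCL        VAR(&NOTIFY) TYPE(*CHAR) LEN(2) VALUE('14')   /* notify        */
DCL        VAR(&ESCAPE) TYPE(*CHAR) LEN(2) VALUE('15')   /* escape        */

With declarations like these copied into the program, the comparisons read as meaningful names rather than bare codes.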
Monitoring for a Message The MONMSG command is available only in a CL program. It provides a technique to trap error/exception conditions by monitoring for escape, notify, and status messages. It also provides a technique to direct the execution of the program based on the particular error conditions detected. Figure 31.6 lists the MONMSG command parameters. You can use the MSGID parameter to name from one to 50 specific or generic message identifiers for which the command will monitor. A specific message identifier is a message ID that represents only one message, such as CPF9802, which is the message ID for the message 'Not authorized to object &2.' A generic message is a message ID that represents a group of messages, such as CPF9800, which includes all messages in the CPF9801 through CPF9899 range. Thus, the command
MONMSG CPF9802 + EXEC(GOTO ERROR) monitors for the specific message CPF9802, whereas the command
MONMSG CPF9800 + EXEC(GOTO ERROR) monitors for all escape, notify, and status messages in the CPF9801 through CPF9899 range. The second parameter on the MONMSG command is the CMPDTA parameter. You have the option of using this parameter to specify comparison data that will be used to check against the message data of the message trapped by the MONMSG command. If the message data matches the comparison data (actually only the first 28 positions are compared), the MONMSG command is successful and the action specified by the EXEC parameter is taken. For example, the command
MONMSG CPF9802 CMPDTA('MAINMENU') EXEC(DO) monitors for the CPF9802 message identifier, but only executes the command found in the EXEC parameter if the CMPDTA value 'MAINMENU' matches the first eight positions of the actual message data of the trapped CPF9802 message. The EXEC parameter lets you specify a CL command that is processed when the MONMSG traps a valid message. If no EXEC value is found, the program simply continues with the next statement found after the MONMSG command. You can use the MONMSG command to monitor for messages that might occur during the execution of a single command. This form of MONMSG use is called a command-level message monitor. It is placed immediately after the CL command that might generate the message and might appear as
CHKOBJ &OBJLIB/&OBJ &OBJTYPE
MONMSG CPF9801 EXEC(GOTO NOTFOUND)
MONMSG CPF9802 EXEC(GOTO NOTAUTH)

The MONMSG commands here monitor only for messages that might occur during the execution of the CHKOBJ command. You should use this implementation to anticipate error conditions in your programs. When a command-level MONMSG traps a message, you can then take the appropriate action in the program to continue or end processing. For example, you might code the following:
DLTF   QTEMP/WORKF
MONMSG CPF2105

to monitor for the CPF2105 'File not found' message. In this example, if the CPF2105 error is found, the program simply continues processing as if no error occurred. That may be appropriate for some programs. Now, examine the following code:
CHKOBJ QTEMP/WORK *FILE
MONMSG CPF9801 EXEC(DO)
   CRTPF FILE(QTEMP/WORK) RCDLEN(80)
ENDDO
CLRPFM QTEMP/WORK
This code uses the MONMSG command to determine whether or not a particular file exists. If the file does not exist, the program uses the CRTPF (Create Physical File) command to create the file. The program then uses the CLRPFM (Clear Physical File Member) command to clear the existing file (if the program just created the new file, the member will already be empty). In addition to using the command-level message monitor to plan for errors from specific commands, you can use another form of MONMSG to catch other errors that might occur. This form of MONMSG use is called a program-level message monitor, and you must position it immediately after the last declare statement and before any other CL commands. Figure 31.7 illustrates the placement of a program-level message monitor.
When you implement a program-level message monitor, I recommend that you use the message identifier CPF9999 instead of the widely used CPF0000. Using CPF9999 provides two important advantages over CPF0000. First, CPF9999 catches some messages that CPF0000 will not catch because CPF9999 is the 'Function Check' error, which occurs only after some other program error, including errors triggered by CPFxxxx escape messages, MCHxxxx escape messages (machine errors), and escape messages from other message identifier groups. CPF0000 only monitors for actual CPFxxxx messages. Second, the CPF9999 'Function Check' message provides the actual failing statement number, which is not available from the CPFxxxx error message. Specifying the CPF9999 message ID as the program-level message monitor makes this additional information available.
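Figure 31.7 isn't reproduced here, but the placement looks roughly like the following sketch. The &msg_flag variable and the value '1' used to flag an error are illustrative; the label GLOBAL_ERR comes from the Figure 31.8 walkthrough below.

PGM
DCL        VAR(&msg_flag) TYPE(*CHAR) LEN(1) VALUE('0')
/* ...any other DCL statements, including the standard list above...  */
/* Program-level monitor: it must follow the last DCL and precede     */
/* every other command. CPF9999 traps the function check that follows */
/* any unmonitored escape message (CPF, MCH, or other).               */
MONMSG     MSGID(CPF9999) EXEC(GOTO CMDLBL(GLOBAL_ERR))
/* ...body of the program... */
GLOBAL_ERR: CHGVAR     VAR(&msg_flag) VALUE('1')  /* remember that error handling has begun */
/* ...cleanup and message resending, as described in the walkthrough below... */
ENDPGM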
Working with Examples Figure 31.8 is a portion of a CL program that provides several examples of program message processing to help you tie together the information I've presented here and in Chapter 30. The program contains a standard list of DCL statements for defining variables used in normal message processing. You may want to place these variables in a source member that you can copy into CL programs as needed. (Remember, you have to use your editor to do the copying because CL has no /COPY equivalent.)

The program-level message monitor (A in Figure 31.8) is coded to handle any unexpected errors. If an unexpected error occurs during program execution, this MONMSG causes execution to continue at the label GLOBAL_ERR. At GLOBAL_ERR, the program first prevents an infinite loop by testing to determine whether the program has already initiated the error-handling process. An infinite loop might occur when an unexpected error occurs during the error-handling process that the program-level MONMSG controls. The &msg_flag variable controls the overall message process. The program sets &msg_flag to its error value and continues at label CLEAN_UP. Your programs should have a mechanism for cleaning up any temporary objects, whether the program ends normally or abnormally with an error.

After processing statements at CLEAN_UP, the program continues at label RSND_BGN. If &msg_flag indicates that an error condition was found, the program sends each message on the current program message queue to the calling program (which in turn might continue to send the messages back in the program stack to the command processor or some other program that either ends abnormally or displays the messages to the user who requested the function). Notice that RCVMSG is used to receive each message from the program queue (D). The MONMSG CPF0000 is used here to catch any error that might occur during the RCVMSG command process and immediately go to the end of the program without attempting to receive any other messages. As the program receives each message, the return variable &RTNTYPE is tested, and only the messages that have a &RTNTYPE of '02' (diagnostic) or '15' (escape) are processed. The SNDPGMMSG command sends each processed message to the calling program message queue as a *DIAG message. Finally, at the RSND_END label, the program sends one generic escape message 'Operation ended in error ....' to the calling program. That escape message terminates the current program and returns control to the calling program.

The sample code in Figure 31.8 contains several examples of command-level message monitors. The first example is the MONMSG CPF9801 that follows the CHKOBJ command (B). If the CPF9801 'Object not found' message is trapped, the program first removes this message by using RCVMSG with RMV(*YES), and then sends a more meaningful message to the program queue using the SNDPGMMSG command. Notice that the value for the TOPGMQ parameter on the SNDPGMMSG command is *SAME to direct the message to the current program queue. The program then sets &msg_flag to its error value, and GOTO CLEAN_UP sends control of the program to the CLEAN_UP label, where cleanup and then error message processing occurs.

Another example of the command-level message monitor is the MONMSG CPF0864 that appears immediately after the RCVF statement (C). If the MONMSG traps the CPF0864 'End of file' message, the program removes this message from the current program message queue using RCVMSG with RMV(*YES).
Because the 'End of file' message is expected and not an error, it is appropriate to remove that message from the program queue to prevent confusion in debugging any errors. Next the program uses the GOTO RCD_END statement to pass control of the program to the RCD_END label where the program sends a normal completion message to the calling program message queue.
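Figure 31.8 itself isn't reproduced here, but the resend loop described above might look roughly like the following sketch. The labels RSND_BGN and RSND_END come from the description; RSND_NXT, the variable lengths, the '1' error value in &msg_flag, and the use of the general-purpose message CPF9898 to resend only the first-level message text (the book's program resends each message's own ID and data) are my simplifications.

/* Assumed declarations: &msg_flag *CHAR 1, &msgid *CHAR 7,        */
/* &rtntype *CHAR 2, &msgtxt *CHAR 132                             */
RSND_BGN:   IF         COND(&msg_flag *NE '1') THEN(RETURN)
RSND_NXT:   RCVMSG     MSGTYPE(*FIRST) RMV(*YES) MSG(&msgtxt) +
                         MSGID(&msgid) RTNTYPE(&rtntype)
            MONMSG     MSGID(CPF0000) EXEC(GOTO CMDLBL(RSND_END))
            IF         COND(&rtntype *EQ ' ') THEN(GOTO CMDLBL(RSND_END))
            IF         COND((&rtntype *EQ '02') *OR (&rtntype *EQ '15')) +
                         THEN(SNDPGMMSG MSGID(CPF9898) MSGF(QCPFMSG) +
                         MSGDTA(&msgtxt) TOPGMQ(*PRV) MSGTYPE(*DIAG))
            GOTO       CMDLBL(RSND_NXT)
RSND_END:   SNDPGMMSG  MSGID(CPF9898) MSGF(QCPFMSG) +
                         MSGDTA('Operation ended in error') +
                         TOPGMQ(*PRV) MSGTYPE(*ESCAPE)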
What Else Can You Do with Messages? Now that you understand the mechanics, you may want to know what else you can do with messages. Listed below are three possible solutions using messages:
• Create a message break-handling program for your message queue. See Chapter 8.
• Create a request message processor (a command processor like QCMD). See Chapter 8 of the Control Language Programmer's Guide (SC41-8077).
• Use the SNDPGMMSG and RCVMSG commands to send and receive data strings between programs. For instance, you might send a string of order data to a message queue where the order print program uses RCVMSG to receive and print the order data. This avoids having to submit a job or call a program. The order print program simply waits for messages to arrive on the queue. This functions much like data queue processing, but is simplified because you can display message information (you cannot display a data queue without writing a special program to perform that task). A minimal sketch of this technique follows the list.
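Here is a sketch of the third idea. The queue name ORDLIB/ORDMSGQ, the 100-byte order record, and the label name are examples only; the queue would be created once with CRTMSGQ.

/* Order entry program: put an order record on the queue.          */
DCL        VAR(&orddta) TYPE(*CHAR) LEN(100)
/* ...build &orddta from the order fields... */
SNDPGMMSG  MSG(&orddta) TOMSGQ(ORDLIB/ORDMSGQ)

/* Order print program: wait for order records to arrive.          */
DCL        VAR(&orddta) TYPE(*CHAR) LEN(100)
NEXT_ORD:   RCVMSG     MSGQ(ORDLIB/ORDMSGQ) WAIT(*MAX) RMV(*YES) +
                         MSG(&orddta)
            /* ...format and print the order here... */
            GOTO       CMDLBL(NEXT_ORD)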
These are only examples of how you might use messages to perform tasks on the system. With the mechanics under your belt, it's time for you to explore how you can use messages to enhance your own applications.
Chapter 32 - OS/400 Commands OS/400 commands -- friend or foe? That's the big question for anyone new to the AS/400. It is certainly understandable to look at the IBM-supplied system commands and wonder just how many there are, why so many are needed, and how you are ever going to remember them all. You might easily decide that the procedures you've already memorized on another system are certainly better and fail to see why IBM would think the OS/400 commands could possibly be helpful! Well, after recently trying to navigate my way around an HP3000, I can empathize with you. I kept thinking, 'Why didn't Hewlett-Packard think to provide the WRKSPLF (Work with Spooled Files) command, or why not say DSPFD (Display File Description) instead of this 'LISTF ,2' stuff?' Anyway, after stumbling around for days, calling everyone I could think of, and scouring the books for information, I finally managed to memorize a few of the needed commands and complete the 'short' job I had set out to do. So if you get frustrated when you find the procedures you are accustomed to have been twisted into something that seems foreign, remember that being uncomfortable doesn't mean you're incompetent; it only makes you feel that way! With that said, and realizing that many of you need to master the AS/400 sooner or later, let me introduce OS/400 commands and give you a few helpful tips and suggestions for customizing system commands to make them seem more friendly.
Commands: The Heart of the System The command is at the heart of the AS/400 operating system. Whether you are working with an output queue, creating an object, displaying messages, or creating a subsystem, you are using an OS/400 command. When you select an option from an OS/400 menu or from a list panel display, you are executing a command.

Let me give you a couple of examples. Figure 32.1 shows the AS/400 User Tasks menu. Next to each menu option I have added the command the system executes when you select that option. You can simply key in the command to achieve the same results. In Figure 32.2 you see the familiar Work with Output Queue display. Below the screen format, I have listed the available options and the command the system executes for each. For instance, if you enter a '6' next to a spooled file entry on the list, the system releases that spooled file. If you are familiar with the system commands, you can type in RLSSPLF (Release Spooled File), prompt it, and fill in the appropriate parameters to accomplish the same thing.

Obviously, typing in the RLSSPLF command is much more time consuming than entering a '6' in the appropriate blank. However, this example is not typical of all OS/400 commands. In many cases, it's quicker and easier to key in the command than it is to use the menus. To know which technique to use, it's helpful to have a firm grasp of how commands are organized and how they can be used, and to know which commands are worth learning.

Before I continue with this chapter, let me say something about how system commands are organized and named. OS/400 commands consist basically of a verb and a noun (e.g., CRTOUTQ -- Create Output Queue), and more than two-thirds of the existing commands are constructed using just 10 verbs (CRT, CHG, DLT, ADD, RMV, DSP, WRK, CPY, STR, and END). This is good news if you are worried about remembering all the commands. I recommend that you first familiarize yourself with the various objects that can exist on the system. Once you understand most of those objects, you can quickly figure out what verbs can operate upon each object type. For example, you can't delete a job, but you can cancel one. For help identifying and using OS/400 commands, try using one or more of the following resources:
• On any command line, press F4 (Prompt). OS/400 will present you with a menu of the major command groups. You can choose menu options to find and select the command you need.
• On any command line, type 'GO CMDxxx', where you fill in the xxx with either a verb or an object (e.g., GO CMDPTF for PTF-related commands, GO CMDWRK for 'work with' commands). OS/400 will present you with a list of those commands.
• Type a command on the command line and press F1 (Help). OS/400 offers online help for all CL commands.
• Execute the SLTCMD (Select Command) command to find commands using a generic name (e.g., WRK*, STR*). If you are on V2R3 (or beyond), you can enter a generic name directly on the command line (e.g., WRK*, STR*, CRTDEV*) and press Enter. OS/400 will present you with a list of commands that begin with the same letters you specify before the asterisk.
• If you are on V2R3 (or beyond), use InfoSeeker. You access InfoSeeker by pressing F11 on any Help display panel, by typing STRSCHIDX (Start Search Index) on a command line and pressing Enter, or by selecting option 20 from the Information Assistant menu (to get this menu, type 'GO INFO'). InfoSeeker helps you find further command help and related information.
• Refer to the IBM reference guide Programming: Reference Summary (SX41-0028).
Tips for Entering Commands By putting a little time and effort into learning a few phrases in this new language, you'll be comfortable and productive with day-to-day tasks on the AS/400. Once you've become acquainted with some of the most frequently used commands, it's often easier to key them in on the system command line than it is to go through the menus. Following these tips for entering commands will help ensure correct syntax and get you up to speed:
• Be sure to enter values for required parameters.
• Specify values for positional parameters unless you want to use the default values.
• When entering parameter values positionally (i.e., without keywords), key them in the same order as they appear in the command syntax diagram. If you exceed the number of allowed positional parameters, an error message is issued. The number of allowed positional parameters is designated in the syntax diagram by a 'P' in a box. If the symbol does not appear in the syntax diagram, you can code all parameters positionally.
Keeping the above guidelines in mind, let's practice a few commands. First, consider the DSPOBJD (Display Object Description) command. Type 'DSPOBJD' and press F4 to prompt the command. In the resulting screen (Figure 32.3), the line next to 'Object' will be in bold, indicating that Object is a required parameter. Now press F11, and you will see the screen shown in Figure 32.4. Notice that the keywords appear beside each field (e.g., OBJ for object name and OBJTYPE for object type). The OBJ keyword requires a qualified value, which means that you must supply the name of the library in which the object is found. The default value *LIBL indicates that if you don't enter a specific library name, the system will search for the object in the job's library list. Notice that the keyword OUTPUT is not in bold, showing that it is an optional parameter. The default value for OUTPUT is an asterisk (*), which instructs the system to display the results of the command on the screen. Now you can key in the values QGPL and QSYS for the object name and the library name, respectively, and the value *LIB for the OBJTYPE parameter. Then press Enter, and the screen displays the object description for library QGPL, which exists in library QSYS. Now, using only the command line, type in the same command as follows:
DSPOBJD QSYS/QGPL *LIB or
DSPOBJD QGPL *LIB Either command meets the syntax requirements. Keywords aren't needed because all the parameters used are positional, and the order of the values is correct. Suppose you type
DSPOBJD QGPL *LIB *FULL Will this work? Sure. In this example, you have entered, in the correct order, values for the two required parameters and the value (*FULL) for the optional, positional parameter (DETAIL). What if you want to direct the output to the printer, and you type
DSPOBJD QGPL *LIB *FULL *PRINT
Will this work? No! You have to use the keyword (OUTPUT) in addition to the value (*PRINT), because OUTPUT is beyond the positional coding limit. Let's say you skip *FULL and just enter
DSPOBJD QGPL *LIB OUTPUT(*PRINT) Because you haven't specified a value for the positional parameter DETAIL, you would get the description specified by the default value (*BASIC). Most of the time you will probably prompt commands, but learning how to enter a few frequently used commands with minimal keystrokes can save you time. For example, which would be faster: to prompt WRKOUTQ just to enter the output queue name, or to enter 'WRKOUTQ outq_name'? Should you prompt the WRKJOBQ (Work with Job Queue) command just to enter the job queue name, or should you simply enter 'WRKJOBQ job_name'? In both cases you will save yourself a step (or more) if you simply enter the command.
Customizing Commands Taking our discussion one step further, let's explore how you might create friendlier versions of certain useful system commands. Why would you want to? Well, some (translation: 'many') IBM-supplied commands are long, requiring multiple keystrokes. You might want to shorten the commands you use most often. For example, you could shorten the command WRKSBMJOB (Work with Submitted Jobs) to WSJ or JOBS. The command WRKOUTQ could become WO, and the command DSPMSG (Display Messages) could become MSG. How can you accomplish this without renaming the actual IBM commands or having to create your own command to execute the real system command? Easy! Just use the CRTDUPOBJ (Create Duplicate Object) command. Before trying this command, take a few minutes to look over the CRTDUPOBJ command description in Volume 3 of IBM's Programming: Control Language Reference manual. Then create a library to hold all your new customized versions of IBM-supplied commands. Don't place the new command in library QSYS or any other system-supplied library: New releases of OS/400 replace these libraries, and your modified command will be lost. You should name your new library USRCMDLIB, or CMDLIB, or anything that describes the purpose of the library; and you should include the new library in the library list of those who will use your modified commands. When the destination library is ready, use the CRTDUPOBJ command to copy the commands you want to customize into the new library. CRTDUPOBJ lets you duplicate individual objects; or you can duplicate objects generically (i.e., by an initial character string common to a group of objects, followed by an asterisk), or all objects in a particular library, or multiple object types. To rename the WRKOUTQ command, enter
CRTDUPOBJ WRKOUTQ QSYS *CMD USRCMDLIB WO In this example, WRKOUTQ, QSYS, and *CMD are values for required parameters that specify the object, the originating library, and the object type, respectively. If you prompt for the parameters, enter
CRTDUPOBJ OBJ(WRKOUTQ) FROMLIB(QSYS) OBJTYPE(*CMD) + TOLIB(USRCMDLIB) NEWOBJ(WO) Either of these commands places the new command (WO) into library USRCMDLIB. When you duplicate an object, all the object's attributes are duplicated. This means that the command processing program for WO is the same as for WRKOUTQ, so the new command functions just the same as the IBM-supplied command.
Modifying Default Values The final touch for tailoring commands is to modify certain parameter default values when you know that you will normally use different standard values for those parameters. You may want to change default values for the CRTxxx (Create) commands especially. For example, for every physical file created, you may want to specify the SIZE parameter as (1000 1000 999). Or you may want the SHARE parameter to contain the value *YES rather than the IBM-supplied default *NO. You can change these defaults by using one of two methods. The first method requires that everyone who uses a command remember to specify the desired values instead of the defaults for certain parameters. Although you can place such requirements in a data processing handbook or a standards guide, this method relies on your staff to either remember the substitute values or look up the values each time they need to key them in.
The other method for modifying the default values of IBM-supplied commands is to use the CHGCMDDFT (Change Command Default) command. Take a few minutes to read the command description in IBM's Programming: Control Language Reference, Volume 2. CHGCMDDFT simply modifies the default values that will be used when the command is processed. For instance, to make the changes mentioned above for the CRTPF (Create Physical File) command, you would type
CHGCMDDFT CMD(CRTPF) NEWDFT('SIZE(1000 1000 999) SHARE(*YES)') You could use CHGCMDDFT to enhance the WO command you created earlier. Suppose that you usually use the WO command to work with your own output queue. Why not change the default value of *ALL for the OUTQ parameter to be the name of your own output queue? Then, rather than having to type
WO your_outq you can simply type 'WO' (of course, this personalized command should only exist in your library list). If you want to work with another output queue, you can still type in the queue name to override the default value. See? Commands can be fun!

To modify system command parameter defaults using the CHGCMDDFT command, you should duplicate the command into a different library. Then change the command defaults and, if you have retained the CL command names rather than renaming the commands, list the library before QSYS on the system library list.

When you use the CHGCMDDFT or CRTDUPOBJ command to customize CL commands, you should create a CL source program that performs those changes. Then whenever a new release of OS/400 is installed, you should run the CL program, thus duplicating or modifying the new version of the system commands. The system commands on the new release might have new parameters, different command processing programs, or new default values. (A sketch of such a program appears at the end of this chapter.)

Using CHGCMDDFT is an effective way to control standards. However, you should be cautious when using this command because it affects all uses of the changed command (e.g., a vendor-supplied software package might be affected by a change you make). You might want to use a good documentation package to find all uses of specific commands and to evaluate the risk of changing certain default values.

You can modify your user profile attribute USROPT to include the value *CLKWD if you want the CL keywords to be displayed automatically when you prompt commands (rather than having to press F11 to see them). To modify this user profile attribute, someone with the proper authority should enter the CHGUSRPRF (Change User Profile) command as follows:
CHGUSRPRF user_profile USROPT(*CLKWD) For more information about the USROPT keyword, see IBM's Programming: Control Language Reference. The AS/400 provides a function-rich command structure that lets you maneuver through the operations of your system. I don't happen to believe that everyone should be able to enter every command without prompting or using any keywords. But I am convinced that having a good working knowledge of the available OS/400 commands not only will help you save time, but also will make you more productive on the system.
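Before leaving commands behind, here is a minimal sketch of the kind of 'reapply customizations' CL program suggested above, to be run after each release upgrade. USRCMDLIB, WO, and the CRTPF defaults are the examples used in this chapter; MYOUTQ stands in for the name of your own output queue.

PGM
/* Re-create the customized command copies after a new release of  */
/* OS/400 is installed.                                             */
DLTCMD     CMD(USRCMDLIB/WO)         /* remove old copies; ignore   */
MONMSG     MSGID(CPF0000)            /* errors (e.g., not found)    */
DLTCMD     CMD(USRCMDLIB/CRTPF)
MONMSG     MSGID(CPF0000)
CRTDUPOBJ  OBJ(WRKOUTQ) FROMLIB(QSYS) OBJTYPE(*CMD) +
             TOLIB(USRCMDLIB) NEWOBJ(WO)
CHGCMDDFT  CMD(USRCMDLIB/WO) NEWDFT('OUTQ(MYOUTQ)')
CRTDUPOBJ  OBJ(CRTPF) FROMLIB(QSYS) OBJTYPE(*CMD) TOLIB(USRCMDLIB)
CHGCMDDFT  CMD(USRCMDLIB/CRTPF) NEWDFT('SIZE(1000 1000 999) SHARE(*YES)')
ENDPGM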
Chapter 33 - OS/400 Data Areas I'd like to have a dollar for every time I've put something in a special place only to forget where I put it. Someone living in my old house in Florida will someday find that special outlet adaptor I never used. He will probably also find the casters I removed from the baby bed, a stash of new golf balls, and several little metal doohickeys I removed from the back of my PS/2 when I installed feature cards. I hope he gets some use out of them! This tendency to misplace things also finds its way into the world of computer automation; but fortunately for those of us who need a place to keep some essential chunk of information, OS/400 provides a simple solution. If you ever write applications that use data such as the next available order number, the current job step in progress for a long-running job, a software version identification number, or a serial number, you should know about OS/400 data areas.
A data area is an AS/400 object you can create to store information of a limited size. A data area exists independently of programs and files and therefore can be created and deleted independently of any other objects on the system. Data areas typically are used to store some incremental number. For instance, a payroll application might use a data area to store the next available check number. Each time the application writes or records a check, it can get the next check number from the data area and then update the data area to reflect the use of that check number. Another use for data areas is to emulate the S/36's IF-ACTIVE feature, which lets you check a program's execution status. You can create and name a data area for each program whose status you need to know. For example, if PRP101 is a program to be checked, you can create a data area named PRP101 in a user library. Then you can modify program PRP101 to acquire a shared update (*SHRUPD) lock on the data area using the ALCOBJ (Allocate Object) command. An application needing to check the execution status of program PRP101 can simply try to acquire an exclusive (*EXCL) lock on the data area. If the allocation attempt fails, it means the data area object is currently allocated by program PRP101, indicating that the program is active.
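Here is a minimal sketch of that IF-ACTIVE technique in CL. The library name MYLIB, the WAIT values, and the use of CPF1002 ('Cannot allocate object') as the monitored escape message are my assumptions.

/* Inside PRP101: hold a shared-update lock while the program runs */
ALCOBJ     OBJ((MYLIB/PRP101 *DTAARA *SHRUPD)) WAIT(0)
/* ...program logic... */
DLCOBJ     OBJ((MYLIB/PRP101 *DTAARA *SHRUPD))

/* In the checking program: an exclusive lock succeeds only when   */
/* PRP101 is not active.                                            */
ALCOBJ     OBJ((MYLIB/PRP101 *DTAARA *EXCL)) WAIT(0)
MONMSG     MSGID(CPF1002) EXEC(DO)
   /* PRP101 is active -- handle accordingly */
   RETURN
ENDDO
DLCOBJ     OBJ((MYLIB/PRP101 *DTAARA *EXCL)) /* not active; release the test lock */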
Creating a Data Area The best way to acquaint you with data areas is to walk you through the process of creating one. To create a data area object named MYDTAARA in library QGPL and initialize it with the value 'ABCDEFGHIJ', you would type the following command:
CRTDTAARA DTAARA(QGPL/MYDTAARA) TYPE(*CHAR) LEN(10) +
            VALUE('ABCDEFGHIJ') +
            TEXT('Data Area to store ABCDEFGHIJ')
This data area object can contain 10 characters of data and can be referenced by any user or program authorized to use library QGPL. Notice that you can specifically identify the data area with the TEXT parameter on the CRTDTAARA (Create Data Area) command. Just as you may forget where you've placed a special object in your home, you can easily forget what you've created a data area for. Wise use of the TEXT parameter can help you effectively document data area objects. Besides CRTDTAARA, other OS/400 commands associated with data areas are the DSPDTAARA (Display Data Area), CHGDTAARA (Change Data Area), RTVDTAARA (Retrieve Data Area), and DLTDTAARA (Delete Data Area) commands. Suppose you wanted to display the data area we just created. You would execute the following command:
DSPDTAARA QGPL/MYDTAARA The system would then display the description and contents of the data area on your workstation. You can use the CHGDTAARA command interactively or from within a program. The DTAARA parameter on this command lets you either replace the contents of a data area or change only a portion (substring) of the data area. For example, the command
CHGDTAARA DTAARA(QGPL/MYDTAARA) VALUE('123') would replace the entire contents of the data area. If the value being placed into the data area is shorter than the data area, the value is padded on the right with blanks. Therefore, the new value of the data area would be '123' followed by seven blanks. However, the command
CHGDTAARA DTAARA(QGPL/MYDTAARA (1 3)) VALUE('123') replaces only the first three positions of the data area with the value '123'. Thus, the original value of MYDTAARA would be modified to '123DEFGHIJ'. The RTVDTAARA command provides a simple way for CL programs to retrieve the data area value. Because the command provides return variables, it can be executed only from within a CL program. Here again, the DTAARA parameter lets you reference all or only a portion of the data area. The CL program statement
RTVDTAARA DTAARA(QGPL/MYDTAARA) RTNVAR(&ALL)
would retrieve the entire contents of MYDTAARA ('ABCDEFGHIJ') into return variable &ALL, starting in the left-most position of the return variable. Now consider the following CL program statement:
RTVDTAARA DTAARA(QGPL/MYDTAARA (4 2)) RTNVAR(&JUST2) This RTVDTAARA command retrieves from MYDTAARA a substring of two characters, starting with position 4. The variable &JUST2 would return the value 'DE'. One performance tip to remember when using the RTVDTAARA command is that it is more efficient to retrieve the entire data area into a single CL variable and use several CHGVAR (Change Variable) commands to pull substrings from that variable than it is to execute several RTVDTAARA commands to retrieve multiple substrings. Every RTVDTAARA command must access the data area -- a time-consuming operation. To delete this data area from your system, type the command
DLTDTAARA QGPL/MYDTAARA You can also use high-level language programs to retrieve and modify data area values. Figure 33.1 provides sample RPG/400 code to retrieve information from a data area named INPUT. In this example, the program implicitly reads and locks the data area when the program is initialized and then implicitly writes and unlocks it when the program ends. During program execution, the INPUT data area data structure defines fields internally to the program. RPG's DEFN, IN, and OUT opcodes provide a method for explicitly retrieving and updating a data area and for explicitly controlling the lock status of a data area object. For more information about how to use these opcodes with data areas, see IBM's AS/400 Languages: RPG/400 Reference (SC09-1349) and AS/400 Languages: RPG/400 User's Guide (SC09-1348).
Local Data Areas A local data area (LDA) is a special kind of data area automatically created for each job on the system. The LDA is a character-type data area 1,024 characters long and initialized with blanks. As long as the job is running, the LDA is associated with that job. When one job submits another job, the system creates an LDA for the submitted job and copies the contents of the submitting job's LDA into it. Thus, you can pass a data string from a given job to every job it submits. Unlike the data area object discussed earlier, the LDA is dependent on a particular job; it cannot be displayed or modified by any other job on the system. You cannot manually create or delete an LDA, nor does it have an associated library (not even QTEMP). The LDA is simply maintained as part of the job's process access group (a group of internal objects that is associated with a job and that holds essential information about that job). The RTVDTAARA statements in Figure 33.2 retrieve two substrings from the LDA, putting the first into variable &FIELD1 and the second into &FIELD2. The CHGDTAARA command replaces positions 101 through 150 of the LDA with the value of variable &NEWVAL. You can perform any number of retrievals and changes on the LDA; however, keep in mind that only one copy of the LDA exists for each job.
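Figure 33.2 itself isn't reproduced here, but a minimal CL sketch of the substring operations just described follows. The 1-10 and 11-30 positions and the lengths of &FIELD1 and &FIELD2 are assumptions; the change to positions 101 through 150 comes from the text.

DCL        VAR(&FIELD1) TYPE(*CHAR) LEN(10)
DCL        VAR(&FIELD2) TYPE(*CHAR) LEN(20)
DCL        VAR(&NEWVAL) TYPE(*CHAR) LEN(50)
RTVDTAARA  DTAARA(*LDA (1 10))   RTNVAR(&FIELD1)     /* positions 1-10      */
RTVDTAARA  DTAARA(*LDA (11 20))  RTNVAR(&FIELD2)     /* positions 11-30     */
CHGVAR     VAR(&NEWVAL) VALUE('...')                 /* build the new value */
CHGDTAARA  DTAARA(*LDA (101 50)) VALUE(&NEWVAL)      /* positions 101-150   */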
The LDA is often used to store static information that must be available to many different programs executed within a job. For example, when an employee signs on to a workstation, an initial program might retrieve information relating to that employee (e.g., branch number or employee number) and put it into the LDA. Any subsequent programs the job invokes that require this information can simply retrieve it from the LDA rather than performing additional file I/O.
Group Data Areas If an interactive job becomes a group job (via the CHGGRPA (Change Group Attributes) command), the system creates an additional data area called the group data area (GDA). Similar to the LDA, the GDA is a blank-initialized character-type data area, but it is only 512 characters long. The GDA is accessible from any job in the group and is deleted when the last job in the group has ended or when the job is no longer part of a group job. You cannot create or delete a GDA (although you can modify it), and it has no associated library. Another unique limitation is that you cannot reference a substring of the GDA on the DTAARA parameter. However, you can retrieve the entire GDA and then use the CHGVAR command to reference particular portions of the data. For jobs that run as group jobs, the GDA simply provides additional temporary storage (beyond the LDA that exists for each job).

These are the basics you need to begin exploring OS/400 data areas. As you begin to think of reasons to use data areas on your system, you may want to look at data areas that already exist there. Check your libraries to see whether your software provider has supplied any data areas. If so, how are they used? You may discover that you have used OS/400 data areas all along.