Thursday, March 25, 2010
Methods for Estimating
We recommend not purchasing Production hardware before the Production user (software and business) requirement details are known. We know of no way to estimate hardware specifications for requirements that have not been quantified in some tangible way. Every infrastructure estimate rests on an explicit or implicit model of what the use cases will actually be.
Develop in Stages, Purchase in Stages
Ideally you will design your system to have three environments: Development, Test/QA and Production. We recommend that you begin by investing in a Development environment.
The Development environment is not used for QA or performance testing. It only needs to be configured to let developers do their work of capturing user business requirements. During the initial development phase, business requirements are modeled, and the batch processes required to support them start to be understood. Through the iterations of the development process, the hardware configurations required to support processing begin to take concrete shape.
In this way, a Development environment helps empirically define the requirements for a QA environment. And because QA needs to be as close to identical to Production as possible, the QA infrastructure defines what is required for Production.
This observation is accurate because of the strict demands of change management. It is not possible to predict the impact of a change in one architecture based on the impact that the change has on an architecturally different one. So QA needs to be a duplicate of Production. Opting to use a single box for Development and QA locks you in a paradox: You need to estimate your Production infrastructure specifications before you know your Production infrastructure requirements.
Unless, of course, you are comfortable faking it…
Generic Infrastructure Recommendations
Production performance issues invariably are the result of process contention for the same hardware resources. We measure, or estimate, the degree of this contention, and refer to it as concurrency. We can talk about concurrency as it applies individually to memory, processor, disk, network, etc. And the overall concurrency that any given system is able to support is determined by the throughput of the entire software and hardware infrastructure.
High-level and generic recommendations for infrastructures designed to support Essbase are:
1. Minimize processor-related variations in performance by configuring Essbase to run using dedicated resources. Logical partitions (LPARs), or virtual environments, must be controllable so that the Administrator is sure that each Essbase process has full access to the resources assigned to it.
2. Minimize storage I/O conflicts between Essbase and other 3rd party applications (e.g. Essbase vs. RDBMS), between different Essbase applications, and intra-cube I/O contention (e.g. multiple business rules, CALCPARALLEL and queries). In demanding Essbase environments, it is good practice to ensure that Essbase uses dedicated devices for storage. In extremely demanding processing situations, an individual cube might need to be provided dedicated storage.
3. Minimize memory conflicts by ensuring that processes have sufficient RAM resources to complete without the OS having to use virtual memory space.
Generic recommendations for methodologies to determine system requirements:
1. Performance engineering software simulations should be used to determine optimal hardware settings to support the Essbase server.
2. Performance engineering software simulations should be used to determine the optimal Essbase settings to support loading, exporting, querying and aggregating.
When attempting to estimate requirements to support Essbase Server, concurrency ultimately means the simultaneous request for processor resources. Concurrency analysis can, of course, be applied to any infrastructure resource. It is not too inaccurate to think of bottlenecks as the place where the concurrency rubber hits the infrastructure road.
When not able to perform a concurrency analysis using accurate simulations of end-user activities, sizing must be performed using estimates for concurrency.
Concurrency and Essbase
The classic way that concurrency is estimated derives from the total number of users of the system. This is probably because it is easy, and the number of users for a system is available very early in the application lifecycle. It is a perfectly legitimate way to begin to think about concurrency.
The number of total users is factored by a driver to estimate the number of connected users, and the connected user count is factored by another (perhaps the same?) driver to estimate concurrency. Ten percent (10%) appears to be the de facto standard used for this driver. So, for example, 3,000 named users represents 300 connected users and, ultimately, an estimated concurrency of 30.
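As a sketch, the classic estimate is just two multiplications. The function name, and the reuse of a single 10% driver at both steps, are our own simplifications of the rule of thumb described above:

```python
def estimate_concurrency(named_users, driver=0.10):
    """Classic user-count sizing heuristic (the 10% driver is a rule
    of thumb, not a measured value): named users -> connected users,
    then connected users -> concurrent requests."""
    connected = round(named_users * driver)
    concurrent = round(connected * driver)
    return connected, concurrent

print(estimate_concurrency(3000))  # (300, 30), as in the example above
```

Nothing in this sketch knows anything about Essbase; that is precisely its weakness, as the following sections argue.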
How accurate is this for helping to size Essbase infrastructures? Recalling the above definition of concurrency as the “simultaneous request for processor”, the concurrency estimate of 30 turns out to be very large indeed.
In terms of Essbase, concurrency is best understood within the context of peak usage factored by response time. This is important because Essbase performance is invariably determined by a combination of the three characteristics of simultaneous requests, peak usage and response time.
Peak usage specifies concurrency across a period of time. We are looking to understand, in precise detail, the number of concurrent requests being made of the processors, and the length of time these requests are expected to persist.
If we take our figure of 30 simultaneous requests, and add to that the duration of 4 hours, we end up with something like “peak usage for Essbase is that, at any point in time during our peak 4 hour period, 30 simultaneous requests are being made for processor”. When we factor the average response time for these requests, we begin to develop a conceptual framework for defining an infrastructure to support concurrency. How does response time factor into this?
For the sake of discussion, let’s (inaccurately) define “simultaneous” as a one-second interval. We write “inaccurately” because, strictly speaking, concurrent requests are whatever is in flight at the instant we snapshot the Essbase Server. Using one second as the interval is useful because it simplifies an accurate estimation of concurrency.
In our example, peak usage now means “every second for four continuous hours, 30 new requests for processor are being made by users”. If the average response time is on the order of 5 seconds (and five seconds is a very optimistic response time given current customer uses of Essbase), then we have a potential problem: by the time the first 30 requests have been processed, slightly less than 30 * 5 more processing requests have been generated. The result is a queue of just under 150 requests within the first 5 seconds of peak runtime. And this activity is expected to continue for four solid hours.
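A toy simulation makes the queue arithmetic above concrete. The parameter names and the one-second bookkeeping are our own simplifications, and the exact count depends on when arrivals and completions are tallied:

```python
def outstanding_after(seconds, arrivals_per_sec=30, response_time=5, capacity=30):
    """Count unfinished requests after `seconds` of peak load, when new
    requests arrive every second and each occupies a processor slot for
    `response_time` seconds."""
    in_service = []   # remaining seconds for each request on a processor
    waiting = 0
    for _ in range(seconds):
        # age in-flight requests; completed ones free their slots
        in_service = [t - 1 for t in in_service if t > 1]
        waiting += arrivals_per_sec
        # move waiting requests onto free processor slots
        started = min(capacity - len(in_service), waiting)
        waiting -= started
        in_service.extend([response_time] * started)
    return waiting + len(in_service)

print(outstanding_after(5))   # 150 unfinished requests after five seconds
print(outstanding_after(60))  # a minute in, the backlog keeps compounding
```

With 30 arrivals per second but only 30 slots held for 5 seconds each, throughput tops out at 6 requests per second, so the backlog grows by roughly 24 requests every second of the peak window.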
What kind of server can reasonably be expected to be able to perform under such a workload?
Definition of Concurrency
We broaden the definition of concurrency to be “the number of requests for processor that can occur within an average response time”. This helps delineate the relation between response time and concurrency: Concurrency varies directly with response time.
For example, we recently reviewed a customer environment for performance. The average baseline query response time (~25 seconds) was measured first in isolation and then under steadily increasing workload. The average response time jumped from 25 to over 450 seconds before the test had to be stopped, slightly less than 30 minutes in. The connected user count was only ~5% of the anticipated connected user volume.
How would this scenario have been described if this were an actual Production run rather than a simulation? “We saw more or less acceptable performance for a few minutes this morning before the entire system became totally, completely unresponsive…”
Hopefully that comment isn’t familiar to very many readers. Sadly, it is possible for an improperly configured Essbase Server environment to overwhelm powerful server infrastructures.
When thinking about defining a server infrastructure for Essbase, it is necessary to conceive concurrency as having two characteristics:
1. requests for processor
2. average response time
Expressing the relation more formally, we are even tempted to construct a formula for computing concurrency:
Concurrency = Requests * Response_Time
At the level of understanding desired here, processor concurrency refers to the number of requests that occur within the average peak response time. This explains how concurrency and performance vary over time.
During non-peak periods, the number of requests for processor is low, average resulting response times are low, and so too is concurrency. During peak periods, on the other hand, when the number of requests for processor is high, both the average response time and concurrency increase.
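The formula has the same shape as Little's Law (L = λW), and it is trivial to evaluate. A sketch with assumed figures for a quiet period and for the peak period from the example above:

```python
def concurrency(requests_per_sec, avg_response_sec):
    # Concurrency = Requests * Response_Time: the number of requests
    # outstanding within one average response time (same shape as
    # Little's Law, L = lambda * W).
    return requests_per_sec * avg_response_sec

print(concurrency(2, 1))   # quiet period: low arrivals, fast responses -> 2
print(concurrency(30, 5))  # peak period from the example above -> 150
```

The multiplication captures the feedback described above: when load rises, response time rises too, so both factors grow at once.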
The worst case for customers occurs when requests climb so high that the processor capacity available per request collapses. Response times stretch so far that the entire application becomes unresponsive.
The unresponsiveness might, however, be the expected behavior:
"Because computations in a concurrent system can interact with each other while they are executing, the number of possible execution paths in the system can be extremely large...Concurrent use of shared resources can be a source of indeterminacy leading to...starvation."
When either the number of requests for processor or the average response time is under-estimated, the accuracy of the proposed infrastructure specifications is undermined.
In our final entry we will discuss Essbase processes and present methods for estimating server requirements.
Wednesday, March 24, 2010
PeopleSoft 9.1 FSCM Entity Relationship Diagrams are now available on My Oracle Support at the following links:
FMS 9.1 https://support.us.oracle.com/oip/faces/secure/km/DocumentDisplay.jspx?id=1074856.1
Monday, March 22, 2010
Oracle I/PM 11g offers a new JavaEE architecture that simplifies deployment and leverages Oracle Universal Content Management services for metadata and document storage to provide organizations with a unified repository and an enterprise class ECM platform.
For customers looking for detailed information about upgrading from earlier I/PM releases and implementing I/PM 11g, be sure to check out the Quarterly Customer Update Webcast, recorded on March 10.
Also, be sure to visit the Oracle Fusion Middleware Launch Center for videos, datasheets, and presentations about Oracle I/PM 11g.
Missed the last Quarterly Customer Update Webcast?
We discussed several product updates on the March quarterly customer Webcast, including the first phase of the Oracle Content Management 11g release. Some of the highlights include Information Rights Management (IRM) 11g and Imaging and Process Management (I/PM) 11g overviews. Additionally, we covered I/PM 11g new features, implementation and migration topics that existing customers would like to know about.
You can find quick links to all the resources mentioned on the call, as well as links to the presentation and recording details in My Oracle Support from the March 2010 Webcast Resource Links page on OTN.
* Release information
Thursday, March 18, 2010
Wednesday, March 17, 2010
My Oracle Support Speed Training
Did you know that there are short recorded training sessions available on My Oracle Support? Check out the My Oracle Support Speed Training sessions available from Note:603505.1. Topics covered include using PowerView, Quick Search, Service Request Management and more.
Tuesday, March 16, 2010
While helping out a customer with some OBIEE problems I came across this nicely done posting on Troubleshooting OBIEE : Connectivity and Server Issues over at the Rittman Mead Consulting blog.
Friday, March 12, 2010
Richard (Rick) Sawa
ACS Principal Service Delivery Engineer
We provide a high-level discussion of what is involved in assessing/sizing a server infrastructure to support Essbase. We start by briefly outlining the requirements and procedures involved with assessing an existing environment. This establishes a frame of reference for the discussion that follows on guidelines for estimating infrastructures to support Essbase when the details of end-user and batch processing requirements have yet to be defined.
There is no replacement for systematic testing to determine the specific hardware specifications required to support Essbase, no matter how hard the shoe strikes the podium. We think that everyone knows that this is true. And it’s also true that at the very beginning of a new development initiative, the proverbial cart is before the horse. How does one define hardware specifications for processing requirements that are not yet quantified? The short answer is that you can’t.
In the absence of requirements, every estimation for hardware is based on assumptions.
Essbase Server Assessments
We frame the sizing discussion by briefly presenting how we evaluate existing Essbase servers when processing requirements are fully understood. When the assessment reveals that the infrastructure is wanting, an estimate of more appropriate server specifications can be brought forward. The criteria used to draw up these new specifications form an ideal checklist for assessing Essbase infrastructures.
Once the ideal requirements are understood, you will be able to compare them to what is available in more generic assessments. Subtracting the generic criteria from the ideal indicates how accurate the sizing estimate can be expected to be.
The following is a summary list of the objects and information that eServices review in order to complete an Essbase infrastructure server assessment:
1. Essbase Server Configuration
a. Essbase.cfg Settings
2. Essbase Application Settings
a. Application Logs
b. Cube Outlines
c. Cube Statistics
d. Calc Script/Business Rules Procedural Logic and Settings
e. Batch Process Scripts
3. Hardware Server Configuration
a. Operating System
b. Processors (number, speed & architecture)
c. Virtual Machine configuration
d. LPAR definition
e. Server Application profile
f. Disk configuration
g. Network configuration
4. Server Performance Monitoring Logs
Items 1 and 2 contain detailed software requirements, and these imply specifications for the hardware listed in item 3. In effect, the first two items provide the specific sizing criteria for hardware.
In an in situ environment, items 1, 2 and 3 are already working together and have specific content. A sizing assessment in which software requirements are only minimally known means that assumptions must be supplied in their place. The accuracy of the sizing estimate correlates directly with the accuracy of these assumptions.
Essbase objects are analyzed for settings (caches, CALCPARALLEL, and so on). Requests for processor, network and disk resources are extracted from the Essbase Application logs in the form of response times for events. Response times are combined manually to provide a single Essbase Performance Log.
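Manually combining response times into a single performance log is scriptable. The sketch below assumes a simplified, made-up line format of our own (`<timestamp> <event-name> elapsed=<seconds>`); it is not the actual Essbase application log grammar:

```python
import re

# made-up simplified line format, not the actual Essbase log grammar:
#   <timestamp> <event-name> elapsed=<seconds>
LINE = re.compile(r"^(?P<ts>\S+)\s+(?P<event>\S+)\s+elapsed=(?P<secs>[\d.]+)$")

def merge_logs(*logs):
    """Merge per-application event lines into one performance log,
    sorted by timestamp; silently skips lines that do not match."""
    events = []
    for log in logs:
        for line in log:
            m = LINE.match(line)
            if m:
                events.append((m["ts"], m["event"], float(m["secs"])))
    return sorted(events)

app1 = ["2010-03-12T09:00:01 calc elapsed=42.0"]
app2 = ["2010-03-12T08:59:30 dataload elapsed=120.5"]
print(merge_logs(app1, app2))  # dataload sorts first by timestamp
```

Once the events are in one ordered list, the response times can be cross-referenced against infrastructure monitoring samples taken over the same window.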
Every Essbase server review should look at the Essbase cube designs to determine whether they are following best practices, and whether tuning methodologies can be invoked to increase performance.
Complete application design reviews involve coordinating the detailed business requirements with cube design decisions. A full review consumes essentially the same time and resources as an implementation itself, which usually stands far outside what is possible within the timeframe allocated for an assessment.
Once, however, the cubes and their processes have been optimized within time and resource constraints, a more reliable determination of hardware requirements can be made. Sometimes a tuning effort is sufficient to enable the system to perform up to service level agreements, and sometimes not.
In our opinion, tuning is mandatory because it averts the criticism that hardware is simply being thrown at the problem.
Supporting Infrastructure Components
The Essbase configuration and script settings are cross-referenced with infrastructure settings and configuration. The infrastructure (RAM, CPU, etc.) is monitored and measured during Essbase processing.
Concurrency is accurately extracted from the Essbase performance logs by identifying overlapping response times. The contents of the manually generated Essbase performance log are correlated with infrastructure performance log statistics, and subjected to analysis.
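One way to implement "identifying overlapping response times" (our own sketch, not an eServices tool): treat each logged event as a (start, end) interval and sweep for the maximum number of intervals in flight at once.

```python
def peak_concurrency(events):
    """events: iterable of (start, end) times taken from an application
    log. Returns the maximum number of events in flight at once, via a
    sweep over +1 (start) / -1 (end) boundary markers."""
    boundaries = []
    for start, end in events:
        boundaries.append((start, 1))
        boundaries.append((end, -1))
    # sort ends (-1) before starts (+1) at the same timestamp so that
    # back-to-back events do not count as overlapping
    boundaries.sort(key=lambda b: (b[0], b[1]))
    current = peak = 0
    for _, delta in boundaries:
        current += delta
        peak = max(peak, current)
    return peak

log = [(0, 5), (1, 4), (2, 8), (9, 10)]
print(peak_concurrency(log))  # 3: three queries overlap between t=2 and t=4
```

Measured this way, concurrency comes from what the server actually did, rather than from a named-user rule of thumb.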
Correlating the Server Performance Monitoring Logs with Essbase events enables you to compare what is being allocated to Essbase processes with how the underlying server hardware, operating system and supporting infrastructure components are behaving.
Consider the following two charts, created during an infrastructure assessment. They show the sawtooth behavior of both disk and CPU activity. Comparing the teeth directly, an inverse relationship between disk and CPU is clearly evident. Vertical lines have been inserted to illustrate it:
When disks were busy, CPUs became idle, and vice versa. The activities being measured were data load and aggregation batch process routines. From this we were able to see the disk bottleneck and the impact that it was having on CPU utilization.
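Once both series are sampled on the same timestamps, the inverse relationship is easy to confirm numerically. A sketch with made-up per-minute samples, using a plain-Python Pearson correlation:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# made-up per-minute samples: disk busy % vs CPU busy %
disk = [90, 85, 20, 15, 88, 92, 18, 10]
cpu  = [12, 15, 80, 85, 10,  8, 82, 90]
print(pearson(disk, cpu))  # strongly negative, close to -1
```

A coefficient near -1, as in the assessment described above, is the numeric signature of a disk bottleneck starving the CPUs.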
This type of measurement makes it possible to assess server behavior, and can be incorporated to provide accurate infrastructure specification criteria.
To sum up, in situ infrastructure assessments analyze detailed Essbase and infrastructure metrics to determine how and why the infrastructure is responding to specific Essbase processing requirements. An analysis is made of Essbase design characteristics, and tuning techniques are applied to ensure that Essbase processes are as efficient as possible. The analysis of Essbase settings and processing requirements enables an accurate estimation of hardware should the current infrastructure be found wanting.
In the final analysis, a complete list of Essbase settings and processing requirements are requisite to estimating infrastructure requirements.
Oracle certifies all of its current product line on its virtualization product, Oracle VM. It's a stable, proven solution that allows a highly flexible environment and a complete stack for your support needs!
If you have further questions, feel free to visit:
Oracle ACS is able to deliver a number of services from installation and configuration to consulting and more advanced solutions. Contact your Oracle services rep or SDM for further information!
If you missed it, Oracle keeps an archive of past presentations.
Please see MetaLink Note: 568127.1 on http://support.oracle.com/
Thursday, March 11, 2010
Clemens Utschig puts his focus on SOA for the java developer.
New this week at the Oracle E-Business Suite Technology blog:
OCFS2 for Linux Certified for E-Business Suite Release 12 Application Tiers
Performing Better: Improving Skills and Knowledge of EBS Tools and Technology
E-Business Suite Release 12.1.1 Consolidated Upgrade Patch 1 Now Available
Also in the realm of EBS this week is this excellent summary of patch types prepared by Renee Van Dusen of Oracle:
Patch Types & Reasons to Patch
Oracle consolidates and releases the following patch types. Patches include bug fixes as well as new functionality.
- Version Maintenance Pack – This would be a large consolidation of patches including all versions up to the latest for all products in the Oracle eBusiness Suite. For example, 11.5.10 would include all version changes prior to 11.5.10 such as 11.5.8, 11.5.9, etc. These patches are cumulative. Maintenance packs include all the relevant Family packs.
- Family Pack – This would be a consolidation of patches for a particular family of products such as Financials which includes General Ledger, Accounts Payable, Cash Management, etc. or CRM which includes Sales, Marketing, Service, etc. These patches are cumulative. Family packs include all the relevant Mini-Packs.
- Product Mini-Pack – This would be a consolidation of patches for a particular product such as General Ledger or Enterprise Budgeting and Planning. These patches are cumulative. For example, General Ledger Mini-Pack C includes mini-packs A, B, and C.
- Consolidated Rollups – These are rollup releases of patches as add-ons or fixes to Mini-packs, Family packs, Maintenance packs, or specific areas within a Mini-pack. Typically these don’t increase the version level of the Maintenance, Family, or Mini pack.
- Quarterly Security Patches – Oracle now releases on a quarterly basis a compilation of High Priority security patches for all tiers of the Oracle Applications: Database, Application Server, or Application. Some patches are cumulative, some aren’t.
- One-offs – One off patches are released to fix specific issues. They are generally smaller patches and usually at some point in time get rolled up into the other patch types described above.
Typically one would apply a patch type to fix a bug, keep current on the latest versions, implement new functionality, or implement a new product of the eBusiness Suite.
And now the combination of the two articles above, EBS and Patching. There are alerts out this week for our HP users running Oracle EBS 11i and 12i. Please look up the following doc IDs in My Oracle Support:
ADRELINK utility for E-Business Suite Release 12.0 and 12.1.1 result in large executables which may lead to out-of-memory issues (Doc ID 1060979.1) (affects both PA-RISC and Itanium)
New E-Business Suite Release 12.0 and 12.1 Operating System Patch Requirements on the HP-UX Itanium platform (Doc ID 1066323.1) (affects Itanium users).
A member of the Oracle optimizer team will be presenting at ODTUG Kaleidoscope in July in Washington, DC (who chose that location, the heat and humidity committee?). You can read all about it on their blog: Inside the Oracle Optimizer - Removing the black magic
PeopleSoft and the Optimizer
Speaking of the optimizer, this time from the PeopleSoft side of the equation, there's a handy technique described over at the PeopleSoft DBA blog on: Hinting Dynamic Generated SQL in Application Engine.
Wednesday, March 10, 2010
Oracle E-Business Suite Upgrade to Release 12
Many Oracle E-Business Suite customers are now faced with the task of upgrading to Release 12. Luckily, there are some terrific resources available on TechNet. Check out the following:
Whitepaper: Oracle E-Business Suite Release 12 Technology Stack Documentation Roadmap
Whitepaper: Best Practices for Adopting Oracle E-Business Suite Release 12
Whitepaper: Case Study: Oracle's Own Oracle E-Business Suite Release 12 Upgrade
Forum: Oracle E-Business Suite Release 12 Install/Upgrade
Thursday, March 4, 2010
Wednesday, March 3, 2010
Upgrade to 11g Performance Best Practices
Uday Moogala has written a great whitepaper on best practices of upgrading to 11g for E-Business Suite customers. It can be found at: http://www.oracle.com/apps_benchmark/doc/11g-upgrade-performance-best-practices.pdf
Official, Youbetcha Legalese
Oracle, JD Edwards, PeopleSoft, and Siebel are registered trademarks of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners.