The Design of Software (CLOSED)

A public forum for discussing the design of software, from the user interface to the code architecture. Now closed.

The "Design of Software" discussion group has been merged with the main Joel on Software discussion group.

The archives will remain online indefinitely.

Top Ten Mistakes Found in Oracle Systems

Since it's a nice quiet Monday I thought I'd share Oracle Corp's own "Top Ten Mistakes Found in Oracle Systems" advice, extracted from their Database Performance Planning documentation: http://download-west.oracle.com/docs/cd/B10501_01/server.920/a96532/ch2.htm#13670

Enjoy!

>> BEGIN
This section lists the most common mistakes found in Oracle systems. By following Oracle's performance improvement methodology, you should be able to avoid these mistakes altogether. If you find these mistakes in your system, then re-engineer the application where the performance effort is worthwhile.
See Also:

Oracle9i Database Performance Tuning Guide and Reference for more information on wait events

  1. Bad Connection Management

      The application connects and disconnects for each database interaction. This problem is common with stateless middleware in application servers. It has over two orders of magnitude impact on performance, and it is totally unscalable.

  2. Bad Use of Cursors and the Shared Pool

Not using cursors results in repeated parses. If bind variables are not used, then there is hard parsing of all SQL statements. This has an order of magnitude impact on performance, and it is totally unscalable. Use cursors with bind variables that open the cursor and execute it many times. Be suspicious of applications generating dynamic SQL.

  3. Getting Database I/O Wrong

      Many sites lay out their databases poorly over the available disks. Other sites specify the number of disks incorrectly, because they configure disks by disk space and not I/O bandwidth.

  4. Redo Log Setup Problems

      Many sites run with too few redo logs that are too small. Small redo logs cause system checkpoints to continuously put a high load on the buffer cache and I/O system. If there are too few redo logs, then the archive cannot keep up, and the database will wait for the archive process to catch up.

  5. Serialization of data blocks in the buffer cache due to lack of free lists, free list groups, transaction slots (INITRANS), or shortage of rollback segments.

      This is particularly common on INSERT-heavy applications, in applications that have raised the block size to 8K or 16K, or in applications with large numbers of active users and few rollback segments.

  6. Long Full Table Scans

      Long full table scans for high-volume or interactive online operations could indicate poor transaction design, missing indexes, or poor SQL optimization. Long table scans, by nature, are I/O intensive and unscalable.

  7. In-Disk Sorting

      In-disk sorts for online operations could indicate poor transaction design, missing indexes, or poor SQL optimization. Disk sorts, by nature, are I/O-intensive and unscalable.

  8. High Amounts of Recursive (SYS) SQL

      Large amounts of recursive SQL executed by SYS could indicate space management activities, such as extent allocations, taking place. This is unscalable and impacts user response time. Recursive SQL executed under another user ID is probably SQL and PL/SQL, and this is not a problem.

  9. Schema Errors and Optimizer Problems

      In many cases, an application uses too many resources because the schema owning the tables has not been successfully migrated from the development environment or from an older implementation. Examples of this are missing indexes or incorrect statistics. These errors can lead to sub-optimal execution plans and poor interactive user performance. When migrating applications of known performance, export the schema statistics to maintain plan stability using the DBMS_STATS package.

      Likewise, optimizer parameters set in the initialization parameter file can override proven optimal execution plans. For these reasons, schemas, schema statistics, and optimizer settings should be managed together as a group to ensure consistency of performance.

  10. Use of Nonstandard Initialization Parameters

These might have been implemented based on poor advice or incorrect assumptions. In particular, parameters associated with SPIN_COUNT on latches and undocumented optimizer features can cause a great deal of problems that can require considerable investigation.
>> END
David Aldridge
Monday, December 05, 2005
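The bind-variable advice in item 2 can be made concrete. Below is a minimal sketch (not from the post) using Python's sqlite3 as a stand-in for an Oracle client; the table and values are invented for illustration. The contrast is one statement text with placeholders executed many times, versus a unique statement text per execution.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, customer TEXT)")

# Hard parsing: every statement is textually unique, so a statement
# cache keyed on SQL text can never reuse a parsed plan.
for i in range(3):
    conn.execute(f"INSERT INTO orders VALUES ({i}, 'cust{i}')")  # unique text each time

# Bind variables: one SQL text, many executions -- the driver (and a
# server like Oracle's shared pool) can parse once and reuse the cursor.
rows = [(i, f"cust{i}") for i in range(3, 6)]
conn.executemany("INSERT INTO orders VALUES (?, ?)", rows)

count = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
print(count)  # 6
```

In sqlite3 the saving is only parse time; against a shared server like Oracle, the unique-text version also churns the shared pool, which is the unscalable part the document warns about.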
 
 
I've seen the exact opposite advice for #1 in MS SQL. They say that because connections are an expensive resource, it's best not to keep one open longer than necessary. Connection pooling is supposed to recycle existing connections and save your performance.
David
Monday, December 05, 2005
 
 
>  Connection pooling is supposed to recycle existing connections and save your performance.

I think it matters how you're muxing and demuxing connections. Connection objects should be cheap unless they lock in threads and other resources, which they don't need to do.
son of parnas
Monday, December 05, 2005
 
 
I suppose it depends on your web/application server, but connection pooling is generally a good thing.

Having at various times made or seen made most of the errors on the list, I agree that it makes sense. Whether it is an ideal starting point for learning Oracle optimization, I dunno. I'd point inquirers to the Hotsos site for a first look.
George Jansen
Monday, December 05, 2005
 
 
"Connection pooling"? Wait, isn't that the same thing twice, just worded differently? A connection is supposedly a heavy resource: you don't want lots of them around. And you'd prefer not to carelessly throw away the ones you've already got...
Dave
Monday, December 05, 2005
 
 
Dave,

I think the difference is that connection pooling removes the performance cost of creating and destroying connections, so each session can hold a connection only briefly, supporting more sessions more efficiently than if each session held a connection even when it wasn't needed.

I believe this works as long as all connections are from the same authenticated user. If each connection has a different user, you're SOL.
David
Monday, December 05, 2005
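The reuse idea David describes fits in a few lines. This is a toy pool, a hypothetical sketch rather than any real driver's API (real pools in ODBC, JDBC, or Oracle session pooling add sizing, timeouts, and validation): connections are checked out, used briefly, and returned rather than destroyed.

```python
import sqlite3

class ToyPool:
    """Toy connection pool: acquire, use briefly, release for reuse."""

    def __init__(self, conn_string):
        self.conn_string = conn_string
        self.idle = []       # connections waiting to be reused
        self.created = 0     # how many physical connections we ever made

    def acquire(self):
        if self.idle:
            return self.idle.pop()   # reuse an existing connection
        self.created += 1            # only create when the pool is empty
        return sqlite3.connect(self.conn_string)

    def release(self, conn):
        self.idle.append(conn)       # keep it alive for the next caller

pool = ToyPool(":memory:")
for _ in range(100):                 # 100 short-lived "sessions"
    c = pool.acquire()
    c.execute("SELECT 1")
    pool.release(c)

print(pool.created)  # 1 -- one physical connection served all 100 sessions
```

This is how both camps can be right: the app connects and disconnects logically for every interaction (the MS advice), while the physical connect, the expensive part Oracle's item 1 complains about, happens once.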
 
 
+1 David.

In order to get the advantages of connection pooling, your connection string must be identical.

Under SQL Server 2000, this in effect meant not using Windows authentication, as that would result in each user's Windows credentials being passed in, resulting in a different connection string for everyone.  I don't know if this was addressed in v2005.

Of course, you could run your services under one (or a few) Domain accounts, and not under a user account.
example
Monday, December 05, 2005
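The "identical connection string" requirement amounts to the pool being looked up by the exact string. A hypothetical sketch of that lookup (not the actual SqlClient implementation; with Integrated Security the differing principal is the ambient Windows identity, modeled here as literal text in the string purely for illustration):

```python
# Pools are keyed by the exact connection string: same string, shared
# pool; any difference, a private pool per variant.
pools = {}

def get_pool(conn_string):
    # one pool object per distinct string (hypothetical lookup)
    return pools.setdefault(conn_string, [])

sql_auth = "Server=db;Database=app;User Id=svc;Password=x"
same = get_pool(sql_auth) is get_pool(sql_auth)   # True: one shared pool

# Per-user credentials make every user's effective string distinct,
# so nothing is ever shared between users.
alice = "Server=db;Database=app;Integrated Security=SSPI;User=alice"
bob   = "Server=db;Database=app;Integrated Security=SSPI;User=bob"
shared = get_pool(alice) is get_pool(bob)

print(same, shared)  # True False
```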
 
 
Since we're talking about Oracle: the so-called Fine-Grained Access Control (or Row-Level Security or ...) does handle this nicely. Same authenticated user, but depending on the context very different views of the database.
George Jansen
Tuesday, December 06, 2005
 
 
...but the issue still remains: for point #1, we're always told to build disconnected applications by keeping connections short-lived and always closing them ASAP.
This is really a must for performance on MS systems.

Seems like Oracle works better with the opposite paradigm of connected applications....

Really?
Johncharles
Tuesday, December 06, 2005
 
 
"Be suspicious of applications generating dynamic SQL."

Unless you generate your own (lowest-common-denominator) SQL per request client-side, how do you easily target multiple database platforms from a single client?  Entity classes and then database adaptors seem like such overkill..
new nick, new rep
Tuesday, December 06, 2005
 
 
Whoever is suspicious of dynamic SQL has never done pharma apps... when you've got 20,000 heterogeneous source tables to be extracted, plus you're in an insanely regulated environment, there aren't enough programmers in India to do the job.

Of course, it helps that raw performance isn't really an issue. 10 minutes, 20 minutes, who cares?
Steve Hirsch
Wednesday, December 07, 2005
 
 
Oracle's support for dynamic SQL is pretty good. There are tweaks to make cursor sharing work when the app is not using bind variables, and PL/SQL dynamic SQL became vastly easier with EXECUTE IMMEDIATE (which, as it happens, has a USING clause that lets you bind variables to dynamically generated statements).

The point is to know when you need dynamic SQL and when another couple of lines would let you use bind variables. As you say, when the parse phase takes milliseconds and the query runs for tens or hundreds of seconds, there isn't much point worrying about the parsing overhead. For many systems, though, the pattern is many short queries, and there the parse time can be a problem and lots of similar queries can bog down the shared pool.
George Jansen
Wednesday, December 07, 2005
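George's point, generating the statement dynamically while still binding the values, looks like this in outline. A sketch in Python with an invented table and helper (sqlite3's `?` placeholders stand in for Oracle's `:bind` variables and `EXECUTE IMMEDIATE ... USING`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE people (name TEXT, city TEXT, age INTEGER)")
conn.execute("INSERT INTO people VALUES ('Ann', 'Oslo', 40)")

def find_people(conn, filters):
    """Build the WHERE clause dynamically, but bind every value.

    The SQL *shape* varies with which filters are present, yet each
    shape uses placeholders, so a statement cache (or Oracle's shared
    pool) sees a small, reusable set of texts instead of one text per
    distinct value.
    """
    clauses, binds = [], []
    for column, value in filters.items():
        # Column names must come from a trusted list in real code;
        # only the values are safely bound.
        clauses.append(f"{column} = ?")
        binds.append(value)
    sql = "SELECT name FROM people WHERE " + " AND ".join(clauses)
    return conn.execute(sql, binds).fetchall()

result = find_people(conn, {"city": "Oslo", "age": 40})
print(result)  # [('Ann',)]
```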
 
 
"As you say, when the parse phase is in milliseconds and the query runs for tens or hundreds of seconds there isn't much point worrying about the parsing overhead."

Tom Kyte will be after you if he hears you saying that :)
BGM
Thursday, December 08, 2005
 
 
In a data warehouse I am not as worried about bind variables, but in an OLTP application skipping them can murder your scalability. Also, you get people building some sort of web application, and when they don't use bind variables you are screwed: it opens you up to SQL injection and trashing your database. Here is a nice link explaining what I mean:
http://www.unixwiz.net/techtips/sql-injection.html
Jim
Thursday, December 08, 2005
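Jim's injection point in one concrete example, a sketch with sqlite3 and an invented table: the quote inside the attacker's input rewrites a string-built query, while the bound version treats the whole input as a single literal value.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")
conn.execute("INSERT INTO users VALUES ('bob', 'hunter2')")

attacker_input = "nobody' OR '1'='1"

# String concatenation: the input's quote closes the literal, and the
# OR clause makes the predicate true for every row.
leaked = conn.execute(
    "SELECT secret FROM users WHERE name = '" + attacker_input + "'"
).fetchall()

# Bind variable: the entire input is one value for name, matching nothing.
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (attacker_input,)
).fetchall()

print(len(leaked), len(safe))  # 2 0
```

The same input that leaks every secret through the concatenated query returns zero rows when bound, which is why bind variables are both a performance and a security measure.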
 
 

This topic is archived. No further replies will be accepted.
