Monday, June 3, 2013

Performance Tuning Part I


Note: This document draws on real-world experiences reported on Metalink, personal experience, experiences exchanged with others in the field of database administration, Oracle documentation, and the opinions and information published on the Web for universal consumption.

Business Rules and Performance:

The business owners and functional specialists define the requirements of the business. The design, the architecture, and the implementation code have to achieve those goals without compromise, and performance cannot be traded away in the process. (If the primary goal of a design is not performance but data security, this discussion has little relevance.)

Oracle provides constraints and triggers to validate data against business rules. A design based on constraints to validate the data is a good design; only where a rule cannot be expressed as a constraint should a trigger be implemented to enforce it. It is necessary to understand that constraints, unlike triggers, do not need to make recursive calls to the database: constraints execute entirely within the Oracle engine. Constraints are the watchdogs Oracle provides; triggers are watchdogs we generate ourselves, and they require resources over and above what Oracle already spends on constraint checking. It is always advisable to let the dogs (constraints) do the barking rather than bark ourselves.

There is every possibility of duplicating the unnamed constraints such as CHECK and NOT NULL, whose names, if not given in the table definition, are generated by Oracle from a sequence prefixed with 'SYS_'. If duplicates like this exist in a database, the first casualty is performance, and the constraints are not to blame: every one of them must be validated before a row is inserted into the table.

Duplicate constraints are created either when an import is run twice with the ignore=y parameter, or when NOT NULL and CHECK constraints are defined in the table creation script and the table is later altered to add the same NOT NULL and CHECK constraints again.

Try this example:

SQL> Create table test_dup_constraints (c1 number(3) not null, c2 date);

Table created.
SQL> alter table test_dup_constraints add check(c1 is not null);

Table altered.
SQL> alter table test_dup_constraints add check(c1 is not null);

Table altered.

SQL> select table_name, constraint_type, search_condition
  2  from user_constraints
  3  where table_name = 'TEST_DUP_CONSTRAINTS';

TABLE_NAME            C SEARCH_CONDITION
--------------------- - ----------------
TEST_DUP_CONSTRAINTS  C "C1" IS NOT NULL
TEST_DUP_CONSTRAINTS  C c1 is not null
TEST_DUP_CONSTRAINTS  C c1 is not null

A cost and benefit analysis is essential whenever triggers are considered. The negative impact of triggers, which do their work behind the administrator's back, cannot be ignored (the code, rather than the business rule, is usually the culprit here). From my personal experience, I do not use triggers unless I am left with no other alternative.

Thoroughly investigate the need of triggers in the database before they are implemented.
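As a starting point for such an investigation, an inventory of the existing triggers can be taken from the data dictionary. A minimal sketch (USER_TRIGGERS is the standard dictionary view; the ordering is only a choice):

select trigger_name,
       table_name,
       triggering_event,
       status
from user_triggers
order by table_name;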

Data Model Design and Role of Index:

Data modeling is important to successful relational application design, and it should be done in a way that accurately represents the business practices. Chances are there will be heated debates about the correct data model. The important thing is to apply the greatest modeling effort to the entities affected by the most frequent business transactions. In the modeling phase there is a great temptation to spend too much time modeling the non-core data elements, which increases development lead times. Modeling tools can rapidly generate schema definitions and are useful when a fast prototype is required.

Almost every entity (table) in the logical model should have at least one set of attributes that constitutes a natural candidate key. If no such key is available, then a synthetic key (also called a surrogate key), a single numeric column populated from a sequence generator, may well make a better primary key. The following factors should be considered before taking any decision:
Take the case of a table that has a natural candidate key whose columns are subject to updates. A synthetic key is introduced as a candidate key and becomes the primary key of the table. An index on the natural key is still necessary, because it is the natural key that is potentially used in joins with other tables when data is accessed to meet a business requirement.

Where an unintelligent (synthetic) key is used as the primary key of a table, additional keys must be identified to join the table to other tables, which affects the cost of accessing the data for presentation as required.

As a synthetic key would be smaller, both the primary and foreign key indexes would be smaller, and quite possibly have shallower B*-trees. If so, joins using these indexes would be faster, as would fast full index scans.

Although the addition of a synthetic key may make a table slightly larger, and hence need more multi-block reads for a full table scan, the related tables containing the foreign key would be significantly smaller and would thus scan more efficiently.

Natural key values do sometimes need to be updated, but synthetic keys do not. If a natural key is used as the primary key, then key updates will have to be cascaded. Apart from the extra work involved, for both the developers and the database engine, such cascading updates can dramatically reduce concurrency.
To use a synthetic key in a distributed environment with disconnected nodes that need to insert records, each node needs to generate a mutually exclusive range of key values from its own sequence generator. The allocation of key ranges needs careful management, whereas there is no such consideration if a natural key is used.
Using a synthetic key severely limits the available partitioning strategies.

The net effect of these considerations is normally to use a synthetic key in preference to a natural key.

The considerations above are summarized from http://www.ixora.com.au/tips/design/synthetic_keys.htm.
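As a hedged sketch of that approach, using a hypothetical CUSTOMERS table: the sequence supplies the synthetic primary key, while the natural key is still enforced and indexed through a unique constraint so that joins on it remain cheap.

create sequence customers_seq;

create table customers (
customer_id   number       primary key,  -- synthetic key, values from customers_seq
tax_reference varchar2(20) not null,     -- natural key, still enforced
cust_name     varchar2(40),
constraint customers_uk unique (tax_reference) -- gives the natural key its index
);

insert into customers (customer_id, tax_reference, cust_name)
values (customers_seq.nextval, 'TX-1001', 'SMITH');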

Reverse Key Indexes are designed to eliminate index hot spots on insert applications. These indexes are excellent for insert performance, but they are limited in that they cannot be used for index range scans.
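A minimal sketch on a hypothetical ORDERS table (REVERSE is standard syntax; the names are illustrative):

create table orders (order_id number, order_date date);

-- the key bytes are stored reversed, spreading sequential order_id values
-- across many leaf blocks; range scans on order_id cannot use this index
create index orders_id_rix on orders (order_id) reverse;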

Building and maintaining an index structure can be expensive, and it can consume resources such as disk space, CPU, and I/O capacity. Designers must ensure that the benefits of any index outweigh the negatives of index maintenance.

Each index maintained by an INSERT, DELETE, or UPDATE of the indexed keys requires about three times as much resource as the actual DML operation on the table. What this means is that if you INSERT into a table with three indexes, then it will be approximately 10 times slower than an INSERT into a table with no indexes. For DML, and particularly for INSERT-heavy applications, the index design should be seriously reviewed, which might require a compromise between the query and INSERT performance.

When an indexed column of a table is UPDATEd, two operations take place on the index: one to DELETE the old index entry and another to INSERT a new one. This means all UPDATE transactions require more time and resources than an INSERT, with a corresponding downside impact on performance, and the impact is felt more as the table carries more and more indexed columns. Architects should keep this in mind while designing the table.

An INSERT into an indexed table is also slower, because it needs two operations against the database: one to insert the data row into the table and another to insert the corresponding index entry.

Another impact on tables whose indexed columns are heavily updated is that the indexes become holed and grow larger than they need to be. In this context FREELIST and FREELIST GROUPS are important parameters to consider; it is because of this situation that Oracle provides a default FREELIST of 2 for indexes.

Use of sequences, or timestamps, to generate key values that are indexed can lead to database hotspot problems, which affect response time and throughput. This is usually the result of a monotonically growing key producing a right-growing index. To avoid this problem, try to generate keys that insert over the full range of the index; this results in a well-balanced index that is more scalable and space efficient. You can achieve this by using a reverse key index or by using a cycling sequence to prefix the sequence values.

Designers should be flexible in defining any rules for index building. Depending on your circumstances, the Oracle documentation discusses two ways to order the keys in an index:

(1) Order columns most selectively first. This method is the most commonly used, because it provides the fastest access with minimal I/O to the actual rowids required. This technique is used mainly for primary keys and for very selective range scans.

(2) Order columns to reduce I/O by clustering or sorting data. In large range scans, I/Os can usually be reduced by ordering the columns in the least selective order, or in a manner that sorts the data in the way it should be retrieved.
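A small illustration of the two orderings, on a hypothetical CALLS table where CALL_ID is highly selective and CALL_DATE is not:

create table calls (call_id number, call_date date, duration number);

-- (1) most selective column first: best for precise lookups on call_id
create index calls_ix1 on calls (call_id, call_date);

-- (2) least selective column first: clusters the entries by date,
-- reducing I/O for large range scans such as 'all calls in a month'
create index calls_ix2 on calls (call_date, call_id);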

Basic Concepts on Indexes and Performance

Oracle introduced the index skip scan in 9i, which increased the options for accessing data both through indexes and without them.

The significant difference between Oracle 9i and earlier versions is that, previously, a concatenated index was used only if the leading edge of the index was referenced; with skip scan this restriction no longer applies.


Create the emp2 table: create table emp2 as select * from emp;
Create emp2_ind_1: create index emp2_ind_1 on emp2(empno,ename,deptno);
(This type of index is called a concatenated or composite index.)
Create emp2_ind_2: create index emp2_ind_2 on emp2(sal);

Try to understand the execution paths of these following SQL.

Example 01
Select job,empno
From emp2
Where ename='SMITH';

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE
1 0 TABLE ACCESS (FULL) OF 'EMP2'

It is a full table scan.

Example 02

Select job,empno
From emp2
Where deptno=10;

It is again full table scan.

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE
1 0 TABLE ACCESS (FULL) OF 'EMP2'

Example 03

select empno,ename
from emp2
where empno=7934;

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE
1 0 INDEX (RANGE SCAN) OF 'EMP2_IND_1' (NON-UNIQUE)

Note that the leading part of the index key is used.

Example 04

SELECT JOB,DEPTNO
FROM EMP2
WHERE ename='MILLER'
and empno=7934;

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE
1 0 TABLE ACCESS (BY INDEX ROWID) OF 'EMP2'
2 1 INDEX (RANGE SCAN) OF 'EMP2_IND_1' (NON-UNIQUE)

SELECT JOB,DEPTNO
FROM EMP2
WHERE ename='MILLER'
and empno=7934
and deptno=10;

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE
1 0 TABLE ACCESS (BY INDEX ROWID) OF 'EMP2'
2 1 INDEX (RANGE SCAN) OF 'EMP2_IND_1' (NON-UNIQUE)


Look at this: Oracle uses part of the index key, and the table access is done by index ROWID.

Hence care should be exercised while designing composite or concatenated indexes. The design will be right only if the functional specialists can lay down, prior to the design of the tables and indexes, the keys on which the data is to be presented and/or accessed.

When a database is upgraded from an earlier version of Oracle to Oracle 9i, the code should be examined thoroughly if concatenated or composite indexes are present in the metadata of the database being upgraded.

Never hard-code a ROWID in a code block; the format of the ROWID changes from version to version. Pass the ROWID as a bind variable instead.

Function-based indexes can be used only when QUERY_REWRITE_ENABLED is set to TRUE (the default is FALSE). This parameter fits within the three-tier structure of session, system, and database. It may not be a wanted feature in some cases, so set the value according to the requirements of the business.

The index expression can use existing functions such as TO_DATE, TO_CHAR, UPPER, or LOWER, or a function created to meet a requirement of the business.
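A minimal sketch on the emp2 table used earlier in this discussion (the session must have QUERY_REWRITE_ENABLED set for the optimizer to consider the index):

alter session set query_rewrite_enabled = true;

create index emp2_upper_ename on emp2 (upper(ename));

-- this predicate can now use the function-based index;
-- a predicate on lower(ename) could not
select empno, ename
from emp2
where upper(ename) = 'SMITH';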

The downside of these indexes is that INSERT, UPDATE, and DELETE statements take longer than they otherwise would. Only the function for which the index was built should be used, and other functions on that column should be restricted (though this can never be guaranteed). As the number of indexes on a table grows, so does the space required for these new objects. 'What benefit, and at what cost?' should be the DBA's question.
Dropping a table that has many indexes on it also deserves consideration, in the case of large tables holding archivable data that must be dropped after archiving to reclaim the space.

Partitioned indexes are indexes broken up into multiple pieces to impact I/O positively and thus help gain performance. There are certain limitations: only B-tree and bitmap indexes can be partitioned, not hash indexes.

The relationship between base tables and the indexes on them can be defined in three ways: a single table with partitioned indexes (one to many), a partitioned table with a single index (many to one), and a partitioned table with partitioned indexes (many to many); these are the models presently available in the Oracle architecture, and they impact performance differently. If the SQL makes large table scans and the data fetch is not driven by indexes, partition the table so that the data to be read is in small pieces. If the data fetch is driven by indexes, partition the indexes; and to get the benefit of both worlds, partition the table as well as the indexes.

Another great advantage is that Oracle supports parallel query and parallel DML processing.

Indexes can be built Locally and Globally. Oracle defines them as under:

Local Indexes: A local index is an index on a partitioned table that is partitioned in exactly the same manner as the underlying table. Each partition of a local index corresponds to one and only one partition of the underlying table. Local indexes come in two forms, prefixed and non-prefixed. Bitmap indexes on a partitioned table can only be local.

In a local index, all entries in a single index partition point to one and only one table partition. A partitioned index is considered locally prefixed if the partition key is based on the leftmost columns of the index.

Please go through these examples.
CREATE TABLE LOCAL_PREFIXED_PARTITION_TAB (
PRODUCT_ID NUMBER (5),
DATE_AND_TIME DATE,
UNIT_COST NUMBER (10,2),
UNIT_PRICE NUMBER (10,2))
TABLESPACE Users
PCTFREE 10
PCTUSED 40
INITRANS 1
MAXTRANS 255
PARTITION BY RANGE (PRODUCT_ID)
subpartition by hash (DATE_AND_TIME) subpartitions 2
(
PARTITION P_1 VALUES LESS THAN (3) TABLESPACE USERS,
PARTITION P_2 VALUES LESS THAN (6) TABLESPACE USERS_1
)
/

The following creates a composite local index on this range-hash partitioned table, with partitions and subpartitions:

create index IDX_LOCAL_PREFIXED_PARTITION
on LOCAL_PREFIXED_PARTITION_TAB(product_id, date_and_time)
local store in (tools, indx)
/

Adding a table partition automatically generates a new partition in the index, but the problem
is that it is created in the same tablespace as the table partition:

alter table LOCAL_PREFIXED_PARTITION_TAB
add partition p_3
VALUES LESS THAN (9) TABLESPACE USERS;

To move that index partition to another tablespace when the index is composite: query the COMPOSITE column of DBA_IND_PARTITIONS for the index; if the output under COMPOSITE is 'YES', the index is composite, and this command is used:

alter index SCOTT.IDX_LOCAL_PREFIXED_PARTITION
modify default attributes for partition p_3 tablespace indx;

Please observe that for composite partitioned indexes the status shows N/A:

INDEX_NAME COMPOSITE PARTITION_NAME TABLESPACE_NAME
IDX_LOCAL_PREFIXED_PARTITION YES P_1 INDX
IDX_LOCAL_PREFIXED_PARTITION YES P_2 INDX
IDX_LOCAL_PREFIXED_PARTITION YES P_3 INDX


To move the index partition to another tablespace when the index is not composite: query the COMPOSITE column of DBA_IND_PARTITIONS for the index; if the output under COMPOSITE is 'NO', the index is not composite, and this command is used:

alter index owner.index_name
rebuild partition partition_name
tablespace new_tablespace_name;

Output of the query on USER_IND_PARTITIONS:

select index_name,
composite,
partition_name,
tablespace_name,
status
from user_ind_partitions
where composite='NO'
/

INDEX_NAME COMPOSITE PARTITION_NAME TABLESPACE_NAME STATUS
TEST_IND_PARTITION NO IP1 INDX USABLE
TEST_IND_PARTITION NO IP2 INDX USABLE
TEST_IND_PARTITION NO IP3 INDX USABLE


When an index partition is unusable, rebuild that index online. Be sure that the temporary tablespace is large enough to hold a copy of the index being rebuilt, as REBUILD holds two copies of the index until it is done.
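A sketch of such a rebuild, reusing the partition and tablespace names from the output above:

alter index test_ind_partition
rebuild partition ip1
tablespace indx
online;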

Global Partitioned Indexes: A global partitioned index is an index on a partitioned or non-partitioned table which is partitioned using a different partitioning-key from the table. Global-partitioned indexes can be partitioned using range partitioning. For example, a table could be range-partitioned by month and have twelve partitions, while an index on that table could be range-partitioned using a different partitioning key and have a different number of partitions.

Do not forget that by default, non-partitioned indexes are global indexes. Oracle will partition the global index on the ranges of values from the table columns specified in column_list clause of the index. This cannot be done for a local index.

There are certain limitations: the column_list cannot contain more than 32 columns, nor can it include the ROWID pseudo-column or a column of type ROWID.

Example of Partitioned-global-index:

NOTE: Global non-prefixed indexes are not supported. This means that the index-
partitioning key must always be the leftmost index column.

CREATE INDEX dept_idx ON dept2(dname)
GLOBAL PARTITION BY RANGE (dname)
(
PARTITION p1 VALUES LESS THAN ('N'),
PARTITION p2 VALUES LESS THAN (MAXVALUE)
)tablespace indx;
To rebuild the partition use this example:

alter index dept_idx
rebuild partition p1 tablespace tools;

A composite global index on the emp2 table:

CREATE INDEX sal_index ON scott.emp2 (sal,comm)
GLOBAL PARTITION BY RANGE (sal)
(PARTITION p1 VALUES LESS THAN (1000),
PARTITION p2 VALUES LESS THAN (2500),
PARTITION p3 VALUES LESS THAN (MAXVALUE));

Index created.

Global Non-Partitioned Indexes: A global non-partitioned index is essentially identical to an index on a non-partitioned table. The index structure is not partitioned. Global non-prefixed indexes are not supported. This means that the index- partitioning key must always be the leftmost index column.

The above is a brief discussion of indexes, partitioned indexes, their rebuilding, and moving them around for I/O balance.

Configuration Parameters of and Tuning Parameters for an Instance

The initialization parameters can be categorized according to the three-tier structure that allows the performance of the database to be manipulated:

(1) Parameters that demand a full life-cycle of the database, startup-shutdown-startup, before a change takes effect.
(2) Parameters that are modifiable at the system level but also need an edit to the initialization parameter file. Through the ALTER SYSTEM command the change takes effect without shutting down the instance/database, while the edit to the initialization parameter file makes the change permanent across the next startup-shutdown-startup cycle.
(3) Parameters that can be invoked for a single session, without changing anything at the system or database level, when the change is required only at the session level and the system/database need not change its behavior.

The following SQL lists all the parameters that have an impact at the session/system/database levels. It also returns the working values set internally by Oracle for the functioning of the database, often referred to as undocumented parameters.


select x.indx+1 instance_num,
       ksppinm parameter_name,
       ksppity parameter_type,
       ksppstvl parameter_value,
       ksppstdf parameter_default_value,
       decode(bitand(ksppiflg/256,1),1,'TRUE','FALSE') is_sess_modi,
       decode(bitand(ksppiflg/65536,3),
              1,'IMMEDIATE',2,'DEFERRED','FALSE') is_sys_modi,
       decode(bitand(ksppstvf,7),1,'MODIFIED',2,'TRUE','FALSE') is_modifd,
       decode(bitand(ksppstvf,2),1,'MODIFIED',2,'TRUE','FALSE') is_adjd,
       ksppdesc description,
       ksppstcmnt comment
from sys.x$ksppi x, sys.x$ksppsv y
where x.indx = y.indx
order by parameter_name;


The following parameters are some of the most important parameters that affect the performance of the Instance:

Parameters that impact SGA

select name, value
from v$parameter
where name in
('sga_max_size',
'shared_pool_size',
'db_cache_size',
'large_pool_size');

shared_pool_size 50331648
sga_max_size 126950740
large_pool_size 8388608
db_cache_size 16777216

At instance startup the Oracle Server allocates the granule entries, one for
each granule to support SGA_MAX_SIZE bytes of address space. During the startup
each component acquires as many granules as it requires.

The minimum SGA is three granules, as follows:

01. One Granule for Fixed SGA (includes redo buffers)
02. One Granule for Buffer Cache
03. One Granule for Shared Pool

These components can be altered by issuing an ALTER SYSTEM command, provided the increase stays within the limit set by SGA_MAX_SIZE. If not, the following error messages are raised:

ORA-02097: parameter cannot be modified because specified value is invalid
ORA-04033: Insufficient memory to grow pool
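When the increase does stay within SGA_MAX_SIZE, the resize goes through on the fly. The value below is only illustrative:

alter system set db_cache_size = 32m;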

Buffer Cache Sizing Parameters

The Buffer Cache consists of independent sub-caches for buffer pools and for multiple block sizes. The parameter db_block_size determines the primary block
size. This would be the block size used for the SYSTEM and temporary tablespaces.

01. DB_CACHE_SIZE
02. DB_KEEP_CACHE_SIZE
03. DB_RECYCLE_CACHE_SIZE

The db_keep_cache_size and db_recycle_cache_size are independent of db_cache_size. These parameters are specified in units of memory rather than in units of buffers.

The following parameters are automatically computed:

01. DB_BLOCK_LRU_LATCHES - The number of LRU latches in each buffer pool for each block size will be equal to half the number of CPUs.
02. DB_WRITER_PROCESSES - The number of DBWR's will be equal to 1/8th the number of CPUs.

LARGE_POOL_SIZE and JAVA_POOL_SIZE are nominally dynamically alterable, but when they are altered the following errors are possible:

alter system set large_pool_size=32m scope=memory;
alter system set large_pool_size=32m scope=memory
*
ERROR at line 1:
ORA-02097: parameter cannot be modified because specified value is invalid
ORA-04033: Insufficient memory to grow pool

alter system set JAVA_POOL_SIZE='32M' SCOPE=spfile;

LARGE_POOL_SIZE and JAVA_POOL_SIZE are static parameters in Oracle9i Release1 (9.0.1) and higher. These parameters can only be changed within the SPFILE (Server Parameter File) scope.

The type of data stored in LARGE_POOL_SIZE and JAVA_POOL_SIZE parameters is currently string, not numeric. Therefore, in order to be able to change the values of these parameters within the SPFILE scope, you need to specify the number as a character string.

When PARALLEL_AUTOMATIC_TUNING is set to TRUE, Oracle determines the default values of the parameters that control parallel execution. In addition to setting this parameter, you must specify the PARALLEL clause for the target tables in the system; Oracle then tunes all subsequent parallel operations automatically. If parallel execution parameters were used in a previous release and PARALLEL_AUTOMATIC_TUNING is now being enabled, you should reduce the amount of memory allocated from the shared pool to account for the decreased demand on that pool. That memory is instead allocated from the large pool, and is computed automatically if LARGE_POOL_SIZE is left unset. As part of the automatic tuning, Oracle enables the PARALLEL_ADAPTIVE_MULTI_USER parameter. Any of the system-provided defaults can be overridden if desired.

The SESSION_CACHED_CURSORS parameter does not directly affect the shared pool size, but it does configure the number of cursors kept in the session cursor cache, reducing the probability of soft parses and thus the contention in the shared pool area. Set this parameter so that a reasonable number of cursors can be cached. It does consume additional memory on a per-session basis, so it is an overhead on the SGA; that should not be forgotten.

The script below allows the DBA or analyst to check whether the session cursor cache or the open cursors are really a constraint, and to increase SESSION_CACHED_CURSORS or OPEN_CURSORS accordingly.

SESSION_CACHED_CURSORS specifies the number of session cursors to cache. Repeated parse calls of the same SQL statement cause the session cursor for that statement to be moved into the session cursor cache; subsequent parse calls find the cursor in the cache and do not need to reopen it or even perform a soft parse.

The session cursor cache can be constrained by either the session_cached_cursors parameter, or the open_cursors parameter. This script reports the current maximum usage in any session with respect to these limits.

If either of the Usage column figures approaches 100%, then the corresponding parameter should normally be increased.

select 'session_cached_cursors' parameter,
lpad(value, 5) value,
decode(value, 0, ' n/a', to_char(100 * used / value, '990') || '%') usage
from ( select max(s.value) used
from v$statname n,
v$sesstat s
where n.name = 'session cursor cache count'
and s.statistic# = n.statistic#
),
( select value
from v$parameter
where name = 'session_cached_cursors'
)
union all
select 'open_cursors', lpad(value, 5),
to_char(100 * used / value, '990') || '%'
from ( select max(sum(s.value)) used
from v$statname n,
v$sesstat s
where n.name in ( 'opened cursors current',
'session cursor cache count')
and s.statistic# = n.statistic#
group by s.sid
),
( select value
from v$parameter
where name = 'open_cursors'
)
/

PARAMETER VALUE USAGE
session_cached_cursors 0 n/a
open_cursors 300 5%

The following are the sources to look at for problems, issues, and errors when monitoring performance.

Tuning Shared Pool Area

select pool,
       sum(bytes/1024/1024) "In MB"
from v$sgastat
where pool = 'shared pool'
group by pool;

shared pool 64

select pool,
round(bytes/1024/1024,2)"In MB"
from v$sgastat
where name='free memory'
;

shared pool 41.47
large pool 8
java pool 32

A low value for free memory does not necessarily indicate a problem in the offing with the shared pool area. A very large amount of free memory, as in the output above, can mean either that the shared pool is oversized or that a lot of aging is taking place in the shared pool.

In this situation it is necessary to tune SHARED POOL AREA.

Query the V$SHARED_POOL_RESERVED view for the REQUEST_MISSES value. If increasing values are found under this column over a period of time, increase SHARED_POOL_SIZE. The SHARED_POOL_RESERVED_SIZE, LARGE_POOL_SIZE, and JAVA_POOL_SIZE parameters can be used to provide zoning for the objects that reside in the shared pool area.
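A minimal query to watch those reserved-area columns over time (all of them are standard columns of V$SHARED_POOL_RESERVED):

select free_space,
       request_misses,
       request_failures,
       last_failure_size
from v$shared_pool_reserved;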

Library Cache
This is the area that holds all the SQL statements being processed, together with the information about them, and it says a lot about the shared pool area. If the following ratios are high, the shared pool area is properly sized.

Caveat
In the case of Data Warehouse and Decision Support Systems the library cache hit ratio remains low compared to OLTP databases and data stores. These hit ratios also lose their relevance if the application is not using bind variables: if the application cannot be fixed to use bind variables and the initialization parameter CURSOR_SHARING is not set to FORCE, the hit ratios look very bad and can cause only heartburn.


SQL> select namespace,
2 gethitratio,pinhitratio
3 from v$librarycache;

NAMESPACE GETHITRATIO PINHITRATIO
SQL AREA .934129744 .973135034
TABLE/PROCEDURE .954277376 .989524109
BODY .995252226 .995252226
TRIGGER 1 1
INDEX .954653938 .930056711
CLUSTER .989614243 .988520408
OBJECT 1 1
PIPE 1 1


8 rows selected.

To know how many parses are occurring, and to analyze them:

select a.value total,
       b.value hard,
       a.value - b.value soft
from v$sysstat a,
     v$sysstat b
where a.statistic# = 171
and b.statistic# = 172
/

-- optionally add: nvl(round((b.value/a.value)*100,1),0) "hard_parse_%"
-- (not required when hard is 0)

These statistic numbers are valid only for 8.1.6 and 8.1.7; they differ in the 9i releases, where they are:
parse count (total) statistic# 232 in 9.2.0.3
parse count (hard) statistic# 233 in 9.2.0.3


A high percentage of hard parses means there is a lot of dynamic SQL, or the application SQL lacks bind variables. A second cause is that cursors are aged out and are no longer available for re-execution, so they must be reloaded. Reloads can result from too many objects, or from large objects such as big packages.

SQL> select sum(reloads)/sum(pins)
2 from v$librarycache;

SUM(RELOADS)/SUM(PINS)
----------------------
.000367773

The aging out of objects is not in itself a problem, given limited memory, and values less than 1 are not worth any effort to reduce by resizing the shared pool. Performance problems are not normally the result of PL/SQL or SQL reloads. But:
(1) if the application uses bind variables,
(2) if the application does not have any issues with dynamic SQL, and yet
(3) the gathered statistics show many reloads,
then without any second opinion increase the shared pool area. That helps push the performance up.

The large objects competing for a place in the library cache can be identified by issuing the following SQL:

select name,
type,
sharable_mem
from v$db_object_cache
where type in ('FUNCTION','PROCEDURE','PACKAGE','PACKAGE BODY')
order by type
SQL> /

VALIDATE_DATE FUNCTION 9433
IN_QUOTES FUNCTION 1452
VALIDATE_DT_OF_BIRTH FUNCTION 10848
GET_FUSA_ACCT FUNCTION 12937
BANKONE_VERIFY_FUNCTION FUNCTION 14563
VALID_NUM FUNCTION 9117
IS_POSITIONAL FUNCTION 3256
IS_WILD FUNCTION 3598
ADD_AND FUNCTION 1662
CHECK_DATE FUNCTION 10830
DBMS_OUTPUT PACKAGE 13111
ASP_FIND_CHECKS PACKAGE 16430
DBMS_DDL PACKAGE 1996
DBMS_RCVMAN PACKAGE 142263
STANDARD PACKAGE 265036
PLITBLM PACKAGE 3327
DBMS_SHARED_POOL PACKAGE 10644
DBMS_TRANSACTION PACKAGE 14932
DBMS_APPLICATION_INFO PACKAGE 12413
DBMS_BACKUP_RESTORE PACKAGE 97483
DBMS_STATS PACKAGE 55782
DBMS_STANDARD PACKAGE 24369
DBMS_SYS_SQL PACKAGE 20640
DBMS_UTILITY PACKAGE 4348
CMN_PARTITION_MGMT PACKAGE 11910
DBMS_SQL PACKAGE 19280
DBMS_OUTPUT PACKAGE BODY 6203
DBMS_DDL PACKAGE BODY 12780
ASP_FIND_CHECKS PACKAGE BODY 24002
DBMS_SQL PACKAGE BODY 18652
DBMS_SHARED_POOL PACKAGE BODY 9112
DBMS_TRANSACTION PACKAGE BODY 6444
DBMS_APPLICATION_INFO PACKAGE BODY 3105
DBMS_BACKUP_RESTORE PACKAGE BODY 41559
DBMS_STATS PACKAGE BODY 174910
DBMS_SYS_SQL PACKAGE BODY 46852
DBMS_UTILITY PACKAGE BODY 20484
CMN_PARTITION_MGMT PACKAGE BODY 13550
STANDARD PACKAGE BODY 25796
DBMS_RCVMAN PACKAGE BODY 170739
CHK_PAR_TAB_STATS1 PROCEDURE 25722
CHK_PAR_IND_STATS PROCEDURE 26993
ARCH_USER_PROC_INFO PROCEDURE 25162
POP_USER_INFO PROCEDURE 15528
CHK_PAR_IND_STATS1 PROCEDURE 27006
CHK_PAR_TAB_STATS PROCEDURE 25709
COALESCE_TS PROCEDURE 19035

SQL>

The sizable packages from this list can be moved to the reserved area. It is definitely a good idea to reserve additional area for other objects as well.

SHARED_POOL_RESERVED_SIZE can be set for the database at 15 to 20% of the total shared pool area. This brings down the competition between the larger objects and the smaller objects.
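As a rough sketch of that 15 to 20% rule of thumb for the 64 MB shared pool reported earlier, the init.ora entries would look like this (the values are only illustrative):

shared_pool_size = 67108864          # 64 MB
shared_pool_reserved_size = 10485760 # 10 MB, roughly 15% of the shared pool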

In a similar way, the largest SQL statements against the database can be found by issuing the following query:

select *
from (select *
      from v$sqlarea
      order by sharable_mem desc)
where rownum <= 10;

These SQL statements give an idea of how much space is being used by such large queries. The developers may be advised about them, so they do not later complain that their SQL runs slowly while those statements compete among themselves for a place in the shared pool area. If necessary, these unnamed objects can also be kept in memory (the shared pool area) using the address and hash value of the SQL from the V$SQLAREA view.

Here is an example SQL

select sql_text,
       address,
       hash_value,
       sharable_mem
from (select sql_text, address, hash_value, sharable_mem
      from v$sqlarea
      order by sharable_mem desc)
where rownum <= 10; -- only the top 10 by sharable memory are displayed

After finding out which cursor is to be kept, execute the package:

exec dbms_shared_pool.keep('address,hash_value','C');

Please note that the address and hash_value of the cursor to be kept are separated by a comma and together are treated as a single string.

Keep in mind that flushing the shared pool will not un-keep or release these pinned objects; only the UNKEEP procedure releases them from the shared pool.
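To release such a pinned cursor, the same 'address,hash_value' string (taken from V$SQLAREA; the value here is a placeholder) is passed to UNKEEP with the 'C' flag:

exec dbms_shared_pool.unkeep('address,hash_value','C');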

Data Dictionary Cache

This cache contains the rows that have been read from the data dictionary in response to recursive calls. These rows are read into the data block buffers like any other data, and the relevant information is transferred into the dictionary cache. Recursive SQL is executed in response to regular SQL, and as long as Oracle can resolve a recursive call from the data dictionary cache, there is no need to re-read from disk, which means reduced I/O. Here is the hit ratio formula:

select to_char(round((1-sum(getmisses)/sum(gets))*100,1))||' % ' "Hit Ratio"
from v$rowcache;
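The ratio can also be broken down by dictionary area, to see which cache is actually missing (PARAMETER, GETS, and GETMISSES are standard columns of V$ROWCACHE):

select parameter,
       gets,
       getmisses,
       round((1 - getmisses/gets)*100, 1) hit_pct
from v$rowcache
where gets > 0
order by getmisses desc;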

It was discussed earlier that, to reduce the competition among objects for a place in the shared pool area, a new parameter (SHARED_POOL_RESERVED_SIZE) can be introduced to reserve some space in the shared pool for the big objects. Oracle has also given users the ability to pin the required objects in the shared pool area, so that I/O is reduced and performance goes up.

The DBMS_SHARED_POOL package has a procedure to pin objects in the shared pool, and likewise another to take them out of the shared pool area.

exec dbms_shared_pool.keep('STANDARD');

To release an object

exec dbms_shared_pool.unkeep('STANDARD');

Query v$db_object_cache with the where clause kept='YES' to get all the objects that are kept in the shared pool area.

select OWNER,
NAME,
NAMESPACE,
TYPE,
SHARABLE_MEM,
KEPT
from v$db_object_cache
where kept='YES'
/

Now it is time to understand what ORA-04031 is, what causes fragmentation of the shared pool, and how it is to be tackled, since it impacts performance.

The factors that are responsible for fragmentation of the shared pool:

01. Frequent aging of objects out of the shared pool because of the lack of sufficient space.
02. High values of free memory as a result of that aging.
03. Unkept large objects which get aged out.
04. SQL not using bind variables.
05. The CURSOR_SHARING parameter not being set when bind variables could not be used, or are not used, by the application (the parameter takes three values: EXACT, SIMILAR, and FORCE).
06. Excessive parsing, partly due to not keeping the large objects in the shared pool and partly due to not using the SESSION_CACHED_CURSORS parameter, which could have avoided reloads.

The wait events that point to fragmentation of the shared pool area can be found with this query:

select sw.sid,
       s.username,
       substr(sw.event,1,25),
       sw.wait_time
from v$session s,
     v$session_wait sw
where sw.sid = s.sid
and sw.event not like 'SQL*Net%'
order by sw.wait_time, sw.event;

In the output look for the following events:

01. LATCH FREE - this event is the result of hard-coded application literals where bind variables are not used and the CURSOR_SHARING parameter is not set.
02. LIBRARY CACHE LOAD LOCK and LIBRARY CACHE PIN - these events indicate reloads of objects. Such reloads are due to frequent invalidation of objects and their recompilation. Any activity that updates the LAST_DDL_TIME column values in DBA_OBJECTS will cause those objects to be reloaded.

Doc ID 115656.1 on Metalink reads as under:

Library cache pins and load locks may occur during PL/SQL, views, types, etc. compilation or recompilation. The compilation is always explicit (application installation, upgrades, applying patches), but object recompilation may happen transparently due to object invalidations.

Dealing with slow downs related to "mysterious" library cache pins and load locks we should look for the reason of the database object invalidations. They are likely to be triggered by actions causing changes to "LAST_DDL" attribute of database objects that have other dependent ones. Typically they are the object maintenance operations - ALTER, GRANT, REVOKE, replacing views, etc. This behavior is described in Oracle Server Application Developer's Guide as object dependency maintenance.

After object invalidation, Oracle tries to recompile the object at the time of the first access to it. It may be a problem in case when other sessions have pinned the object to the library cache. It is obvious that it is more likely to occur with more active users and with more complex dependencies (eg. many cross-dependent packages or package bodies). In some cases waiting for object recompilation may even take hours blocking all the sessions trying to access it.

In some circumstances these errors may also occur: many sorts of ORA-600,
e.g. ORA-600[17285], ORA-4061, ORA-4065, ORA-6508. Note that they may be caused by some other reasons as well.

03. LIBRARY CACHE LOCK - this event is due to a DML operation hanging because the table being accessed is currently undergoing changes (ALTER TABLE). The wait may be quite long, depending on the size of the table and the type of modification. Another cause is that compilation of a package will hang on LIBRARY CACHE LOCK and LIBRARY CACHE PIN if some users are executing a procedure or function defined in the same package.


Doc ID 122793.1 lays down a method to identify the causes of such locks, since in an environment of multiple concurrent users performing various functions against the database it is difficult to pinpoint the process that is causing the locks.

METHOD 2: EXAMINE THE X$KGLLK TABLE
-----------------------------------

The X$KGLLK table (accessible only as SYS/INTERNAL) contains all the
library object locks (both held & requested) for all sessions and
is more complete than the V$LOCK view although the column names don't
always reveal their meaning.

You can examine the locks requested (and held) by the waiting session
by looking up the session address (SADDR) in V$SESSION and doing the
following select:

select * from x$kgllk where KGLLKSES = 'saddr_from_v$session'

This will show you all the library locks held by this session where
KGLNAOBJ contains the first 80 characters of the name of the object.
The value in KGLLKHDL corresponds with the 'handle address' of the
object in METHOD 1.

You will see that at least one lock for the session has KGLLKREQ > 0
which means this is a REQUEST for a lock (thus, the session is waiting).
If we now match the KGLLKHDL with the handles of other sessions in
X$KGLLK that should give us the address of the blocking session since
KGLLKREQ=0 for this session, meaning it HAS the lock.

SELECT * FROM X$KGLLK LOCK_A
WHERE KGLLKREQ = 0
AND EXISTS (SELECT LOCK_B.KGLLKHDL FROM X$KGLLK LOCK_B
WHERE KGLLKSES = 'saddr_from_v$session' /* BLOCKED SESSION */
AND LOCK_A.KGLLKHDL = LOCK_B.KGLLKHDL
AND KGLLKREQ > 0);

If we look a bit further we can then again match KGLLKSES with SADDR
in v$session to find further information on the blocking session:

SELECT SID,USERNAME,TERMINAL,PROGRAM FROM V$SESSION
WHERE SADDR in
(SELECT KGLLKSES FROM X$KGLLK LOCK_A
WHERE KGLLKREQ = 0
AND EXISTS (SELECT LOCK_B.KGLLKHDL FROM X$KGLLK LOCK_B
WHERE KGLLKSES = 'saddr_from_v$session' /* BLOCKED SESSION */
AND LOCK_A.KGLLKHDL = LOCK_B.KGLLKHDL
AND KGLLKREQ > 0)
);

In the same way we can also find all the blocked sessions:

SELECT SID,USERNAME,TERMINAL,PROGRAM FROM V$SESSION
WHERE SADDR in
(SELECT KGLLKSES FROM X$KGLLK LOCK_A
WHERE KGLLKREQ > 0
AND EXISTS (SELECT LOCK_B.KGLLKHDL FROM X$KGLLK LOCK_B
WHERE KGLLKSES = 'saddr_from_v$session' /* BLOCKING SESSION */
AND LOCK_A.KGLLKHDL = LOCK_B.KGLLKHDL
AND KGLLKREQ = 0)
);


CAVEAT and DOWNSIDE OF CURSOR_SHARING

There is a sure overhead of at least 25% on the system when CURSOR_SHARING is set to FORCE.


Oracle has a few more cards up its sleeve for taking control of the situation without imposing that 25% overhead.

Set CURSOR_SHARING=EXACT or CURSOR_SHARING=SIMILAR so that the overhead impact is not as heavy.
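Since CURSOR_SHARING is modifiable at both levels, the lighter settings can be tried for one problem session before anything is changed system-wide. A minimal sketch:

alter session set cursor_sharing = 'SIMILAR';

-- or, system-wide without a restart:
alter system set cursor_sharing = 'SIMILAR';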

Database Buffer Cache and Performance

Consider increasing db_cache_size. When increasing db_cache_size, be sure that the OS can handle the change without additional swapping/paging.

It is a general opinion that a database buffer cache hit ratio (CHR) of about 50% indicates a badly performing database. That is not always true: the CHR depends on many factors, and sometimes the CHR itself is misleading.

For instance, if a query takes more than an hour on its first execution, the same query may take less than half an hour when executed a second time. Does this mean the hit ratio is good the second time and the query is working fine? No; the representation is not true. The reasons for this will become clear as the discussion advances.

Basically the CHR depends upon the nature of the database. OLTP databases hit the same set of data blocks repeatedly, so the CHR remains very high. When a batch job runs against the same database, the CHR goes down, because the job accesses a wide range of data blocks. What matters is not achieving a target ratio but delivering the desired output in as little time as possible, as desired by the user.

Hence, it is not the CHR alone that determines the performance of the database. Various other factors must be considered alongside the CHR, depending upon the nature of the database.
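For reference, the conventional CHR computation from V$SYSSTAT (the three statistic names are standard):

select round((1 - phy.value / (db.value + con.value)) * 100, 2) "Buffer CHR %"
from v$sysstat phy,
     v$sysstat db,
     v$sysstat con
where phy.name = 'physical reads'
and db.name = 'db block gets'
and con.name = 'consistent gets';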

It is essential to understand and query the X$ tables that are associated with the database buffer cache.

Before jumping into that sea for a deeper swim, first query the X$ tables to get all the parameters that affect the database (the initialization parameters for the database).

select x.indx+1 instance_num,
       ksppinm parameter_name,
       ksppity parameter_type,
       ksppstvl parameter_value,
       ksppstdf parameter_default_value,
       decode(bitand(ksppiflg/256,1),1,'TRUE','FALSE') is_sess_modi,
       decode(bitand(ksppiflg/65536,3),
              1,'IMMEDIATE',2,'DEFERRED','FALSE') is_sys_modi,
       decode(bitand(ksppstvf,7),1,'MODIFIED',2,'TRUE','FALSE') is_modifd,
       decode(bitand(ksppstvf,2),1,'MODIFIED',2,'TRUE','FALSE') is_adjd,
       ksppdesc description,
       ksppstcmnt comment
from sys.x$ksppi x, sys.x$ksppsv y
where x.indx = y.indx
order by parameter_name;

This query is executable against Oracle version 9.2.0.3. This is worth mentioning because Oracle changes the base structures from release to release, and the query may not run against 8.1.6 or 8.1.7.


X$ Tables and Myths, Misconceptions and Truths

This part of the discussion tries to clear up the misconceptions about X$ tables, the most superstitiously feared part of the Oracle RDBMS.

01. They are not temporary or permanent tables that reside in the data files of Oracle.
02. They are memory resident only.
03. They are created when the instance is started, even before the control files are created or opened.
04. They are platform specific, but the synonyms and views created on these X$ tables are consistent when their structures are viewed.
05. They cannot be dropped.
06. Only SELECT permission exists on these tables.
07. The SYS user owns them.
08. If grants are attempted, a cryptic error message pops up: ORA-02030, which reads 'can only select from fixed tables/views'.
09. They are destroyed when the instance is shut down and recreated when the instance is brought back up.
10. Users cannot create indexes on these tables.
11. Oracle itself started indexing these tables from Oracle 8.x onwards.
12. The number of X$ tables varies from release to release. Deprecated features of a version can cause some X$ tables to disappear, and likewise new features bring new X$ tables in. The structures of these X$ tables do not remain the same either, but the public synonyms and views built on them generally do not change, as Oracle uses the DECODE function extensively to create the V$ views and synonyms. In 9.2.0.3 there are 394 X$ tables; in 8.1.6.3 there were 265.
13. The V$INDEXED_FIXED_COLUMN view lists the indexes built on these X$ tables.
14. A user can create his own views and synonyms on these X$ tables and grant access to them. Other users can only select from those views/synonyms; the underlying data cannot be updated, inserted, or deleted.

Here Are Some Important Init.Ora parameters
(The following are extracts from Oracle 9i Performance Tuning by Richard J Niemiec, Bradley D Brown and Joseph C Trezzo)

Memory

1. DB_CACHE_SIZE Memory allocated for data cache
2. PGA_AGGREGATE_TARGET Soft memory cap for Users’ PGA
3. SHARED_POOL_SIZE Memory for Data Dictionary, SQL and PLSQL
4. SGA_MAX_SIZE This parameter need not be fixed on SUN Solaris
5. LOG_BUFFER for uncommitted transactions.
6. DB_KEEP_CACHE_SIZE memory allocated to keep pool
7. DB_RECYCLE_CACHE_SIZE memory allocated for recycle pool
8. LARGE_POOL_SIZE
9. JAVA_POOL_SIZE for JVM
10. JAVA_MAX_SESSIONSPACE_SIZE
11. WORKAREA_SIZE_POLICY


Recovery Options

FAST_START_MTTR_TARGET
LOG_CHECKPOINT_INTERVAL

Optimization and Performance

1. OPTIMIZER_MODE CHOOSE, RULE, FIRST_ROWS, FIRST_ROWS_n, ALL_ROWS
2. CURSOR_SHARING EXACT, SIMILAR, FORCE
3. OPTIMIZER_INDEX_COST_ADJ adjustment of cost of full table scan and index
scan
4. QUERY_REWRITE_ENABLED for enabling materialized views and
function-based-indexes
5. STATISTICS_LEVEL
6. PARALLEL_AUTOMATIC_TUNING
7. HASH_JOIN_ENABLED
8. OPEN_CURSORS

File Destinations

1. CONTROL_FILES
2. BACKGROUND_DUMP_DEST
3. USER_DUMP_DEST
4. CORE_DUMP_DEST
5. LOG_ARCHIVE_DEST
6. UTL_FILE_DIR

To Enable the Media Recovery

1. LOG_ARCHIVE_START set to TRUE for archive log/media recovery mode

Important Parameters with some purpose

1. COMPATIBLE set it to exploit the new features of the version installed
2. OPTIMIZER_FEATURES_ENABLED set it else you miss the new features
3. UNDO_MANAGEMENT set it to have auto undo/RB management
4. UNDO_TABLESPACE set this for undo management
5. JOB_QUEUE_PROCESSES set this to use DBMS_JOB
6. RECOVERY_PARALLELISM recovers using the parallel query option (PQO)

Undocumented Important Parameters
To get the list of the parameters (undocumented) use the following SQL

select a.ksppinm name,
       b.ksppstvl value,
       b.ksppstdf default_value,
       a.ksppdesc description
from sys.x$ksppi a,
     sys.x$ksppcv b
where a.indx = b.indx
and substr(a.ksppinm,1,1) = '_'
order by a.ksppinm;

_CORRUPTED_ROLLBACK_SEGMENTS Use this parameter to start and open the database when corrupt rollback segments are reported and the database will not otherwise open.

_ALLOW_RESETLOGS_CORRUPTION This can force the database open after a failed recovery caused by lost or corrupt log files that are needed for recovery.

CPU_COUNT This is the number of CPUs on the server. It can cause bugs with PQO (Parallel Query Option).

_INIT_SQL_FILE Set this to create the database when sql.bsq is not in its default location. There were problems creating databases with a 32K data block size in Oracle 7.x, 8.x, and 8.1.x versions. To overcome this, customize the sql.bsq file: rename the original file to .old, create your own customized copy, and restore the sql.bsq file afterwards. If the location of the file is changed, set this parameter so that Oracle reads the new location from the init.ora file while creating the database. I created test databases on 9.0 and 9.2 with a 32K data block size with no problems.

_KSMG_GRANULE_SIZE: This parameter does not exist in Oracle 7.x, 8.x, or 8.1.x. It is an undocumented parameter used in Oracle 9.0 and 9.2, and playing with it may be harmful. It is the unit multiple for SGA pieces of memory such as SHARED_POOL_SIZE and DB_CACHE_SIZE.

Name                 Value      Default  Description
-------------------  ---------  -------  --------------------------------------------
_ksmg_granule_size   4194304    TRUE     granule size in bytes
db_cache_size        16777216   FALSE    Size of DEFAULT buffer pool for standard block size buffers
shared_pool_size     50331648   FALSE    size in bytes of shared pool
Note that the ratio here is 1:4:12. If the granule size is to be increased, the other parameters must also be modified to remain exact multiples of it. The value is set in bytes.

For development purposes Oracle has provided data file formats and default file generation associated with data files; this is not the place for that discussion.

_FAST_FULL_SCAN_ENABLED has been available consistently since 7.0. It allows an index fast full scan when only an index is needed.

_HASH_MULTIBLOCK_IO_COUNT parameter sets the number of blocks that will be read/written at once when a hash join is used in executing a SQL.

Note:

This parameter was a documented parameter in 7.0, 8.0, and 8.1, appearing as HASH_MULTIBLOCK_IO_COUNT, with no underscore prefix.

Doc Id 39023.1 on Metalink
--------------------------------
Parameter type: integer
Parameter class: dynamic, scope= ALTER SESSION, ALTER SYSTEM
Default value: For Oracle 7.3 and 8.0 = 1
For Oracle8i it is query dependent. Appears as 0 in V$PARAMETER.
Range of values: operating system dependent
typically 1 - (65,536/<Parameter:DB_BLOCK_SIZE>)

Related: <Parameter:DB_BLOCK_SIZE>
<Parameter:HASH_JOIN_ENABLED>
<Parameter:HASH_AREA_SIZE>

*** Note: As of Oracle9i this parameter is hidden
i.e.: It is renamed to _HASH_MULTIBLOCK_IO_COUNT
This hidden version of the parameter still behaves as
described below.

Description:
~~~~~~~~~~~~
This parameter specifies how many sequential blocks a hash join reads and writes in one IO. When operating in multi-threaded server mode, however, this parameter is ignored (that is, the default value of 1 is used even if you set the parameter to another value).

The maximum value for HASH_MULTIBLOCK_IO_COUNT varies by operating system. It is always less than the operating system's maximum I/O size expressed as Oracle blocks (max_IO_size/DB_BLOCK_SIZE).

For Oracle 8i you need not set or change the value of this parameter, because Oracle computes the value individually for every query. If you let Oracle do the automatic computation, the value of the parameter appears as 0 in the V$PARAMETER dynamic performance view.

This parameter strongly affects performance because it controls the number of partitions into which the input is divided. For Oracle 8i, Oracle Corporation does not recommend that you set or change the value of this parameter. If you must set it to investigate its effect on performance, make sure that the following formula remains
true:

R / M < Po2 (M/C)

where:

R = size of(<left input to the join>)
M = HASH_AREA_SIZE * 0.9
Po2(<n>) = largest power of 2 that is smaller than <n>
C = HASH_MULTIBLOCK_IO_COUNT * DB_BLOCK_SIZE

You can change the value of this parameter without shutting down your Oracle instance by using the ALTER SESSION or ALTER SYSTEM commands.

===============================================
There are instances where this setting resulted in ORA-03232 and ORA-02063 when CREATE TEMPORARY TABLESPACE TEMP TEMPFILE .... EXTENT MANAGEMENT LOCAL UNIFORM SIZE ...; was executed. This was considered a bug, but it was ultimately closed as 'It was decided that the behavior needed to be better documented in the Oracle8i Reference manual (done in 8.1.7)', per Oracle Metalink docs.

This is also the case with Oracle 9.2.0.3 on the Windows platform.

_INDEX_JOIN_ENABLED Use this to disable the use of index joins. An execution plan with a bitmap access on top of a full B*-tree index scan can produce a wrong result if all the columns of the B*-tree index are nullable. This was considered a bug, and patches were released with patchsets 8.1.7.4, 9.0.1.4, and 9.2.0.1.

This parameter is set to TRUE by default. This is session modifiable. Hence, as a work around one may set it to false for that session.

alter session set "_INDEX_JOIN_ENABLED"=FALSE;

Some have experienced instance crashes with the following error:

ORA-07445: exception encountered: core dump[qervwRowProcedure()+123] [SIGSEGV] [Address not mapped to object] [0xC] [] []

ORA-7445 [qervwRestoreViewBufPtrs] errors are possible from queries against clustered tables if an index join access path is used.

Workaround: Set _index_join_enabled=false


This was seen when a SELECT statement with an inline query (containing an outer join) in the predicate was executed. The workaround tried was using the ORDERED RULE hint in the outer join of the inline query. For this, too, the workaround is:

alter session set "_INDEX_JOIN_ENABLED"=FALSE;

So, by altering the session with this setting, where a query undoes the performance gains of earlier versions, or where a SQL with inline queries in its where clause leads to a crash of the instance, one can regain the performance and also save the instance.

In some cases SELECT FOR UPDATE fails with ORA-02014. For this also the workaround is:

alter session set "_INDEX_JOIN_ENABLED"=FALSE;

If the Oracle patch is installed there is no problem.

_OPTIMIZER_ADJUST_FOR_NULLS This parameter adjusts selectivity for null values. It has been available since version 7.0 and is set to FALSE throughout. It can be set for a session, so it can be switched to TRUE whenever and wherever required; when the parameter is set to TRUE, the selectivity is adjusted for nulls.
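Since it is session modifiable, it can be switched on just where it is wanted:

alter session set "_optimizer_adjust_for_nulls" = true;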

_ORACLE_TRACE_EVENTS This parameter was introduced in Oracle 7.3.2. It is used to flag Oracle traces.

Performance Tuning Part II
(Application Tuning - Hints)

Hints:

Metalink says: Hints are directives added to the select list of a query. They force the query to operate in a particular manner. Hints are available to alter all manner of factors that affect queries including single table access methods, join order and methods.

Hints are suggestive directives given to the optimizer to choose a particular path of execution. They are not commands to the optimizer: a hint does not always get used, and sometimes the optimizer ignores it. The conditions under which the optimizer ignores hints are documented as caveats, and they can be found in the Oracle documentation on performance tuning.

The optimizer recognizes hints only when using the cost-based approach. If you include a hint (except the RULE hint) in a statement block, then the optimizer automatically uses the cost-based approach.

Types of Hints:

Hints are basically of two types: (1) usage directives and (2) compiler directives.

Usage directives are explicitly opted into in the init.ora parameter file for the database, where they can be seen. Apart from these explicitly set parameters, Oracle invokes further parameters for the optimizer; these are listed in the trace file generated by Oracle when event 10053 is invoked.


Invoking event 10053:

This is best invoked for a specific query in a specific session, to identify the execution paths as evaluated under the optimizer parameters operative in the instance. It is available from Oracle 9.0 onwards.

Do not set the trace at the database level (the "event=" parameter in init.ora): that requires a shutdown and restart of the database, and besides, a database-level setting turns the server into a space-eating machine; the user dump destination fills up in no time.
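A typical session-level invocation looks like this (run the query immediately after setting the event, then switch the event off; the trace appears in the user dump destination):

alter session set events '10053 trace name context forever, level 1';

explain plan for
select * from emp where job = 'MANAGER';

alter session set events '10053 trace name context off';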

The trace that is produced shows the steps the CBO followed to produce an execution plan:
(1) query text
(2) parameters used by optimizer
(3) base statistical information (tables, columns, index statistics)
(4) single table access path
(5) join order and method evaluation
(6) final cost

Here is a trace file for the event 10053
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

*** 2003-09-04 22:24:39.000
QUERY
explain plan for
select * from emp
where job='MANAGER'
AND deptno = ( select deptno from dept where dname='ACCOUNTING')
***************************************
PARAMETERS USED BY THE OPTIMIZER
********************************
OPTIMIZER_FEATURES_ENABLE = 9.2.0
OPTIMIZER_MODE/GOAL = Choose
_OPTIMIZER_PERCENT_PARALLEL = 101
HASH_AREA_SIZE = 1048576
HASH_JOIN_ENABLED = TRUE
HASH_MULTIBLOCK_IO_COUNT = 0
SORT_AREA_SIZE = 1048576
OPTIMIZER_SEARCH_LIMIT = 5
PARTITION_VIEW_ENABLED = FALSE
_ALWAYS_STAR_TRANSFORMATION = FALSE
_B_TREE_BITMAP_PLANS = TRUE
STAR_TRANSFORMATION_ENABLED = TRUE
_COMPLEX_VIEW_MERGING = TRUE
_PUSH_JOIN_PREDICATE = TRUE
PARALLEL_BROADCAST_ENABLED = TRUE
OPTIMIZER_MAX_PERMUTATIONS = 2000
OPTIMIZER_INDEX_CACHING = 0
_SYSTEM_INDEX_CACHING = 0
OPTIMIZER_INDEX_COST_ADJ = 100
OPTIMIZER_DYNAMIC_SAMPLING = 1
_OPTIMIZER_DYN_SMP_BLKS = 32
QUERY_REWRITE_ENABLED = TRUE
QUERY_REWRITE_INTEGRITY = ENFORCED
_INDEX_JOIN_ENABLED = TRUE
_SORT_ELIMINATION_COST_RATIO = 0
_OR_EXPAND_NVL_PREDICATE = TRUE
_NEW_INITIAL_JOIN_ORDERS = TRUE
ALWAYS_ANTI_JOIN = CHOOSE
ALWAYS_SEMI_JOIN = CHOOSE
_OPTIMIZER_MODE_FORCE = TRUE
_OPTIMIZER_UNDO_CHANGES = FALSE
_UNNEST_SUBQUERY = TRUE
_PUSH_JOIN_UNION_VIEW = TRUE
_FAST_FULL_SCAN_ENABLED = TRUE
_OPTIM_ENHANCE_NNULL_DETECTION = TRUE
_ORDERED_NESTED_LOOP = TRUE
_NESTED_LOOP_FUDGE = 100
_NO_OR_EXPANSION = FALSE
_QUERY_COST_REWRITE = TRUE
QUERY_REWRITE_EXPRESSION = TRUE
_IMPROVED_ROW_LENGTH_ENABLED = TRUE
_USE_NOSEGMENT_INDEXES = FALSE
_ENABLE_TYPE_DEP_SELECTIVITY = TRUE
_IMPROVED_OUTERJOIN_CARD = TRUE
_OPTIMIZER_ADJUST_FOR_NULLS = TRUE
_OPTIMIZER_CHOOSE_PERMUTATION = 0
_USE_COLUMN_STATS_FOR_FUNCTION = TRUE
_SUBQUERY_PRUNING_ENABLED = TRUE
_SUBQUERY_PRUNING_REDUCTION_FACTOR = 50
_SUBQUERY_PRUNING_COST_FACTOR = 20
_LIKE_WITH_BIND_AS_EQUALITY = FALSE
_TABLE_SCAN_COST_PLUS_ONE = TRUE
_SORTMERGE_INEQUALITY_JOIN_OFF = FALSE
_DEFAULT_NON_EQUALITY_SEL_CHECK = TRUE
_ONESIDE_COLSTAT_FOR_EQUIJOINS = TRUE
_OPTIMIZER_COST_MODEL = CHOOSE
_GSETS_ALWAYS_USE_TEMPTABLES = FALSE
DB_FILE_MULTIBLOCK_READ_COUNT = 32
_NEW_SORT_COST_ESTIMATE = TRUE
_GS_ANTI_SEMI_JOIN_ALLOWED = TRUE
_CPU_TO_IO = 0
_PRED_MOVE_AROUND = TRUE
***************************************
BASE STATISTICAL INFORMATION
***********************
Table stats Table: EMP Alias: EMP
TOTAL :: CDN: 14 NBLKS: 1 AVG_ROW_LEN: 40
-- Index stats
INDEX NAME: PK_EMP COL#: 1
TOTAL :: LVLS: 0 #LB: 1 #DK: 14 LB/K: 1 DB/K: 1 CLUF: 1
_OPTIMIZER_PERCENT_PARALLEL = 0
***************************************
SINGLE TABLE ACCESS PATH
Column: JOB Col#: 3 Table: EMP Alias: EMP
NDV: 5 NULLS: 0 DENS: 2.0000e-001
NO HISTOGRAM: #BKT: 1 #VAL: 2
Column: DEPTNO Col#: 8 Table: EMP Alias: EMP
NDV: 3 NULLS: 0 DENS: 3.3333e-001 LO: 10 HI: 30
NO HISTOGRAM: #BKT: 1 #VAL: 2
TABLE: EMP ORIG CDN: 14 ROUNDED CDN: 1 CMPTD CDN: 1
Access path: tsc Resc: 2 Resp: 2
BEST_CST: 2.00 PATH: 2 Degree: 1
***************************************
OPTIMIZER STATISTICS AND COMPUTATIONS
***************************************
GENERAL PLANS
***********************
Join order[1]: EMP [EMP]
Best so far: TABLE#: 0 CST: 2 CDN: 1 BYTES: 32
Final:
CST: 2 CDN: 1 RSC: 2 RSP: 2 BYTES: 32
IO-RSC: 2 IO-RSP: 2 CPU-RSC: 0 CPU-RSP: 0
PLAN
Cost of plan: 2
Operation...........Object name.....Options.........Id...Pid..
SELECT STATEMENT 0
TABLE ACCESS EMP FULL 1
TABLE ACCESS DEPT FULL 2 1
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The TKPROF utility cannot be used here, as it cannot interpret the layout of this trace. Event 10053 is thus almost another utility for SQL tuning, apart from Explain Plan and TKPROF.

Before setting the event, it is better to check for the following:

(1) Existence of the PLAN_TABLE for that user
(2) Analyzed objects – the CBO needs their statistics

Command line syntax for event 10053

To switch on the event:
alter session set events '10053 trace name context forever[, level {1|2}]'

To switch off the event:
alter session set events '10053 trace name context off'

Another way:
on:
sys.dbms_system.set_ev (<sid>, <serial#>, 10053, {1|2}, '')
off:
sys.dbms_system.set_ev (<sid>, <serial#>, 10053, 0, '')

The sid and serial# are to be collected for the session from the v$session view, as sketched below.
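For example (a minimal sketch; the traced username TEST and the example sid/serial# values are assumptions, and executing dbms_system requires appropriate privileges):

-- find the target session
select sid, serial#
from v$session
where username = 'TEST';

-- switch the 10053 trace on at level 1 for that session (sid 9, serial# 29 assumed)
exec sys.dbms_system.set_ev(9, 29, 10053, 1, '');

-- ...have that session run the query under study...

-- switch the trace off
exec sys.dbms_system.set_ev(9, 29, 10053, 0, '');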
When are they to be used?

Compiler Directives: These directives are of many kinds.

The compiler directive types and their hints:

To change the execution path (optimization goal): ALL_ROWS, FIRST_ROWS, FIRST_ROWS(n), CHOOSE, RULE

To change access methods: AND_EQUAL, HASH, CLUSTER, FULL, INDEX, INDEX_ASC, INDEX_COMBINE, INDEX_DESC, INDEX_FFS, INDEX_JOIN, NO_INDEX, ROWID

To change query transformation methods (useful in DW environments where FACT and DIMENSION tables exist): FACT, NO_FACT, MERGE, NO_MERGE, REWRITE, NOREWRITE, STAR, STAR_TRANSFORMATION, NO_EXPAND, USE_CONCAT

To change join operations: DRIVING_SITE, LEADING, NL_AJ, HASH_AJ, MERGE_AJ, NL_SJ, HASH_SJ, MERGE_SJ, ORDERED, PUSH_SUBQ, USE_NL, USE_HASH, USE_MERGE

To change parallel execution methods: NOPARALLEL, NOPARALLEL_INDEX, PARALLEL, PARALLEL_INDEX, PQ_DISTRIBUTE

Miscellaneous: APPEND, NOAPPEND, CACHE, NOCACHE, UNNEST, NO_UNNEST

Metalink Docs:
(1) For USE_MERGE, USE_HASH, USE_NL refer to Note:104817.1 --Discussion on Oracle Joins - Costs - Algorithms & Hints
(2) For USE_CONCAT refer to Note:17214.1 --Using the USE_CONCAT hint with IN/OR

Caveats

For access path hints, Oracle ignores the hint if you specify the SAMPLE option in the FROM clause of a SELECT statement.

The optimizer ignores FIRST_ROWS(n) hint in DELETE and UPDATE statement blocks and in SELECT statement blocks that contain any of the following syntax:

(a) Set operators (UNION, INTERSECT, MINUS, UNION ALL)
(b) GROUP BY clause
(c) FOR UPDATE clause
(d) Aggregate functions
(e) DISTINCT operator
(f) ORDER BY clauses, when there is no index on the ordering columns

The INDEX hint applies to IN-list predicates. It forces the optimizer to use the hinted index, if possible, for an IN-list predicate. Multicolumn IN-lists will not use an index.
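A minimal sketch (the unique index PK_EMP on EMP.EMPNO is an assumption):

select /*+ index(e pk_emp) */ empno, ename
from emp e
where empno in (7369, 7499, 7521);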

The FULL hint explicitly chooses a full table scan for the specified table.

The ROWID hint explicitly chooses a table scan by rowid for the specified table.

The CLUSTER hint explicitly chooses a cluster scan to access the specified table. It applies only to clustered objects.

The HASH hint explicitly chooses a hash scan to access the specified table. It applies only to tables stored in a cluster.

The INDEX hint explicitly chooses an index scan for the specified table. You can use the INDEX hint for domain, B-tree, bitmap, and bitmap join indexes. However, Oracle recommends using INDEX_COMBINE rather than INDEX for bitmap indexes, because it is a more versatile hint.

If this hint specifies no indexes, then the optimizer considers the cost of a scan on each available index on the table and then performs the index scan with the lowest cost. The optimizer can also choose to scan multiple indexes and merge the results, if such an access path has the lowest cost. The optimizer does not consider a full table scan.
ex:
select /*+ Index */ * from emp where empno=7609
no rows selected
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=1 Card=1 Bytes=32)
1 0 TABLE ACCESS (BY INDEX ROWID) OF 'EMP' (Cost=1 Card=1 Bytes=32)
2 1 INDEX (UNIQUE SCAN) OF 'U_EMPNO' (UNIQUE)

INDEX_ASC

This hint explicitly chooses an index scan for the specified table. If the statement uses an index range scan, then Oracle scans the index entries in ascending order of their indexed values.

INDEX_COMBINE

This hint explicitly chooses a bitmap access path for the table. If no indexes are given as arguments for the INDEX_COMBINE hint, then the optimizer uses whatever Boolean combination of bitmap indexes has the best cost estimate for the table. If certain indexes are given as arguments, then the optimizer tries to use some Boolean combination of those particular bitmap indexes.
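A minimal sketch, assuming bitmap indexes named BMI_DEPTNO and BMI_JOB exist on EMP:

select /*+ index_combine(e bmi_deptno bmi_job) */ ename
from emp e
where deptno = 20 or job = 'CLERK';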

INDEX_JOIN

This hint explicitly instructs the optimizer to use an index join as an access path. If the number of indexes involved grows large, the directive loses its effect.

NO_INDEX

The NO_INDEX hint applies to function-based, B-tree, bitmap, cluster, or domain indexes. As the next example shows, when the hint names neither a table nor an index it is ignored, and an index is still used.
SQL> select /*+NO_index*/* from emp where empno=7934;

EMPNO ENAME JOB MGR HIREDATE SAL COMM DEPTNO
---------- ---------- --------- ---------- --------- ---------- ---------- ----------
7934 MILLER CLERK 7782 23-JAN-82 1300 10


Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=2 Card=1 Bytes=32)
1 0 TABLE ACCESS (BY INDEX ROWID) OF 'EMP' (Cost=2 Card=1 Bytes=32)
2 1 INDEX (UNIQUE SCAN) OF 'EMP_EMPNO' (UNIQUE) (Cost=1 Card=14)

When a specific index is mentioned Optimizer goes by the hint.

SQL> select /*+ no_index (emp EMP_EMPNO)*/* from emp where empno=7934;

EMPNO ENAME JOB MGR HIREDATE SAL COMM DEPTNO
---------- ---------- --------- ---------- --------- ---------- ---------- ----------
7934 MILLER CLERK 7782 23-JAN-82 1300 10


Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=2 Card=1 Bytes=32)
1 0 TABLE ACCESS (FULL) OF 'EMP' (Cost=2 Card=1 Bytes=32)

AND_EQUAL

This hint explicitly chooses an execution plan that uses an access path that merges the scans on several single-column indexes.
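A minimal sketch, assuming single-column indexes IND_DEPTNO and IND_JOB exist on EMP:

select /*+ and_equal(e ind_deptno ind_job) */ ename
from emp e
where deptno = 20 and job = 'CLERK';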

USE_CONCAT

SELECT /*+USE_CONCAT*/ *
FROM emp
WHERE empno > 50 OR sal < 50000;

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=4 Card=15 Bytes=480)
1 0 CONCATENATION
2 1 TABLE ACCESS (FULL) OF 'EMP' (Cost=2 Card=1 Bytes=32)
3 1 TABLE ACCESS (FULL) OF 'EMP' (Cost=2 Card=1 Bytes=32)

USE_NL

select /*+ USE_NL(emp) */ EMP.deptno,ename,empno,sal
from emp,dept
where emp.deptno=dept.deptno

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=2 Card=14 Bytes=210)
1 0 NESTED LOOPS (Cost=2 Card=14 Bytes=210)
2 1 TABLE ACCESS (FULL) OF 'EMP' (Cost=2 Card=14 Bytes=182)
3 1 INDEX (UNIQUE SCAN) OF 'U_DEPTNO' (UNIQUE)

The USE_NL hint causes Oracle to join each specified table to another row source with a nested loops join, using the specified table as the inner table.

USE_MERGE

SELECT /*+USE_MERGE(emp dept)*/ *
FROM emp, dept
WHERE emp.deptno = dept.deptno;

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=6 Card=14 Bytes=714)
1 0 MERGE JOIN (Cost=6 Card=14 Bytes=714)
2 1 TABLE ACCESS (BY INDEX ROWID) OF 'DEPT' (Cost=2 Card=5 Bytes=95)
3 2 INDEX (FULL SCAN) OF 'U_DEPTNO' (UNIQUE) (Cost=1 Card=5)
4 1 SORT (JOIN) (Cost=4 Card=14 Bytes=448)
5 4 TABLE ACCESS (FULL) OF 'EMP' (Cost=2 Card=14 Bytes=448)


INDEX_FFS

This hint directs the optimizer to perform a fast full index scan.

Example: there is a bitmap index on the deptno column of the emp table.

select /*+ index_ffs(emp bmp_dept)*/
deptno,empno
from emp
/

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=2 Card=14 Bytes=70)
1 0 TABLE ACCESS (FULL) OF 'EMP' (Cost=2 Card=14 Bytes=70)

The optimizer uses no index even though there is a directive.

Drop that bitmap index and create a composite index on the empno and deptno columns, and then try the same hint.

SQL> select /*+ index_ffs(emp bmp_dept)*/
2 deptno,empno
3 from emp;

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=1 Card=14 Bytes=70)
1 0 INDEX (FULL SCAN) OF 'IND_DEPTNO_EMPNO' (NON-UNIQUE) (Cost
=1 Card=14 Bytes=70)


This means that when all the columns of the composite index are selected in the query and the hint asks for a fast full scan of that index, the optimizer implements the directive.

SQL> select /*+index_ffs (emp deptno)*/
2 deptno from emp;

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=23 Card=14 Bytes=28)
1 0 BITMAP CONVERSION (TO ROWIDS)
2 1 BITMAP INDEX (FAST FULL SCAN) OF 'DEPTNO'

SQL> select distinct /*+ emp deptno */ deptno from emp;

DEPTNO
----------
10
20
30


Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=4 Card=3 Bytes=6)
1 0 SORT (UNIQUE) (Cost=4 Card=3 Bytes=6)
2 1 TABLE ACCESS (FULL) OF 'EMP' (Cost=2 Card=14 Bytes=28)

The ORDERED hint is a directive to access the tables in a particular order: the order in which they are listed in the FROM clause.

1 select /*+ ordered */
2 empno,ename,dept.deptno
3 from emp,dept
4 where emp.deptno=dept.deptno
5* and empno=7900
SQL> /

EMPNO ENAME DEPTNO
---------- ---------- ----------
7900 JAMES 30


Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=1 Card=1 Bytes=12)
1 0 NESTED LOOPS (Cost=1 Card=1 Bytes=12)
2 1 TABLE ACCESS (BY INDEX ROWID) OF 'EMP' (Cost=1 Card=1 By
tes=10)

3 2 INDEX (UNIQUE SCAN) OF 'EMP_EMPNO' (UNIQUE)
4 1 INDEX (UNIQUE SCAN) OF 'U_DEPTNO' (UNIQUE)
SQL>

Here there are indexes on these two tables, and hence the access of the data is based on the indexes.

Now, let the indexes be dropped and then run the same SQL.

SQL> select /*+ ordered */
2 empno,ename,dept.deptno
3 from emp,dept
4 where emp.deptno=dept.deptno
5 and empno=7900
6 /

EMPNO ENAME DEPTNO
---------- ---------- ----------
7900 JAMES 30


Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=5 Card=1 Bytes=12)
1 0 HASH JOIN (Cost=5 Card=1 Bytes=12)
2 1 TABLE ACCESS (FULL) OF 'EMP' (Cost=2 Card=1 Bytes=10)
3 1 TABLE ACCESS (FULL) OF 'DEPT' (Cost=2 Card=5 Bytes=10)

1 select /*+ ordered */
2 empno,ename,dept.deptno
3 from dept,emp
4 where emp.deptno=dept.deptno
5* and empno=7900
SQL> /

EMPNO ENAME DEPTNO
---------- ---------- ----------
7900 JAMES 30


Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=5 Card=1 Bytes=12)
1 0 HASH JOIN (Cost=5 Card=1 Bytes=12)
2 1 TABLE ACCESS (FULL) OF 'DEPT' (Cost=2 Card=5 Bytes=10)
3 1 TABLE ACCESS (FULL) OF 'EMP' (Cost=2 Card=1 Bytes=10)

SQL>

SQL>
The LEADING hint takes care of the problems with the ORDERED hint when there is enough complexity in the query to confuse the optimizer into not following the hint; it specifies only the leading (driving) table and leaves the rest of the join order to the optimizer.


1 select /*+ leading dept */
2 empno,ename,dept.deptno,sal,grade
3 from emp,dept,salgrade
4 where dept.deptno=emp.deptno
5 and sal >= 2001 and sal <= 3000
6* and hisal = 3000 and losal=2001
SQL> /

EMPNO ENAME DEPTNO SAL GRADE
---------- ---------- ---------- ---------- ----------
7566 JONES 20 2975 4
7698 BLAKE 30 2850 4
7782 CLARK 10 2450 4
7788 SCOTT 20 3000 4
7902 FORD 20 3000 4


Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=7 Card=1 Bytes=54)
1 0 HASH JOIN (Cost=7 Card=1 Bytes=54)
2 1 MERGE JOIN (CARTESIAN) (Cost=4 Card=1 Bytes=41)
3 2 TABLE ACCESS (FULL) OF 'SALGRADE' (Cost=2 Card=1 Bytes=39)
4 2 BUFFER (SORT) (Cost=2 Card=1 Bytes=2)
5 4 TABLE ACCESS (FULL) OF 'DEPT' (Cost=2 Card=1 Bytes=2)
6 1 TABLE ACCESS (FULL) OF 'EMP' (Cost=2 Card=6 Bytes=78)

SQL>

For the sake of this example a Cartesian join is also shown. Never mind that; concentrate on the order in which the tables are accessed as per the LEADING hint, which defines that the DEPT table shall be the leading table in this complex join.


ORDERED_PREDICATES is another variation, meant for complex tables and joins: it preserves the order of the predicate conditions in the WHERE clause.

1 select /*+ ordered_predicates */
2 empno,job
3 from emp
4 where empno=7782
5 and deptno=10
6* and ename='CLARK'
SQL> /

EMPNO JOB
---------- ---------
7782 MANAGER


Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=2 Card=1 Bytes=17)
1 0 TABLE ACCESS (BY INDEX ROWID) OF 'EMP' (Cost=2 Card=1 Bytes=17)
2 1 INDEX (UNIQUE SCAN) OF 'EMPNO_PK' (UNIQUE) (Cost=1 Card=14)

SQL>

The optimizer implements the order of the conditions in the WHERE clause.

The DRIVING_SITE hint is similar in spirit to the ORDERED hint: the access is processed based on the specified driving site.

1 select /*+ driving_site srinivas.us.oracle.com*/
2 empno,
3 ename,
4 dept.deptno
5 from emp,scott.dept@srinivas dept
6 where emp.deptno=dept.deptno
7 and dept.deptno=30
8* and empno=7900
SQL> /

EMPNO ENAME DEPTNO
---------- ---------- ----------
7900 JAMES 30


Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=2 Card=1 Bytes=23)
1 0 NESTED LOOPS (Cost=2 Card=1 Bytes=23)
2 1 REMOTE* (Cost=1 Card=1 Bytes=13) SRINIVAS.US.ORACLE.COM
3 1 TABLE ACCESS (BY INDEX ROWID) OF 'EMP' (Cost=1 Card=1 Bytes=10)
4 3 INDEX (UNIQUE SCAN) OF 'EMPNO_PK' (UNIQUE)
2 SERIAL_FROM_REMOTE SELECT "DEPTNO" FROM "SCOTT"."DEPT" "DEPT" WHERE "DEPTNO"=30

SQL>

Whenever remote objects are to be accessed, using DRIVING_SITE potentially reduces the network traffic.

The USE_HASH hint is useful for joining many rows together from multiple tables.

SQL> select /*+use_hash(test_2)*/
2 count(*)
3 from test_1,
4 test_2
5 where test_2.object_type=test_1.object_type
6 and test_1.object_type='PROCEDURE'
7 /

COUNT(*)
----------
40


Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=20 Card=1 Bytes=18)
1 0 SORT (AGGREGATE)
2 1 HASH JOIN (Cost=20 Card=33 Bytes=594)
3 2 TABLE ACCESS (FULL) OF 'TEST_2' (Cost=2 Card=1 Bytes=10)
4 2 TABLE ACCESS (FULL) OF 'TEST_1' (Cost=17 Card=975 Bytes=7800)


SQL>

PUSH_SUBQ is nothing but push the sub-query first: non-merged sub-queries are evaluated at the earliest possible step. If a sub-query returns a small number of rows, use this hint for performance gains.


1 select /*+push_subq*/test_1.object_name,test_3.owner
2 from test_1,test_3
3 where test_1.owner=test_3.owner
4 and test_1.object_type=(select object_type
5* from test_2 where sl_no=4)
SQL> /

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=22 Card=968 Bytes=40656)

1 0 FILTER
2 1 HASH JOIN (Cost=20 Card=968 Bytes=40656)
3 2 TABLE ACCESS (FULL) OF 'TEST_3' (Cost=2 Card=30 Bytes=180)
4 2 TABLE ACCESS (FULL) OF 'TEST_1' (Cost=17 Card=968 Bytes=34848)
5 1 TABLE ACCESS (FULL) OF 'TEST_2' (Cost=2 Card=1 Bytes=12)

CAVEAT
This hint has no effect if the sub query is applied to a remote table or one that is joined using a merge join.


NO_PUSH_SUBQ

1 select /*+ NO_PUSH_SUBQ */test_1.object_name,test_3.owner
2 from test_1,test_3
3 where test_1.owner=test_3.owner
4 and test_1.object_type=(select object_type
5* from test_2 where sl_no=4)
SQL> /

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=20 Card=968 Bytes=40656)
1 0 HASH JOIN (Cost=20 Card=968 Bytes=40656)
2 1 TABLE ACCESS (FULL) OF 'TEST_3' (Cost=2 Card=30 Bytes=180)
3 1 TABLE ACCESS (FULL) OF 'TEST_1' (Cost=17 Card=968 Bytes=34848)
4 3 TABLE ACCESS (FULL) OF 'TEST_2' (Cost=2 Card=1 Bytes=12)

ORDERED_PREDICATES

The ORDERED_PREDICATES hint forces the optimizer to preserve the order of predicate evaluation, except for predicates used as index keys. Use this hint in the WHERE clause of SELECT statements.

If you do not use the ORDERED_PREDICATES hint, then Oracle evaluates all predicates in the following order:

Predicates without user-defined functions, type methods, or sub-queries are evaluated first, in the order specified in the WHERE clause.

Predicates with user-defined functions and type methods that have user-computed costs are evaluated next, in increasing order of their cost.

Predicates with user-defined functions and type methods without user-computed costs are evaluated next, in the order specified in the WHERE clause.

Predicates not specified in the WHERE clause (for example, predicates transitively generated by the optimizer) are evaluated next.

Predicates with sub-queries are evaluated last, in the order specified in the WHERE clause.

Note: I have never heard anyone claim to have benefited from this hint, nor could I ever prove it myself.

CURSOR_SHARING_EXACT

Literal replacement is controlled with the CURSOR_SHARING startup parameter. The CURSOR_SHARING_EXACT hint causes this behavior to be switched off: in other words, Oracle executes the SQL statement without any attempt to replace literals by bind variables.
The initialization parameter CURSOR_SHARING was introduced in 8.1. The value is set to EXACT in the two releases 8.1.6 and 8.1.7.

This parameter should only be set to FORCE in 8i when the risk of sub-optimal plans is outweighed by the improvements in cursor sharing.
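A minimal sketch of both sides of this behavior (the literal values are illustrative):

-- ask Oracle to replace literals with bind variables for this session
alter session set cursor_sharing = FORCE;

-- parsed with the literal replaced by a system-generated bind variable
select * from emp where empno = 7934;

-- the hint opts this one statement out: the literal is kept and stays visible to the CBO
select /*+ cursor_sharing_exact */ * from emp where empno = 7934;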

Oracle Metalink explains in One Document as under:

If the answer is YES to these two questions, then CURSOR_SHARING=FORCE may be worth considering:

1. Are there a large number of statements in the shared pool that differ only in the values of their literals?
And
2. Is the response time low due to a very high number of library cache misses? (i.e.: hard parses and library cache latch contention)

The default value of EXACT maintains pre-8.1.6 behavior for literal SQL statements, in that literals are not replaced with bind variables and the literal values can be used by the CBO.


Note #1:

Forcing cursor sharing among similar (but not identical) statements can have unexpected results in some DSS applications and in applications using stored outlines.

Note #2:

Setting CURSOR_SHARING to FORCE causes an increase in the maximum lengths (as returned by DESCRIBE) of any selected expressions that contain literals (in a SELECT statement).

However, the actual length of the data returned will not change.

In Oracle 9i this parameter has the following options:

FORCE
Forces statements that may differ in some literals, but are otherwise identical, to share a cursor, unless the literals affect the meaning of the statement.

SIMILAR
Causes statements that may differ in some literals, but are otherwise identical, to share a cursor, unless the literals affect either the meaning of the statement or the degree to which the plan is optimized.

EXACT
Only allows statements with identical text to share the same cursor.

NOTE
Forcing cursor sharing among similar (but not identical) statements can have unexpected results in some DSS applications, or applications that use stored outlines

DYNAMIC_SAMPLING


The Oracle documentation says:

The DYNAMIC_SAMPLING hint lets you control dynamic sampling to improve server performance by determining more accurate selectivity and cardinality estimates. You can set the value of DYNAMIC_SAMPLING to a value from 0 to 10. The higher the level, the more effort the compiler puts into dynamic sampling and the more broadly it is applied. Sampling defaults to cursor level unless you specify a table.

The hint is enabled when the following conditions are met:

01. There is more than one table in the query.
02. Some table has not been analyzed and has no indexes.
03. The optimizer determines that a relatively expensive table scan would be required for this unanalyzed table.

The sampling levels are as follows if the dynamic sampling level used is from a cursor hint or from the optimizer_dynamic_sampling parameter:

Level 0: Do not use dynamic sampling.

Level 1: Sample all unanalyzed tables if the following criteria are met:

(1) There is at least 1 unanalyzed table in the query;
(2) This unanalyzed table is joined to another table or appears in a sub-query or non-mergeable view;
(3) This unanalyzed table has no indexes;
(4) This unanalyzed table has more blocks than the number of blocks that would be used for dynamic sampling of this table. The number of blocks sampled is the default number of dynamic sampling blocks (32).

Level 2: Apply dynamic sampling to all unanalyzed tables. The number of blocks sampled is the default number of dynamic sampling blocks.

Level 3: Apply dynamic sampling to all tables that meet Level 2 criteria, plus all tables for which standard selectivity estimation used a guess for some predicate that is a potential dynamic sampling predicate. The number of blocks sampled is the default number of dynamic sampling blocks.

Level 4: Apply dynamic sampling to all tables that meet Level 3 criteria, plus all tables that have single-table predicates that reference 2 or more columns. The number of blocks sampled is the default number of dynamic sampling blocks.

Level 5: Apply dynamic sampling to all tables that meet the Level 4 criteria using 2 times the default number of dynamic sampling blocks.

Level 6: Apply dynamic sampling to all tables that meet the Level 5 criteria using 4 times the default number of dynamic sampling blocks.

Level 7: Apply dynamic sampling to all tables that meet the Level 6 criteria using 8 times the default number of dynamic sampling blocks.

Level 8: Apply dynamic sampling to all tables that meet the Level 7 criteria using 32 times the default number of dynamic sampling blocks.

Level 9: Apply dynamic sampling to all tables that meet the Level 8 criteria using 128 times the default number of dynamic sampling blocks.
Level 10: Apply dynamic sampling to all tables that meet the Level 9 criteria using all blocks in the table.

The sampling levels are as follows if the dynamic sampling level used is from a table hint:
Level 0: Do not use dynamic sampling.

Level 1: The number of blocks sampled is the default number of dynamic sampling blocks (32).

Level 2: The number of blocks sampled is 2 times the default number of dynamic sampling blocks. (64)

Level 3: The number of blocks sampled is 4 times the default number of dynamic sampling blocks. (128)

Level 4: The number of blocks sampled is 8 times the default number of dynamic sampling blocks. (256)

Level 5: The number of blocks sampled is 16 times the default number of dynamic sampling blocks. (512)

Level 6: The number of blocks sampled is 32 times the default number of dynamic sampling blocks. (1024)

Level 7: The number of blocks sampled is 64 times the default number of dynamic sampling blocks. (2048)

Level 8: The number of blocks sampled is 128 times the default number of dynamic sampling blocks. (4096)

Level 9: The number of blocks sampled is 256 times the default number of dynamic sampling blocks. (8192)

Level 10: Read all blocks in the table. (Full Table)

The number of blocks to set for dynamic sampling can be decided by checking the dba_segments or dba_extents views:

select owner,
segment_name,
blocks
from dba_segments
where segment_name = '<TABLE_NAME>'

or

select owner,
segment_name,
sum(blocks)
from dba_extents
where segment_name = '<TABLE_NAME>'
group by owner, segment_name

When a table is very large and is not analyzed, but is part of the query or constitutes the only table in the query, set the sampling level depending upon the number of blocks. It works well.

Please do not forget to use the table alias in the hint. This applies to hint usage in general: wherever the name of the table is mentioned in a hint, use the alias if one is specified in the query.

Ex:
SQL> SELECT /*+ dynamic_sampling (t 1) */ count(*)
2 FROM test.test_1 t;
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=17 Card=1)
1 0 SORT (AGGREGATE)
2 1 TABLE ACCESS (FULL) OF 'TEST_1' (Cost=17 Card=20744)

SQL> SELECT /*+ dynamic_sampling (t 5) */ count(*)
2 FROM test.test_1 t;

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=17 Card=1)
1 0 SORT (AGGREGATE)
2 1 TABLE ACCESS (FULL) OF 'TEST_1' (Cost=17 Card=29047)

SQL> SELECT SUM(BLOCKS)
2 FROM USER_EXTENTS
3 WHERE SEGMENT_NAME='TEST_1'
4 group by segment_name;

SUM(BLOCKS)
-----------
256

The rule is:

Level 4: The number of blocks sampled is 8 times the default number of dynamic sampling blocks. (256)

SQL> SELECT /*+ dynamic_sampling (t 4)*/ count(*)
2 FROM test.test_1 t;

COUNT(*)
----------
29047


Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=17 Card=1)
1 0 SORT (AGGREGATE)
2 1 TABLE ACCESS (FULL) OF 'TEST_1' (Cost=17 Card=29047)


The cardinality statistic is used, if it exists. If there is a predicate, dynamic sampling is done with a table hint and cardinality is not estimated.

To force cardinality estimation even for an analyzed table, you can use a further hint, dynamic_sampling_est_cdn, as in the following example:

SELECT /*+ dynamic_sampling (e 1) dynamic_sampling_est_cdn(e) */ count(*) FROM scott.emp e;

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=1 Card=1)
1 0 SORT (AGGREGATE)
2 1 INDEX (FULL SCAN) OF 'PK_EMP' (UNIQUE) (Cost=1 Card=14)

This forces cardinality estimation for emp, even if the table is analyzed. The following query does both selectivity and cardinality estimation for emp:

SELECT /*+ dynamic_sampling (e 1) dynamic_sampling_est_cdn(e) */ count(*) FROM scott.emp e WHERE sal >= 3000;

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=2 Card=1 Bytes=3)
1 0 SORT (AGGREGATE)
2 1 TABLE ACCESS (FULL) OF 'EMP' (Cost=2 Card=3 Bytes=9)

PARALLEL EXECUTION HINTS ON VIEWS

SQL> SELECT /*+ INDEX(v2.v1.emp emp_empno) FULL(v2.dept) */ *
2 FROM v2
3 WHERE deptno = 20;

no rows selected

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=5 Card=1 Bytes=39)
1 0 HASH JOIN (Cost=5 Card=1 Bytes=39)
2 1 TABLE ACCESS (BY INDEX ROWID) OF 'EMP' (Cost=2 Card=1 Bytes=26)
3 2 BITMAP CONVERSION (TO ROWIDS)
4 3 BITMAP INDEX (SINGLE VALUE) OF 'BMI_DEPTNO'
5 1 TABLE ACCESS (FULL) OF 'DEPT' (Cost=2 Card=3 Bytes=39)

PARALLEL, NOPARALLEL, PARALLEL_INDEX, and NOPARALLEL_INDEX hints on views are applied recursively to all the tables in the referenced view. Parallel execution hints in a top-level query override such hints inside a referenced view.

PARALLEL EXECUTION HINTS INSIDE VIEWS

PARALLEL, NOPARALLEL, PARALLEL_INDEX, and NOPARALLEL_INDEX hints inside views are preserved when the view is merged with the top-level query. Parallel execution hints on the view in a top-level query override such hints inside a referenced view.

PARALLEL
The PARALLEL hint must use the table alias, if an alias is specified in the query. The hint can then take two values, separated by commas after the table name. The first value specifies the degree of parallelism for the given table, and the second value specifies how the table is to be split among the Oracle Real Application Clusters instances. Specifying DEFAULT or no value signifies that the query coordinator should examine the settings of the initialization parameters to determine the default degree of parallelism.

The test database is not an OPS/RAC database; the parallel_server parameter is set to FALSE.
Here are the parameters:


parallel_adaptive_multi_user boolean FALSE
parallel_automatic_tuning boolean FALSE
parallel_execution_message_size integer 2148
parallel_instance_group string
parallel_max_servers integer 5
parallel_min_percent integer 0
parallel_min_servers integer 0
parallel_server boolean FALSE
parallel_server_instances integer 1

SELECT /*+ FULL(scott_emp) PARALLEL(scott_emp, DEFAULT,DEFAULT) */ *
FROM scott.emp scott_emp;

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=1 Card=14 Bytes=448)
1 0 TABLE ACCESS* (FULL) OF 'EMP' (Cost=1 Card=14 Bytes=448) :Q6000
1 PARALLEL_TO_SERIAL SELECT /*+ NO_EXPAND ROWID(A1) */ A1."EMPNO" ,A1."ENAME",A1."JOB",A1."MGR",A1."HI

SQL> select /*+ parallel(emp,5) */ * from emp;

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=1 Card=327 Bytes=28449)
1 0 TABLE ACCESS* (FULL) OF 'EMP' (Cost=1 Card=327 Bytes=28449 :Q8000)
1 PARALLEL_TO_SERIAL SELECT /*+ NO_EXPAND ROWID(A1) */ A1."EMPNO"
,A1."ENAME",A1."JOB",A1."MGR",A1."HI

SQL> select /*+ parallel(emp,1,5) */ * from emp;

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=1 Card=327 Bytes=28449)
1 0 TABLE ACCESS* (FULL) OF 'EMP' (Cost=1 Card=327 Bytes=28449 :Q10000)
1 PARALLEL_TO_SERIAL SELECT /*+ NO_EXPAND ROWID(A1) */ A1."EMPNO"
,A1."ENAME",A1."JOB",A1."MGR",A1."HI

WHY THIS BEHAVIOR? ORACLE METALINK DOCS say:

Note that the effect of setting a degree/instances of parallelism on an index has changed. Prior to 8i this would have had no effect. Post 8i a parallel plan will be investigated. The physical presence of multiple instances (i.e. a multi instance Oracle Parallel Server) is irrelevant. If instances is set (to an integer value > 1 or to ‘DEFAULT’) on an object then this will mean a parallel plan is considered. Remember that the optimizer works on a cost basis so just because a parallel plan may be considered, does not mean that it will be chosen.

SQL> select degree, instances from user_tables where table_name= 'EMP';

DEGREE INSTANCES
---------- ----------
1 1

It is true that there are no other instances configured.

With no hint the execution plan is
SQL> select * from emp;

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE
1 0 TABLE ACCESS (FULL) OF 'EMP'

Alter the table structure now.
alter table emp parallel(degree 5,instances 5)
*
ERROR at line 1:
ORA-12818: invalid option in PARALLEL clause

Elapsed: 00:00:00.00
SQL> ed
Wrote file afiedt.buf
1* alter table emp parallel(degree 5)
SQL> /
Table altered.

Elapsed: 00:00:00.00
SQL> alter table emp parallel(instances 5);

Table altered.

Elapsed: 00:00:00.00
SQL>

SQL> select /*+ parallel(emp,5) */ * from emp;


Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=1 Card=327 Bytes=28449)
1 0 TABLE ACCESS* (FULL) OF 'EMP' (Cost=1 Card=327 Bytes=28449 :Q12000)
1 PARALLEL_TO_SERIAL SELECT /*+ NO_EXPAND ROWID(A1) */ A1."EMPNO"
,A1."ENAME",A1."JOB",A1."MGR",A1."HI
SQL> select /*+ rule */ * from emp;

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=HINT: RULE (Cost=1 Card=327 Bytes=28449)
1 0 TABLE ACCESS* (FULL) OF 'EMP' (Cost=1 Card=327 Bytes=28449 :Q14000)
1 PARALLEL_TO_SERIAL SELECT /*+ NO_EXPAND ROWID(A1) */ A1."EMPNO"
,A1."ENAME",A1."JOB",A1."MGR",A1."HI

SQL>

NOPARALLEL

This hint had no impact on a table that is configured parallel, as the following example shows.

SQL> select degree,instances
2 from user_tables
3 where table_name='EMP';

DEGREE INSTANCES
---------- ----------
5 5

SQL> SELECT /*+ NOPARALLEL(emp) */ ename
2 FROM emp emp;
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=1 Card=327 Bytes=28449)
1 0 TABLE ACCESS* (FULL) OF 'EMP' (Cost=1 Card=327 Bytes=28449 :Q21000)
1 PARALLEL_TO_SERIAL SELECT /*+ NO_EXPAND ROWID(A1) */ A1."EMPNO"
,A1."ENAME",A1."JOB",A1."MGR",A1."HI

Performance Tuning Part III

What Optimizer Can Do?

Oracle has a four-tiered structure for producing output for the queries submitted by users.

The first structure is PARSER.

The parser parses the SQL statement submitted by the user and validates the syntax and the rights and privileges to access the data. The parser spits out error messages for all syntactical errors, giving the user leads on the source of the error. For security violations, the parser may sometimes spit out a direct message about the lack of rights and privileges, or an indirect message which may confuse the user, such as 'object does not exist' or 'object requires declaration'.

Examine one SQL statement with a WHERE clause and one with no WHERE clause. This makes it possible to understand which part of the SQL is parsed first.

The FROM clause is validated first, whether or not a WHERE clause is present.

If the table cannot be accessed by the user who issued the SQL statement, for the simple reason that no access rights have been granted to him on that table, the error message spit out is ORA-00942: table or view does not exist. This means that no such table exists within the user's own schema; the parser has not checked other schemas, as there is no meaningful instruction telling the parser to check them.

If the table name has no identifier prefixed to it, by default it is looked up in the schema of the user who issued the SQL statement.

The instruction to the parser can be issued by prefixing an identifier (schema name) to the table to be selected.

SQL> select count(*) from srv;
select count(*) from srv
*
ERROR at line 1:
ORA-00942: table or view does not exist


SQL> select count(*) from sysslp.srv;

COUNT(*)
----------
1

SQL>

In case the table exists in the schema named by the identifier, but the user has NOT been granted any access rights on that object, the same error message is spit out.
If the identifier is not properly formed, the following error is spit out:

ORA-01747: invalid user.table.column, table.column, or column specification

(1) The parser thus validates the FROM clause first.
(2) It then validates the WHERE clause, if any.
(3) It then validates the selected columns, with their functions, if any, and passes the validated SQL on to the OPTIMIZER.

Study these examples to understand how the parser works:

SQL> select ENAME,emp_no >> wrong column
2 FROM EMPL; >>>> wrong table name
FROM EMPL
*
ERROR at line 2:
ORA-00942: table or view does not exist


SQL> select ENAME-EMP_NO
2 FROM EMP;
select ENAME-EMP_NO
*
ERROR at line 1:
ORA-00904: "EMP_NO": invalid identifier

When there is a where clause:

SQL> select ename,emp_no
2 from empl
3 where enames='SMITH'; >>> wrong column
from empl
*
ERROR at line 2:
ORA-00942: table or view does not exist

1 select ename,emp_no
2 from emp
3* where enames='SMITH'
SQL> /
where enames='SMITH'
*
ERROR at line 3:
ORA-00904: "ENAMES": invalid identifier

1 select ename,emp_no
2 from emp
3* where ename='SMITH'
SQL> /
select ename,emp_no
*
ERROR at line 1:
ORA-00904: "EMP_NO": invalid identifier

1 select ename,empno
2 from emp
3* where ename='SMITH'
SQL> /

ENAME EMPNO
---------- ----------
SMITH 7369


The second structure is OPTIMIZER.

The optimizer looks for the availability of statistics for the objects to be accessed. If they are available, it prefers CBO (Cost Based Optimization); otherwise it chooses RBO (Rule Based Optimization). This is a broad statement of the role of the OPTIMIZER.

The third structure is RESOURCE GENERATOR.

On receiving the most efficient access path plan from the OPTIMIZER, it outputs the execution plan for the SQL submitted by the user. If resources are busy and the resource generator is not able to acquire required resources such as locks, latches or CPU time, it may sometimes spit out error messages of the 'could not obtain the resource with NOWAIT' kind. Sometimes, depending upon the contention, it may hang on until it is able to acquire the resources. If there is a timeout for any reason, an error message is spit out and the user may have to retry the SQL. There are ways and methods to tune these waits and contentions. But if the resource crunch / contention is due to the non-availability of resources (hardware), either they are to be provided or one has to live with the low performance.

The fourth structure is the SQL EXECUTION ENGINE, which produces the result for the user who submitted the query.

The SQL EXECUTION ENGINE executes the SQL and returns the output to the user. The output display is dependent on the hardware standard output devices; Oracle's role there is limited. Part of this output generation is tunable for performance, such as using the DBMS_OUTPUT package within its limits, or the UTL_FILE package within its limits.

For the sake of this paper, 'What can the OPTIMIZER do?' is discussed.

Based on the initialization parameters set at the time of creating the database, which depend on the nature of the application the database is to run, the optimizer chooses cost-based or rule-based optimization. For cost-based optimization of SQL, the optimizer has to be provided with statistics for three different measures: selectivity, cardinality and cost. If rule-based optimization is chosen, the optimizer does so without statistics.
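The optimizer goal can also be overridden at the session level; a minimal sketch:

-- the database-wide default comes from OPTIMIZER_MODE in init.ora;
-- a session can override it with CHOOSE, RULE, FIRST_ROWS or ALL_ROWS
alter session set optimizer_mode = ALL_ROWS;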
But there are certain Oracle features which can be used only when the optimizer is set to cost-based optimization. They are:

Partitioned tables and indexes
Index-organized tables
Reverse key indexes
Function-based indexes
SAMPLE clauses in a SELECT statement
Parallel query and parallel DML
Star transformations and star joins
Extensible optimizer
Query rewrite with materialized views
Enterprise Manager progress meter
Hash joins
Bitmap indexes and bitmap join indexes
Index skip scans

NOTE:

Oracle Corporation strongly advises the use of cost-based optimization. The rule-based optimization is available for backward compatibility with legacy applications and will be deprecated in a future release.

How to Collect Statistics for Optimizer?

Oracle has provided the ANALYZE command and the DBMS_STATS utility. Oracle recommends using the DBMS_STATS utility to collect the statistics, as it offers some performance gains as well as additional abilities to collect statistics.

DBMS_STATS is a product of Oracle 8i. This utility is suggested by Oracle over the ANALYZE command for collecting statistics, for the following reasons.

An article on dbasupport.com reads as under:

The DBMS_STATS package is a godsend for the Oracle DBA in managing database statistics only for the cost based optimizer.
The package itself allows us to create, modify, view and delete statistics from a standard, well-defined set of package procedures. The statistics can be gathered on tables, indexes, columns, partitions and schemas, but note that it does not generate statistics for clusters.
Some of its many features are:
· Statistics can be computed or estimated from a random sample
· Can gather statistics for sub-partitions or partitions
· Whenever possible, dbms_stats routines will run via parallel query or operate serially
· Statistics can be generated to a statistics table and can be imported or exported between databases and re-loaded into the data dictionary at any time.
This allows the DBA to experiment with various statistics. It is recommended over the analyze command for CBO statistics generation because:
· analyze always runs serially
· analyze can't overwrite or delete some statistics generated by dbms_stats
· statistics generation via analyze for partitions is not optimal, namely for global partition statistics.
Even so, dbms_stats does not generate information about chained rows or the structural integrity of segments. It should also be mentioned that in 8i, for certain DDL commands (i.e. CREATE INDEX), statistics are automatically generated, therefore eliminating the need to generate statistics explicitly after the DDL command. Oracle will determine if statistics are automatically generated based on the expected overhead.
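A minimal sketch of gathering statistics with DBMS_STATS (the SCOTT/EMP names and the 10 percent sample size are assumptions):

begin
  dbms_stats.gather_table_stats(
    ownname          => 'SCOTT',
    tabname          => 'EMP',
    estimate_percent => 10,      -- sample 10 percent; NULL computes exact statistics
    cascade          => true);   -- gather statistics for the table's indexes as well
end;
/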

What are the Steps involved in CBO?

When the query, after the syntactical and security checks by the PARSER, is passed on to the OPTIMIZER, as has been said at the beginning of this document, the query is first transformed by the QUERY TRANSFORMER, then passed on to the ESTIMATOR, which uses the collected statistics to find the cost of each query plan, and from the ESTIMATOR to the PLAN GENERATOR.

QUERY TRANSFORMER

The input to QUERY TRANSFORMER is the output of the PARSER after the syntactical and security check is done.

Oracle says:

The input query 'is represented by a set of query blocks. The query blocks are nested or interrelated to each other. The form of the query determines how the query blocks are interrelated to each other. The main objective of the query transformer is to determine if it is advantageous to change the form of the query so that it enables generation of a better query plan'.

Query Transformation Techniques Employed by QUERY TRANSFORMER are:

View Merging
Predicate Pushing
Subquery Unnesting
Query Rewrite with Materialized Views

Any combination of these transformations can be applied to a given query.

View Merging

The Parser expands each view referenced in a query into a separate query block. The query block essentially represents the view definition, and therefore the result of a view.

One option for the optimizer is to analyze the view query block separately and generate a view sub-plan. The optimizer then processes the rest of the query by using the view sub-plan in the generation of an overall query plan.

This technique usually leads to a sub-optimal query plan, because the view is optimized separately from rest of the query.

The query transformer then removes the potentially sub-optimal plan by merging the view query block into the query block that contains the view. Most types of views are merged. When a view is merged, the query block representing the view is merged into the containing query block. Generating a sub-plan is no longer necessary, because the view query block is eliminated.

Predicate Pushing

For those views that are not merged, the query transformer can push the relevant predicates from the containing query block into the view query block. This technique improves the sub-plan of the non-merged view, because the pushed-in predicates can be used either to access indexes or to act as filters.

Subquery Un-nesting

A separate query block, like a view, represents a sub-query. Because a sub-query is nested within the main query or another sub-query, the plan generator is constrained in trying out different possible plans before it finds a plan with the lowest cost. For this reason, the query plan produced might not be the optimal one. The restrictions due to the nesting of sub-queries can be removed by un-nesting the sub-queries and converting them into joins. Most sub-queries are un-nested by the query transformer. For those sub-queries that are not un-nested, separate sub-plans are generated. To improve execution speed of the overall query plan, the sub-plans are ordered in an efficient manner.

Query Rewrite with Materialized Views

A materialized view is like a query with a result that is materialized and stored in a table. When a user query is found compatible with the query associated with a materialized view, the user query can be rewritten in terms of the materialized view.

This technique improves the execution of the user query, because most of the query result has been precomputed. The query transformer looks for any materialized views that are compatible with the user query and selects one or more materialized views to rewrite the user query.

The use of materialized views to rewrite a query is cost-based. That is, the query is not rewritten if the plan generated without the materialized views has a lower cost than the plan generated with the materialized views.

After the QUERY TRANSFORMER has done its job the query is handed over to ESTIMATOR.

ESTIMATOR

The ESTIMATOR does the cost measurements. The end task of the estimator is to estimate the cost of each query plan. In doing so, the ESTIMATOR depends on three criteria.

01. Selectivity
02. Cardinality
03. Cost

SELECTIVITY

SELECTIVITY represents a fraction of rows from a row set.

A ROW SET can be (1) a table, (2) a view, (3) the result of a join operator, or (4) the result of a GROUP BY operator.

It is clear that SELECTIVITY is tied to the row set, which in turn is the result of a predicate.

A predicate acts as a FILTER that filters a certain number of rows from a row set. Therefore, the selectivity of a predicate indicates how many rows from a row set will pass the predicate test. Selectivity lies in a VALUE RANGE from 0.0 to 1.0. A selectivity of 0.0 means that no rows will be selected from a row set, and a selectivity of 1.0 means that all rows will be selected.

The measures shall be more accurate if the statistics are available.

The estimator uses an INTERNAL DEFAULT VALUE for selectivity, if NO STATISTICS are available. Different internal defaults are used, depending on the predicate type.

When statistics are available, the estimator uses them to estimate selectivity. For example, for an equality predicate (col_3 = 'ABCDEF'), selectivity is set to the reciprocal of the number ‘n’ of distinct values of col_3, because the query selects rows that all contain one out of ‘n’ distinct values. If a histogram is available on the col_3 column, then the estimator uses it instead of the number of distinct values. The histogram captures the distribution of different values in a column, so it yields better selectivity estimates. Having histograms on columns that contain skewed data (in other words, values with large variations in number of duplicates) greatly helps the CBO generate good selectivity estimates.

Access these data dictionary views/tables/synonyms DBA_TAB_HISTOGRAMS, DBA_TAB_COLUMNS, DBA_TAB_COL_STATISTICS, for the data on each column values to find out the selectivity aspects of the data. Also access the LAST_ANALYZED date so that a reliability of the statistics is also assessed.
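A hedged sketch of reading the raw ingredients of an equality-predicate selectivity estimate from the dictionary (EMP is assumed analyzed):

select column_name,
       num_distinct,
       decode(num_distinct, 0, null, 1/num_distinct) eq_selectivity,  -- 1/NDV when no histogram exists
       num_buckets,
       last_analyzed
from dba_tab_col_statistics
where table_name = 'EMP';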

CARDINALITY

Cardinality represents the number of rows in a row set. Here, the row set can be a base table, a view, or the result of a join or GROUP BY operator.

The various cardinalities that the estimator requires in order to measure cardinality are as under:

BASE CARDINALITY

This represents the number of rows in the base table. This data is collected when a table is analyzed. This is represented by NUM_ROWS in DBA_TABLES. The value of LAST_ANALYZED column helps understand the reliability of the data.

If the table is not analyzed, ESTIMATOR uses the number of extents occupied by the table to estimate the base cardinality.

Example:

This query gives the base cardinality if the table/s is/are analyzed

select table_name,
num_rows base_cardinality,
last_analyzed
from dba_tables

If the table is not analyzed this query helps you know the extents for that table.
To know the oracle block size

select name,value
from v$parameter
where name ='db_block_size'

To find the blocks for the segment/object

select owner,
segment_name,
sum(blocks)
from dba_extents
where segment_name='<TABLE_NAME>'
and owner='<OWNER_NAME>'
group by owner, segment_name

EFFECTIVE CARDINALITY

This represents the number of rows selected by the query. The effective cardinality depends on the predicates specified on different columns of a base table, with each predicate acting as a successive filter on the rows of the base table. The effective cardinality is computed as the product of the base cardinality and combined selectivity of all predicates specified on a table. When there is no predicate on a table, its effective cardinality equals its base cardinality.
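A worked illustration under assumed statistics: if EMP has a base cardinality of 14, and the predicates are job = 'CLERK' (5 distinct jobs, selectivity 1/5) and deptno = 10 (3 distinct departments, selectivity 1/3), the combined selectivity is 1/5 * 1/3 ≈ 0.067, so the effective cardinality is 14 * 0.067, which rounds to 1 row.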

JOIN CARDINALITY

This represents the number of rows produced when two row sets are joined together. A join is a Cartesian product of two row sets, with the join predicate applied as a filter to the result. Therefore, the join cardinality is the product of the cardinalities of two row sets, multiplied by the selectivity of the join predicate.
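A worked illustration under the same assumed statistics: joining EMP (cardinality 14) to DEPT (cardinality 4) on deptno, and taking the join selectivity as 1/max(distinct deptno values on either side) = 1/4, gives 14 * 4 * 1/4 = 14 rows.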

DISTINCT CARDINALITY

This represents the number of distinct values in a column of a row set. The distinct cardinality of a row set is based on the data in the column.

GROUP CARDINALITY

This represents the number of rows produced from a row set after the GROUP BY operator is applied. The effect of the GROUP BY operator is to decrease the number of rows in a row set. The group cardinality depends on the distinct cardinality of each of the grouping columns and on the number of rows in the row set.

COST

Cost represents the units of resources used. These resources are identified as I/O, CPU Usage and Memory Usage. The cost of a query plan is the number of work units that are to be incurred when the query is executed and its result produced.

The cost of full table scan or full fast index scan depends upon the number of blocks to be scanned and the multi-block read count value.

The access path represents the number of units of work required to get data from a base table

Access paths are ways in which data is retrieved from the database. Any row in any table can be located and retrieved by one of the following methods:

01. Full Table Scan
02. Index Scan
03. ROWID Scan

FULL TABLE SCAN

When Oracle performs a full table scan, the blocks are read sequentially. Because the blocks are adjacent, I/O calls larger than a single block can be used to speed up the process. The size of the read calls ranges from one block to the number of blocks indicated by the initialization parameter DB_FILE_MULTIBLOCK_READ_COUNT. Using multi-block reads means a full table scan can be performed very efficiently. Each block is read only once.

SQL> select * from emp
2 where upper(ename)='ADAMS';

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=2 Card=1 Bytes=37)
1 0 TABLE ACCESS (FULL) OF 'EMP' (Cost=2 Card=1 Bytes=37)

This data would better be accessed using an index on that column.

SQL> create index ind_ename on emp(ename);

Index created.

SQL> select * from emp
2 where upper(ename)='ADAMS';

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=2 Card=1 Bytes=37)
1 0 TABLE ACCESS (FULL) OF 'EMP' (Cost=2 Card=1 Bytes=37)

SQL>

The optimizer is unable to use the index because there is a function on the indexed column.
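A hedged sketch of the usual remedy, a function-based index (the index name is an assumption; in 8i/9i this needs the QUERY REWRITE privilege and query rewrite enabled for the session):

create index ind_upper_ename on emp(upper(ename));

alter session set query_rewrite_enabled = true;

-- the predicate now matches the indexed expression, so an index scan is possible
select * from emp where upper(ename) = 'ADAMS';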

This means that there are some factors that influence the optimizer to choose FULL TABLE SCANS.

WHAT FACTORS INFLUENCE THE OPTIMIZER TO CHOOSE FULL TABLE SCAN?

(1) If the query is unable to use any existing indexes

SQL> select count(*)
2 from test_1;

COUNT(*)
----------
29723

SQL> desc test_1
Name Null? Type
----------------------------------------- -------- ----------------------------
OWNER NOT NULL VARCHAR2(30)
OBJECT_TYPE VARCHAR2(18)
OBJECT_NAME NOT NULL VARCHAR2(30)
OBJ_NUM NOT NULL NUMBER

1 select index_name
2 from dba_indexes
3* where table_name='TEST_1'
SQL> /

INDEX_NAME
------------------------------
PK_TEST_1
BMI_OWNER
IND_OBJ_NAME

SQL> select *
2 from test_1
3 where upper(owner)='TEST'; >>> this is the cause of full table scan even though there are indexes that can be used


Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=17 Card=290 Bytes=12180)
1 0 TABLE ACCESS (FULL) OF 'TEST_1' (Cost=17 Card=290 Bytes=12180)

(2) If a table contains less than DB_FILE_MULTIBLOCK_READ_COUNT blocks under the high water mark, which can be read in a single I/O call, then a full table scan might be cheaper than an index range scan, regardless of the fraction of tables being accessed or indexes present.


SQL> select count(*)
2 from test_1
3 ;

COUNT(*)
----------
29723


Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE
1 0 SORT (AGGREGATE)
2 1 TABLE ACCESS (FULL) OF 'TEST_1'

(3) If the table has not been analyzed since it was created, and if it has less than DB_FILE_MULTIBLOCK_READ_COUNT blocks under the high water mark, then the optimizer thinks that the table is small and uses a full table scan. Look at the LAST_ANALYZED and BLOCKS columns in the DBA_TABLES table to see what the statistics reflect.
SQL> select count(*)
2 from test_1;

COUNT(*)
----------
29723

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE
1 0 SORT (AGGREGATE)
2 1 TABLE ACCESS (FULL) OF 'TEST_1'

select segment_name,count(block_id)
from dba_extents
where segment_name='TEST_1'
group by segment_name

TEST_1 17

SQL> select nvl(blocks,0)
2 from dba_tables
3 where table_name='TEST_1'
4 /

NVL(BLOCKS,0)
-------------
0

1 row selected.

SQL> sho parameter DB_FILE_MULTIBLOCK_READ_COUNT

NAME TYPE VALUE
------------------------------------ ----------- ---------------
db_file_multiblock_read_count integer 32
SQL>


(4) A high degree of parallelism for a table skews the optimizer toward full table scans over range scans. Examine the DEGREE column in ALL_TABLES for the table to determine the degree of parallelism.


SQL> alter table test_1 parallel(degree 5);

Table altered.

SQL> alter table test_1 parallel(instances 5);

Table altered.

select count(*) from test_1
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=1 Card=1)
1 0 SORT (AGGREGATE)
2 1 SORT* (AGGREGATE): Q34000
3 2 TABLE ACCESS* (FULL) OF 'TEST_1' (Cost=1 Card=19930) :Q34000

2 PARALLEL_TO_SERIAL SELECT /*+ PIV_SSF */ SYS_OP_MSR(COUNT(*)) FROM (SELECT /*+ NO_EXPAND ROWID(A2)

3 PARALLEL_COMBINED_WITH_PARENT

SQL>
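To check the degree of parallelism, and to reset it when the skew toward full scans is unwanted, a small sketch:

select table_name, degree, instances
from all_tables
where table_name = 'TEST_1';

alter table test_1 noparallel;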


To enable the usage of indexes, Oracle documentation lays down a few principles.

The optimizer uses the following criteria when determining which index to use (per Oracle docs):

Number of rows in the index (cardinality).

Number of distinct keys. These define the selectivity of the index.

Level or height of the index. This indicates how deeply the data probe must search in order to find the data.

Number of leaf blocks in the index. This is the number of I/Os needed to find the desired rows of data.

Clustering factor (CF). This is the amount of collocation of the index blocks relative to the data blocks. The higher the CF, the less likely the optimizer is to select this index.

Average leaf blocks for each key (ALFBKEY). Average number of leaf blocks in which each distinct value in the index appears, rounded to the nearest integer. For indexes that enforce UNIQUE and PRIMARY KEY constraints, this value is always one.

These stats can be viewed by issuing the following statement:

select index_name,
num_rows,
distinct_keys,
leaf_blocks,
clustering_factor ,
blevel,
avg_leaf_blocks_per_key
from dba_indexes
where owner = '<owner_of_the_index>'
order by index_name;

These stats are available only when the objects are analyzed; that is a basic requirement. The gathering of stats for objects/schemas/databases has already been discussed earlier.

To determine if the right index has been chosen or not, compare these statistics for the candidate indexes against the columns referenced in the query predicates.

Data Model Design and Role of Index:

Data modeling is important to successful relational application design. This should be done in a way that quickly represents the business practices. Chances are, there will be heated debates about the correct data model. The important thing is to apply greatest modeling efforts to those entities affected by the most frequent business transactions. In the modeling phase, there is a great temptation to spend too much time modeling the non-core data elements, which results in increased development lead times. Use of modeling tools can then rapidly generate schema definitions and can be useful when a fast prototype is required.

It is necessary that almost every entity (table) in the logical model has at least one set of attributes that constitutes a natural candidate key. If no such key is available, then a synthetic key (also called a surrogate key), a single numeric column generated from a sequence number generator, may well provide a better primary key. The following factors are to be considered before taking any decision:
Take the case of a table that has a natural candidate key whose columns are subject to updates. A synthetic key is introduced as a candidate key and made the primary key for the table. An index on the natural key is still necessary, as it is the natural join path to other tables when accessing data to meet a business requirement.

Where an unintelligent key is used as the primary key of a table, it is necessary to identify additional keys to join the table with other tables, which affects the cost of accessing data for presentation as required.

As a synthetic key would be smaller, both the primary and foreign key indexes would be smaller, and quite possibly have shallower B*-trees. If so, joins using these indexes would be faster, as would fast full index scans.

Although the addition of a synthetic key may make a table slightly larger, and hence need more multi-block reads for a full table scan, the related tables containing the foreign key would be significantly smaller and would thus scan more efficiently.

Natural key values do sometimes need to be updated, but synthetic keys do not. If a natural key is used as the primary key, then key updates will have to be cascaded. Apart from the extra work involved, for both the developers and the database engine, such cascading updates can dramatically reduce concurrency.
To use a synthetic key in a distributed environment with disconnected nodes that need to insert records, each node needs to generate a mutually exclusive range of key values from its own sequence generator. The allocation of key ranges needs careful management, whereas there is no such consideration if a natural key is used.
Using a synthetic key severely limits the available partitioning strategies.

The net effect of these considerations is normally to use a synthetic key in preference to a natural key.

This is the position taken at http://www.ixora.com.au/tips/design/synthetic_keys.htm.

Reverse key indexes are designed to eliminate index hot spots in insert-intensive applications. These indexes are excellent for insert performance, but they are limited in that they cannot be used for index range scans. A sketch follows.
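A minimal sketch, assuming a hypothetical TRANS table whose TRANS_ID is populated from a sequence:

create table trans (trans_id number, amount number);
create index ind_trans_id_rev on trans(trans_id) reverse;

-- equality predicates can use the index; range predicates cannot:
select * from trans where trans_id = 12345;
select * from trans where trans_id between 100 and 200; -- falls back to a full scan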

Building and maintaining an index structure can be expensive, and it can consume resources such as disk space, CPU, and I/O capacity. Designers must ensure that the benefits of any index outweigh the negatives of index maintenance.

Each index maintained by an INSERT, DELETE, or UPDATE of the indexed keys requires about three times as much resource as the actual DML operation on the table. What this means is that if you INSERT into a table with three indexes, then it will be approximately 10 times slower than an INSERT into a table with no indexes. For DML, and particularly for INSERT-heavy applications, the index design should be seriously reviewed, which might require a compromise between the query and INSERT performance.

When an indexed column of a table is UPDATEd, two operations take place on the index: one to DELETE the old index entry and another to INSERT a new one. This means UPDATE transactions require more time and resources than INSERT transactions, with a natural downside impact on performance. The impact is felt more as the table carries more and more indexed columns; architects should keep this in mind while designing tables.

An INSERT operation is also slower because it needs two changes against the database: one to insert the data record into the table and another to insert the index entry.

Another impact on tables where updates to indexed columns are frequent is that the indexes develop holes, and the size of the index grows larger than it needs to be. In this context FREELIST and FREELIST GROUPS are important parameters to consider. Oracle provides FREELIST 2 by default for all indexes only because of this situation.

Use of sequences, or timestamps, to generate key values that are indexed themselves can lead to database hotspot problems, which affect response time and throughput. This is usually the result of a monotonically growing key that results in a right-growing index. To avoid this problem, try to generate keys that insert over the full range of the index. This results in a well-balanced index that is more scalable and space efficient. You can achieve this by using a reverse key index or by using a cycling sequence to prefix the sequence values.

Designers should be flexible in defining rules for index building. Depending on your circumstances, use one of the following two ways to order the keys in an index; Oracle documents discuss two methods:

(1) Order columns most selectively first. This method is the most commonly used, because it provides the fastest access with minimal I/O to the actual rowids required. This technique is used mainly for primary keys and for very selective range scans.

(2) Order columns to reduce I/O by clustering or sorting data. In large range scans, I/Os can usually be reduced by ordering the columns in the least selective order, or in a manner that sorts the data in the way it should be retrieved.

Basic Concepts on Indexes and the performance

Oracle introduced the index skip scan in 9i, which has increased the options for accessing data with and without indexes.

The notable difference between Oracle 9i and earlier versions is that, before 9i, a concatenated index was used only if the leading edge of the index was referenced in the query; the skip scan removes that restriction.


Create the emp2 table:  create table emp2 as select * from emp;
Create emp2_ind_1:      create index emp2_ind_1 on emp2(empno,ename,deptno);
(This type of index is called a concatenated or composite index.)
Create emp2_ind_2:      create index emp2_ind_2 on emp2(sal);

Try to understand the execution paths of the following SQL statements.

Example 01
Select job,empno
From emp2
Where ename='SMITH';

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE
1 0 TABLE ACCESS (FULL) OF 'EMP2'

It is a full table scan.

Example 02

Select job,empno
From emp2
Where deptno=10;

It is again a full table scan.

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE
1 0 TABLE ACCESS (FULL) OF 'EMP2'

Example 03

select empno,ename
from emp2
where empno=7934;

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE
1 0 INDEX (RANGE SCAN) OF 'EMP_IND_1' (NON-UNIQUE)

Note that only the leading part of the index key (EMPNO) is referenced, and the index is still used.

Example 04

SELECT JOB,DEPTNO
FROM EMP2
WHERE ename='MILLER'
and empno=7934;

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE
1 0 TABLE ACCESS (BY INDEX ROWID) OF 'EMP2'
2 1 INDEX (RANGE SCAN) OF 'EMP_IND_1' (NON-UNIQUE)

SELECT JOB,DEPTNO
FROM EMP2
WHERE ename='MILLER'
and empno=7934
and deptno=10;

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE
1 0 TABLE ACCESS (BY INDEX ROWID) OF 'EMP2'
2 1 INDEX (RANGE SCAN) OF 'EMP_IND_1' (NON-UNIQUE)


Look at this: Oracle uses part of the index key, and the table access is done by index ROWID.

Hence care should be exercised while designing composite or concatenated indexes. The design will be right only if the functional specialists can lay down, prior to the design of the tables and indexes, the keys on which the data is to be presented and/or accessed.

When a database is upgraded from an earlier version of Oracle to Oracle 9i, the code should be examined thoroughly if concatenated or composite indexes appear in the metadata of the database being upgraded.

Never hard-code a ROWID in a code block; the ROWID format changes from version to version. If needed, pass the ROWID as a bind variable instead.

Function-based indexes can be used only when QUERY_REWRITE_ENABLED is set to TRUE, which by default is FALSE. This parameter fits within the three-tier structure of session, system and database levels. This may not be a wanted feature in some cases; hence the value may be set depending upon the requirements of the business.

Existing functions like TO_DATE, TO_CHAR, UPPER or LOWER may be used, or a function may be created to meet a requirement of the business.

The downside of these indexes is that INSERT, UPDATE and DELETE operations take longer than they otherwise would. Only the functions for which the indexes are built should be used, and other functions on that column should be restricted (though this can never be assured). As the number of indexes on the table grows, so does the space required for these new objects. 'What benefit and at what cost?' should be the question for the DBA.
Dropping a table that has many indexes on it is also, surely, to be considered, in case these tables are large and store archivable data which, once archived, needs to be dropped to reclaim the space.

Partitioned indexes are indexes broken up into multiple pieces to impact the I/O issues positively and thus help gain performance. There are certain limitations:
Only B-tree and bitmap indexes can be partitioned, not hash indexes.

The relationship between the base tables and the indexes on them can be defined in three ways: a single table with a partitioned index (one to many), a partitioned table with a single index (many to one), and a partitioned table with a partitioned index (many to many). These are the models presently available in the Oracle architecture, and they impact performance differently. If the SQL code performs large table scans and the data fetch is not based on indexes, partition the table so that the readable data is in small pieces. If the data fetch is based on index access, partition the indexes; to get the benefit of both worlds, partition the table as well as the index.

Another great advantage is that Oracle supports parallel query and parallel DML processing against partitions.

Indexes can be built Locally and Globally. Oracle defines them as under:

Local Indexes: A local index is an index on a partitioned table that is partitioned in the exact same manner as the underlying table. Each partition of a local index corresponds to one and only one partition of the underlying table. Local indexes have two subsets: prefixed and non-prefixed. Bitmap indexes on partitioned tables can only be local.

In a local index, all entries in a single index partition point to one and only one table partition. A partitioned index is considered locally prefixed if the partition key is based on the leftmost columns of the index.

Please go through these examples.
CREATE TABLE LOCAL_PREFIXED_PARTITION_TAB (
PRODUCT_ID NUMBER (5),
DATE_AND_TIME DATE,
UNIT_COST NUMBER (10,2),
UNIT_PRICE NUMBER (10,2))
TABLESPACE Users
PCTFREE 10
PCTUSED 40
INITRANS 1
MAXTRANS 255
PARTITION BY RANGE (PRODUCT_ID)
subpartition by hash (DATE_AND_TIME) subpartitions 2
(
PARTITION P_1 VALUES LESS THAN (3) TABLESPACE USERS,
PARTITION P_2 VALUES LESS THAN (6) TABLESPACE USERS_1
)
/

The following creates a composite (partitioned and subpartitioned) local index on this table:

create index IDX_LOCAL_PREFIXED_PARTITION
on LOCAL_PREFIXED_PARTITION_TAB(product_id, date_and_time)
local store in (tools, indx)
/

Adding a new table partition automatically generates a new index partition, but the problem is that it is generated in the same tablespace where the table partition is created:

alter table LOCAL_PREFIXED_PARTITION_TAB
add partition p_3
VALUES LESS THAN (9) TABLESPACE USERS;

To move an index partition to another tablespace when the index is composite: first check whether the index is composite by querying the COMPOSITE column of DBA_IND_PARTITIONS for that index. If the output is 'YES', it is a composite index, and the command to use is:

alter index SCOTT.IDX_LOCAL_PREFIXED_PARTITION
modify default attributes for partition p_3 tablespace indx;

Please observe that for the composite partitioned indexes the status is N/A

INDEX_NAME COMPOSITE PARTITION_NAME TABLESPACE_NAME
IDX_LOCAL_PREFIXED_PARTITION YES P_1 INDX
IDX_LOCAL_PREFIXED_PARTITION YES P_2 INDX
IDX_LOCAL_PREFIXED_PARTITION YES P_3 INDX


To move an index partition to another tablespace when the index is not composite: query the COMPOSITE column of DBA_IND_PARTITIONS for that index. If the output is 'NO', it is a non-composite partitioned index, and the command to use is:

alter index owner.index_name
rebuild partition partition_name
tablespace new_tablespace_name;

Output of the query on USER_IND_PARTITIONS:

select index_name,
composite,
partition_name,
tablespace_name,
status
from user_ind_partitions
where composite='NO'
/

INDEX_NAME COMPOSITE PARTITION_NAME TABLESPACE_NAME STATUS
TEST_IND_PARTITION NO IP1 INDX USABLE
TEST_IND_PARTITION NO IP2 INDX USABLE
TEST_IND_PARTITION NO IP3 INDX USABLE


When an index partition is unusable, rebuild that index partition online. Be sure that there is enough temporary space to hold a copy of the index being rebuilt, as REBUILD holds two copies of the index until it is done.
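A sketch, using the partition names from the output above:

alter index test_ind_partition rebuild partition ip1 online;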

Global Partitioned Indexes: A global partitioned index is an index on a partitioned or non-partitioned table which is partitioned using a different partitioning-key from the table. Global-partitioned indexes can be partitioned using range partitioning. For example, a table could be range-partitioned by month and have twelve partitions, while an index on that table could be range-partitioned using a different partitioning key and have a different number of partitions.

Do not forget that by default, non-partitioned indexes are global indexes. Oracle will partition the global index on the ranges of values from the table columns specified in column_list clause of the index. This cannot be done for a local index.

There are certain limitations: there cannot be more than 32 columns in the column_list, and the column list cannot contain the ROWID pseudo-column or a column of type ROWID.

Example of Partitioned-global-index:

NOTE: Global non-prefixed indexes are not supported. This means that the index-
partitioning key must always be the leftmost index column.

CREATE INDEX dept_idx ON dept2(dname)
GLOBAL PARTITION BY RANGE (dname)
(
PARTITION p1 VALUES LESS THAN ('N'),
PARTITION p2 VALUES LESS THAN (MAXVALUE)
)tablespace indx;
To rebuild the partition use this example:

alter index dept_idx
rebuild partition p1 tablespace tools;

A composite global index on the emp2 table:

CREATE INDEX sal_index ON scott.emp2 (sal,comm)
GLOBAL PARTITION BY RANGE (sal)
(PARTITION p1 VALUES LESS THAN (1000),
PARTITION p2 VALUES LESS THAN (2500),
PARTITION p3 VALUES LESS THAN (MAXVALUE));

Index created.

Global Non-Partitioned Indexes: A global non-partitioned index is essentially identical to an index on a non-partitioned table. The index structure is not partitioned. Global non-prefixed indexes are not supported. This means that the index- partitioning key must always be the leftmost index column.

The above is a brief discussion of indexes and partitioned indexes: rebuilding them and moving them around for I/O balance.

Configuration and Tuning Parameters for an Instance

The initialization parameters can be categorized based on the three-tier structure that enables manipulating the performance of the database:

(1) Parameters whose changes demand a life-cycle of the database through startup-shutdown-startup.
(2) Parameters that are modifiable at the system level, combined with an edit to the initialization parameter file. Through the ALTER SYSTEM command the parameter is made effective without shutting down the instance/database, whereas the edit to the initialization parameter file makes the change permanent when the system next goes through the startup-shutdown-startup cycle.
(3) Parameters that can be invoked for a single session without changing the system or database level settings, when the change is required only at that session level and the system/database need not change its behavior. (A sketch of levels (2) and (3) follows.)
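A sketch of levels (2) and (3); the parameters chosen here are only illustrative examples of system- and session-modifiable parameters:

-- (2) system level, effective immediately, no restart:
alter system set db_file_multiblock_read_count = 16;

-- (3) session level only; the instance-wide setting is untouched:
alter session set sort_area_size = 1048576;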

To get all the parameters that have their impact at session/system/database levels, the following SQL may be used. This SQL also gets the working values set by Oracle for database functioning, which are often referred to as undocumented parameters.


select x.indx+1 instance_num,
       ksppinm parameter_name,
       ksppity parameter_type,
       ksppstvl parameter_value,
       ksppstdf parameter_default_value,
       decode(bitand(ksppiflg/256,1),1,'TRUE','FALSE') is_sess_modi,
       decode(bitand(ksppiflg/65536,3),
              1,'IMMEDIATE',2,'DEFERRED','FALSE') is_sys_modi,
       decode(bitand(ksppstvf,7),1,'MODIFIED',2,'TRUE','FALSE') is_modifd,
       decode(bitand(ksppstvf,2),1,'MODIFIED',2,'TRUE','FALSE') is_adjd,
       ksppdesc description,
       ksppstcmnt comments
from sys.x$ksppi x, sys.x$ksppsv y
where x.indx = y.indx
order by parameter_name;


The following parameters are some of the most important parameters that affect the performance of the Instance:

Parameters that impact SGA

select name, value
from v$parameter
where name in
('sga_max_size',
'shared_pool_size',
'db_cache_size',
'large_pool_size');

shared_pool_size 50331648
sga_max_size 126950740
large_pool_size 8388608
db_cache_size 16777216

At instance startup the Oracle server allocates granule entries, one for each granule, to support SGA_MAX_SIZE bytes of address space. During startup each component acquires as many granules as it requires.

The minimum SGA is three granules, as follows:

01. One Granule for Fixed SGA (includes redo buffers)
02. One Granule for Buffer Cache
03. One Granule for Shared Pool
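In 9i Release 2 the granule size and the per-component allocations can be checked directly; a sketch, assuming V$SGA_DYNAMIC_COMPONENTS is available in the release at hand:

select component, current_size, granule_size
from v$sga_dynamic_components;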

These components can be altered by issuing an ALTER SYSTEM command, provided the increase stays within the framework of SGA_MAX_SIZE. If not, the following error messages are spit out:

ORA-02097: parameter cannot be modified because specified value is invalid
ORA-04033: Insufficient memory to grow pool
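A resize that stays within SGA_MAX_SIZE succeeds; a sketch:

alter system set db_cache_size = 32M;
-- works only while the total of all components remains under SGA_MAX_SIZE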

Buffer Cache Sizing Parameters

The buffer cache consists of independent sub-caches for buffer pools and for multiple block sizes. The parameter DB_BLOCK_SIZE determines the primary block size, which is the block size used for the SYSTEM and temporary tablespaces.

01. DB_CACHE_SIZE
02. DB_KEEP_CACHE_SIZE
03. DB_RECYCLE_CACHE_SIZE

DB_KEEP_CACHE_SIZE and DB_RECYCLE_CACHE_SIZE are independent of DB_CACHE_SIZE. These parameters are specified in units of memory rather than in units of buffers.
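Segments are assigned to the keep or recycle pool through their storage clause. A sketch using the table and index from the earlier examples:

alter table emp storage (buffer_pool keep);
alter index ind_ename storage (buffer_pool recycle);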

The following parameters are automatically computed:

01. DB_BLOCK_LRU_LATCHES - The number of LRU latches in each buffer pool for each block size will be equal to half the number of CPUs.
02. DB_WRITER_PROCESSES - The number of DBWR processes will be equal to 1/8th the number of CPUs.

03. LARGE_POOL_SIZE and JAVA_POOL_SIZE. When an attempt is made to alter them dynamically, the following errors are possible:

alter system set large_pool_size=32m scope=memory;
alter system set large_pool_size=32m scope=memory
*
ERROR at line 1:
ORA-02097: parameter cannot be modified because specified value is invalid
ORA-04033: Insufficient memory to grow pool

alter system set JAVA_POOL_SIZE='32M' SCOPE=spfile;

LARGE_POOL_SIZE and JAVA_POOL_SIZE are static parameters in Oracle9i Release1 (9.0.1) and higher. These parameters can only be changed within the SPFILE (Server Parameter File) scope.

The type of the LARGE_POOL_SIZE and JAVA_POOL_SIZE parameters is currently string, not numeric. Therefore, in order to change the values of these parameters within the SPFILE scope, you need to specify the number as a character string.

When PARALLEL_AUTOMATIC_TUNING is set to true, Oracle determines the default values for parameters that control parallel execution. In addition to setting this parameter, you must specify the PARALLEL clause for the target tables in the system. Oracle then tunes all subsequent parallel operations automatically. If parallel execution parameters were used in a previous release and PARALLEL_AUTOMATIC_TUNING is now being enabled, you should reduce the amount of memory allocated from the shared pool to account for the decreased demand on that pool; the memory thus released may now be allocated from the large pool, and will be computed automatically if LARGE_POOL_SIZE is left unset. As part of the automatic tuning, Oracle will enable the PARALLEL_ADAPTIVE_MULTI_USER parameter. Any of the system-provided defaults can be overridden if desired.

The SESSION_CACHED_CURSORS parameter does not directly affect the shared pool size, but it does configure the number of cursors that can be kept in the session cursor cache, reducing the probability of soft parses and thus the contention in the shared pool area. Set this parameter so a reasonable number of cursors can be cached. It consumes additional memory on a per-session basis, so it is an overhead on the SGA; that should not be forgotten.

The script below allows DBAs/analysts to check whether the session cursor cache or open cursors are really a constraint, and then to increase SESSION_CACHED_CURSORS or OPEN_CURSORS accordingly.

SESSION_CACHED_CURSORS lets you specify the number of session cursors to cache. Repeated parse calls of the same SQL statement cause the session cursor for that statement to be moved into the session cursor cache. Subsequent parse calls find the cursor in the cache and do not need to reopen the cursor or even do a soft parse.

The session cursor cache can be constrained by either the session_cached_cursors parameter, or the open_cursors parameter. This script reports the current maximum usage in any session with respect to these limits.

If either of the Usage column figures approaches 100%, then the corresponding parameter should normally be increased.

select 'session_cached_cursors' parameter,
lpad(value, 5) value,
decode(value, 0, ' n/a', to_char(100 * used / value, '990') || '%') usage
from ( select max(s.value) used
from v$statname n,
v$sesstat s
where n.name = 'session cursor cache count'
and s.statistic# = n.statistic#
),
( select value
from v$parameter
where name = 'session_cached_cursors'
)
union all
select 'open_cursors', lpad(value, 5),
to_char(100 * used / value, '990') || '%'
from ( select max(sum(s.value)) used
from v$statname n,
v$sesstat s
where n.name in ( 'opened cursors current',
'session cursor cache count')
and s.statistic# = n.statistic#
group by s.sid
),
( select value
from v$parameter
where name = 'open_cursors'
)
/

PARAMETER VALUE USAGE
session_cached_cursors 0 n/a
open_cursors 300 5%
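If either usage figure approaches 100%, the limit can be raised. At the session level, a sketch (an instance-wide change needs an init.ora/spfile edit):

alter session set session_cached_cursors = 100;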

The following are sources to look at for problems, issues and errors when monitoring performance.

Tuning Shared Pool Area

select pool,
       sum(bytes/1024/1024) "In MB"
from v$sgastat
where pool = 'shared pool'
group by pool;

shared pool 64

select pool,
round(bytes/1024/1024,2)"In MB"
from v$sgastat
where name='free memory'
;

shared pool 41.47
large pool 8
java pool 32

A low value for free memory does not necessarily indicate a problem in the offing with the shared pool area. A very large amount of free memory, as in the above output, suggests that the shared pool is oversized, and can also indicate that a lot of aging is taking place in the shared pool.

In this situation it is necessary to tune the shared pool area.

Query the V$SHARED_POOL_RESERVED view for the REQUEST_MISSES value; if increasing values are found under this column over a period of time, increase SHARED_POOL_SIZE. The SHARED_POOL_RESERVED_SIZE, LARGE_POOL_SIZE and JAVA_POOL_SIZE parameters can be used to provide zoning for the objects that reside in the shared pool area. (A query sketch follows.)
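A sketch of the reserved-pool check described above:

select requests, request_misses, request_failures, free_space
from v$shared_pool_reserved;
-- steadily growing REQUEST_MISSES suggests increasing SHARED_POOL_SIZE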

Library Cache

This is the area where all the SQL statements being processed, and the information about them, reside. It speaks a lot about the shared pool area: if the ratios are high, the shared pool area is properly planned.

Caveat
In the case of data warehouses and decision support systems, the library cache hit ratio remains low compared to OLTP databases and data stores. These hit ratios also lose their relevance if the application is not using bind variables. If the application cannot be fixed to use bind variables and the initialization parameter CURSOR_SHARING is not set to FORCE, the hit ratios look very bad and can cause only heartburn.


SQL> select namespace,
2 gethitratio,pinhitratio
3 from v$librarycache;

NAMESPACE GETHITRATIO PINHITRATIO
SQL AREA .934129744 .973135034
TABLE/PROCEDURE .954277376 .989524109
BODY .995252226 .995252226
TRIGGER 1 1
INDEX .954653938 .930056711
CLUSTER .989614243 .988520408
OBJECT 1 1
PIPE 1 1


8 rows selected.

To know how many parses are occurring, and to analyze them:

select a.value total,
       b.value hard,
       a.value - b.value soft
from v$sysstat a,
     v$sysstat b
where a.statistic# = 171   -- parse count (total)
  and b.statistic# = 172   -- parse count (hard)
/
-- a hard-parse percentage, round(b.value/a.value*100,1), is meaningful only when hard is not 0 (zero)

These statistic numbers are valid only for 8.1.6 and 8.1.7; they are different in the 9i releases:
parse count (total) is statistic# 232 in 9.2.0.3
parse count (hard) is statistic# 233 in 9.2.0.3
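A version-independent variant is to select by statistic name rather than number; a sketch:

select name, value
from v$sysstat
where name in ('parse count (total)', 'parse count (hard)');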


A high percentage of hard parses means there is a lot of dynamic SQL, or a lack of bind variables in the application SQL. Another reason is that cursors are getting aged out and are not available for re-execution, and hence have to be reloaded. Reloads can also result from too many objects, or from large objects such as big packages.

SQL> select sum(reloads)/sum(pins)
2 from v$librarycache;

SUM(RELOADS)/SUM(PINS)
----------------------
.000367773

The aging out of objects is not a problem in itself, given limited memory; values less than 1 are not worth any effort to reduce by resizing the shared pool. Performance problems are not the result of PL/SQL or SQL reloads when the application uses bind variables and has no issues with dynamic SQL. But if those two conditions hold and the gathered statistics still show significant reloads, then without any second opinion increase the shared pool area. That helps push the performance up.

The large objects that are competing for a place in the library cache can be identified by issuing the following SQL:

select name,
type,
sharable_mem
from v$db_object_cache
where type in ('FUNCTION','PROCEDURE','PACKAGE','PACKAGE BODY')
order by type
SQL> /

VALIDATE_DATE FUNCTION 9433
IN_QUOTES FUNCTION 1452
VALIDATE_DT_OF_BIRTH FUNCTION 10848
GET_FUSA_ACCT FUNCTION 12937
BANKONE_VERIFY_FUNCTION FUNCTION 14563
VALID_NUM FUNCTION 9117
IS_POSITIONAL FUNCTION 3256
IS_WILD FUNCTION 3598
ADD_AND FUNCTION 1662
CHECK_DATE FUNCTION 10830
DBMS_OUTPUT PACKAGE 13111
ASP_FIND_CHECKS PACKAGE 16430
DBMS_DDL PACKAGE 1996
DBMS_RCVMAN PACKAGE 142263
STANDARD PACKAGE 265036
PLITBLM PACKAGE 3327
DBMS_SHARED_POOL PACKAGE 10644
DBMS_TRANSACTION PACKAGE 14932
DBMS_APPLICATION_INFO PACKAGE 12413
DBMS_BACKUP_RESTORE PACKAGE 97483
DBMS_STATS PACKAGE 55782
DBMS_STANDARD PACKAGE 24369
DBMS_SYS_SQL PACKAGE 20640
DBMS_UTILITY PACKAGE 4348
CMN_PARTITION_MGMT PACKAGE 11910
DBMS_SQL PACKAGE 19280
DBMS_OUTPUT PACKAGE BODY 6203
DBMS_DDL PACKAGE BODY 12780
ASP_FIND_CHECKS PACKAGE BODY 24002
DBMS_SQL PACKAGE BODY 18652
DBMS_SHARED_POOL PACKAGE BODY 9112
DBMS_TRANSACTION PACKAGE BODY 6444
DBMS_APPLICATION_INFO PACKAGE BODY 3105
DBMS_BACKUP_RESTORE PACKAGE BODY 41559
DBMS_STATS PACKAGE BODY 174910
DBMS_SYS_SQL PACKAGE BODY 46852
DBMS_UTILITY PACKAGE BODY 20484
CMN_PARTITION_MGMT PACKAGE BODY 13550
STANDARD PACKAGE BODY 25796
DBMS_RCVMAN PACKAGE BODY 170739
CHK_PAR_TAB_STATS1 PROCEDURE 25722
CHK_PAR_IND_STATS PROCEDURE 26993
ARCH_USER_PROC_INFO PROCEDURE 25162
POP_USER_INFO PROCEDURE 15528
CHK_PAR_IND_STATS1 PROCEDURE 27006
CHK_PAR_TAB_STATS PROCEDURE 25709
COALESCE_TS PROCEDURE 19035

SQL>

The sizable packages from this list can be moved to the reserved area. It is definitely a good idea to reserve additional area for other objects also.

SHARED_POOL_RESERVED_SIZE can be defined for the database at 15 to 20% of the total shared pool. This brings down the competition between the larger and smaller objects. (An init.ora sketch follows.)
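An init.ora sketch of that sizing rule (the figures are only illustrative):

# reserve roughly 20% of a 64 MB shared pool
shared_pool_size          = 67108864
shared_pool_reserved_size = 13421772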

In a similar way, the largest SQL statements against the database can be found by issuing the following query:

select *
from (select *
      from v$sqlarea
      order by sharable_mem desc)
where rownum <= 10;

These SQL statements give an idea of how much space is being used by such large queries. The developers may be advised about them, as they should not later complain about their SQL running slow when those statements have to compete among themselves for a place in the shared pool area. If necessary these unnamed objects can also be kept in memory (the shared pool area) using the address and hash value of the SQL from the V$SQLAREA view.

Here is an example SQL

select sql_text,
       address,
       hash_value,
       sharable_mem
from (select sql_text, address, hash_value, sharable_mem
      from v$sqlarea
      order by sharable_mem desc)   -- only the top 10 by sharable memory are displayed
where rownum <= 10;

After finding out which cursor is to be kept, execute the package:

exec dbms_shared_pool.keep('address,hash_value','C');

Please note that the address and hash_value of the cursor to be kept are separated by a comma and together are treated as one string.
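A worked sketch; the statement text, ADDRESS and HASH_VALUE shown are hypothetical placeholders, to be taken from your own V$SQLAREA output:

select address, hash_value, sharable_mem
from v$sqlarea
where sql_text like 'SELECT /* big_report */%';   -- hypothetical statement

exec dbms_shared_pool.keep('C0000000462E9E30,1052917712','C');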

Keep in mind that flushing the shared pool is not going to un-keep or release these pinned objects. Only the UNKEEP procedure releases them from the shared pool.

Data Dictionary Cache

This cache contains the data/rows that have been read from the data dictionary in response to recursive calls. These rows are read into the data block buffers like any other data, and the relevant information is transferred to the dictionary cache. Recursive SQL is executed in response to regular SQL, and as long as Oracle can resolve a recursive call from the data dictionary cache, there is no need to re-read from disk, which means reduced I/O. Here is the hit ratio formula:

select to_char(round((1-sum(getmisses)/sum(gets))*100,1))||' % ' "Hit Ratio"
from v$rowcache;

It was discussed that, to reduce the competition among objects for a place in the shared pool, a parameter (SHARED_POOL_RESERVED_SIZE) can be introduced to reserve some space in the shared pool for big objects. Oracle has also given users the capability to pin the required objects in the shared pool area, so that I/O is reduced and performance goes up.

The DBMS_SHARED_POOL package has a procedure to pin objects in the shared pool and likewise another to take them out of the shared pool area.

exec dbms_shared_pool.keep('STANDARD');

To release an object:

exec dbms_shared_pool.unkeep('STANDARD');

Query v$db_object_cache with the where clause kept='YES' to get all the objects that are kept in the shared pool area:

select OWNER,
NAME,
NAMESPACE,
TYPE,
SHARABLE_MEM,
KEPT
from v$db_object_cache
where kept='YES'
/

Now it is time to understand what ORA-04031 is, what causes fragmentation of the shared pool, and how it is to be tackled, as it impacts performance.

The factors that are responsible for fragmentation of the shared pool:

01. Frequent aging of objects from the shared pool because of the lack of sufficient space.
02. High values in free memory as a result of the aging.
03. Un-kept large objects which get aged out.
04. SQL statements not using bind variables.
05. The parameter CURSOR_SHARING not being set when bind variables could not be used, or are not used, by the application (there are three values for this parameter: EXACT, SIMILAR and FORCE; see the example following this list).
06. Excessive parsing, partly due to not keeping the large objects in the shared pool and partly due to not using the SESSION_CACHED_CURSORS parameter, which could have avoided reloads.
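The parameter can be set at either level; a sketch:

alter system set cursor_sharing = similar;
-- or, for one session only:
alter session set cursor_sharing = force;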

The wait events that indicate fragmentation of the shared pool area can be found with:

select sw.sid,
       s.username,
       substr(sw.event,1,25),
       sw.wait_time
from v$session s,
     v$session_wait sw
where sw.sid = s.sid
and sw.event not like 'SQL*Net%'
order by sw.wait_time, sw.event;

In the output look for the following EVENTS

01. LATCH FREE: this event is the result of hard-coded application literals where bind variables are not used and the parameter CURSOR_SHARING is not set.
02. LIBRARY CACHE LOAD LOCK and LIBRARY CACHE PIN: these events indicate reloads of objects. The reasons for such reloads are frequent invalidation of objects and their recompilation. All activities that touch the LAST_DDL column values in DBA_OBJECTS will cause reloads of those objects.

DocID 115656.1 On Metalink reads as Under

Library cache pins and load locks may occur during PL/SQL, views, types, etc. compilation or recompilation. The compilation is always explicit (application installation, upgrades, applying patches), but object recompilation may happen transparently due to object invalidations.

Dealing with slow downs related to "mysterious" library cache pins and load locks we should look for the reason of the database object invalidations. They are likely to be triggered by actions causing changes to "LAST_DDL" attribute of database objects that have other dependent ones. Typically they are the object maintenance operations - ALTER, GRANT, REVOKE, replacing views, etc. This behavior is described in Oracle Server Application Developer's Guide as object dependency maintenance.

After object invalidation, Oracle tries to recompile the object at the time of the first access to it. It may be a problem in case when other sessions have pinned the object to the library cache. It is obvious that it is more likely to occur with more active users and with more complex dependencies (eg. many cross-dependent packages or package bodies). In some cases waiting for object recompilation may even take hours blocking all the sessions trying to access it.

In some circumstances these errors may also occur: many sorts of ORA-600,
e.g. ORA-600[17285], ORA-4061, ORA-4065, ORA-6508. Note that they may be caused by some other reasons as well.

03. LIBRARY CACHE LOCK: this event is due to a DML operation that is hanging because the table being accessed is currently undergoing changes (ALTER TABLE). This may take quite a long time depending on the size of the table and the type of the modification. Another reason could be that compilation of a package will hang on LIBRARY CACHE LOCK and LIBRARY CACHE PIN if some users are executing a procedure/function defined in the same package.


Doc ID 122793.1 lays down a method to identify the causes of such locks, as in an environment of multiple concurrent users performing various functions against the database it is difficult to pinpoint the process that is causing the locks.

METHOD 2: EXAMINE THE X$KGLLK TABLE
-----------------------------------

The X$KGLLK table (accessible only as SYS/INTERNAL) contains all the
library object locks (both held & requested) for all sessions and
is more complete than the V$LOCK view although the column names don't
always reveal their meaning.

You can examine the locks requested (and held) by the waiting session
by looking up the session address (SADDR) in V$SESSION and doing the
following select:

select * from x$kgllk where KGLLKSES = 'saddr_from_v$session'

This will show you all the library locks held by this session where
KGLNAOBJ contains the first 80 characters of the name of the object.
The value in KGLLKHDL corresponds with the 'handle address' of the
object in METHOD 1.

You will see that at least one lock for the session has KGLLKREQ > 0
which means this is a REQUEST for a lock (thus, the session is waiting).
If we now match the KGLLKHDL with the handles of other sessions in
X$KGLLK that should give us the address of the blocking session since
KGLLKREQ=0 for this session, meaning it HAS the lock.

SELECT * FROM X$KGLLK LOCK_A
WHERE KGLLKREQ = 0
AND EXISTS (SELECT LOCK_B.KGLLKHDL FROM X$KGLLK LOCK_B
WHERE KGLLKSES = 'saddr_from_v$session' /* BLOCKED SESSION */
AND LOCK_A.KGLLKHDL = LOCK_B.KGLLKHDL
AND KGLLKREQ > 0);

If we look a bit further we can then again match KGLLKSES with SADDR
in v$session to find further information on the blocking session:

SELECT SID,USERNAME,TERMINAL,PROGRAM FROM V$SESSION
WHERE SADDR in
(SELECT KGLLKSES FROM X$KGLLK LOCK_A
WHERE KGLLKREQ = 0
AND EXISTS (SELECT LOCK_B.KGLLKHDL FROM X$KGLLK LOCK_B
WHERE KGLLKSES = 'saddr_from_v$session' /* BLOCKED SESSION */
AND LOCK_A.KGLLKHDL = LOCK_B.KGLLKHDL
AND KGLLKREQ > 0)
);

In the same way we can also find all the blocked sessions:

SELECT SID,USERNAME,TERMINAL,PROGRAM FROM V$SESSION
WHERE SADDR in
(SELECT KGLLKSES FROM X$KGLLK LOCK_A
WHERE KGLLKREQ > 0
AND EXISTS (SELECT LOCK_B.KGLLKHDL FROM X$KGLLK LOCK_B
WHERE KGLLKSES = 'saddr_from_v$session' /* BLOCKING SESSION */
AND LOCK_A.KGLLKHDL = LOCK_B.KGLLKHDL
AND KGLLKREQ = 0)
);


CAVEAT AND DOWNSIDE OF CURSOR_SHARING

There is a sure overhead of at least 25% on the system when CURSOR_SHARING is set to FORCE.

Oracle has a few more cards up its sleeve to handle the situation without imposing that 25% overhead: set CURSOR_SHARING=EXACT or CURSOR_SHARING=SIMILAR so that the overhead impact is not heavy.

Database Buffer Cache and Performance

Consider increasing DB_CACHE_SIZE. When increasing it, be sure that the OS can handle the growth without additional swapping/paging.

It is a general opinion that a database buffer cache hit ratio (CHR) of around 50% indicates a badly performing database. That is not always true. The CHR depends on many factors, and sometimes the CHR itself is not a true representation.

For instance, if a query executed for the first time takes more than an hour, the same query may take less than half an hour when executed a second time. Does this mean the hit ratio is good on the second execution and the query is working fine? No; the representation is not true. The reasons can be understood as the discussion advances.

Basically the CHR depends upon the nature of the database. OLTP databases hit the same set of data blocks repeatedly, and hence the CHR remains very high. When a batch job is run, the same CHR goes down, because the database is accessing a wide range of data blocks. What matters is not achieving a targeted ratio, but reaching the desired output in as little time as possible, as desired by the user.

Hence, it is not the CHR alone that determines the performance of the database. Various other factors are also to be considered alongside the CHR, depending upon the nature of the database. (The classic formula is sketched below.)
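The classic CHR formula, for reference; a sketch, with the caveats above about what the single number hides:

select round((1 - phy.value / (db.value + con.value)) * 100, 2) "Buffer CHR %"
from v$sysstat phy, v$sysstat db, v$sysstat con
where phy.name = 'physical reads'
and db.name = 'db block gets'
and con.name = 'consistent gets';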

It is essential to understand and query the X$ tables that are associated with the database buffer cache.

Before jumping into that sea for a longer swim, first query the X$ tables to get all the parameters that affect the database (the initialization parameters).

select x.indx+1 instance_num,
       ksppinm parameter_name,
       ksppity parameter_type,
       ksppstvl parameter_value,
       ksppstdf parameter_default_value,
       decode(bitand(ksppiflg/256,1),1,'TRUE','FALSE') is_sess_modi,
       decode(bitand(ksppiflg/65536,3),
              1,'IMMEDIATE',2,'DEFERRED','FALSE') is_sys_modi,
       decode(bitand(ksppstvf,7),1,'MODIFIED',2,'TRUE','FALSE') is_modifd,
       decode(bitand(ksppstvf,2),1,'MODIFIED',2,'TRUE','FALSE') is_adjd,
       ksppdesc description,
       ksppstcmnt comments
from sys.x$ksppi x, sys.x$ksppsv y
where x.indx = y.indx
order by parameter_name;

This query is executable against Oracle version 9.2.0.3. This is mentioned because the underlying structures change from release to release, and the query may not be executable against 8.1.6 or 8.1.7.


X$ Tables and Myths, Misconceptions and Truths

In this part of the discussion an attempt is made to clear the misconceptions about X$ tables, the most superstitiously feared part of the Oracle RDBMS.

01. They are not temporary or permanent tables that reside in the data files of Oracle.
02. They are memory resident only.
03. They are created when Instance is started, even before control files are created or opened.
04. They are platform specific. But the synonyms and views created on these X$ tables are consistent when the structures are viewed.
05. They cannot be dropped.
06. Only select permission is there on those tables.
07. SYS user owns them.
08. If grants are made a cryptic error message pops up ORA-02030 which reads ‘Can only grant select from fixed tables/views’.
09. They are destroyed once instance is shutdown and recreated when instance is brought back.
10. No indexes can be created on these tables by users.
11. Oracle started indexing these tables from Oracle 8.x onwards.
12. The number of X$ tables vary from Oracle Release to Release. Deprecated Features of a version can cause some X$ tables not to be seen any more and like wise for New Features New X$ tables come in. The structures of these X$ tables do not also remain the same. But the Public Synonyms and Views built on these X$ tables generally do not change as Oracle Uses extensively decode function to create V$ Views and Synonyms. In 9.2.0.3 there are 394 X$ tables. In 8.1.6.3 it was 265.
13. V$INDEXED_FIXED_COLUMN view lists all the indexes built on these X$ tables.
14. A user (as SYS) can create views and synonyms on these X$ tables and allow access to them. Other users can only select from these views/synonyms; they cannot be updated, inserted into, deleted from, or dropped. (A sketch follows this list.)
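A sketch of point 14, run as SYS; the view and synonym names are illustrative:

create view x_ksppi_v as select * from sys.x$ksppi;
grant select on x_ksppi_v to scott;
create public synonym x_ksppi_v for sys.x_ksppi_v;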

Here Are Some Important Init.Ora parameters
(The following are extracts from Oracle 9i Performance Tuning by Richard J Niemiec, Bradley D Brown and Joseph C Trezzo)

Memory

1. DB_CACHE_SIZE Memory allocated for data cache
2. PGA_AGGREGATE_TARGET Soft memory cap for Users’ PGA
3. SHARED_POOL_SIZE Memory for Data Dictionary, SQL and PLSQL
4. SGA_MAX_SIZE This parameter need not be fixed on SUN Solaris
5. LOG_BUFFER for uncommitted transactions.
6. DB_KEEP_CACHE_SIZE memory allocated to keep pool
7. DB_RECYCLE_CACHE_SIZE memory allocated for recycle pool
8. LARGE_POOL_SIZE
9. JAVA_POOL_SIZE for JVM
10. JAVA_MAX_SESSIONSPACE_SIZE
11. WORKAREA_SIZE_POLICY


Recovery Options

FAST_START_MTTR_TARGET
LOG_CHECKPOINT_INTERVAL

Optimization and Performance

1. OPTIMIZER_MODE CHOOSE, RULE, FIRST_ROWS, FIRST_ROWS_n, ALL_ROWS
2. CURSOR_SHARING EXACT, SIMILAR, FORCE
3. OPTIMIZER_INDEX_COST_ADJ adjustment of cost of full table scan and index
scan
4. QUERY_REWRITE_ENABLED for enabling materialized views and
function-based-indexes
5. STATISTICS_LEVEL
6. PARALLEL_AUTOMATIC_TUNING
7. HASH_JOIN_ENABLED
8. OPEN_CURSORS

File Destinations

1. CONTROL_FILES
2. BACKGROUND-DUMP_DEST
3. USER_DUMP_DESTINATION
4. CORE_DUMP_DESTINATION
5. LOG_ARCHIVE_DEST
6. UTL_FILE_DIR

To Enable the Media Recovery

1. LOG_ARCHIVE_START set to TRUE for archive log / media recovery mode

Important Parameters with some purpose

1. COMPATIBLE set it to exploit the new features of the version installed
2. OPTIMIZER_FEATURES_ENABLED set it else you miss the new features
3. UNDO_MANAGEMENT set it to have auto undo/RB management
4. UNDO_TABLESPACE set this for undo management
5. JOB_QUEUE_PROCESSES set this to use DBMS_JOB
6. RECOVERY_PARALLELISM recovers using the parallel query option (PQO)

Undocumented Important Parameters
To get the list of the (undocumented) parameters use the following SQL:

select a.ksppinm name,
       b.ksppstvl value,
       b.ksppstdf default_value,
       a.ksppdesc description
from sys.x$ksppi a,
     sys.x$ksppcv b
where a.indx = b.indx
and substr(a.ksppinm,1,1) = '_'
order by a.ksppinm;

_CORRUPTED_ROLLBACK_SEGMENTS: use this parameter to start and open the database when corrupt rollback segments are reported and prevent it from opening.

_ALLOW_RESETLOGS_CORRUPTION: this can force the database open after a failed recovery caused by lost or corrupt log files that are needed for recovery.

CPU_COUNT This is the number of CPU on the server. This can cause bugs with PQO (Parallel Query Option)

_INIT_SQL_FILE: set this to create the database when the location of sql.bsq is not the default location. There were problems while creating databases with a 32k data block size in Oracle 7.x, 8.x and 8.1.x. To overcome them, the sql.bsq file was customized: rename the original file to .old, create your own customized file, and later restore the original sql.bsq. If the location of the file is changed, set this parameter so that Oracle reads the new location from the init.ora file while creating the database. I created test databases on 9.0 and 9.2 with a 32k data block size with no problems.

_KSMG_GRANULE_SIZE: This parameter is not there in Oracle 7.x, 8.x and 8.1.x.
This is an undocumented parameter that is used in Oracle 9.0 and 9.2. Playing with this parameter may be harmful. This parameter is the multiple for SGA pieces of memory such as SHARED_POOL_SIZE and DB_CACHE_SIZE.

Name                Value      Default  Description
------------------  ---------  -------  -----------------------------------------------------------
_ksmg_granule_size  4194304    TRUE     granule size in bytes
db_cache_size       16777216   FALSE    Size of DEFAULT buffer pool for standard block size buffers
shared_pool_size    50331648   FALSE    size in bytes of shared pool

Note that the ratio is 1:4:12. If this parameter is to be increased, the other parameters must also be modified to remain precise multiples. The value is set in bytes.

For development purposes Oracle has provided data file formats and default file generations associated with data files; this is not the place for that discussion.

The _FAST_FULL_SCAN_ENABLED parameter has been consistently available since 7.0. It allows an index fast full scan when only an index is needed.

The _HASH_MULTIBLOCK_IO_COUNT parameter sets the number of blocks that will be read/written at once when a hash join is used in executing SQL.

Note:

This parameter was a documented parameter in 7.0, 8.0 and 8.1, where it appeared as HASH_MULTIBLOCK_IO_COUNT, with no underscore prefix.

Doc Id 39023.1 on Metalink
--------------------------------
Parameter type: integer
Parameter class: dynamic, scope= ALTER SESSION, ALTER SYSTEM
Default value: For Oracle 7.3 and 8.0 = 1
For Oracle8i it is query dependent. Appears as 0 in V$PARAMETER.
Range of values: operating system dependent
typically 1 - (65,536/<Parameter:DB_BLOCK_SIZE>)

Related: <Parameter:DB_BLOCK_SIZE>
<Parameter:HASH_JOIN_ENABLED>
<Parameter:HASH_AREA_SIZE>

*** Note: As of Oracle9i this parameter is hidden
i.e.: It is renamed to _HASH_MULTIBLOCK_IO_COUNT
This hidden version of the parameter still behaves as
described below.

Description:
~~~~~~~~~~~~
This parameter specifies how many sequential blocks a hash join reads and writes in one IO. When operating in multi-threaded server mode, however, this parameter is ignored (that is, the default value of 1 is used even if you set the parameter to another value).

The maximum value for HASH_MULTIBLOCK_IO_COUNT varies by operating system. It is always less than the operating system's maximum I/O size expressed as Oracle blocks (max_IO_size/DB_BLOCK_SIZE).

For Oracle 8i you need not set or change the value of this parameter, because Oracle computes the value individually for every query. If you let Oracle do the automatic computation, the value of the parameter appears as 0 in the V$PARAMETER dynamic performance view.

This parameter strongly affects performance because it controls the number of partitions into which the input is divided. For Oracle 8i, Oracle Corporation does not recommend that you set or change the value of this parameter. If you must set it to investigate its effect on performance, make sure that the following formula remains
true:

R / M < Po2 (M/C)

where:

R = size of(<left input to the join>)
M = HASH_AREA_SIZE * 0.9
Po2(<n>) = largest power of 2 that is smaller than <n>
C = HASH_MULTIBLOCK_IO_COUNT * DB_BLOCK_SIZE

You can change the value of this parameter without shutting down your Oracle instance by using the ALTER SESSION or ALTER SYSTEM commands.

===============================================
There are instances where this setting resulted in ORA-03232 and ORA-02063 when CREATE TEMPORARY TABLESPACE TEMP TEMPFILE …. EXTENT MANAGEMENT LOCAL UNIFORM SIZE … ; was executed. This was considered a bug, but it was ultimately closed as 'It was decided that the behavior needed to be better documented in the Oracle8i Reference manual (done in 8.1.7)', per Oracle Metalink docs.

This is also the case with Oracle 9.2.0.3 on the Windows platform.
_INDEX_JOIN_ENABLED: use this to disable the use of index joins. An execution plan with a bitmap access on top of a full B*-tree index scan can produce a wrong result if all the columns of the B*-tree index are nullable. This was considered a bug, and patches were released with patchsets 8.1.7.4, 9.0.1.4 and 9.2.0.1.

This parameter is set to TRUE by default. This is session modifiable. Hence, as a work around one may set it to false for that session.

alter session set "_INDEX_JOIN_ENABLED"=FALSE;

Some have experienced instance crashes with the following error:

ORA-07445: exception encountered: core dump[qervwRowProcedure()+123] [SIGSEGV] [Address not mapped to object] [0xC] [] []

ORA-7445 [qervwRestoreViewBufPtrs] they are possible from queries against clustered tables if an index join access path is used.

Workaround: Set _index_join_enabled=false


This was seen when a SELECT statement with an inline query (with an outer join) in the predicate was executed. The workaround tried was using the ORDERED RULE hint in the outer join of the inline query. For this also the workaround is:

alter session set "_INDEX_JOIN_ENABLED"=FALSE;

So, by altering this setting for the session, where a query is upsetting the performance gains of earlier versions, or where a SQL statement with inline queries in its WHERE clause leads to a crash of the instance, one can gain the performance back and also save the instance.

In some cases SELECT FOR UPDATE fails with ORA-02014. For this also the workaround is:

alter session set "_INDEX_JOIN_ENABLED"=FALSE;

If the Oracle patch is installed there is no problem.

_OPTIMIZER_ADJUST_FOR_NULLS: this parameter adjusts selectivity for null values. It has been available since version 7.0 and is set to FALSE throughout. It can be set for a session, and hence can be switched to TRUE and used whenever and wherever required.

When the parameter is set to TRUE, the selectivity is adjusted for nulls.

_ORACLE_TRACE_EVENTS: this parameter was introduced in Oracle 7.3.2. It is used to flag Oracle traces.

Performance Tuning Part II
(Application Tuning - Hints)

Hints:

Metalink says: Hints are directives added to the select list of a query. They force the query to operate in a particular manner. Hints are available to alter all manner of factors that affect queries including single table access methods, join order and methods.

Hints are suggestive directives given to the Optimizer to select a particular path for execution. They are not commands to the Optimizer, and a hint does not always get used; sometimes the Optimizer ignores it. The conditions under which the Optimizer ignores hints are documented as caveats and can be found in the Oracle documentation on performance tuning.

The optimizer recognizes hints only when using the cost-based approach. If you include a hint (except the RULE hint) in a statement block, then the optimizer automatically uses the cost-based approach.

Types of Hints:

Hints are basically of two types: (1) Usage Directives and (2) Compiler Directives.

Usage Directives are explicitly opted for in the init.ora parameter file for the database.
These parameters can be seen in the initialization parameter file for the database. Apart from these explicitly set parameters, Oracle invokes some parameters for the Optimizer on its own. These parameters are listed by Oracle in the trace file generated when event 10053 is invoked.


Invoking event 10053:

This event is better invoked for a particular query in a particular session, to identify the paths of execution as determined by the Optimizer parameters that are operative in Oracle. It is available from Oracle 9.0 onwards.

Do not set the trace at the database level (the "event=" parameter in init.ora). This requires a shutdown and restart of the database. Apart from this, a database-level setting turns the instance into a space-eating machine, and the user dump destination becomes full in no time.

The trace that is produced shows the steps the CBO followed to produce an execution plan:
(1) query text
(2) parameters used by optimizer
(3) base statistical information (tables, columns, index statistics)
(4) single table access path
(5) join order and method evaluation
(6) final cost

Here is a trace file for the event 10053
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

*** 2003-09-04 22:24:39.000
QUERY
explain plan for
select * from emp
where job='MANAGER'
AND deptno = ( select deptno from dept where dname='ACCOUNTING')
***************************************
PARAMETERS USED BY THE OPTIMIZER
********************************
OPTIMIZER_FEATURES_ENABLE = 9.2.0
OPTIMIZER_MODE/GOAL = Choose
_OPTIMIZER_PERCENT_PARALLEL = 101
HASH_AREA_SIZE = 1048576
HASH_JOIN_ENABLED = TRUE
HASH_MULTIBLOCK_IO_COUNT = 0
SORT_AREA_SIZE = 1048576
OPTIMIZER_SEARCH_LIMIT = 5
PARTITION_VIEW_ENABLED = FALSE
_ALWAYS_STAR_TRANSFORMATION = FALSE
_B_TREE_BITMAP_PLANS = TRUE
STAR_TRANSFORMATION_ENABLED = TRUE
_COMPLEX_VIEW_MERGING = TRUE
_PUSH_JOIN_PREDICATE = TRUE
PARALLEL_BROADCAST_ENABLED = TRUE
OPTIMIZER_MAX_PERMUTATIONS = 2000
OPTIMIZER_INDEX_CACHING = 0
_SYSTEM_INDEX_CACHING = 0
OPTIMIZER_INDEX_COST_ADJ = 100
OPTIMIZER_DYNAMIC_SAMPLING = 1
_OPTIMIZER_DYN_SMP_BLKS = 32
QUERY_REWRITE_ENABLED = TRUE
QUERY_REWRITE_INTEGRITY = ENFORCED
_INDEX_JOIN_ENABLED = TRUE
_SORT_ELIMINATION_COST_RATIO = 0
_OR_EXPAND_NVL_PREDICATE = TRUE
_NEW_INITIAL_JOIN_ORDERS = TRUE
ALWAYS_ANTI_JOIN = CHOOSE
ALWAYS_SEMI_JOIN = CHOOSE
_OPTIMIZER_MODE_FORCE = TRUE
_OPTIMIZER_UNDO_CHANGES = FALSE
_UNNEST_SUBQUERY = TRUE
_PUSH_JOIN_UNION_VIEW = TRUE
_FAST_FULL_SCAN_ENABLED = TRUE
_OPTIM_ENHANCE_NNULL_DETECTION = TRUE
_ORDERED_NESTED_LOOP = TRUE
_NESTED_LOOP_FUDGE = 100
_NO_OR_EXPANSION = FALSE
_QUERY_COST_REWRITE = TRUE
QUERY_REWRITE_EXPRESSION = TRUE
_IMPROVED_ROW_LENGTH_ENABLED = TRUE
_USE_NOSEGMENT_INDEXES = FALSE
_ENABLE_TYPE_DEP_SELECTIVITY = TRUE
_IMPROVED_OUTERJOIN_CARD = TRUE
_OPTIMIZER_ADJUST_FOR_NULLS = TRUE
_OPTIMIZER_CHOOSE_PERMUTATION = 0
_USE_COLUMN_STATS_FOR_FUNCTION = TRUE
_SUBQUERY_PRUNING_ENABLED = TRUE
_SUBQUERY_PRUNING_REDUCTION_FACTOR = 50
_SUBQUERY_PRUNING_COST_FACTOR = 20
_LIKE_WITH_BIND_AS_EQUALITY = FALSE
_TABLE_SCAN_COST_PLUS_ONE = TRUE
_SORTMERGE_INEQUALITY_JOIN_OFF = FALSE
_DEFAULT_NON_EQUALITY_SEL_CHECK = TRUE
_ONESIDE_COLSTAT_FOR_EQUIJOINS = TRUE
_OPTIMIZER_COST_MODEL = CHOOSE
_GSETS_ALWAYS_USE_TEMPTABLES = FALSE
DB_FILE_MULTIBLOCK_READ_COUNT = 32
_NEW_SORT_COST_ESTIMATE = TRUE
_GS_ANTI_SEMI_JOIN_ALLOWED = TRUE
_CPU_TO_IO = 0
_PRED_MOVE_AROUND = TRUE
***************************************
BASE STATISTICAL INFORMATION
***********************
Table stats Table: EMP Alias: EMP
TOTAL :: CDN: 14 NBLKS: 1 AVG_ROW_LEN: 40
-- Index stats
INDEX NAME: PK_EMP COL#: 1
TOTAL :: LVLS: 0 #LB: 1 #DK: 14 LB/K: 1 DB/K: 1 CLUF: 1
_OPTIMIZER_PERCENT_PARALLEL = 0
***************************************
SINGLE TABLE ACCESS PATH
Column: JOB Col#: 3 Table: EMP Alias: EMP
NDV: 5 NULLS: 0 DENS: 2.0000e-001
NO HISTOGRAM: #BKT: 1 #VAL: 2
Column: DEPTNO Col#: 8 Table: EMP Alias: EMP
NDV: 3 NULLS: 0 DENS: 3.3333e-001 LO: 10 HI: 30
NO HISTOGRAM: #BKT: 1 #VAL: 2
TABLE: EMP ORIG CDN: 14 ROUNDED CDN: 1 CMPTD CDN: 1
Access path: tsc Resc: 2 Resp: 2
BEST_CST: 2.00 PATH: 2 Degree: 1
***************************************
OPTIMIZER STATISTICS AND COMPUTATIONS
***************************************
GENERAL PLANS
***********************
Join order[1]: EMP [EMP]
Best so far: TABLE#: 0 CST: 2 CDN: 1 BYTES: 32
Final:
CST: 2 CDN: 1 RSC: 2 RSP: 2 BYTES: 32
IO-RSC: 2 IO-RSP: 2 CPU-RSC: 0 CPU-RSP: 0
PLAN
Cost of plan: 2
Operation...........Object name.....Options.........Id...Pid..
SELECT STATEMENT 0
TABLE ACCESS EMP FULL 1
TABLE ACCESS DEPT FULL 2 1
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The TKPROF utility cannot be used on this trace, as it cannot interpret the layout. Event 10053 is thus almost another utility for SQL tuning, apart from EXPLAIN PLAN and TKPROF.

To set the event it is better to check for the following (see the sketch after this list):

(1) Existence of the PLAN_TABLE for that user
(2) Analyzed objects – because they are needed by the CBO
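A quick sketch of these checks (if the PLAN_TABLE is absent, it can be created with the utlxplan.sql script shipped under $ORACLE_HOME/rdbms/admin):

select table_name from user_tables where table_name = 'PLAN_TABLE';

select table_name, last_analyzed from user_tables where last_analyzed is not null;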

Command line syntax for event 10053

To switch on the event:
alter session set events '10053 trace name context forever[, level {1|2}]'

To switch off the event:
alter session set events '10053 trace name context off'

Another way:
on:
sys.dbms_system.set_ev (<sid>, <serial#>, 10053, {1|2}, '')
off:
sys.dbms_system.set_ev (<sid>, <serial#>, 10053, 0, '')

The sid and serial# values are to be collected from the V$SESSION view whenever these procedures are to be used.
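For example, assuming the session of interest belongs to the user SCOTT, the identifiers can be picked up as follows:

select sid, serial#, username
from v$session
where username = 'SCOTT';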

Compiler Directives: These directives are of several kinds.

To change the execution path (optimization goal):
ALL_ROWS, FIRST_ROWS, FIRST_ROWS(n), CHOOSE, RULE

To change access methods:
AND_EQUAL, HASH, CLUSTER, FULL, INDEX, INDEX_ASC, INDEX_COMBINE, INDEX_DESC, INDEX_FFS, INDEX_JOIN, NO_INDEX, ROWID

To change query transformation methods (useful in DW environments where FACT and DIMENSION tables exist):
FACT, NO_FACT, MERGE, NO_MERGE, REWRITE, NOREWRITE, STAR, STAR_TRANSFORMATION, NO_EXPAND, USE_CONCAT

To change the join operations:
DRIVING_SITE, LEADING, NL_AJ, HASH_AJ, MERGE_AJ, NL_SJ, HASH_SJ, MERGE_SJ, ORDERED, PUSH_SUBQ, USE_NL, USE_HASH, USE_MERGE

To change the parallel execution methods:
NOPARALLEL, NOPARALLEL_INDEX, PARALLEL, PARALLEL_INDEX, PQ_DISTRIBUTE

Miscellaneous:
APPEND, NOAPPEND, CACHE, NOCACHE, UNNEST, NO_UNNEST

Metalink Docs:
(1) For USE_MERGE, USE_HASH, USE_NL refer to Note:104817.1 --Discussion on Oracle Joins - Costs - Algorithms & Hints
(2) For USE_CONCAT refer to Note:17214.1 --Using the USE_CONCAT hint with IN/OR

Caveats

For access path hints, Oracle ignores the hint if you specify the SAMPLE option in the FROM clause of a SELECT statement.

The optimizer ignores the FIRST_ROWS(n) hint in DELETE and UPDATE statement blocks and in SELECT statement blocks that contain any of the following syntax:

(a) Set operators (UNION, INTERSECT, MINUS, UNION ALL)
(b) GROUP BY clause
(c) FOR UPDATE clause
(d) Aggregate functions
(e) DISTINCT operator
(f) ORDER BY clauses, when there is no index on the ordering columns

The INDEX hint applies to IN-list predicates. It forces the optimizer to use the hinted index, if possible, for an IN-list predicate. Multicolumn IN-lists will not use an index.

The FULL hint explicitly chooses a full table scan for the specified table.

The ROWID hint explicitly chooses a table scan by rowid for the specified table.

The CLUSTER hint explicitly chooses a cluster scan to access the specified table. It applies only to clustered objects.

The HASH hint explicitly chooses a hash scan to access the specified table. It applies only to tables stored in a cluster.

The INDEX hint explicitly chooses an index scan for the specified table. You can use the INDEX hint for domain, B-tree, bitmap, and bitmap join indexes. However, Oracle recommends using INDEX_COMBINE rather than INDEX for bitmap indexes, because it is a more versatile hint.

If this hint specifies no indexes, then the optimizer considers the cost of a scan on each available index on the table and then performs the index scan with the lowest cost. The optimizer can also choose to scan multiple indexes and merge the results, if such an access path has the lowest cost. The optimizer does not consider a full table scan.
Example:
select /*+ index */ * from emp where empno=7609
no rows selected
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=1 Card=1 Bytes=32)
1 0 TABLE ACCESS (BY INDEX ROWID) OF 'EMP' (Cost=1 Card=1 Bytes=32)
2 1 INDEX (UNIQUE SCAN) OF 'U_EMPNO' (UNIQUE)

INDEX_ASC

This hint explicitly chooses an index scan for the specified table. If the statement uses an index range scan, then Oracle scans the index entries in ascending order of their indexed values.

INDEX_COMBINE

This hint explicitly chooses a bitmap access path for the table. If no indexes are given as arguments for the INDEX_COMBINE hint, then the optimizer uses whatever Boolean combination of bitmap indexes has the best cost estimate for the table. If certain indexes are given as arguments, then the optimizer tries to use some Boolean combination of those particular bitmap indexes.
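A minimal sketch, assuming two bitmap indexes bmi_deptno and bmi_job exist on emp (the index names are illustrative only):

select /*+ INDEX_COMBINE(e bmi_deptno bmi_job) */ *
from emp e
where deptno = 20 and job = 'CLERK';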

INDEX_JOIN

This hint explicitly instructs the optimizer to use an index join as an access path. If too many indexes are involved, the hint loses its directive force.

NO_INDEX

The NO_INDEX hint explicitly disallows use of the named indexes. It applies to function-based, B-tree, bitmap, cluster, or domain indexes.
SQL> select /*+NO_index*/* from emp where empno=7934;

EMPNO ENAME JOB MGR HIREDATE SAL COMM DEPTNO
---------- ---------- --------- ---------- --------- ---------- ---------- ----------
7934 MILLER CLERK 7782 23-JAN-82 1300 10


Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=2 Card=1 Bytes=32)
1 0 TABLE ACCESS (BY INDEX ROWID) OF 'EMP' (Cost=2 Card=1 Bytes=32)
2 1 INDEX (UNIQUE SCAN) OF 'EMP_EMPNO' (UNIQUE) (Cost=1 Card=14)

When a specific index is mentioned, the optimizer goes by the hint.

SQL> select /*+ no_index (emp EMP_EMPNO)*/* from emp where empno=7934;

EMPNO ENAME JOB MGR HIREDATE SAL COMM DEPTNO
---------- ---------- --------- ---------- --------- ---------- ---------- ----------
7934 MILLER CLERK 7782 23-JAN-82 1300 10


Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=2 Card=1 Bytes=32)
1 0 TABLE ACCESS (FULL) OF 'EMP' (Cost=2 Card=1 Bytes=32)

AND_EQUAL

This hint explicitly chooses an execution plan that uses an access path that merges the scans on several single-column indexes.
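A minimal sketch, assuming single-column indexes ind_deptno and ind_job exist on emp (hypothetical names):

select /*+ AND_EQUAL(emp ind_deptno ind_job) */ *
from emp
where deptno = 20 and job = 'CLERK';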

USE_CONCAT

The USE_CONCAT hint forces combined OR conditions in the WHERE clause to be transformed into a compound query using the UNION ALL set operator, which appears as the CONCATENATION step in the plan below.

SELECT /*+USE_CONCAT*/ *
FROM emp
WHERE empno > 50 OR sal < 50000;

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=4 Card=15 Bytes=480)
1 0 CONCATENATION
2 1 TABLE ACCESS (FULL) OF 'EMP' (Cost=2 Card=1 Bytes=32)
3 1 TABLE ACCESS (FULL) OF 'EMP' (Cost=2 Card=1 Bytes=32)
USE_NL

select /*+ USE_NL(emp) */ EMP.deptno,ename,empno,sal
from emp,dept
where emp.deptno=dept.deptno

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=2 Card=14 Bytes=210)
1 0 NESTED LOOPS (Cost=2 Card=14 Bytes=210)
2 1 TABLE ACCESS (FULL) OF 'EMP' (Cost=2 Card=14 Bytes=182)
3 1 INDEX (UNIQUE SCAN) OF 'U_DEPTNO' (UNIQUE)

The USE_NL hint causes Oracle to join each specified table to another row source with a nested loops join, using the specified table as the inner table.

USE_MERGE

SELECT /*+USE_MERGE(emp dept)*/ *
FROM emp, dept
WHERE emp.deptno = dept.deptno;

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=6 Card=14 Bytes=714)
1 0 MERGE JOIN (Cost=6 Card=14 Bytes=714)
2 1 TABLE ACCESS (BY INDEX ROWID) OF 'DEPT' (Cost=2 Card=5 Bytes=95)
3 2 INDEX (FULL SCAN) OF 'U_DEPTNO' (UNIQUE) (Cost=1 Card=5)
4 1 SORT (JOIN) (Cost=4 Card=14 Bytes=448)
5 4 TABLE ACCESS (FULL) OF 'EMP' (Cost=2 Card=14 Bytes=448)


INDEX_FFS

This hint directs the optimizer to perform a fast full index scan.

Example: There is a bitmap index on the deptno column of the emp table.

select /*+ index_ffs(emp bmp_dept)*/
deptno,empno
from emp
/

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=2 Card=14 Bytes=70)
1 0 TABLE ACCESS (FULL) OF 'EMP' (Cost=2 Card=14 Bytes=70)

The optimizer uses no index even though there is a directive, because the bitmap index does not cover all the columns selected by the query.

Drop that bitmap index and create a composite index on empno and deptno columns and then try the same hint

SQL> select /*+ index_ffs(emp bmp_dept)*/
2 deptno,empno
3 from emp;

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=1 Card=14 Bytes=70)
1 0 INDEX (FULL SCAN) OF 'IND_DEPTNO_EMPNO' (NON-UNIQUE) (Cost
=1 Card=14 Bytes=70)


This means that when all the columns selected in the query are covered by the composite index, the optimizer implements the fast full scan directive given by the hint.

SQL> select /*+index_ffs (emp deptno)*/
2 deptno from emp;

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=23 Card=14 Bytes=28)
1 0 BITMAP CONVERSION (TO ROWIDS)
2 1 BITMAP INDEX (FAST FULL SCAN) OF 'DEPTNO'

SQL> select distinct /*+ emp deptno */ deptno from emp;

DEPTNO
----------
10
20
30


Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=4 Card=3 Bytes=6)
1 0 SORT (UNIQUE) (Cost=4 Card=3 Bytes=6)
2 1 TABLE ACCESS (FULL) OF 'EMP' (Cost=2 Card=14 Bytes=28)

The ORDERED hint is a directive to access the tables in the particular order in which they are listed in the FROM clause.

1 select /*+ ordered */
2 empno,ename,dept.deptno
3 from emp,dept
4 where emp.deptno=dept.deptno
5* and empno=7900
SQL> /

EMPNO ENAME DEPTNO
---------- ---------- ----------
7900 JAMES 30


Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=1 Card=1 Bytes=12)
1 0 NESTED LOOPS (Cost=1 Card=1 Bytes=12)
2 1 TABLE ACCESS (BY INDEX ROWID) OF 'EMP' (Cost=1 Card=1 By
tes=10)

3 2 INDEX (UNIQUE SCAN) OF 'EMP_EMPNO' (UNIQUE)
4 1 INDEX (UNIQUE SCAN) OF 'U_DEPTNO' (UNIQUE)
SQL>

Here there are indexes on these two tables and hence the access of the data is based on the indexes.

Now, let the indexes be dropped and then run the same SQL.

SQL> select /*+ ordered */
2 empno,ename,dept.deptno
3 from emp,dept
4 where emp.deptno=dept.deptno
5 and empno=7900
6 /

EMPNO ENAME DEPTNO
---------- ---------- ----------
7900 JAMES 30


Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=5 Card=1 Bytes=12)
1 0 HASH JOIN (Cost=5 Card=1 Bytes=12)
2 1 TABLE ACCESS (FULL) OF 'EMP' (Cost=2 Card=1 Bytes=10)
3 1 TABLE ACCESS (FULL) OF 'DEPT' (Cost=2 Card=5 Bytes=10)

1 select /*+ ordered */
2 empno,ename,dept.deptno
3 from dept,emp
4 where emp.deptno=dept.deptno
5* and empno=7900
SQL> /

EMPNO ENAME DEPTNO
---------- ---------- ----------
7900 JAMES 30


Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=5 Card=1 Bytes=12)
1 0 HASH JOIN (Cost=5 Card=1 Bytes=12)
2 1 TABLE ACCESS (FULL) OF 'DEPT' (Cost=2 Card=5 Bytes=10)
3 1 TABLE ACCESS (FULL) OF 'EMP' (Cost=2 Card=1 Bytes=10)

SQL>

SQL>
The LEADING hint takes care of the problems with the ORDERED hint when there is a lot of complexity in the query, which can confuse the optimizer into not following the hint.


1 select /*+ leading dept */
2 empno,ename,dept.deptno,sal,grade
3 from emp,dept,salgrade
4 where dept.deptno=emp.deptno
5 and sal >= 2001 and sal <= 3000
6* and hisal = 3000 and losal=2001
SQL> /

EMPNO ENAME DEPTNO SAL GRADE
---------- ---------- ---------- ---------- ----------
7566 JONES 20 2975 4
7698 BLAKE 30 2850 4
7782 CLARK 10 2450 4
7788 SCOTT 20 3000 4
7902 FORD 20 3000 4


Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=7 Card=1 Bytes=54)
1 0 HASH JOIN (Cost=7 Card=1 Bytes=54)
2 1 MERGE JOIN (CARTESIAN) (Cost=4 Card=1 Bytes=41)
3 2 TABLE ACCESS (FULL) OF 'SALGRADE' (Cost=2 Card=1 Bytes=39)
4 2 BUFFER (SORT) (Cost=2 Card=1 Bytes=2)
5 4 TABLE ACCESS (FULL) OF 'DEPT' (Cost=2 Card=1 Bytes=2)
6 1 TABLE ACCESS (FULL) OF 'EMP' (Cost=2 Card=6 Bytes=78)

SQL>

For the sake of this example a Cartesian join is also shown. Never mind that; concentrate on the order in which the tables are accessed as per the LEADING hint, which defines that DEPT shall be the leading table in this complex join.


ORDERED_PREDICATES is another variation, meant to address the complexities of the tables, the joins, and the order of the predicate conditions in the WHERE clause.

1 select /*+ ordered_predicates */
2 empno,job
3 from emp
4 where empno=7782
5 and deptno=10
6* and ename='CLARK'
SQL> /

EMPNO JOB
---------- ---------
7782 MANAGER


Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=2 Card=1 Bytes=17)
1 0 TABLE ACCESS (BY INDEX ROWID) OF 'EMP' (Cost=2 Card=1 Bytes=17)
2 1 INDEX (UNIQUE SCAN) OF 'EMPNO_PK' (UNIQUE) (Cost=1 Card=14)

SQL>

The optimizer implements the order of the clauses in the WHERE clause.

The DRIVING_SITE hint is analogous to the ORDERED hint, except that here the access processing is based on the specified driving site.

1 select /*+ driving_site srinivas.us.oracle.com*/
2 empno,
3 ename,
4 dept.deptno
5 from emp,scott.dept@srinivas dept
6 where emp.deptno=dept.deptno
7 and dept.deptno=30
8* and empno=7900
SQL> /

EMPNO ENAME DEPTNO
---------- ---------- ----------
7900 JAMES 30


Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=2 Card=1 Bytes=23)
1 0 NESTED LOOPS (Cost=2 Card=1 Bytes=23)
2 1 REMOTE* (Cost=1 Card=1 Bytes=13) SRINIVAS.US.ORACLE.COM
3 1 TABLE ACCESS (BY INDEX ROWID) OF 'EMP' (Cost=1 Card=1 Bytes=10)
4 3 INDEX (UNIQUE SCAN) OF 'EMPNO_PK' (UNIQUE)
2 SERIAL_FROM_REMOTE SELECT "DEPTNO" FROM "SCOTT"."DEPT" "DEPT" WHERE "DEPTNO"=30

SQL>

Whenever remote objects are to be accessed, using DRIVING_SITE potentially reduces the network traffic.

The USE_HASH hint is useful when joining many rows together from multiple tables.

SQL> select /*+use_hash(test_2)*/
2 count(*)
3 from test_1,
4 test_2
5 where test_2.object_type=test_1.object_type
6 and test_1.object_type='PROCEDURE'
7 /

COUNT(*)
----------
40


Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=20 Card=1 Bytes=18)
1 0 SORT (AGGREGATE)
2 1 HASH JOIN (Cost=20 Card=33 Bytes=594)
3 2 TABLE ACCESS (FULL) OF 'TEST_2' (Cost=2 Card=1 Bytes=10)
4 2 TABLE ACCESS (FULL) OF 'TEST_1' (Cost=17 Card=975 Bytes=7800)


SQL>

PUSH_SUBQ is nothing but "push the sub-query first": it causes non-merged sub-queries to be evaluated at the earliest possible place in the execution plan. If a sub-query returns relatively few rows, use this hint for performance gains.


1 select /*+push_subq*/test_1.object_name,test_3.owner
2 from test_1,test_3
3 where test_1.owner=test_3.owner
4 and test_1.object_type=(select object_type
5* from test_2 where sl_no=4)
SQL> /

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=22 Card=968 Bytes=40656)

1 0 FILTER
2 1 HASH JOIN (Cost=20 Card=968 Bytes=40656)
3 2 TABLE ACCESS (FULL) OF 'TEST_3' (Cost=2 Card=30 Bytes=180)
4 2 TABLE ACCESS (FULL) OF 'TEST_1' (Cost=17 Card=968 Bytes=34848)
5 1 TABLE ACCESS (FULL) OF 'TEST_2' (Cost=2 Card=1 Bytes=12)

CAVEAT
This hint has no effect if the sub query is applied to a remote table or one that is joined using a merge join.


NO_PUSH_SUBQ

1 select /*+ NO_PUSH_SUBQ */test_1.object_name,test_3.owner
2 from test_1,test_3
3 where test_1.owner=test_3.owner
4 and test_1.object_type=(select object_type
5* from test_2 where sl_no=4)
SQL> /

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=20 Card=968 Bytes=40656)
1 0 HASH JOIN (Cost=20 Card=968 Bytes=40656)
2 1 TABLE ACCESS (FULL) OF 'TEST_3' (Cost=2 Card=30 Bytes=180)
3 1 TABLE ACCESS (FULL) OF 'TEST_1' (Cost=17 Card=968 Bytes=34848)
4 3 TABLE ACCESS (FULL) OF 'TEST_2' (Cost=2 Card=1 Bytes=12)

ORDERED_PREDICATES

The ORDERED_PREDICATES hint forces the optimizer to preserve the order of predicate evaluation, except for predicates used as index keys. Use this hint in the WHERE clause of SELECT statements.

If you do not use the ORDERED_PREDICATES hint, then Oracle evaluates all predicates in the following order:

Predicates without user-defined functions, type methods, or sub-queries are evaluated first, in the order specified in the WHERE clause.

Predicates with user-defined functions and type methods that have user-computed costs are evaluated next, in increasing order of their cost.

Predicates with user-defined functions and type methods without user-computed costs are evaluated next, in the order specified in the WHERE clause.

Predicates not specified in the WHERE clause (for example, predicates transitively generated by the optimizer) are evaluated next.

Predicates with sub-queries are evaluated last, in the order specified in the WHERE clause.

Note: I have never heard anyone claim to have benefited from this hint, nor could I ever prove its benefit myself.

CURSOR_SHARING_EXACT

Literal replacement is controlled with the CURSOR_SHARING startup parameter. The CURSOR_SHARING_EXACT hint causes this behavior to be switched off. In other words, Oracle executes the SQL statement without any attempt to replace literals by bind variables.
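A minimal sketch of the hint, using the familiar SCOTT.EMP table: with CURSOR_SHARING=FORCE in effect, the literal 7900 below would normally be replaced by a system-generated bind variable; the hint prevents that for this statement alone.

select /*+ CURSOR_SHARING_EXACT */ ename
from emp
where empno = 7900;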
The initialization parameter CURSOR_SHARING was introduced in 8.1. The value is set to EXACT by default in releases 8.1.6 and 8.1.7.

This parameter should only be set to FORCE in 8i when the risk of sub-optimal plans is outweighed by the improvements in cursor sharing.

Oracle Metalink explains in One Document as under:

If the answer is YES to these two questions, then CURSOR_SHARING=FORCE may be worth considering:

1. Are there a large number of statements in the shared pool that differ only in the values of their literals?
And
2. Is the response time low due to a very high number of library cache misses? (i.e.: hard parses and library cache latch contention)

The default value of EXACT maintains pre-8.1.6 behavior for literal SQL statements, in that literals are not replaced with bind variables and the literal values can be used by the CBO.


Note #1:

Forcing cursor sharing among similar (but not identical) statements can have unexpected results in some DSS applications and in applications using stored outlines.

Note #2:

Setting CURSOR_SHARING to FORCE causes an increase in the maximum lengths (as returned by DESCRIBE) of any selected expressions that contain literals (in a SELECT statement).

However, the actual length of the data returned will not change.

In Oracle 9i this parameter offers the following options:

FORCE
Forces statements that may differ in some literals, but are otherwise identical, to share a cursor, unless the literals affect the meaning of the statement.

SIMILAR
Causes statements that may differ in some literals, but are otherwise identical, to share a cursor, unless the literals affect either the meaning of the statement or the degree to which the plan is optimized.

EXACT
Only allows statements with identical text to share the same cursor.

NOTE
Forcing cursor sharing among similar (but not identical) statements can have unexpected results in some DSS applications, or applications that use stored outlines.
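The parameter is dynamic, so a sketch of trying the options would be:

alter session set cursor_sharing = SIMILAR;
alter system set cursor_sharing = FORCE;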

DYNAMIC_SAMPLING


The Oracle documentation says:

The DYNAMIC_SAMPLING hint lets you control dynamic sampling to improve server performance by determining more accurate selectivity and cardinality estimates. You can set the value of DYNAMIC_SAMPLING to a value from 0 to 10. The higher the level, the more effort the compiler puts into dynamic sampling and the more broadly it is applied. Sampling defaults to cursor level unless you specify a table.

The hint is enabled when the following conditions are met.

01. There is more than one table in the query.
02. Some table has not been analyzed and has no indexes.
03. The optimizer determines that a relatively expensive table scan would be required for this unanalyzed table.

The sampling levels are as follows if the dynamic sampling level used is from a cursor hint or from the optimizer_dynamic_sampling parameter:

Level 0: Do not use dynamic sampling.

Level 1: Sample all unanalyzed tables if the following criteria are met:

(1) There is at least 1 unanalyzed table in the query;
(2) This unanalyzed table is joined to another table or appears in a sub-query or non-mergeable view;
(3) This unanalyzed table has no indexes;
(4) This unanalyzed table has more blocks than the number of blocks that would be used for dynamic sampling of this table. The number of blocks sampled is the default number of dynamic sampling blocks (32).

Level 2: Apply dynamic sampling to all unanalyzed tables. The number of blocks sampled is the default number of dynamic sampling blocks.

Level 3: Apply dynamic sampling to all tables that meet Level 2 criteria, plus all tables for which standard selectivity estimation used a guess for some predicate that is a potential dynamic sampling predicate. The number of blocks sampled is the default number of dynamic sampling blocks.

Level 4: Apply dynamic sampling to all tables that meet Level 3 criteria, plus all tables that have single-table predicates that reference 2 or more columns. The number of blocks sampled is the default number of dynamic sampling blocks.

Level 5: Apply dynamic sampling to all tables that meet the Level 4 criteria using 2 times the default number of dynamic sampling blocks.

Level 6: Apply dynamic sampling to all tables that meet the Level 5 criteria using 4 times the default number of dynamic sampling blocks.

Level 7: Apply dynamic sampling to all tables that meet the Level 6 criteria using 8 times the default number of dynamic sampling blocks.

Level 8: Apply dynamic sampling to all tables that meet the Level 7 criteria using 32 times the default number of dynamic sampling blocks.

Level 9: Apply dynamic sampling to all tables that meet the Level 8 criteria using 128 times the default number of dynamic sampling blocks.
Level 10: Apply dynamic sampling to all tables that meet the Level 9 criteria using all blocks in the table.


The sampling levels are as follows if the dynamic sampling level used is from a table hint:
Level 0: Do not use dynamic sampling.

Level 1: The number of blocks sampled is the default number of dynamic sampling blocks (32).

Level 2: The number of blocks sampled is 2 times the default number of dynamic sampling blocks. (64)

Level 3: The number of blocks sampled is 4 times the default number of dynamic sampling blocks. (128)

Level 4: The number of blocks sampled is 8 times the default number of dynamic sampling blocks. (256)

Level 5: The number of blocks sampled is 16 times the default number of dynamic sampling blocks. (512)

Level 6: The number of blocks sampled is 32 times the default number of dynamic sampling blocks. (1024)

Level 7: The number of blocks sampled is 64 times the default number of dynamic sampling blocks. (2048)

Level 8: The number of blocks sampled is 128 times the default number of dynamic sampling blocks. (4096)

Level 9: The number of blocks sampled is 256 times the default number of dynamic sampling blocks. (8192)

Level 10: Read all blocks in the table. (Full Table)

The number of blocks that are set for dynamic sampling can be decided by accessing the DBA_EXTENTS or DBA_SEGMENTS view.

select owner,
       segment_name,
       blocks
from dba_segments
where segment_name = '<table_name>'

or

select owner,
       segment_name,
       sum(blocks)
from dba_extents
where segment_name = '<table_name>'
group by owner, segment_name

When a table is very large and is not analyzed, but is part of the query or constitutes the only table in the query, set the sampling level depending upon that number of blocks. It works well.

Please do not forget to use the alias of the table in the hint. This applies to hint usage in general, wherever it is required to mention the name of the table within the hint.

Ex:
SQL> SELECT /*+ dynamic_sampling (t 1) */ count(*)
2 FROM test.test_1 t;
3
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=17 Card=1)
1 0 SORT (AGGREGATE)
2 1 TABLE ACCESS (FULL) OF 'TEST_1' (Cost=17 Card=20744)

SQL> SELECT /*+ dynamic_sampling (t 5) */ count(*)
2 FROM test.test_1 t;

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=17 Card=1)
1 0 SORT (AGGREGATE)
2 1 TABLE ACCESS (FULL) OF 'TEST_1' (Cost=17 Card=29047)

SQL> SELECT SUM(BLOCKS)
2 FROM USER_EXTENTS
3 WHERE SEGMENT_NAME='TEST_1'
4 group by segment_name;

SUM(BLOCKS)
-----------
256

Rule is

Level 4: The number of blocks sampled is 8 times the default number of dynamic sampling blocks. (256)

SQL> SELECT /*+ dynamic_sampling (t 4)*/ count(*)
2 FROM test.test_1 t;

COUNT(*)
----------
29047


Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=17 Card=1)
1 0 SORT (AGGREGATE)
2 1 TABLE ACCESS (FULL) OF 'TEST_1' (Cost=17 Card=29047)


The cardinality statistic is used, if it exists. If there is a predicate, dynamic sampling is done with a table hint and cardinality is not estimated.

To force cardinality estimation even for an analyzed table, you can use a further hint, dynamic_sampling_est_cdn, as in the following example:

SELECT /*+ dynamic_sampling (e 1) dynamic_sampling_est_cdn(t) */ count(*) FROM scott.emp e;

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=1 Card=1)
1 0 SORT (AGGREGATE)
2 1 INDEX (FULL SCAN) OF 'PK_EMP' (UNIQUE) (Cost=1 Card=14)

This forces cardinality estimation for emp, even if the table is analyzed. The following query does both selectivity and cardinality estimation for emp:

SELECT /*+ dynamic_sampling (e 1) dynamic_sampling_est_cdn(e) */ count(*) FROM scott.emp e WHERE sal >= 3000;

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=2 Card=1 Bytes=3)
1 0 SORT (AGGREGATE)
2 1 TABLE ACCESS (FULL) OF 'EMP' (Cost=2 Card=3 Bytes=9)

PARALLEL EXECUTION HINTS ON VIEWS

SQL> SELECT /*+ INDEX(v2.v1.emp emp_empno) FULL(v2.dept) */ *
2 FROM v2
3 WHERE deptno = 20;

no rows selected

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=5 Card=1 Bytes=39)
1 0 HASH JOIN (Cost=5 Card=1 Bytes=39)
2 1 TABLE ACCESS (BY INDEX ROWID) OF 'EMP' (Cost=2 Card=1 Bytes=26)
3 2 BITMAP CONVERSION (TO ROWIDS)
4 3 BITMAP INDEX (SINGLE VALUE) OF 'BMI_DEPTNO'
5 1 TABLE ACCESS (FULL) OF 'DEPT' (Cost=2 Card=3 Bytes=39)

PARALLEL, NOPARALLEL, PARALLEL_INDEX, and NOPARALLEL_INDEX hints on views are applied recursively to all the tables in the referenced view. Parallel execution hints in a top-level query override such hints inside a referenced view.

PARALLEL EXECUTION HINTS INSIDE VIEWS

PARALLEL, NOPARALLEL, PARALLEL_INDEX, and NOPARALLEL_INDEX hints inside views are preserved when the view is merged with the top-level query. Parallel execution hints on the view in a top-level query override such hints inside a referenced view.

PARALLEL
The PARALLEL hint must use the table alias, if an alias is specified in the query. The hint can then take two values, separated by commas after the table name. The first value specifies the degree of parallelism for the given table, and the second value specifies how the table is to be split among the Oracle Real Application Clusters instances. Specifying DEFAULT or no value signifies that the query coordinator should examine the settings of the initialization parameters to determine the default degree of parallelism.

The test database is not an OPS/RAC database; the parallel_server parameter is set to FALSE.
Here are the parameters:


parallel_adaptive_multi_user boolean FALSE
parallel_automatic_tuning boolean FALSE
parallel_execution_message_size integer 2148
parallel_instance_group string
parallel_max_servers integer 5
parallel_min_percent integer 0
parallel_min_servers integer 0
parallel_server boolean FALSE
parallel_server_instances integer 1

SELECT /*+ FULL(scott_emp) PARALLEL(scott_emp, DEFAULT,DEFAULT) */ *
FROM scott.emp scott_emp;

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=1 Card=14 Bytes=448)
1 0 TABLE ACCESS* (FULL) OF 'EMP' (Cost=1 Card=14 Bytes=448) :Q6000
1 PARALLEL_TO_SERIAL SELECT /*+ NO_EXPAND ROWID(A1) */ A1."EMPNO" ,A1."ENAME",A1."JOB",A1."MGR",A1."HI

SQL> select /*+ parallel(emp,5) */ * from emp;

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=1 Card=327 Bytes=28449)
1 0 TABLE ACCESS* (FULL) OF 'EMP' (Cost=1 Card=327 Bytes=28449 :Q8000)
1 PARALLEL_TO_SERIAL SELECT /*+ NO_EXPAND ROWID(A1) */ A1."EMPNO"
,A1."ENAME",A1."JOB",A1."MGR",A1."HI

SQL> select /*+ parallel(emp,1,5) */ * from emp;

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=1 Card=327 Bytes=28449)
1 0 TABLE ACCESS* (FULL) OF 'EMP' (Cost=1 Card=327 Bytes=28449 :Q10000)
1 PARALLEL_TO_SERIAL SELECT /*+ NO_EXPAND ROWID(A1) */ A1."EMPNO"
,A1."ENAME",A1."JOB",A1."MGR",A1."HI

WHY THIS BEHAVIOR? ORACLE METALINK DOCS say:

Note that the effect of setting a degree/instances of parallelism on an index has changed. Prior to 8i this would have had no effect. Post 8i a parallel plan will be investigated. The physical presence of multiple instances (i.e. a multi instance Oracle Parallel Server) is irrelevant. If instances is set (to an integer value > 1 or to ‘DEFAULT’) on an object then this will mean a parallel plan is considered. Remember that the optimizer works on a cost basis, so just because a parallel plan may be considered does not mean that it will be chosen.

SQL> select degree, instances from user_tables where table_name= 'EMP';

DEGREE INSTANCES
---------- ----------
1 1

It is true that there are no other instances configured.

With no hint the execution plan is
SQL> select * from emp;

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE
1 0 TABLE ACCESS (FULL) OF 'EMP'

Alter the table structure now.
alter table emp parallel(degree 5,instances 5)
*
ERROR at line 1:
ORA-12818: invalid option in PARALLEL clause

Elapsed: 00:00:00.00
SQL> ed
Wrote file afiedt.buf
1* alter table emp parallel(degree 5)
SQL> /
Table altered.

Elapsed: 00:00:00.00
SQL> alter table emp parallel(instances 5);

Table altered.

Elapsed: 00:00:00.00
SQL>

SQL> select /*+ parallel(emp,5) */ * from emp;


Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=1 Card=327 Bytes=28449)
1 0 TABLE ACCESS* (FULL) OF 'EMP' (Cost=1 Card=327 Bytes=28449 :Q12000)
1 PARALLEL_TO_SERIAL SELECT /*+ NO_EXPAND ROWID(A1) */ A1."EMPNO"
,A1."ENAME",A1."JOB",A1."MGR",A1."HI
SQL> select /*+ rule */ * from emp;

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=HINT: RULE (Cost=1 Card=327 Bytes=28449)
1 0 TABLE ACCESS* (FULL) OF 'EMP' (Cost=1 Card=327 Bytes=28449 :Q14000)
1 PARALLEL_TO_SERIAL SELECT /*+ NO_EXPAND ROWID(A1) */ A1."EMPNO"
,A1."ENAME",A1."JOB",A1."MGR",A1."HI

SQL>

NOPARALLEL

This hint had no impact on a table that is configured parallel, as the following example shows.

SQL> select degree,instances
2 from user_tables
3 where table_name='EMP';

DEGREE INSTANCES
---------- ----------
5 5

SQL> SELECT /*+ NOPARALLEL(emp) */ ename
2 FROM emp emp;
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=1 Card=327 Bytes=28449)
1 0 TABLE ACCESS* (FULL) OF 'EMP' (Cost=1 Card=327 Bytes=28449 :Q21000)
1 PARALLEL_TO_SERIAL SELECT /*+ NO_EXPAND ROWID(A1) */ A1."EMPNO"
,A1."ENAME",A1."JOB",A1."MGR",A1."HI

Performance Tuning Part III

What Optimizer Can Do?

Oracle has a four-tiered structure to give output to the queries submitted by the users.

The first structure is PARSER.

The Parser parses the SQL statement submitted by the user and validates the syntax and the rights and privileges to access the data. The Parser spits out error messages for all syntactical errors, giving the user leads on the source of the error. For security violations, the Parser may sometimes spit out a direct message on the lack of rights and privileges, or an indirect message which may confuse the user, like ‘object does not exist’, ‘object requires declaration’, etc.

Examine one SQL statement with a ‘WHERE’ clause and one with no ‘WHERE’ clause. This helps understand which part of the SQL is parsed first.

(1) The PARSER validates the FROM clause first, whether a WHERE clause is there or not.

If the table cannot be accessed by the user who has issued the SQL statement, for the simple reason that he has no access rights granted to him on that table, the error message spit out is ORA-00942: table or view does not exist. This means that no such table exists within the user's schema, and the PARSER has not checked other schemas, as there is no meaningful informative instruction to the PARSER to check other schemas.

If the table has no identifier prefixed to it, by default it is checked in the user schema who has issued the SQL statement.

The instruction to PARSER can be issued by using an identifier for the table to be selected.

SQL> select count(*) from srv;
select count(*) from srv
*
ERROR at line 1:
ORA-00942: table or view does not exist


SQL> select count(*) from sysslp.srv;

COUNT(*)
----------
1

SQL>

In case the table exists in the schema of the identifier used for the table, but the user has NOT been granted any access rights to that object, then also the same error message is spit out.
If the identifier is not properly marked, the following error is spit out.

ORA-01747: invalid user.table.column, table.column, or column specification

(2) The PARSER then validates the WHERE clause, if any.
(3) It then validates the selected columns, with their functions, if any, and passes the validated SQL on to the OPTIMIZER.

Study these examples to understand how parser works:

SQL> select ENAME,emp_no >> wrong column
2 FROM EMPL; >>>> wrong table name
FROM EMPL
*
ERROR at line 2:
ORA-00942: table or view does not exist


SQL> select ENAME-EMP_NO
2 FROM EMP;
select ENAME-EMP_NO
*
ERROR at line 1:
ORA-00904: "EMP_NO": invalid identifier

When there is a where clause:

SQL> select ename,emp_no
2 from empl
3 where enames='SMITH'; >>> wrong column
from empl
*
ERROR at line 2:
ORA-00942: table or view does not exist

1 select ename,emp_no
2 from emp
3* where enames='SMITH'
SQL> /
where enames='SMITH'
*
ERROR at line 3:
ORA-00904: "ENAMES": invalid identifier

1 select ename,emp_no
2 from emp
3* where ename='SMITH'
SQL> /
select ename,emp_no
*
ERROR at line 1:
ORA-00904: "EMP_NO": invalid identifier

1 select ename,empno
2 from emp
3* where ename='SMITH'
SQL> /

ENAME EMPNO
---------- ----------
SMITH 7369


The second structure is OPTIMIZER.

The Optimizer looks for the availability of statistics for the objects to be accessed. If they are available, it prefers CBO (Cost Based Optimization); else it chooses RBO (Rule Based Optimization). This is a broad statement on the role of the OPTIMIZER.

The third structure is RESOURCE GENERATOR.

On receiving the most efficient access path plan from the OPTIMIZER, it outputs the execution plan for the SQL submitted by the user. If the resources are busy and the Resource Generator is not able to obtain a required resource such as locks, latches or CPU time, it may sometimes spit out error messages like ‘unable to obtain the resource with NOWAIT’. Sometimes, depending upon the contention, it may hang on until it is able to acquire the resources. If there is a time out for any reason, an error message is spit out and the user may have to retry the SQL. There are ways and methods to tune these waits and contentions. But if the resource crunch / contention is due to the non-availability of hardware resources, either they are to be provided or one has to live with the low performance.

The fourth structure is SQL EXECUTION ENGINE, which produces the result to the User who submitted the query.

The SQL EXECUTION ENGINE executes the SQL and returns the output to the user. The output display is dependent on the hardware STANDARD OUTPUT DEVICES; Oracle's role ends with producing the result. Part of this output generation is tunable for performance, like using the DBMS_OUTPUT package within its limits, or the UTL_FILE package within its limits, etc.

For the sake of this paper, ‘What can the OPTIMIZER do?’ is discussed.

Depending on the initialization parameter settings, which are chosen at the time of creating the database according to the nature of the application the database is to run, the Optimizer chooses cost-based or rule-based optimization. If cost-based optimization of the SQL is to be done, the Optimizer has to be provided with statistics for selectivity, cardinality and cost; for these three different measures statistics are required. If rule-based optimization is chosen, the Optimizer does so.
But there are certain Oracle features which can be used only when the Optimizer is set to Cost Based Optimization. They are:

Partitioned tables and indexes
Index-organized tables
Reverse key indexes
Function-based indexes
SAMPLE clauses in a SELECT statement
Parallel query and parallel DML
Star transformations and star joins
Extensible optimizer
Query rewrite with materialized views
Enterprise Manager progress meter
Hash joins
Bitmap indexes and bitmap join indexes
Index skip scans

NOTE:

Oracle Corporation strongly advises the use of cost-based optimization. The rule-based optimization is available for backward compatibility with legacy applications and will be deprecated in a future release.

How to Collect Statistics for Optimizer?

Oracle has provided the ANALYZE command and the DBMS_STATS utility. Oracle recommends using the DBMS_STATS utility to collect the stats, as it has some performance gains as well as additional abilities to collect stats.

DBMS_STATS is a product of Oracle 8i. This utility is suggested by Oracle over the ANALYZE command for collecting statistics, for the following reasons.

An article on dbasupport.com reads as under:

The DBMS_STATS package is a godsend for the Oracle DBA in managing database statistics only for the cost based optimizer.
The package itself allows us to create, modify, view and delete statistics from a standard, well-defined set of package procedures. The statistics can be gathered on tables, indexes, columns, partitions and schemas, but note that it does not generate statistics for clusters.
Some of its many features are:
· Statistics can be computed or estimated from a random sample
· Can gather statistics for sub-partitions or partitions
· Whenever possible, dbms_stats routines will run via parallel query or operate serially
· Statistics can be generated to a statistics table and can be imported or exported between databases and re-loaded into the data dictionary at any time.
This allows the DBA to experiment with various statistics. It is recommended over the analyze command for CBO statistics generation because:
· analyze always runs serially
· analyze can't overwrite or delete some statistics generated by dbms_stats
· statistics generation via analyze for partitions is not optimal, namely for global partition statistics.
Even so, dbms_stats does not generate information about chained rows and the structural integrity of segments. It should also be mentioned that in 8i, for certain DDL commands (i.e. create index), statistics are automatically generated, therefore eliminating the need to generate statistics explicitly after the DDL command. Oracle will determine if statistics are automatically generated based on the expected overhead.
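A minimal sketch of gathering statistics with the utility (SCOTT.EMP is used purely for illustration; estimate_percent and cascade are tuned to taste):

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'SCOTT',
    tabname          => 'EMP',
    estimate_percent => 10,     -- sample 10% of the rows
    cascade          => TRUE);  -- gather statistics for the indexes too
END;
/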

What are the Steps involved in CBO?

When the query, after the syntactical and security check by the PARSER, is passed on to the OPTIMIZER (as has been said in the beginning of this document), as a first step the query is transformed by the QUERY TRANSFORMER, then passed on to the ESTIMATOR, which uses the collected statistics to find the cost of each query plan, and from the ESTIMATOR to the PLAN GENERATOR.

QUERY TRANSFORMER

The input to QUERY TRANSFORMER is the output of the PARSER after the syntactical and security check is done.

Oracle says:

‘The input query is represented by a set of query blocks. The query blocks are nested or interrelated to each other. The form of the query determines how the query blocks are interrelated to each other. The main objective of the query transformer is to determine if it is advantageous to change the form of the query so that it enables generation of a better query plan’.

Query Transformation Techniques Employed by QUERY TRANSFORMER are:

View Merging
Predicate Pushing
Subquery Unnesting
Query Rewrite with Materialized Views

Any combination of these transformations can be applied to a given query.

View Merging

The Parser expands each view referenced in a query into a separate query block. The query block essentially represents the view definition, and therefore the result of a view.

One option for the optimizer is to analyze the view query block separately and generate a view sub-plan. The optimizer then processes the rest of the query by using the view sub-plan in the generation of an overall query plan.

This technique usually leads to a sub-optimal query plan, because the view is optimized separately from rest of the query.

The query transformer then removes the potentially sub-optimal plan by merging the view query block into the query block that contains the view. Most types of views are merged. When a view is merged, the query block representing the view is merged into the containing query block. Generating a sub-plan is no longer necessary, because the view query block is eliminated.

Predicate Pushing

For those views that are not merged, the query transformer can push the relevant predicates from the containing query block into the view query block. This technique improves the sub-plan of the non-merged view, because the pushed-in predicates can be used either to access indexes or to act as filters.
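An illustrative sketch (emp_dept is a hypothetical view over the familiar SCOTT tables): the predicate on ename in the outer query can be pushed into the view query block, where it can make use of an index on emp.ename if one exists.

create view emp_dept as
  select e.empno, e.ename, d.dname
  from emp e, dept d
  where e.deptno = d.deptno;

select * from emp_dept where ename = 'SMITH';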

Subquery Un-nesting

A separate query block, like a view, represents a sub-query. Because a sub-query is nested within the main query or another sub-query, the plan generator is constrained in trying out different possible plans before it finds a plan with the lowest cost. For this reason, the query plan produced might not be the optimal one. The restrictions due to the nesting of sub-queries can be removed by un-nesting the sub-queries and converting them into joins. Most sub-queries are un-nested by the query transformer. For those sub-queries that are not un-nested, separate sub-plans are generated. To improve execution speed of the overall query plan, the sub-plans are ordered in an efficient manner.
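A small sketch of the idea: the IN sub-query below can be un-nested and converted into a join between emp and dept, giving the plan generator freedom to choose the join order and method.

select ename
from emp
where deptno in (select deptno from dept where loc = 'DALLAS');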

Query Rewrite with Materialized Views

A materialized view is like a query with a result that is materialized and stored in a table. When a user query is found compatible with the query associated with a materialized view, the user query can be rewritten in terms of the materialized view.

This technique improves the execution of the user query, because most of the query result has been precomputed. The query transformer looks for any materialized views that are compatible with the user query and selects one or more materialized views to rewrite the user query.

The use of materialized views to rewrite a query is cost-based. That is, the query is not rewritten if the plan generated without the materialized views has a lower cost than the plan generated with the materialized views.

After the QUERY TRANSFORMER has done its job the query is handed over to ESTIMATOR.

ESTIMATOR

The ESTIMATOR does the cost measurements. The end task of the estimator is to estimate the cost of each query. While doing so, ESTIMATOR does the following depends on three criteria.

01. Selectivity
02. Cardinality
03. Cost

SELECTIVITY

SELECTIVITY represents a fraction of rows from a ROWSET.

A ROWSET can be (1) a table, (2) a view, (3) the result of a join operator, or (4) the result of a GROUP BY operator.

It is clear that SELECTIVITY is tied to the ROWSET, which in turn is the result of a PREDICATE.

A predicate acts as a FILTER that filters a certain number of rows from a row set. Therefore, the selectivity of a predicate indicates how many rows from a row set will pass the predicate test. Selectivity lies in a VALUE RANGE from 0.0 to 1.0. A selectivity of 0.0 means that no rows will be selected from a row set, and a selectivity of 1.0 means that all rows will be selected.

The measures will be closer to accuracy if the statistics are available.

The estimator uses an INTERNAL DEFAULT VALUE for selectivity, if NO STATISTICS are available. Different internal defaults are used, depending on the predicate type.

When statistics are available, the estimator uses them to estimate selectivity. For example, for an equality predicate (col_3 = 'ABCDEF'), selectivity is set to the reciprocal of the number ‘n’ of distinct values of col_3, because the query selects rows that all contain one out of ‘n’ distinct values. If a histogram is available on the col_3 column, then the estimator uses it instead of the number of distinct values. The histogram captures the distribution of different values in a column, so it yields better selectivity estimates. Having histograms on columns that contain skewed data (in other words, values with large variations in number of duplicates) greatly helps the CBO generate good selectivity estimates.

Access the data dictionary views DBA_TAB_HISTOGRAMS, DBA_TAB_COLUMNS and DBA_TAB_COL_STATISTICS for the data on each column's values, to find out the selectivity aspects of the data. Also access the LAST_ANALYZED date so that the reliability of the statistics can be assessed.
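For example (SCOTT.EMP used for illustration):

select column_name,
       num_distinct,
       density,
       num_buckets,
       last_analyzed
from dba_tab_col_statistics
where owner = 'SCOTT'
  and table_name = 'EMP';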

CARDINALITY

Cardinality represents the number of rows in a row set. Here, the row set can be a base table, a view, or the result of a join or GROUP BY operator.

The various cardinalities that are required by the estimator are as under:

BASE CARDINALITY

This represents the number of rows in the base table. This data is collected when a table is analyzed. This is represented by NUM_ROWS in DBA_TABLES. The value of LAST_ANALYZED column helps understand the reliability of the data.

If the table is not analyzed, ESTIMATOR uses the number of extents occupied by the table to estimate the base cardinality.

Example:

This query gives the base cardinality if the table/s is/are analyzed

select table_name,
num_rows base_cardinality,
last_analyzed
from dba_tables

If the table is not analyzed, the queries below help you estimate it from the table's extents.
To know the Oracle block size:

select name,value
from v$parameter
where name ='db_block_size'

To find the blocks for the segment/object

select owner,
segment_name,
blocks
from dba_extents
where segment_name='<table_name>'
and owner='<owner_name>'

EFFECTIVE CARDINALITY

This represents the number of rows selected by the query. The effective cardinality depends on the predicates specified on different columns of a base table, with each predicate acting as a successive filter on the rows of the base table. The effective cardinality is computed as the product of the base cardinality and combined selectivity of all predicates specified on a table. When there is no predicate on a table, its effective cardinality equals its base cardinality.
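As a small worked example (numbers assumed): with a base cardinality of 1000 rows and two predicates of selectivity 0.1 and 0.5, the effective cardinality is 1000 * 0.1 * 0.5 = 50 rows.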

JOIN CARDINALITY

This represents the number of rows produced when two row sets are joined together. A join is a Cartesian product of two row sets, with the join predicate applied as a filter to the result. Therefore, the join cardinality is the product of the cardinalities of two row sets, multiplied by the selectivity of the join predicate.
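Again as an assumed-numbers illustration: joining row sets of 1000 and 50 rows with a join predicate selectivity of 0.02 gives a join cardinality of 1000 * 50 * 0.02 = 1000 rows.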

DISTINCT CARDINALITY

This represents the number of distinct values in a column of a row set. The distinct cardinality of a row set is based on the data in the column.

GROUP CARDINALITY

This represents the number of rows produced from a row set after the GROUP BY operator is applied. The effect of the GROUP BY operator is to decrease the number of rows in a row set. The group cardinality depends on the distinct cardinality of each of the grouping columns and on the number of rows in the row set.

COST

Cost represents the units of resources used. These resources are identified as I/O, CPU Usage and Memory Usage. The cost of a query plan is the number of work units that are to be incurred when the query is executed and its result produced.

The cost of a full table scan or a fast full index scan depends upon the number of blocks to be scanned and the multi-block read count value.

The access path represents the number of units of work required to get data from a base table.

Access paths are ways in which data is retrieved from the database. Any row in any table can be located and retrieved by one of the following methods:

01. Full Table Scan
02. Index Scan
03. ROWID Scan

FULL TABLE SCAN

When Oracle performs a full table scan, the blocks are read sequentially. Because the blocks are adjacent, I/O calls larger than a single block can be used to speed up the process. The size of the read calls ranges from one block to the number of blocks indicated by the initialization parameter DB_FILE_MULTIBLOCK_READ_COUNT. Using multi-block reads means a full table scan can be performed very efficiently. Each block is read only once.

SQL> select * from emp
2 where upper(ename)='ADAMS';

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=2 Card=1 Bytes=37)
1 0 TABLE ACCESS (FULL) OF 'EMP' (Cost=2 Card=1 Bytes=37)

This data would be better accessed using an index on that column.

SQL> create index ind_ename on emp(ename);

Index created.

SQL> select * from emp
2 where upper(ename)='ADAMS';

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=2 Card=1 Bytes=37)
1 0 TABLE ACCESS (FULL) OF 'EMP' (Cost=2 Card=1 Bytes=37)

SQL>

The optimizer is unable to use the index because there is a function on the indexed column.
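A sketch of one remedy is a function-based index (in 8i/9i this requires QUERY_REWRITE_ENABLED = TRUE and the QUERY REWRITE privilege for the index owner):

create index ind_upper_ename on emp (upper(ename));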

This means that there are some factors that influence the optimizer to choose FULL TABLE SCANS.

WHAT FACTORS INFLUENCE THE OPTIMIZER TO CHOOSE FULL TABLE SCAN?

(1) If the query is unable to use any existing indexes

SQL> select count(*)
2 from test_1;

COUNT(*)
----------
29723

SQL> desc test_1
Name Null? Type
----------------------------------------- -------- ----------------------------
OWNER NOT NULL VARCHAR2(30)
OBJECT_TYPE VARCHAR2(18)
OBJECT_NAME NOT NULL VARCHAR2(30)
OBJ_NUM NOT NULL NUMBER

1 select index_name
2 from dba_indexes
3* where table_name='TEST_1'
SQL> /

INDEX_NAME
------------------------------
PK_TEST_1
BMI_OWNER
IND_OBJ_NAME

SQL> select *
2 from test_1
3 where upper(owner)='TEST'; >>> the function on the indexed column is the cause of the full table scan even though there are usable indexes


Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=17 Card=290 Bytes=12180)
1 0 TABLE ACCESS (FULL) OF 'TEST_1' (Cost=17 Card=290 Bytes=12180)
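
A function-based index on UPPER(OWNER) would let the optimizer avoid this full table scan. A minimal sketch (the index name is illustrative; on older releases function-based indexes also require the cost-based optimizer, fresh statistics, and QUERY_REWRITE_ENABLED=TRUE):

SQL> create index fbi_upper_owner on test_1(upper(owner));

Index created.

With statistics gathered on TEST_1, the same query should now show an INDEX RANGE SCAN of FBI_UPPER_OWNER in place of the TABLE ACCESS (FULL).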

(2) If a table contains fewer than DB_FILE_MULTIBLOCK_READ_COUNT blocks under the high water mark, which can be read in a single I/O call, then a full table scan might be cheaper than an index range scan, regardless of the fraction of the table being accessed or the indexes present.


SQL> select count(*)
2 from test_1
3 ;

COUNT(*)
----------
29723


Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE
1 0 SORT (AGGREGATE)
2 1 TABLE ACCESS (FULL) OF 'TEST_1'

(3) If the table has not been analyzed since it was created, and if it has fewer than DB_FILE_MULTIBLOCK_READ_COUNT blocks under the high water mark, the optimizer assumes the table is small and uses a full table scan. Look at the LAST_ANALYZED and BLOCKS columns in the DBA_TABLES view to see what the statistics reflect.

SQL> select count(*)
2 from test_1;

COUNT(*)
----------
29723

Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE
1 0 SORT (AGGREGATE)
2 1 TABLE ACCESS (FULL) OF 'TEST_1'

select segment_name,count(block_id)
from dba_extents
where segment_name='TEST_1'
group by segment_name

TEST_1 17

(Note that count(block_id) counts the extents allocated to the segment; sum(blocks) would give the allocated block count, which is the figure to compare against DB_FILE_MULTIBLOCK_READ_COUNT.)

SQL> select nvl(blocks,0)
2 from dba_tables
3 where table_name='TEST_1'
4 /

NVL(BLOCKS,0)
-------------
0

1 row selected.

SQL> sho parameter DB_FILE_MULTIBLOCK_READ_COUNT

NAME TYPE VALUE
------------------------------------ ----------- ---------------
db_file_multiblock_read_count integer 32
SQL>
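
To confirm whether the table has been analyzed at all, LAST_ANALYZED can be checked directly (a minimal sketch; for an unanalyzed table the column is NULL):

SQL> select table_name, last_analyzed, blocks
2 from dba_tables
3 where table_name='TEST_1';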


(4) A high degree of parallelism for a table skews the optimizer toward full table scans over range scans. Examine the DEGREE column in ALL_TABLES for the table to determine the degree of parallelism.


SQL> alter table test_1 parallel(degree 5);

Table altered.

SQL> alter table test_1 parallel(instances 5);

Table altered.

select count(*) from test_1
Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=1 Card=1)
1 0 SORT (AGGREGATE)
2 1 SORT* (AGGREGATE): Q34000
3 2 TABLE ACCESS* (FULL) OF 'TEST_1' (Cost=1 Card=19930) :Q34000

2 PARALLEL_TO_SERIAL SELECT /*+ PIV_SSF */ SYS_OP_MSR(COUNT(*)) FROM (SELECT /*+ NO_EXPAND ROWID(A2)

3 PARALLEL_COMBINED_WITH_PARENT

SQL>
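
To restore serial execution after such a test, reset the parallel settings (a minimal sketch):

SQL> alter table test_1 noparallel;

Table altered.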


To enable the effective use of indexes, the Oracle documentation lays down a few principles.

The optimizer uses the following criteria when determining which index to use (per the Oracle docs):

Number of rows in the index (cardinality).

Number of distinct keys. These define the selectivity of the index.

Level or height of the index. This indicates how deep the index must be probed to reach the required data.

Number of leaf blocks in the index. This is the number of I/Os needed to find the desired rows of data.

Clustering factor (CF). This measures how closely the physical order of the rows in the table matches the order of the index entries. The higher the CF, the less likely the optimizer is to select this index.

Average leaf blocks for each key (ALFBKEY). Average number of leaf blocks in which each distinct value in the index appears, rounded to the nearest integer. For indexes that enforce UNIQUE and PRIMARY KEY constraints, this value is always one.

These stats can be viewed by issuing the following statement:

select index_name,
num_rows,
distinct_keys,
leaf_blocks,
clustering_factor ,
blevel,
avg_leaf_blocks_per_key
from dba_indexes
where owner = '<owner_of_the_index>'
order by index_name;

These stats are available only when the objects have been analyzed; that is a basic requirement. The gathering of stats for objects/schema/database has already been discussed earlier.
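
As a minimal reminder of the syntax (CASCADE=>TRUE also gathers the statistics of the table's indexes):

SQL> exec dbms_stats.gather_table_stats(ownname => user, tabname => 'TEST_1', cascade => true);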

To determine whether the right index has been chosen, examine the column-level statistics:

select column_name,
num_distinct,
num_nulls,
num_buckets,
density
from dba_tab_col_statistics
where table_name = '<TABLE_NAME TO BE ACCESSED>'
order by column_name;

If multiple tables are to be accessed because more than one table is joined in the SQL statement, either issue the query once for each table or replace the equality predicate with an IN list:

where table_name IN ('<TABLE_NAME TO BE ACCESSED>'
,'<TABLE_NAME TO BE ACCESSED>'
, ...
)
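
For instance, for the EMP and TEST_1 tables used in this section (a minimal sketch; substitute the tables referenced in the join):

select table_name,
column_name,
num_distinct,
num_nulls,
num_buckets,
density
from dba_tab_col_statistics
where table_name IN ('EMP', 'TEST_1')
order by table_name, column_name;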

