What is EM Express?
Oracle Enterprise Manager Database Express, or simply EM Express, is embedded inside the database and is auto-configured at the time of installation. With only a 20 MB disk footprint, it consumes no resources when it is not invoked. Oracle Enterprise Manager Database Express can manage both single-instance and Oracle Real Application Clusters (Oracle RAC) databases. The tool also has built-in support for container databases (CDBs). Along with in-depth support for performance management features, Oracle Enterprise Manager Database Express can be used for configuration management, storage management, and security management.
Question: Does each pluggable database (PDB) have its own EM Express? How do you make sure EM Express is not installed with every 12c database?
Yes. You can set up EM Express for each pluggable database, as sketched below. EM Express is installed by default; if you do not want to use it, simply do not set the XDB HTTP/HTTPS port. Read the introduction to EM Express.
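As an illustration only, a minimal sketch of enabling EM Express inside one PDB; the PDB name (PDB1) and port (5502) are assumptions, and the port must be free on the host:

-- switch into the PDB and publish an HTTPS port for its EM Express
ALTER SESSION SET CONTAINER = PDB1;
EXEC DBMS_XDB_CONFIG.SETHTTPSPORT(5502);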
Question: How do you use EM Express to monitor pluggable databases?
EM Express can be invoked at the root container or inside a PDB. So either a container database administrator (CDBA) or a pluggable database administrator (PDBA) can use EM Express to monitor and manage a pluggable database. Read the introduction to EM Express.
Question: Do you use EM Express and EM cloud control at same time? What's the difference?
You can use both EM Express and EM Cloud Control on the same database. EM Express is built inside the database while EM Cloud Control uses an agent to monitor the database. Read the documentation.
Question: In order to use EM Express functionality in Oracle Database 12c, do you need to have the database packs installed?
Although the basic administration features (configuration, security, and storage management) are included in EM Express, the performance management features, such as Performance Hub and SQL Tuning Advisor, are licensed separately through Oracle Diagnostics Pack and Oracle Tuning Pack.
Question: Is EM Express available for Oracle Database Standard Edition?
Yes. EM Express is available for Oracle Database 12c Standard Edition. Read more.
Question: Can you do metering and chargeback in EM Express?
No. You cannot do metering and chargeback in EM Express. Metering and chargeback functionality is built into Oracle Cloud Management Pack for Oracle Database and can be used when delivering database as a service (DBaaS). Read the Delivering Database as a Service whitepaper for more details.
Question: Will EM Express be able to manage Data Guard?
No. There are no Data Guard management capabilities in EM Express.
Question: Can we create reports using metric extensions in EM Express?
No. You cannot create reports using metric extensions in EM Express.
Question: Can we use our existing Oracle Enterprise Manager 11g to monitor and manage Oracle Database 12c?
No. You cannot use Oracle Enterprise Manager 11g to manage Oracle Database 12c. In order to take advantage of all the new capabilities you will need to upgrade to Oracle Enterprise Manager 12c (Release 3) and use database plug-in 12.1.0.4.
Question: Which previous database versions are compatible with Oracle Enterprise Manager 12c?
Please check the support matrix in My Oracle Support as the information is updated regularly.
Question: Which versions of Oracle Database can be used with the Consolidated Database Replay feature?
You can use the Consolidated Database Replay on Oracle Database 11.2.0.2.0 and above. Please see My Oracle Support Note 1453789.1.
Question: We are using both Oracle Database 10g and 11g, we are trying to consolidate onto Oracle Database 12c, what is your recommendation?
We recommend using Oracle Real Application Testing to thoroughly test your consolidation. Learn more about using SQL Performance Analyzer to do unit testing of your SQL statements for response time, and consider using Consolidated Database Replay to test your consolidation on Oracle Database 12c for throughput and scalability. You can capture workload on Oracle Database 10g and 11g and use Consolidated Database Replay on Oracle Database 12c. For more information on upgrading, be sure to visit the Oracle Database 12c upgrade page on the Oracle Technology Network.
Question: Do we still need the management server and agent running on every box if we upgrade to Oracle Database 12c?
Yes. The architecture used by Oracle Enterprise Manager 12c remains the same and is independent of your upgrade to Oracle Database 12c.
Question: When using Oracle Enterprise Manager 12c (not EM Express), is a separate agent still needed to monitor and collect stats on the instances and hosts?
Yes.
Question: Are there Oracle Enterprise Manager 12c agents available for Oracle Solaris?
Yes. Read more.
Question: Is Enhanced Real-time ADDM only for Oracle Database 12c or can you use it for previous versions?
Enhanced Real-Time ADDM is only available with Oracle Database 12c. Read the data sheet.
Question: Is ADDM integrated into the database or is it a database option?
For ADDM you need to license Oracle Diagnostics Pack.
Question: Several of our applications deal with spikes of activity as a norm, for example, we would get a file every 15 minutes and then process it. So spikes in our systems are normal. How does the new Enhanced Real-time ADDM work to spot the performance spikes?
Enhanced Real-Time ADDM uses a built-in algorithm to detect performance spikes. The current state of the database is evaluated against 9 different criteria (CPU, Memory, I/O, etc.) to determine whether there is a performance problem with your database.
Question: What's the difference between DBAs, PDBAs, and CDBAs?
The database administrator in the multitenant world has two new roles. The Container Database Administrator (CDBA) is responsible for the entire database which holds or contains pluggable databases. The Pluggable Database Administrator (PDBA) is responsible for individual pluggable databases servicing a specific application. For more information read the Oracle Multitenant whitepaper.
Question: To monitor batch jobs, where multiple sessions could be created, how is the batch job identified for monitoring?
Real-Time Database Operations monitoring doesn't support multiple sessions yet. However, if your batch job uses a single session, you can set this job for monitoring via bracketing (i.e., begin operation and end operation), as sketched below.
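For illustration, a minimal sketch of such bracketing with the DBMS_SQL_MONITOR composite database operation API; the operation name nightly_batch is a made-up placeholder:

DECLARE
  l_dbop_eid NUMBER;
BEGIN
  -- open the bracket: work done by this session now belongs to the operation
  l_dbop_eid := DBMS_SQL_MONITOR.BEGIN_OPERATION(dbop_name => 'nightly_batch');
  -- ... run the single-session batch workload here ...
  -- close the bracket
  DBMS_SQL_MONITOR.END_OPERATION(dbop_name => 'nightly_batch',
                                 dbop_eid  => l_dbop_eid);
END;
/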
Question: For the new multitenant architecture, how are resources (i.e., CPU, I/O, memory) handled among pluggable databases? Will there be memory set aside or capped for each PDB? How do you isolate PDBs from one another?
You can allocate CPU resources across pluggable databases via resource management plans, as sketched below. Memory management across PDBs is not yet supported. Read the documentation.
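For illustration, a minimal sketch of a CDB resource plan that favors one PDB; the plan name and PDB name are assumptions:

BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN(
    plan    => 'my_cdb_plan',
    comment => 'CPU shares across PDBs');
  -- PDB1 gets 3 shares; PDBs without a directive default to 1 share
  DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE(
    plan               => 'my_cdb_plan',
    pluggable_database => 'pdb1',
    shares             => 3,
    utilization_limit  => 100);
  DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/
-- activate the plan from the root container
ALTER SYSTEM SET resource_manager_plan = 'my_cdb_plan';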
Question: For pluggable databases (PDBs), do you apply database patches to specific PDBs or do you apply the patches to the entire container database?
Database patches should be applied to the entire container database. Read the Oracle Multitenant whitepaper and watch the Pluggable Lifecycle Management demo for more information.
Oracle Database 12c: Enterprise Manager Database Express
Posted on June 27, 2013
Oracle Database 12c says goodbye to a tool that has been around since 10gR1: the Database Console. OC4J for 10g and WebLogic for 11g both had a non-negligible overhead on the systems, especially with many configured instances.
In many cases I've decided to switch to Grid/Cloud Control, mainly to avoid too many DB Consoles; in other cases I've simply disabled web management altogether.
The new 12c brings a new tool called Database Express (indeed, very similar to its predecessors).
Where’s my emca/emctl?
DB Express runs entirely as PL/SQL code within the XDB schema. It's XDB that leverages its features to enable a web-based console, and it's embedded by default in the database.
To enable it, first check that the dispatchers parameter is configured for XDB:
SQL> show parameter dispatchers

NAME         TYPE    VALUE
-----------  ------  -----------------------------------
dispatchers  string  (PROTOCOL=TCP) (SERVICE=CLASSICXDB)
Then set the HTTPS port for EM Express:

SQL> exec dbms_xdb_config.sethttpsport(5502);

PL/SQL procedure successfully completed.
You can verify the configured port afterwards:

SQL> select dbms_xdb_config.gethttpsport() from dual;

DBMS_XDB_CONFIG.GETHTTPSPORT()
------------------------------
                          5502
EM Express is then reachable at:

https://yourserver:5502/em
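If you prefer plain HTTP to HTTPS (on a test box, say), the analogous call should work; the port 8080 here is just an assumption:

SQL> exec dbms_xdb_config.sethttpport(8080);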
Lower footprint, fewer features
On one side DB Express is thin (though not always fast on my laptop…); on the other, it clearly has fewer features compared to the DB Console.
It's not clear to me whether Oracle wants to make it more powerful in future releases or whether this is a move to push everybody toward something else (SQL Developer or EM12c Cloud Control). However, the DBA management plugin of the current release of SQL Developer is fully compatible with 12c, including the ability to manage pluggable databases.
The same goes for EM 12c Cloud Control, so you have plenty of choices for managing your 12c databases from graphical interfaces.
What is ASH
Identify the root cause of performance issues at a precise point in the past—even when the sessions have disconnected.
In my previous two articles on the topic of performance tuning (bit.ly/SEzR6t and bit.ly/U7U5IJ), you learned how to identify the cause of performance bottlenecks occurring in real time and view the summary of what happened in the past. Although these processes provide information to help resolve performance issues in many cases, there are times when you would like to know precisely what happened at a specific point in the past. In this article, you will learn how to identify the precise sequence of events in the past in a historical—not current—session.
To run the examples in this article, execute the setup.sql script. The script creates a user called ARUP and creates three tables: TEST1, TEST2, and TEST3. In the very rare event that a user called ARUP already exists, modify the script to create another user with a different name and replace all occurrences of ARUP with that name. The sample database created by setup.sql will take up less than 200 MB of space in your database.
After running the setup.sql script, open four SQL*Plus sessions, connected as ARUP. In three of these sessions, execute the test1.sql, test2.sql, and upd1.sql scripts, respectively. Here is an example of one session running upd1.sql on a UNIX-based system:

# sqlplus arup/arup
SQL> @upd1.sql
In the fourth session, execute upd1.sql again. This last session will hang, because the third session has already executed upd1.sql, which updates one row of the TEST1 table but does not commit and therefore keeps the row locked. The fourth session executes the same script, which attempts to update the same row. But the row is locked by the third session, so the fourth session waits until the lock is gone and the row is available for locking.
The test1.sql and test2.sql scripts are designed to consume significant amounts of CPU, so the first two sessions will be slow due to CPU contention. These sessions will run for a very long time.
After all the sessions have waited for about 10 minutes, stop the execution in sessions 1 and 2 by pressing Control-C to exit each SQL*Plus session. In session 3, enter commit and press Enter. This will release the lock on the row, and you will see session 4 show "1 row updated." Exit SQL*Plus sessions 3 and 4. Now all the sessions connected as ARUP are closed—and history.
Past Sessions
In the setup, I showed you how to simulate performance issues in three of the four sessions. If these were typical business applications, the applications would have showed signs of slowdown, inviting the ire of the respective application owners and users and bringing unwanted attention to you, the DBA. Now that you have that attention, what should you do next?
In my previous two performance tuning articles, you learned how to find the clues to the sources of performance issues in sessions. Unfortunately, those techniques will be of no help in this case. The V$SESSION view shows the reason for a performance issue in a session, but the session that caused the issue in this article is no longer present. The other important view, V$SESSION_EVENT, shows all the wait events waited for by the session, but, as with the V$SESSION view, it will show the data only if the session is still active in the database. The view V$SESSTAT shows the resource usage for a session and can offer clues to which sessions have consumed what amount of a resource such as redo or memory. However, because all the sessions that experienced performance issues are historical, looking into these views will not help resolve the issues. You need to determine the specific events that occurred at a point in time in historical sessions. If you had enabled tracing, the trace files would have shown the events, but you did not expect these issues in advance and didn't enable tracing, because tracing would have degraded performance. So how do you now identify the cause of the performance issues suffered by these old sessions?
Active Session History
Fortunately, looking at performance issues in old sessions is easy with an Oracle Database feature called Active Session History. Note that the use of Active Session History requires Oracle Diagnostics Pack, a licensed option of Oracle Database available since Oracle Database 10g Release 1.
Every second, Active Session History polls the database to identify the active sessions and dumps relevant information about each of them—such as the user ID, state, the machine it is connected from, and the SQL it is executing—into a special area in the system global area (SGA) of the database instance called the ASH buffer. So even though a session is no longer present in the database instance, the ASH buffer has captured its information. In addition, because Active Session History records activities every second, it can show a second-by-second snapshot of the activities in a session. In other words, Active Session History can show a movie of the activities of the session instead of a single picture. (Note that when the ASH buffer is filled, the data is written to disk and the snapshots are taken every 10 seconds rather than every second.)
You can examine the contents of the ASH buffer in a view named V$ACTIVE_SESSION_HISTORY. Here are a few of the important columns of the V$ACTIVE_SESSION_HISTORY view:
SAMPLE_ID. The unique identifier of the Active Session History record.
SAMPLE_TIME. When Active Session History captured this data on all active sessions.
USER_ID. The numerical user ID (not the username) of the database user who created this session.
SESSION_ID. The session ID (SID) of the session.
SESSION_STATE. The state the session was in when Active Session History took the sample. It shows WAITING if the session was waiting for something; otherwise, it shows ON CPU to indicate that the session was doing productive work.
EVENT. If the session was in a WAITING state (in the SESSION_STATE column), this column will show the wait event the session was waiting for.
TIME_WAITED. If the session was in a WAITING state, this column will show how long it had been waiting when Active Session History took the sample.
WAIT_TIME. If the session is doing productive work—not in a WAITING state—this column will show how long the session waited for the last wait event.
SQL_ID. The ID of the SQL statement the session was executing at the time the sample was taken.
SQL_CHILD_NUMBER. The child number of the cursor. If this is the only version of the cursor, the child number will be 0.
Knowing the meaning of the V$ACTIVE_SESSION_HISTORY columns, you can identify what it was that historical sessions were waiting for. To begin the identification, you need to pose two questions to the application owners or users executing the SQL statements that experienced slow performance:
- Which username was used to connect to the database?
- What was the time interval (start and end times) of the period when the performance issues occurred?
Because you ran the setup script as the user ARUP, the answer to the first question is ARUP. Next, you need to find out the USER_ID of the ARUP user by issuing the following SQL:

select user_id
from dba_users
where username = 'ARUP';

   USER_ID
----------
        92
Now suppose the user told you that the performance issues occurred between 4:55 p.m. and 5:05 p.m. on September 29. With this information, you can query the V$ACTIVE_SESSION_HISTORY view to find out the activities of the ARUP sessions (with USER_ID 92) within that period, as shown in Listing 1. (The output has been truncated to fit the space available.) Because Active Session History collects information on all active sessions, you need to order the output by SID, which identifies a session (shown under SESSION_ID), and then by the collection time (shown under SAMPLE_TIME).
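Listing 1 itself is not reproduced here; a query along the following lines would produce output of that shape—treat it as a sketch, not the article's exact listing (the window is widened past 5:05 p.m. so the final samples of the lock wait are included):

select session_id, sample_time, session_state, event,
       wait_time, time_waited, sql_id, sql_child_number
from v$active_session_history
where user_id = 92
and sample_time between
    to_date('29-SEP-12 04.55.00 PM','dd-MON-yy hh:mi:ss PM')
and to_date('29-SEP-12 05.20.00 PM','dd-MON-yy hh:mi:ss PM')
order by session_id, sample_time;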
Let's examine the first row of the output. It shows that the session identified by SESSION_ID 39 was waiting for the "enq: TX - row lock contention" event on 29-SEP-12 at 04.55.02.379 PM. Because the session was in a WAITING state, the value of the WAIT_TIME column is irrelevant, so it shows up as 0. Because the session was still in a WAITING state when Active Session History captured the state, the TIME_WAITED column shows 0. When the session finally got the lock, it could do what it had to do and stopped waiting. At that point, the total time the session had waited was updated in Active Session History, shown in the first boldface line in the Listing 1 output (sample time 29-SEP-12 05.16.52.078): 1,310,761,160 microseconds (shown in the TIME_WAITED column), or about 22 minutes. This is such an important property of Active Session History that I repeat: seeing 0 in the WAIT_TIME column does not mean that the session didn't wait at all. It simply means that the session was still waiting for some event at that sample, which is why the WAIT_TIME and TIME_WAITED column values showed 0. You should look at the last occurrence of the wait event (the EVENT column value) for that session in Active Session History to determine what the total wait time really was.
When you explain to your user that the cause of the delay was an unavailable lock during the period 04.55.02.379 PM to 05.16.52.078 PM, that person might ask you what SQL statement the session was executing at that time. That's extremely easy to get from the Listing 1 output: the session with SESSION_ID 39 was executing the SQL statement with SQL_ID fx60htyzmz6wv and child number (CH#) 0.
You can get that SQL statement text with this query:

select sql_text
from v$sql
where sql_id = 'fx60htyzmz6wv';

SQL_TEXT
---------------------------------------------------
update test1 set status = 'D' where object_id = :b1
The SQL statement includes an UPDATE that had to lock the row. Because the row had already been locked by another session, it was not possible for this session (SESSION_ID 39) to succeed and it therefore had to wait. The next logical questions from the user would perhaps be which specific row on which table the SQL statement and the session were waiting for and which session had locked the row. These are also very easy to find with the query shown in Listing 2. The BLOCKING_SESSION column shows the session that was holding the lock: the session with SESSION_ID 43. The Listing 2 output also shows the object ID (CURRENT_OBJ#) of the table whose row was locked and other information necessary to get the row information. With the data in the Listing 2 output, you can get the ROWID of the locked row by using the query shown in Listing 3. The row with ROWID AAAdvSAAHAAABGPAAw had been locked by the session with SESSION_ID 43 and was being requested to be updated by the session with SESSION_ID 39. You now have the information on why the session (with SESSION_ID 39) was slow—it was waiting for a lock for 22 minutes—what SQL statement it was executing, and what specific row it was looking to lock. You have successfully uncovered the root cause of the performance issue in the session with SESSION_ID 39.
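Listing 2 is likewise not reproduced here; a sketch of the kind of query that yields its output—the blocking session plus the locked row's coordinates—is:

select sample_time, blocking_session, current_obj#,
       current_file#, current_block#, current_row#
from v$active_session_history
where user_id = 92
and session_id = 39
and event = 'enq: TX - row lock contention'
order by sample_time;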
Code Listing 3: Getting specific row information

select owner||'.'||object_name||':'||nvl(subobject_name,'-') obj_name,
       dbms_rowid.rowid_create (
          1,
          o.data_object_id,
          row_wait_file#,
          row_wait_block#,
          row_wait_row#
       ) row_id
from v$session s, dba_objects o
where sid = &sid
and o.data_object_id = s.row_wait_obj#;

OBJ_NAME       ROW_ID
-------------  ------------------
ARUP.TEST1:-   AAAdvSAAHAAABGPAAw
Resource Contention
After uncovering the root cause of the slowness in session 39, you now turn your attention to session 44. Revisiting the output in Listing 1, you can see that session 44 switched between waiting (shown under SESSION_STATE as WAITING) and doing productive work (shown as ON CPU). Note the very first line in the output for the session with SESSION_ID 44 and SAMPLE_TIME 29-SEP-12 04.55.34.419 PM. The SESSION_STATE column shows WAITING, which means that the session was waiting at that time. The EVENT and TIME_WAITED columns show "resmgr:cpu quantum" and "109984," respectively. This means that the session had already waited for 109,984 microseconds, or about 0.11 seconds, at that time for the "resmgr:cpu quantum" event. The next line in Listing 1, sampled about a second later, shows the SESSION_STATE column value as ON CPU, which means that the session was doing productive work at that time—not waiting. You need to know why the session was intermittently waiting for this wait event and therefore slowing down.
The "resmgr:cpu quantum" event is due to Oracle Database's Database Resource Management feature. Database Resource Management acts as a resource governor: it limits CPU consumption of individual sessions when the total CPU demand from all sessions goes up to more than 100 percent, and it enables more-important sessions to get the CPU they need. Because the output shows the session waiting, you can conclude that the CPU consumption by the session with SESSION_ID 44 was high enough at that time for Database Resource Management to limit its CPU usage. Well, the application owner counters, this application is very important and the resource usage should not have been constrained.
In that case, you may suspect that the session was under a consumer group that has a more restrictive CPU allocation than expected. So your next stop is to find out what consumer group was active for the session at that time—not now. Fortunately, the process for finding this information is straightforward. Active Session History records the consumer group that was active for a session at the time of sampling and displays that information in the CONSUMER_GROUP_ID column in the V$ACTIVE_SESSION_HISTORY view. You can see that information by using the query shown in Listing 4.
Code Listing 4: Listing consumer groups

select sample_time, session_state, event, consumer_group_id
from v$active_session_history
where user_id = 92
and sample_time between
    to_date('29-SEP-12 04.55.02 PM','dd-MON-yy hh:mi:ss PM')
and to_date('29-SEP-12 05.05.02 PM','dd-MON-yy hh:mi:ss PM')
and session_id = 44
order by 1;
SAMPLE_TIME                SESSION_STATE  EVENT               CONSUMER_GROUP_ID
-------------------------  -------------  ------------------  -----------------
29-SEP-12 04.55.34.419 PM  WAITING        resmgr:cpu quantum              12166
29-SEP-12 04.55.35.419 PM  ON CPU                                         12166
29-SEP-12 04.55.36.419 PM  WAITING        resmgr:cpu quantum              12166
29-SEP-12 04.55.37.419 PM  ON CPU                                         12166
29-SEP-12 04.55.38.419 PM  WAITING        resmgr:cpu quantum              12166
29-SEP-12 04.55.39.419 PM  WAITING        resmgr:cpu quantum              12166
29-SEP-12 04.55.40.419 PM  ON CPU                                         12166
… output truncated …
29-SEP-12 04.55.37.419 PM  ON CPU                                         12162
29-SEP-12 04.55.38.419 PM  ON CPU                                         12166
29-SEP-12 04.55.39.419 PM  ON CPU                                         12162
29-SEP-12 04.55.40.419 PM  ON CPU                                         12162
Because the session may have been under different consumer groups, it's wise to select the consumer groups for all of the sampled data in Active Session History for that session, rather than just one sample. In this case, the session was under consumer group 12166 until it changed to 12162 on 29-SEP-12 at 04.55.37.419 PM. To find out the names of the consumer groups, use the following query:
select id, name
from v$rsrc_consumer_group
where id in (12166, 12162);

    ID  NAME
------  ------------
 12166  OTHER_GROUPS
 12162  APP_GROUP
From the output of Listing 4, you can see that from 04.55.34.419 PM, the session was under consumer_group_id 12166, which is OTHER_GROUPS (as shown in the query against v$rsrc_consumer_group above). The CONSUMER_GROUP_ID changed from 12166 to 12162 at 04.55.37.419 PM. This could have happened due to one of the three most likely reasons: a DBA manually activated a different resource plan, a different plan was activated automatically by a scheduling mechanism, or a DBA changed the consumer group of the session from OTHER_GROUPS to APP_GROUP (also shown in the query against v$rsrc_consumer_group above) online. Whatever the reason, with the APP_GROUP consumer group in effect, the session was doing more-productive work (as shown by the SESSION_STATE value ON CPU) than waiting with the “resmgr:cpu quantum” event. This observation may lead to many conclusions, but the most obvious is perhaps that the APP_GROUP consumer group is less restrictive in terms of CPU allocation than OTHER_GROUPS. In that case, you should examine why the OTHER_GROUPS consumer group was activated earlier and, perhaps more importantly, whether this restriction was necessary or just a mistake. In any case, you have now found the root cause of the wait.
The next obvious question is why session 44 consumed so much CPU that it had to be constrained by Database Resource Management. The answer lies in the SQL statement that session 44 was executing at that time (not now). The SQL_ID column in the output of Listing 1 was fngb4y81xr57x. You can get the text of that SQL statement with the following query:
select sql_text
from v$sql
where sql_id = 'fngb4y81xr57x';

SQL_TEXT
-----------------------------------------------------------------------
SELECT MAX(TEST1.OWNER) FROM TEST1, TEST2, TEST2, TEST2, TEST2, TEST2,
TEST2, TEST2, TEST2, TEST2, TEST2, TEST2, TEST2, TEST2, TEST2, TEST2,
TEST2, TEST2, TEST2, TEST2, TEST2, TEST2, TEST2, TEST2, TEST2
You can immediately see from the output that the query was performing a Cartesian join (joining all rows of a table with all rows of another without a join condition) on a TEST2 table several times. Even if the TEST2 table has a relatively small number of rows, the Cartesian join will produce a lot of logical I/Os. Because logical I/O requires CPU cycles to execute, it’s hardly surprising that the session consumed so much CPU that Database Resource Management limited it. To resolve this issue, you will need to rewrite the query to eliminate or reduce Cartesian joins.
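As a side note on the rewrite: in this contrived test the repeated TEST2 references add no columns to the result and only multiply rows, so (assuming TEST2 is non-empty) the same answer comes from TEST1 alone—a sketch for this specific statement, not a general transformation:

select max(owner) from test1;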
More Uses
In the previous sections, you saw how to find issues that occurred at a specific point in the past in Active Session History. Hopefully, the content gave you an idea of how powerful Active Session History is and how it can be used in many circumstances. Here is another example of the power and usage of Active Session History: suppose a user complains that things seemed to have been slow from a specific client machine—prolaps01—between 4:55 p.m. and 5:05 p.m. on September 29. Because Active Session History also records the machine name, you can use the query shown in Listing 5 to display all the different events experienced by all sessions from the prolaps01 machine and how often each event occurred. You can see from the Listing 5 output that during that time, the sessions connected from the prolaps01 client machine experienced locking and Resource Manager–related waits many times. With this information, you can dig further inside the V$ACTIVE_SESSION_HISTORY view to identify specific sessions and what they were doing earlier to have experienced these waits.
Code Listing 5: Checking all events from a machine

select event, count(1)
from v$active_session_history
where machine = 'prolaps01'
and sample_time between
    to_date('29-SEP-12 04.55.00 PM','dd-MON-yy hh:mi:ss PM')
and to_date('29-SEP-12 05.05.00 PM','dd-MON-yy hh:mi:ss PM')
group by event
order by count(1);
EVENT                          COUNT(1)
-----------------------------  --------
… output truncated …
db file scattered read               93
db file parallel write              127
log file parallel write             134
db file sequential read             293
control file parallel write         612
control file sequential read        948
enq: TX - row lock contention      1309
resmgr:cpu quantum                 1371
Although pulling specific data from Active Session History is great for spot analysis, sometimes you may want to take a look at the entire collection of Active Session History data within a time frame. Active Session History reports for a specific time period are great for that. You can generate an Active Session History report from Oracle Enterprise Manager or from the command line. For the latter, connect to the database as a DBA user and execute the following script at the SQL*Plus prompt: @$ORACLE_HOME/rdbms/admin/ashrpt.sql.
Next Steps
LEARN more about performance tuning:
Oracle Database 2 Day + Performance Tuning Guide 11g Release 2 (11.2): generating and using Active Session History reports
Active Session History Archive
Active Session History collects information on active sessions from the database instance every second. Depending on the database activity, that will lead to a lot of data collected inside the ASH buffer, but because the ASH buffer is a memory-resident structure, it has only a finite amount of space. In addition, when the instance goes down, the instance's memory vanishes with it. Therefore, Oracle Database archives the information from the ASH buffer to a database table to make it persistent. This archived table data is visible in a view called DBA_HIST_ACTIVE_SESS_HISTORY. If you don't find the data in the V$ACTIVE_SESSION_HISTORY view, check for it in the DBA_HIST_ACTIVE_SESS_HISTORY view, as shown in Listing 6. The output shows that the session was experiencing row lock waits—you can get the lock and row information from the DBA_HIST_ACTIVE_SESS_HISTORY view by using the query shown in Listing 7.
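Listing 6 is not reproduced here; a sketch of the kind of query it would contain—the earlier Active Session History query pointed at the archive view—is:

select sample_time, session_state, event, time_waited
from dba_hist_active_sess_history
where user_id = 92
and sample_time between
    to_date('29-SEP-12 04.55.02 PM','dd-MON-yy hh:mi:ss PM')
and to_date('29-SEP-12 05.05.02 PM','dd-MON-yy hh:mi:ss PM')
order by sample_time;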
Code Listing 7: Getting row lock information from the Active Session History archive

select sample_time, session_state, blocking_session,
       owner||'.'||object_name||':'||nvl(subobject_name,'-') obj_name,
       dbms_rowid.rowid_create (
          1,
          o.data_object_id,
          current_file#,
          current_block#,
          current_row#
       ) row_id
from dba_hist_active_sess_history s, dba_objects o
where user_id = 92
and sample_time between
    to_date('29-SEP-12 04.55.02 PM','dd-MON-yy hh:mi:ss PM')
and to_date('29-SEP-12 05.05.02 PM','dd-MON-yy hh:mi:ss PM')
and event = 'enq: TX - row lock contention'
and o.data_object_id = s.current_obj#
order by 1,2;
Conclusion
Active Session History is a very powerful facility inside Oracle Database that records the information on all active sessions in the database instance in a buffer every second and exposes that data to you in a view called V$ACTIVE_SESSION_HISTORY and subsequently in a persistent table visible in the DBA_HIST_ACTIVE_SESS_HISTORY view. Active Session History offers a second-by-second record of the activities inside the session—even when that session has disconnected or ceased that activity—enabling you to go back in time and identify what was ailing a specific session at a certain point in the past.
REM This is for setting up the test users, tables, etc.
REM
REM Create the user. If this user exists, use a different name
REM
create user arup identified by arup
/
grant create session, unlimited tablespace, create table to arup
/
connect arup/arup
REM
REM Create all test tables
REM
drop table test1
/
drop table test2
/
drop table test3
/
create table test1
as
select * from all_objects
/
create table test2
as
select * from test1
/
create table test3
as
select rownum col1, created col2
from test2
/
Script: test1.sql

declare
    l_dummy_1  varchar2(2000);
    l_dummy_2  varchar2(2000);
    l_stmt     varchar2(2000);
begin
    for i in 1..71540 loop
        l_stmt :=
            'select to_char(col2,''mm/dd/yyyy hh24:mi:ss'')'||
            ' from test3'||
            ' where col1 = to_char('||i||')';
        dbms_output.put_line('l_stmt='||l_stmt);
        execute immediate l_stmt into l_dummy_1;
        l_stmt :=
            'select col1 '||
            'from test3 '||
            'where col2 = to_date('''||
            l_dummy_1||
            ''',''mm/dd/yyyy hh24:mi:ss'')'||
            ' and col1 = '''||to_char(i)||'''';
        dbms_output.put_line('l_stmt='||l_stmt);
        execute immediate l_stmt into l_dummy_2;
    end loop;
end;
/
Script: test2.sql

declare
    l_dummy varchar2(200);
begin
    select max(test1.owner)
    into l_dummy
    from test1, test2, test2, test2, test2,
         test2, test2, test2, test2,
         test2, test2, test2, test2,
         test2, test2, test2, test2,
         test2, test2, test2, test2,
         test2, test2, test2, test2;
    dbms_lock.sleep(120);
end;
/
Script: upd1.sql

update test1 set status = 'D' where object_id = 2
/
ASH
ASH (Active Session History) is a series of snapshots of the v$session and v$session_wait views over time, used to track session activity and simplify performance tuning. Snapshots are taken every second and stored in memory (v$active_session_history) for about 30 minutes. After that the data is flushed to the AWR (the dba_hist_active_sess_history table).
History
ASH was first introduced in Oracle 10g.
Licensing
ASH may not be used unless the Enterprise Manager Diagnostic Pack is licensed.
Sample queries
Top CPU consumers (last 5 minutes):

SELECT session_id, count(*)
FROM v$active_session_history
WHERE session_state = 'ON CPU'
AND sample_time > sysdate - (5/(24*60))
GROUP BY session_id
ORDER BY count(*) DESC;
Top waiting sessions (last 5 minutes):

SELECT session_id, count(*)
FROM v$active_session_history
WHERE session_state = 'WAITING'
AND sample_time > sysdate - (5/(24*60))
GROUP BY session_id
ORDER BY count(*) DESC;
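In the same spirit, grouping by SQL_ID instead of SESSION_ID gives the top SQL statements over the window (a straightforward variation, not from the original page):

SELECT sql_id, count(*)
FROM v$active_session_history
WHERE sql_id IS NOT NULL
AND sample_time > sysdate - (5/(24*60))
GROUP BY sql_id
ORDER BY count(*) DESC;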
What is db time
Oracle Metric DB time
Oracle Tips by Burleson Consulting
ADDM's goal is to improve the value of a statistic called DB time. DB time is a value that Oracle calculates to indicate the cumulative time spent processing user requests. ADDM's goal is to reduce the overall DB time value; it does not target individual users or user response times, so the DB time it uses is an aggregate value of overall system CPU and wait times. You can see the current value of DB time for the entire system by querying the V$SYS_TIME_MODEL view, or you can see it for a given session by using the V$SESS_TIME_MODEL view as seen here:
select sum(value) "DB time"
from v$sess_time_model
where stat_name = 'DB time';

   DB time
----------
    109797
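For the system-wide figure mentioned above, the corresponding query against V$SYS_TIME_MODEL is analogous:

select value "DB time"
from v$sys_time_model
where stat_name = 'DB time';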
Managing SQL Profiles
This chapter contains the following topics:
22.1 About SQL Profiles
A SQL profile is a database object that contains auxiliary statistics specific to a SQL statement. Conceptually, a SQL profile is to a SQL statement what object-level statistics are to a table or index. SQL profiles are created when a DBA invokes SQL Tuning Advisor (see "About SQL Tuning Advisor").
This section contains the following topics:
22.1.1 Purpose of SQL Profiles
When profiling a SQL statement, SQL Tuning Advisor uses a specific set of bind values as input, and then compares the optimizer estimate with values obtained by executing fragments of the statement on a data sample. When significant variances are found, SQL Tuning Advisor bundles corrective actions together in a SQL profile, and then recommends its acceptance.
The corrected statistics in a SQL profile can improve optimizer cardinality estimates, which in turn leads the optimizer to select better plans. SQL profiles provide the following benefits over other techniques for improving plans:
· Unlike hints and stored outlines, SQL profiles do not tie the optimizer to a specific plan or subplan. SQL profiles fix incorrect estimates while giving the optimizer the flexibility to pick the best plan in different situations.
· Unlike hints, no changes to application source code are necessary when using SQL profiles. The use of SQL profiles by the database is transparent to the user.
22.1.2 Concepts for SQL Profiles
A SQL profile is a collection of auxiliary statistics on a query, including all tables and columns referenced in the query. The profile stores this information in the data dictionary. The optimizer uses this information at optimization time to determine the correct plan.
Note:
The SQL profile contains supplemental statistics for the entire statement, not individual plans. The profile does not itself determine a specific plan.
A SQL profile contains, among other statistics, a set of cardinality adjustments. The cardinality measure is based on sampling the WHERE clause rather than on statistical projection. A profile uses parts of the query to determine whether the estimated cardinalities are close to the actual cardinalities and, if a mismatch exists, uses the corrected cardinalities. For example, if a SQL profile exists for SELECT * FROM t WHERE x=5 AND y=10, then the profile stores the actual number of rows returned.
When choosing plans, the optimizer has the following sources of information:
· The environment, which contains the database configuration, bind variable values, optimizer statistics, data set, and so on
· The supplemental statistics in the SQL profile
Figure 22-1 shows the relationship between a SQL statement and the SQL profile for this statement. The optimizer uses the SQL profile and the environment to generate an execution plan. In this example, the plan is in the SQL plan baseline for the statement.
If either the optimizer environment or SQL profile changes, then the optimizer can create a new plan. As tables grow, or as indexes are created or dropped, the plan for a SQL profile can change. The profile continues to be relevant even if the data distribution or access path of the corresponding statement changes. In general, you do not need to refresh SQL profiles.
Over time, profile content can become outdated. In this case, performance of the SQL statement may degrade. The statement may appear as high-load or top SQL. In this case, the Automatic SQL Tuning task again captures the statement as high-load SQL. You can implement a new SQL profile for the statement.
Internally, a SQL profile is implemented using hints that address different types of problems. These hints do not specify any particular plan. Rather, the hints correct errors in the optimizer estimation algorithm that lead to suboptimal plans. For example, a profile may use the TABLE_STATS hint to set object statistics for tables when the statistics are missing or stale.
22.1.2.1 SQL Profile Recommendations
As explained in "SQL Profiling", SQL Tuning Advisor invokes Automatic Tuning Optimizer to generate SQL profile recommendations. Recommendations to implement SQL profiles occur in a finding, which appears in a separate section of the SQL Tuning Advisor report.
When you implement (or accept) a SQL profile, the database creates the profile and stores it persistently in the data dictionary. However, the SQL profile information is not exposed through regular dictionary views.
In this example, the database found a better plan for a SELECT statement that uses several expensive joins. The database recommends running DBMS_SQLTUNE.ACCEPT_SQL_PROFILE to implement the profile, which enables the statement to run 98.53% faster.

-------------------------------------------------------------------------------
FINDINGS SECTION (2 findings)
-------------------------------------------------------------------------------
1- SQL Profile Finding (see explain plans section below)
--------------------------------------------------------
A potentially better execution plan was found for this statement. Choose
one of the following SQL profiles to implement.
Recommendation (estimated benefit: 99.45%)
------------------------------------------
- Consider accepting the recommended SQL profile.
execute dbms_sqltune.accept_sql_profile(task_name => 'my_task',
object_id => 3, task_owner => 'SH', replace => TRUE);
Validation results
------------------
The SQL profile was tested by executing both its plan and the original plan
and measuring their respective execution statistics. A plan may have been
only partially executed if the other could be run to completion in less time.
Original Plan With SQL Profile % Improved
------------- ---------------- ----------
Completion Status: PARTIAL COMPLETE
Elapsed Time(us): 15467783 226902 98.53 %
CPU Time(us): 15336668 226965 98.52 %
User I/O Time(us): 0 0
Buffer Gets: 3375243 18227 99.45 %
Disk Reads: 0 0
Direct Writes: 0 0
Rows Processed: 0 109
Fetches: 0 109
Executions: 0 1
Notes
-----
1. The SQL profile plan was first executed to warm the buffer cache.
2. Statistics for the SQL profile plan were averaged over next 3 executions.
Sometimes SQL Tuning Advisor may recommend implementing a profile that uses the Automatic Degree of Parallelism (Auto DOP) feature. A parallel query profile is only recommended when the original plan is serial and when parallel execution can significantly reduce the elapsed time for a long-running query.
When it recommends a profile that uses Auto DOP, SQL Tuning Advisor gives details about the performance overhead of using parallel execution for the SQL statement in the report. For parallel execution recommendations, SQL Tuning Advisor may provide two SQL profile recommendations, one using serial execution and one using parallel.
The following example shows a parallel query recommendation. In this example, a degree of parallelism of 7 improves response time significantly at the cost of increasing resource consumption by almost 25%. You must decide whether the reduction in database throughput is worth the increase in response time.
Recommendation (estimated benefit: 99.99%)
------------------------------------------
- Consider accepting the recommended SQL profile to use parallel execution
for this statement.
execute dbms_sqltune.accept_sql_profile(task_name => 'gfk_task',
object_id => 3, task_owner => 'SH', replace => TRUE,
profile_type => DBMS_SQLTUNE.PX_PROFILE);
Executing this query parallel with DOP 7 will improve its response time
82.22% over the SQL profile plan. However, there is some cost in enabling
parallel execution. It will increase the statement's resource consumption by
an estimated 24.43% which may result in a reduction of system throughput.
Also, because these resources are consumed over a much smaller duration, the
response time of concurrent statements might be negatively impacted if
sufficient hardware capacity is not available.
The following data shows some sampled statistics for this SQL from the past
week and projected weekly values when parallel execution is enabled.
Past week sampled statistics for this SQL
-----------------------------------------
Number of executions 0
Percent of total activity .29
Percent of samples with #Active Sessions > 2*CPU 0
Weekly DB time (in sec) 76.51
Projected statistics with Parallel Execution
--------------------------------------------
Weekly DB time (in sec) 95.21
See Also:
· Oracle Database VLDB and Partitioning Guide to learn more about Auto DOP
· Oracle Database PL/SQL Packages and Types Reference to learn about the DBMS_SQLTUNE.ACCEPT_SQL_PROFILE procedure
22.1.2.2 SQL Profiles and SQL Plan Baselines
You can use SQL profiles with or without SQL plan management. No strict relationship exists between the SQL profile and the plan baseline. If a statement has multiple plans in a SQL plan baseline, then a SQL profile is useful because it enables the optimizer to choose the lowest-cost plan in the baseline.
22.1.3 User Interfaces for SQL Profiles
Oracle Enterprise Manager Cloud Control (Cloud Control) usually handles SQL profiles as part of automatic SQL tuning.
On the command line, you can manage SQL profiles with the DBMS_SQLTUNE package. To use the APIs, you must have the ADMINISTER SQL MANAGEMENT OBJECT privilege.
See Also:
· Oracle Database PL/SQL Packages and Types Reference for information about the DBMS_SQLTUNE package
· Oracle Database 2 Day + Performance Tuning Guide to learn how to manage SQL profiles with Cloud Control
22.1.4 Basic Tasks for SQL Profiles
This section explains the basic tasks involved in managing SQL profiles. Figure 22-2 shows the basic workflow for implementing, altering, and dropping SQL profiles.
Typically, you manage SQL profiles in the following sequence:
1. Implement a recommended SQL profile. "Implementing a SQL Profile" describes this task.
2. Obtain information about SQL profiles stored in the database. "Listing SQL Profiles" describes this task.
3. Optionally, modify the implemented SQL profile. "Altering a SQL Profile" describes this task.
4. Drop the implemented SQL profile when it is no longer needed. "Dropping a SQL Profile" describes this task.
To tune SQL statements on another database, you can transport both a SQL tuning set and a SQL profile to a separate database. "Transporting a SQL Profile" describes this task.
22.2 Implementing a SQL Profile
Implementing (also known as accepting) a SQL profile means storing it persistently in the database. A profile must be implemented before the optimizer can use it as input when generating plans.
22.2.1 About SQL Profile Implementation
As a rule of thumb, implement a SQL profile recommended by SQL Tuning Advisor. If the database recommends both an index and a SQL profile, then either use both or use the SQL profile only. If you create an index, then the optimizer may need the profile to pick the new index.
In some situations, SQL Tuning Advisor may find an improved serial plan in addition to an even better parallel plan. In this case, the advisor recommends both a standard and a parallel SQL profile, enabling you to choose between the best serial and best parallel plan for the statement. Implement a parallel plan only if the improvement in response time is worth the decrease in throughput.
To implement a SQL profile, execute the DBMS_SQLTUNE.ACCEPT_SQL_PROFILE procedure. Some important parameters are as follows:
· profile_type
Set this parameter to REGULAR_PROFILE for a SQL profile without a change to parallel execution, or PX_PROFILE for a SQL profile with a change to parallel execution.
· force_match
This parameter controls statement matching. Typically, an accepted SQL profile is associated with the SQL statement through a SQL signature that is generated using a hash function. This hash function changes the SQL statement to uppercase and removes all extra white space before generating the signature. Thus, the same SQL profile works for all SQL statements in which the only difference is case and white space.
By setting force_match to true, the SQL profile additionally targets all SQL statements that have the same text after the literal values in the WHERE clause have been replaced by bind variables. This setting may be useful for applications that use only literal values because it enables SQL with text differing only in its literal values to share a SQL profile. If both literal values and bind variables are in the SQL text, or if force_match is set to false (the default), then the literal values in the WHERE clause are not replaced by bind variables.
See Also:
Oracle Database PL/SQL Packages and Types Reference for information about the ACCEPT_SQL_PROFILE procedure
22.2.2 Implementing a SQL Profile
This section shows how to use the ACCEPT_SQL_PROFILE procedure to implement a SQL profile. This tutorial assumes the following:
· The SQL Tuning Advisor task STA_SPECIFIC_EMP_TASK includes a recommendation to create a SQL profile.
· The name of the SQL profile is my_sql_profile.
· The PL/SQL block accepts a profile that uses parallel execution (profile_type).
· The profile uses force matching.
To implement a SQL profile:
· Connect SQL*Plus to the database with the appropriate privileges, and then execute the ACCEPT_SQL_PROFILE function.
For example, execute the following PL/SQL:

DECLARE
  my_sqlprofile_name VARCHAR2(30);
BEGIN
  my_sqlprofile_name := DBMS_SQLTUNE.ACCEPT_SQL_PROFILE (
    task_name    => 'STA_SPECIFIC_EMP_TASK'
  , name         => 'my_sql_profile'
  , profile_type => DBMS_SQLTUNE.PX_PROFILE
  , force_match  => true
  );
END;
/
See Also:
Oracle Database PL/SQL Packages and Types Reference to learn about the DBMS_SQLTUNE.ACCEPT_SQL_PROFILE procedure
22.3 Listing SQL Profiles
The data dictionary view DBA_SQL_PROFILES stores SQL profiles persistently in the database. The statistics are in an Oracle internal format, so you cannot query profiles directly. However, you can list profiles.
To list SQL profiles:
· Connect SQL*Plus to the database with the appropriate privileges, and then query the DBA_SQL_PROFILES view.
For example, execute the following query:

COLUMN category FORMAT a10
COLUMN sql_text FORMAT a20

SELECT NAME, SQL_TEXT, CATEGORY, STATUS
FROM   DBA_SQL_PROFILES;
Sample output appears below:
NAME SQL_TEXT CATEGORY STATUS
------------------------------ -------------------- ---------- --------
SYS_SQLPROF_01285f6d18eb0000 select promo_name, c DEFAULT ENABLED
ount(*) c from promo
tions p, sales s whe
re s.promo_id = p.pr
omo_id and p.promo_c
ategory = 'internet'
group by p.promo_na
me order by c desc
22.4 Altering a SQL Profile
You can alter attributes of an existing SQL profile using the attribute_name parameter of the ALTER_SQL_PROFILE procedure.
The CATEGORY attribute determines which sessions can apply a profile. View the CATEGORY attribute by querying DBA_SQL_PROFILES.CATEGORY. By default, all profiles are in the DEFAULT category, which means that all sessions in which the SQLTUNE_CATEGORY initialization parameter is set to DEFAULT can use the profile.
By altering the category of a SQL profile, you determine which sessions are affected by profile creation. For example, by setting the category to DEV, only sessions in which the SQLTUNE_CATEGORY initialization parameter is set to DEV can use the profile. Other sessions do not have access to the SQL profile, and execution plans for SQL statements are not impacted by the SQL profile. This technique enables you to test a profile in a restricted environment before making it available to other sessions.
The example in this section assumes that you want to change the category of the SQL profile so it is used only by sessions with the SQL profile category set to TEST, run the SQL statement, and then change the profile category back to DEFAULT.
To alter a SQL profile:
1. Connect SQL*Plus to the database with the appropriate privileges, and then use the ALTER_SQL_PROFILE procedure to set the attribute_name.
For example, execute the following code to set the attribute CATEGORY to TEST:

VARIABLE pname VARCHAR2(30)
EXEC :pname := 'my_sql_profile'

BEGIN
  DBMS_SQLTUNE.ALTER_SQL_PROFILE (
    name           => :pname
  , attribute_name => 'CATEGORY'
  , value          => 'TEST'
  );
END;
/
2. Change the initialization parameter setting in the current database session.
For example, execute the following SQL:

ALTER SESSION SET SQLTUNE_CATEGORY = 'TEST';

3. Test the profiled SQL statement.
4. Use the ALTER_SQL_PROFILE procedure to set the attribute_name.
For example, execute the following code to set the attribute CATEGORY back to DEFAULT:

BEGIN
  DBMS_SQLTUNE.ALTER_SQL_PROFILE (
    name           => :pname
  , attribute_name => 'CATEGORY'
  , value          => 'DEFAULT'
  );
END;
/
See Also:
· Oracle Database PL/SQL Packages and Types Reference to learn about the ALTER_SQL_PROFILE procedure
22.5 Dropping a SQL Profile
You can drop a SQL profile with the DROP_SQL_PROFILE procedure.
This section assumes the following:
· You want to drop my_sql_profile.
· You want to ignore errors raised if the name does not exist.
To drop a SQL profile:
· Connect SQL*Plus to the database with the appropriate privileges, and then call the DBMS_SQLTUNE.DROP_SQL_PROFILE procedure.
The following example drops the profile named my_sql_profile:

BEGIN
  DBMS_SQLTUNE.DROP_SQL_PROFILE (
    name => 'my_sql_profile'
  );
END;
/
22.6 Transporting a SQL Profile
You can transport SQL profiles. This operation involves exporting the SQL profile from the SYS schema in one database to a staging table, and then importing the SQL profile from the staging table into another database. You can transport a SQL profile to any Oracle database created in the same release or later.
Table 22-1 shows the main procedures and functions for managing SQL profiles.
Procedure or Function  Description
---------------------  ------------------------------------------------------
CREATE_STGTAB_SQLPROF  Creates the staging table used for copying SQL
                       profiles from one system to another.
PACK_STGTAB_SQLPROF    Moves profile data out of the SYS schema into the
                       staging table.
UNPACK_STGTAB_SQLPROF  Uses the profile data stored in the staging table to
                       create profiles on this system.
This tutorial assumes the following:
· You want to transport my_profile from a production database to a test database.
· You want to create the staging table in the dba1 schema.
To transport a SQL profile:
To
transport a SQL profile:
1.
Connect
SQL*Plus to the database with the appropriate privileges, and then use the
CREATE_STGTAB_SQLPROF
procedure to create a
staging table to hold the SQL profiles.
The
following example creates
my_staging_table
in the dba1
schema:BEGIN
DBMS_SQLTUNE.CREATE_STGTAB_SQLPROF (
table_name => 'my_staging_table'
, schema_name => 'dba1'
);
END;
/
2. Use the PACK_STGTAB_SQLPROF procedure to export SQL profiles into the staging table.
The following example populates dba1.my_staging_table with the SQL profile my_profile:

BEGIN
  DBMS_SQLTUNE.PACK_STGTAB_SQLPROF (
    profile_name         => 'my_profile'
  , staging_table_name   => 'my_staging_table'
  , staging_schema_owner => 'dba1'
  );
END;
/
3. Move the staging table to the database where you plan to unpack the SQL profiles.
Move the table using your utility of choice. For example, use Oracle Data Pump or a database link.
4. On the database where you plan to import the SQL profiles, use UNPACK_STGTAB_SQLPROF to unpack SQL profiles from the staging table.
The following example shows how to unpack SQL profiles in the staging table:

BEGIN
  DBMS_SQLTUNE.UNPACK_STGTAB_SQLPROF(
    replace            => true
  , staging_table_name => 'my_staging_table'
  );
END;
/
What are SQL Profiles and why do we need them?
If you use the DBMS_XPLAN package to analyze execution plans for your SQL statements (and you really should these days), you might have noticed that at times the following line is displayed along with your execution plan:

Note
-----
- SQL profile "SYS_SQLPROF_012ad8267d9c0000" used for this statement

This seems to mean that:
- An external 'helper' object was used during evaluation of your SQL statement
- This 'helper' object changed (or, at least, influenced) its execution plan
The problem with the ORACLE optimizer
Traditionally (if we forget about ancient rule-based days), the ORACLE optimizer relies on data statistics to generate execution plans.
Statistics, in a nutshell, are very simplified descriptions of the real data. Column statistics for a very large multi-gigabyte table include just a few items, such as the total number of values, the number of distinct values, min/max values, etc. In other words, statistics capture the overall shape of the data, but many low-level details are lost.
To compensate for this loss of detail and still provide reasonably accurate estimations, the optimizer employs a number of assumptions about the data. In particular, the optimizer usually believes that:
- Data values are distributed uniformly (in other words, value 2 is as frequent as value 5)
- Records are distributed uniformly (in other words, there is no physical clustering or ordering of data)
- The range of values is continuous (in other words, there are no "holes" in the range)
- Etc.
Unfortunately, statistics are simply too crude an instrument to record this level of detail.
Paraphrasing Stephen Hawking:
Statistics have no hair
and that means that the optimizer will likely miss out on potentially better execution plans for this exceptional data.
Finally, there are cases where all the statistics in the world will NOT help the optimizer make a better decision. For example, up until ORACLE 11g, the optimizer could not properly estimate the number of records coming out of a PL/SQL table function and, instead, took a wild guess, essentially creating execution plans that had no basis in reality whatsoever.
So, the question becomes: what can be done to correct that? To make the optimizer work efficiently not just for "generic" but for "special" data as well?
What are SQL Profiles
The answer that ORACLE came up with was twofold:- Create and record individual execution plans for “special case” SQLs
- Use actual runtime statistics to help design these individual plans
First of all, you need to tell ORACLE which SQLs hit “special data” and can be potentially executed better. This can be done either explicitly (specifying SQLs by hand in dbms_sqltune.create_tuning_task) or implicitly by letting ORACLE pick “most active” SQLs for analysis (this is done by one of 11g auto tasks).
Second, you need to prove to the optimizer that its estimations are “out of whack” and “it can do better”. The way to do it is to run dbms_sqltune.execute_tuning_task (or let auto task do it for you). This is the “meat” of the process and it takes 3 major steps:
- Step 1: Sample the data and get “the real numbers” (most importantly, Cardinality)
- Step 2: Supply these numbers to the optimizer and let it reevaluate execution plan
- Step 3: If the optimizer comes up with a different execution plan, run the SQL with both the original and the new plan (multiple times, to be sure) and compare performance
A SQL Profile will only be recommended if:
- a) The optimizer came up with a new execution plan, and
- b) Performance while using this new plan is substantially better
How can we save the new ("real cardinality") numbers? Well, we can't "update" regular statistics as, again, they are too crude for this level of detail. So, instead, we will use a separate object (a "SQL Profile") that will attach to our SQL and store this kind of information.
How exactly are the new cardinality values stored? Well, there is no black magic here: they are stored in the form of hints. You might be thinking of the tried and true /*+ cardinality */ hint that can be used to specify a fixed cardinality number, but ORACLE actually uses a newer (and slightly more advanced) /*+ opt_estimate */ hint that scales the default cardinality UP or DOWN instead, i.e.
/*+ opt_estimate(table, t, scale_rows=100) */
-- OR
/*+ opt_estimate(index_scan, t, t_n_idx, scale_rows=0.001) */
So, now, if SQL Profiles are used, the estimated default cardinality will be multiplied by these numbers and, as a result, the optimizer will operate with (hopefully) a more realistic view of our data during SQL evaluation.
This is, in a nutshell, how SQL Profiles work in theory. Let’s now see if we can use this knowledge to fight some real SQL problems …
A very real case of mishandled cardinality
The problem that we are going to "fight" today is somewhat artificial, yet it is also something that can be seen fairly frequently. It is known as the optimizer's "range estimation deficiency" and it goes something like this. Let's say that we have a table with a DATE column that registers, say, "orders shipped" …
CREATE TABLE orders (ord_no, shipped)
AS SELECT level, CAST(sysdate - level/24 AS DATE)
FROM dual
CONNECT BY level <= 500000
/
We also have an index on the shipped column:
CREATE INDEX orders_shipped_idx ON orders (shipped);
so that we can, very quickly, get to the latest orders by running this simple SQL:
SELECT * FROM orders WHERE shipped >= trunc(sysdate, 'DD');
Except, our data has one caveat in it: some of our orders are not yet shipped. To mark them as such, developers decided to use a "special" date far out in the future: January 1st, 9999 as, clearly, we do not expect that 'real' orders will ever ship on that date … Let's say that we have just one order that is unshipped as of right now:
INSERT INTO orders VALUES (-1, DATE '9999-01-01');
COMMIT;
Finally, to provide the optimizer with fresh information about the data, we are going to collect statistics:
EXEC dbms_stats.gather_table_stats(USER, 'orders', cascade => TRUE);
We are now ready to request the latest orders. Since we have 500k records in our table and only up to ~24 of them were shipped today (just see how the table was created), obviously the most efficient way to get to the data is to use the index:
SELECT * FROM orders WHERE shipped >= trunc(sysdate, 'DD');
----------------------------------------------------------------------------
| Id | Operation         | Name   | Rows | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------
|  0 | SELECT STATEMENT  |        |  496K| 6302K |   452  (24)| 00:00:06 |
|* 1 |  TABLE ACCESS FULL| ORDERS |  496K| 6302K |   452  (24)| 00:00:06 |
----------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("SHIPPED">=TRUNC(SYSDATE@!,'fmdd'))
Well, it didn't work – we hit a FULL TABLE SCAN instead. And, if you look at the AUTOTRACE output, the reason becomes clear: our cardinality estimation (how many records we expect to be returned) is obviously "out of whack": 496K, instead of what we know should be true ("<= 24")!
Why did we get this grotesquely miscalculated cardinality? The answer is simple: the optimizer's "Continuous Range" assumption.
This is what is going on. We know that the data in the SHIPPED column is really a dense, continuous range that ends today, plus a single lone outlier far out in the year 9999. But to the optimizer, it looks like one continuous range that stretches all the way from the earliest shipped date to January 1st, 9999.
The optimizer does not care that our "unshipped" records are special; it simply sees a (much) bigger range. And just like that, just one 'innocent' record and the optimizer's precision (some would say: sanity) went completely down the toilet.
Of course, the root cause here is a bad data design and the correct way to fix it is to mark “unshipped” records differently (i.e. by adding a separate column that would record shipping status).
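As a minimal sketch of that redesign (the column name ship_status and the status codes are hypothetical):
-- Record shipping status explicitly instead of a "magic" far-future date
ALTER TABLE orders ADD (ship_status CHAR(1) DEFAULT 'Y' NOT NULL);
UPDATE orders
SET ship_status = 'N', shipped = NULL
WHERE shipped = DATE '9999-01-01';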
But suppose we are stuck with it. Let's see if we can "fix" this problem by asking ORACLE to create a SQL Profile for this "very special" SQL.
Increasing optimizer precision by SQL Profiles
The first thing we need to do is tell ORACLE that we have a potentially special SQL on our hands that requires better precision from the optimizer:
VAR task_name VARCHAR2(30)
EXEC :task_name := dbms_sqltune.create_tuning_task( -
  sql_text => 'SELECT * FROM orders WHERE shipped >= trunc(sysdate, ''DD'')');
Next, we will let ORACLE analyze the SQL. To see what exactly is going on, we will also enable SQL trace:
ALTER SESSION SET EVENTS '10046 trace name context forever, level 8';
EXEC dbms_sqltune.execute_tuning_task(:task_name);
Once the tuning task is complete, let's see if ORACLE found a better execution plan for our SQL (you can see the full listing here):
SET LINESIZE 180
SET LONGCHUNKSIZE 180
SET PAGESIZE 900
SET LONG 1000000
SELECT dbms_sqltune.report_tuning_task(:task_name) FROM dual;
-------------------------------------------------------------------------------
FINDINGS SECTION (1 finding)
-------------------------------------------------------------------------------
1- SQL Profile Finding (see explain plans section below)
--------------------------------------------------------
A potentially better execution plan was found for this statement.

Recommendation (estimated benefit: 99.53%)
------------------------------------------
- Consider accepting the recommended SQL profile.
  EXECUTE dbms_sqltune.accept_sql_profile(task_name => 'TASK_910',
          task_owner => 'MAXYM1', replace => TRUE);

Validation results
------------------
The SQL profile was tested by executing both its plan and the original plan
and measuring their respective execution statistics. A plan may have been
only partially executed if the other could be run to completion in less time.

                          Original Plan  With SQL Profile  % Improved
                          -------------  ----------------  ----------
Completion Status:             COMPLETE          COMPLETE
Elapsed Time (us):               234447               444     99.81 %
CPU Time (us):                   236666                 0       100 %
User I/O Time (us):                   0                 0
Buffer Gets:                        838                 4     99.52 %
Physical Read Requests:               0                 0
Physical Write Requests:              0                 0
Physical Read Bytes:                  0                 0
Physical Write Bytes:                 0                 0
Rows Processed:                      13                13
Fetches:                             13                13
Executions:                           1                 1

Notes
-----
1. The original plan was first executed to warm the buffer cache.
2. Statistics for the original plan were averaged over the next 2 executions.
3. The SQL profile plan was first executed to warm the buffer cache.
4. Statistics for the SQL profile plan were averaged over the next 9 executions.
And, if we look further at the EXPLAIN PLANS section:
2- Using SQL Profile
--------------------
Plan hash value: 1899245095
--------------------------------------------------------------------------------------------------
| Id | Operation                   | Name               | Rows | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT            |                    |   21 |   273 |     4   (0)| 00:00:01 |
|  1 |  TABLE ACCESS BY INDEX ROWID| ORDERS             |   21 |   273 |     4   (0)| 00:00:01 |
|* 2 |   INDEX RANGE SCAN          | ORDERS_SHIPPED_IDX |   13 |       |     3   (0)| 00:00:01 |
--------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("SHIPPED">=TRUNC(SYSDATE@!,'fmdd'))
We can not only see the correct cardinality but also the INDEX SCAN that we wanted to achieve.
How exactly did ORACLE come up with these correct numbers? Let's look at the ORACLE SQL trace to find out:
-- Step 1: Sample the data (this is very similar to what dynamic sampling does)
-- Sample all data in the column
SELECT /* DS_SVC */ SUM(C1) FROM (
  SELECT /*+ NO_INDEX_FFS( "ORDERS") */ 1 AS C1
  FROM "ORDERS" SAMPLE BLOCK(62.5, 8) SEED(1)
)
-- Sample NOT NULL data in the column
SELECT /* DS_SVC */ SUM(C1) FROM (
  SELECT /*+ INDEX_FFS( "ORDERS" "ORDERS_SHIPPED_IDX") */ 1 AS C1
  FROM "ORDERS" SAMPLE BLOCK(58.7803, 8) SEED(1)
  WHERE ("ORDERS".SHIPPED IS NOT NULL)
)
-- Sample "relevant" data in the column
SELECT /* DS_SVC */ SUM(C1) FROM (
  SELECT /*+ NO_INDEX_FFS( "ORDERS") */ 1 AS C1
  FROM "ORDERS" SAMPLE BLOCK(62.5, 8) SEED(1)
  WHERE ("ORDERS"."SHIPPED" >= TRUNC(SYSDATE@!,'fmdd'))
)
-- Step 2: "Scale" rows based on the results of "sampling"
SELECT /* DS_SVC */
  /*+ OPT_ESTIMATE(TABLE, "ORDERS", SCALE_ROWS=4.189686512e-05) */ C1, C2, C3
FROM (
  SELECT /*+ INDEX( "ORDERS" "ORDERS_SHIPPED_IDX") */ COUNT(*) AS C1,
         4294967295 AS C2, COUNT(*) AS C3
  FROM "ORDERS"
  WHERE ("ORDERS"."SHIPPED" >= TRUNC(SYSDATE@!,'fmdd'))
)
-- Step 3: Verify whether the new plan is indeed better
-- Execute with the ORIGINAL plan: 3 times
/* SQL Analyze(144,0) */ SELECT * FROM orders WHERE shipped >= trunc(sysdate,'DD')
-- Then, execute with the NEW plan: 10 times
/* SQL Analyze(144,0) */ SELECT * FROM orders WHERE shipped >= trunc(sysdate,'DD')
So, ORACLE sampled the data, came up with the new plan and verified that the new plan is better than the old one. It then saved the results of this work in a SQL Profile.
The final step for us is to accept the profile so that all the next runs of the same SQL can reuse this superior execution plan.
EXEC dbms_sqltune.accept_sql_profile(:task_name);
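Once accepted, the profile can be seen in the data dictionary; a quick sanity check might look like this:
SELECT name, category, status, created
FROM dba_sql_profiles
ORDER BY created DESC;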
Conclusion
SQL Profiles are NOT a panacea that fixes all performance problems, but they can be a big help with SQLs that hit data that "does not fit" the optimizer's simplistic expectations. One might think that these "special" SQLs are rare but, surprisingly, we have found that not to be the case. Even on our mostly OLTP systems we find a fair number of SQL Profile suggestions, and many more in environments with lots of ad-hoc reporting.
And considering that SQL Profiles are only recommended when performance is proven to be better, it is no wonder that SQLs with created and accepted profiles usually perform better.
Do SQL Profiles have any downsides? It is too early to tell, but there are a couple of minor ones, I suppose: you'll need to pay attention to them and make sure that the "special" data that caused them to exist in the first place did not change.
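If the underlying data does change, a profile can be disabled or dropped again; a sketch, using the profile name from the earlier example:
BEGIN
  -- Disable the profile without dropping it (positional parameters: name, attribute, value)
  dbms_sqltune.alter_sql_profile('SYS_SQLPROF_012ad8267d9c0000', 'STATUS', 'DISABLED');
  -- Or remove it entirely:
  -- dbms_sqltune.drop_sql_profile(name => 'SYS_SQLPROF_012ad8267d9c0000');
END;
/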
In addition to that, there is also a licensing issue: you can only use SQL Profiles if you license the EM Tuning (and, by extension, EM Diagnostics) pack … But that is a completely different story.
Oracle SQL Profiles
Expert Oracle Tips by Burleson Consulting, March 28, 2013
Automatic SQL Tuning
The Oracle SQL optimizer has always been a key component of execution (explain) plan management. The optimizer was charged with the responsibility of making decisions about execution plans. However, the optimizer has not always been given sufficient time to gather the information necessary for the best possible decision. Automatic SQL tuning and SQL Profiles have made this process easier for the optimizer.
The evolution of the internal aids to the optimizer, including SQL Profiles, is as follows:
- Stored Outlines: Also called Optimizer Plan Stability, stored outlines were cumbersome to manage. Additionally, it was difficult to swap out execution plans when using plan stability.
- SQL Profiles: A SQL Profile is a bundle of improved optimizer statistics that is stored in the data dictionary. SQL Profiles are created by running Automatic SQL Tuning, an approach that first appeared in Oracle 10g. Allowed to run in tuning mode, the optimizer can gather additional information for making tuning recommendations on specific SQL statements. Tuning mode also allows the optimizer to recommend gathering statistics on objects with stale or missing statistics. These additional statistics are stored in a SQL Profile. A SQL Tuning Set (STS) can be tested as a workload, and Oracle allows the DBA to implement changes to execution plans.
- SQL Plan Management: Starting in 11g, SQL Plan Management offers an even easier approach to locking down SQL execution plans, deprecating the 10g SQL Profile approach. It is controlled by just two parameters, optimizer_capture_sql_plan_baselines and optimizer_use_sql_plan_baselines (see the sketch below). Also see dba_sql_plan_baselines tips.
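As a rough sketch, baseline behavior is switched on and off through those two initialization parameters:
-- Capture plan baselines automatically for repeating statements
ALTER SYSTEM SET optimizer_capture_sql_plan_baselines = TRUE;
-- Use accepted plan baselines during optimization (the default is TRUE)
ALTER SYSTEM SET optimizer_use_sql_plan_baselines = TRUE;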
What is ADDM
Automatic Database Diagnostic Monitor (ADDM) in Oracle Database 10g
This article describes several methods for producing reports from the Automatic Database Diagnostic Monitor (ADDM) in Oracle 10g:
- Overview
- Enterprise Manager
- addmrpt.sql Script
- DBMS_ADVISOR
- Related Views
- SQL Developer and ADDM Reports
Related articles:
- Automatic Workload Repository (AWR) in Oracle Database 10g
- AWR Baseline Enhancements in Oracle Database 11g Release 1
- Active Session History (ASH)
Overview
The Automatic Database Diagnostic Monitor (ADDM) analyzes data in the Automatic Workload Repository (AWR) to identify potential performance bottlenecks. For each of the identified issues it locates the root cause and provides recommendations for correcting the problem. An ADDM analysis task is performed and its findings and recommendations stored in the database every time an AWR snapshot is taken, provided the STATISTICS_LEVEL parameter is set to TYPICAL or ALL. The ADDM analysis includes the following:
- CPU load
- Memory usage
- I/O usage
- Resource intensive SQL
- Resource intensive PL/SQL and Java
- RAC issues
- Application issues
- Database configuration issues
- Concurrency issues
- Object contention
A typical ADDM finding looks like the following:
FINDING 1: 59% impact (944 seconds)
-----------------------------------
The buffer cache was undersized causing significant additional read I/O.
RECOMMENDATION 1: DB Configuration, 59% benefit (944 seconds)
ACTION: Increase SGA target size by increasing the value of parameter
"sga_target" by 28 M.
SYMPTOMS THAT LED TO THE FINDING:
Wait class "User I/O" was consuming significant database time. (83%
impact [1336 seconds])
The recommendations may include:
- Hardware changes
- Database configuration changes
- Schema changes
- Application changes
- Using other advisors

The analysis of I/O performance is affected by the DBIO_EXPECTED parameter, which should be set to the average time (in microseconds) it takes to read a single database block from disk. Typical values range from 5000 to 20000 microseconds. The parameter can be set using the following:
EXECUTE DBMS_ADVISOR.set_default_task_parameter('ADDM', 'DBIO_EXPECTED', 8000);
Enterprise Manager
The obvious place to start viewing ADDM reports is Enterprise Manager. The "Performance Analysis" section on the "Home" page is a list of the top five findings from the last ADDM analysis task. Specific reports can be produced by clicking on the "Advisor Central" link, then the "ADDM" link. The resulting page allows you to select a start and end snapshot, create an ADDM task and display the resulting report by clicking on a few links.
addmrpt.sql Script
The addmrpt.sql script can be used to create an ADDM report from SQL*Plus. The script is called as follows:
-- UNIX
@/u01/app/oracle/product/10.1.0/db_1/rdbms/admin/addmrpt.sql
-- Windows
@d:\oracle\product\10.1.0\db_1\rdbms\admin\addmrpt.sql
It then lists all available snapshots and prompts you to enter the start and end snapshot along with the report name. An example of the ADDM report can be seen here.
DBMS_ADVISOR
The DBMS_ADVISOR package can be used to create and execute any advisor tasks, including ADDM tasks. The following example shows how it is used to create, execute and display a typical ADDM report:
BEGIN
-- Create an ADDM task.
DBMS_ADVISOR.create_task (
advisor_name => 'ADDM',
task_name => '970_1032_AWR_SNAPSHOT',
task_desc => 'Advisor for snapshots 970 to 1032.');
-- Set the start and end snapshots.
DBMS_ADVISOR.set_task_parameter (
task_name => '970_1032_AWR_SNAPSHOT',
parameter => 'START_SNAPSHOT',
value => 970);
DBMS_ADVISOR.set_task_parameter (
task_name => '970_1032_AWR_SNAPSHOT',
parameter => 'END_SNAPSHOT',
value => 1032);
-- Execute the task.
DBMS_ADVISOR.execute_task(task_name => '970_1032_AWR_SNAPSHOT');
END;
/
-- Display the report.
SET LONG 100000
SET PAGESIZE 50000
SELECT DBMS_ADVISOR.get_task_report('970_1032_AWR_SNAPSHOT') AS report
FROM dual;
SET PAGESIZE 24
The value for the SET LONG command should be adjusted to allow the whole report to be displayed. The relevant AWR snapshots can be identified using the DBA_HIST_SNAPSHOT view.

Related Views
The following views can be used to display the ADDM output without using Enterprise Manager or the GET_TASK_REPORT function:
- DBA_ADVISOR_TASKS - Basic information about existing tasks.
- DBA_ADVISOR_LOG - Status information about existing tasks.
- DBA_ADVISOR_FINDINGS - Findings identified for an existing task.
- DBA_ADVISOR_RECOMMENDATIONS - Recommendations for the problems identified by an existing task.
SQL Developer and ADDM Reports
If you are using SQL Developer 4 onward, you can view ADDM reports directly from SQL Developer. If it is not already showing, open the DBA pane "View > DBA", expand the connection of interest, then expand the "Performance" node. The ADDM reports are available from the "Automatic Database Diagnostics Monitor" node.
What is Resource Manager?
Introduction
One key to a healthy database is maintaining a healthy CPU load. Excessive CPU load can destabilize the server and expose operating system bugs. Excessive CPU load can also prevent critical Oracle background processes from running in a timely manner, resulting in failures such as database instance evictions on a RAC database. At such times, your response time may be so poor that you may be unable to debug the source of the excessive CPU load, for example by identifying and fixing a faulty SQL plan.
Using Oracle Database Resource Manager, you can ensure that your database's CPU load is always healthy, thus avoiding all of these problems. Resource Manager manages your CPU load so that all CPUs are fully utilized yet there is no thrashing, since no (or very few) Oracle processes are waiting in the operating system run queue. Using a database feature called Instance Caging, you can configure Resource Manager to manage the CPU load to even lower levels.
You can also use Resource Manager to explicitly allocate CPU resources to the multiple workloads or applications that are sharing the database. For example, you can prioritize "sys" over regular users, and OLTP transactions over reports and batch jobs. Or you can configure CPU allocations and limits for each application in a consolidated database.
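As a rough sketch, Instance Caging combines a CPU cap with an active resource plan (the values are illustrative):
-- Cap this instance at 4 CPUs
ALTER SYSTEM SET cpu_count = 4;
-- Instance Caging also requires an active resource plan
ALTER SYSTEM SET resource_manager_plan = 'DEFAULT_PLAN';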
What is ADDM
This chapter describes Oracle Database automatic features for performance diagnosing and tuning.
See Also: Oracle Database 2 Day + Performance Tuning Guide for information about using Oracle Enterprise Manager to diagnose and tune the database with the Automatic Database Diagnostic Monitor.
6.1 Overview of the Automatic Database Diagnostic Monitor
When problems occur with a system, it is important to perform accurate and timely diagnosis of the problem before making any changes to a system. Oftentimes, a database administrator (DBA) simply looks at the symptoms and immediately starts changing the system to fix those symptoms. However, an accurate diagnosis of the actual problem in the initial stage significantly increases the probability of success in resolving the problem. With Oracle Database, the statistical data needed for accurate diagnosis of a problem is stored in the Automatic Workload Repository (AWR). The Automatic Database Diagnostic Monitor (ADDM):
- Analyzes the AWR data on a regular basis
- Diagnoses the root causes of performance problems
- Provides recommendations for correcting any problems
- Identifies non-problem areas of the system
Because AWR is a repository of historical performance data, ADDM can analyze performance issues after the event, often saving time and resources in reproducing a problem. For information about the AWR, see "Overview of the Automatic Workload Repository". In most cases, ADDM output should be the first place that a DBA looks when notified of a performance problem. ADDM provides the following benefits:
- Automatic performance diagnostic report every hour by default
- Problem diagnosis based on decades of tuning expertise
- Time-based quantification of problem impacts and recommendation benefits
- Identification of root cause, not symptoms
- Recommendations for treating the root causes of problems
- Identification of non-problem areas of the system
- Minimal overhead to the system during the diagnostic process
It is important to realize that tuning is an iterative process, and fixing one problem can cause the bottleneck to shift to another part of the system. Even with the benefit of ADDM analysis, it can take multiple tuning cycles to reach acceptable system performance. ADDM benefits apply beyond production systems; on development and test systems, ADDM can provide an early warning of performance issues.
6.1.1 ADDM Analysis
An ADDM analysis can be performed on a pair of AWR snapshots and a set of instances from the same database. The pair of AWR snapshots define the time period for analysis, and the set of instances define the target for analysis. If you are using Oracle Real Application Clusters (Oracle RAC), ADDM has three analysis modes:
- Database: In Database mode, ADDM analyzes all instances of the database.
- Instance: In Instance mode, ADDM analyzes a particular instance of the database.
- Partial: In Partial mode, ADDM analyzes a subset of all database instances.
If you are not using Oracle RAC, ADDM can only function in Instance mode because there is only one instance of the database.
An ADDM analysis is performed each time an AWR snapshot is taken and the results are saved in the database. The time period analyzed by ADDM is defined by the last two snapshots (the last hour by default). ADDM will always analyze the specified instance in Instance mode. For non-Oracle RAC or single instance environments, the analysis performed in Instance mode is the same as a database-wide analysis. If you are using Oracle RAC, ADDM will also analyze the entire database in Database mode, as described in "Using ADDM with Oracle Real Application Clusters". After an ADDM completes its analysis, you can view the results using Oracle Enterprise Manager, or by viewing a report in a SQL*Plus session.
ADDM analysis is performed top down, first identifying symptoms, and then refining them to reach the root causes of performance problems. The goal of the analysis is to reduce a single throughput metric called DB time. DB time is the cumulative time spent by the database in processing user requests. It includes wait time and CPU time of all non-idle user sessions. DB time is displayed in the V$SESS_TIME_MODEL and V$SYS_TIME_MODEL views.
See Also: Oracle Database Concepts for information about server processes.
By reducing DB time, the database is able to support more user requests using the same resources, which increases throughput. The problems reported by ADDM are sorted by the amount of DB time they are responsible for. System areas that are not responsible for a significant portion of DB time are reported as non-problem areas. The types of problems that ADDM considers include the following:
- CPU bottlenecks - Is the system CPU bound by Oracle Database or some other application?
- Undersized memory structures - Are the Oracle Database memory structures, such as the SGA, PGA, and buffer cache, adequately sized?
- I/O capacity issues - Is the I/O subsystem performing as expected?
- High load SQL statements - Are there any SQL statements which are consuming excessive system resources?
- High load PL/SQL execution and compilation, and high-load Java usage
- Oracle RAC specific issues - What are the global cache hot blocks and objects; are there any interconnect latency issues?
- Sub-optimal use of Oracle Database by the application - Are there problems with poor connection management, excessive parsing, or application level lock contention?
- Database configuration issues - Is there evidence of incorrect sizing of log files, archiving issues, excessive checkpoints, or sub-optimal parameter settings?
- Concurrency issues - Are there buffer busy problems?
- Hot objects and top SQL for various problem areas
Note: This is not a comprehensive list of all problem types that ADDM considers in its analysis.
ADDM also documents the non-problem areas of the system. For example, wait event classes that are not significantly impacting the performance of the system are identified and removed from the tuning consideration at an early stage, saving time and effort that would be spent on items that do not impact overall system performance.

6.1.2 Using ADDM with Oracle Real Application Clusters
If you are using Oracle RAC, you can run ADDM in Database analysis mode to analyze the throughput performance of all instances of the database. In Database mode, ADDM considers DB time as the sum of the database time for all database instances. Using the Database analysis mode enables you to view all findings that are significant to the entire database in a single report, instead of reviewing a separate report for each instance. The Database mode report includes findings about database resources (such as I/O and interconnect). The report also aggregates findings from the various instances if they are significant to the entire database. For example, if the CPU load on a single instance is high enough to affect the entire database, the finding will appear in the Database mode analysis, which will point to the particular instance responsible for the problem.
See Also: Oracle Database 2 Day + Real Application Clusters Guide for information about using ADDM with Oracle RAC.
6.1.3 ADDM Analysis Results
In addition to problem diagnostics, ADDM recommends possible solutions. ADDM analysis results are represented as a set of findings. See Example 6-1 for an example of an ADDM analysis result. Each ADDM finding can belong to one of the following types:
- Problem findings describe the root cause of a database performance problem.
- Symptom findings contain information that often leads to one or more problem findings.
- Information findings are used for reporting information that is relevant to understanding the performance of the database, but does not constitute a performance problem (such as non-problem areas of the database and the activity of automatic database maintenance).
- Warning findings contain information about problems that may affect the completeness or accuracy of the ADDM analysis (such as missing data in the AWR).
Each problem finding is quantified by an impact that is an estimate of the portion of DB time caused by the finding's performance issue. A problem finding can be associated with a list of recommendations for reducing the impact of the performance problem. The types of recommendations include:
- Hardware changes: adding CPUs or changing the I/O subsystem configuration
- Database configuration: changing initialization parameter settings
- Schema changes: hash partitioning a table or index, or using automatic segment-space management (ASSM)
- Application changes: using the cache option for sequences or using bind variables
- Using other advisors: running SQL Tuning Advisor on high-load SQL or running the Segment Advisor on hot objects
A list of recommendations can contain various alternatives for solving the same problem; you do not have to apply all the recommendations to solve a specific problem. Each recommendation has a benefit which is an estimate of the portion of DB time that can be saved if the recommendation is implemented. Recommendations are composed of actions and rationales. You must apply all the actions of a recommendation to gain the estimated benefit. The rationales are used for explaining why the set of actions were recommended and to provide additional information to implement the suggested recommendation.

6.1.4 Reviewing ADDM Analysis Results: Example
Consider the following section of an ADDM report in Example 6-1.
FINDING 1: 31% impact (7798 seconds)
------------------------------------
SQL statements were not shared due to the usage of literals. This resulted in
additional hard parses which were consuming significant database time.
RECOMMENDATION 1: Application Analysis, 31% benefit (7798 seconds)
ACTION: Investigate application logic for possible use of bind variables
instead of literals. Alternatively, you may set the parameter
"cursor_sharing" to "force".
RATIONALE: SQL statements with PLAN_HASH_VALUE 3106087033 were found to be
using literals. Look in V$SQL for examples of such SQL statements.
In Example 6-1, the finding points to a particular root cause, the usage of literals in SQL statements, which is estimated to have an impact of about 31% of total DB time in the analysis period. The finding has a recommendation associated with it, composed of one action and one rationale. The action specifies a solution to the problem found and is estimated to have a maximum benefit of up to 31% of DB time in the analysis period. Note that the benefit is given as a portion of the total DB time and not as a portion of the finding's impact. The rationale provides additional information on tracking potential SQL statements that were using literals and causing this performance issue. Using the specified plan hash value of SQL statements that could be a problem, a DBA could quickly examine a few sample statements.
When a specific problem has multiple causes, ADDM may report multiple problem and symptom findings. In this case, the impacts of these multiple findings can contain the same portion of DB time. Because the performance issues of findings can overlap, the sum of the impacts of the findings can exceed 100% of DB time. For example, if a system performs many reads, then ADDM might report a SQL statement responsible for 50% of DB time due to I/O activity as one finding, and an undersized buffer cache responsible for 75% of DB time as another finding. When multiple recommendations are associated with a problem finding, the recommendations may contain alternatives for solving the problem. In this case, the sum of the recommendations' benefits may be higher than the finding's impact.
When appropriate, an ADDM action may have multiple solutions for you to choose from. In the example, the most effective solution is to use bind variables. However, it is often difficult to modify the application. Changing the value of the CURSOR_SHARING initialization parameter is much easier to implement and can provide significant improvement.

6.2 Setting Up ADDM
Automatic database diagnostic monitoring is enabled by default and is controlled by the CONTROL_MANAGEMENT_PACK_ACCESS and the STATISTICS_LEVEL initialization parameters. The CONTROL_MANAGEMENT_PACK_ACCESS parameter should be set to DIAGNOSTIC or DIAGNOSTIC+TUNING to enable automatic database diagnostic monitoring. The default setting is DIAGNOSTIC+TUNING. Setting CONTROL_MANAGEMENT_PACK_ACCESS to NONE disables ADDM. The STATISTICS_LEVEL parameter should be set to TYPICAL or ALL to enable automatic database diagnostic monitoring. The default setting is TYPICAL. Setting STATISTICS_LEVEL to BASIC disables many Oracle Database features, including ADDM, and is strongly discouraged.
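A quick SQL*Plus check of both settings might look like this:
SHOW PARAMETER control_management_pack_access
SHOW PARAMETER statistics_level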
See Also: Oracle Database Reference for information about the CONTROL_MANAGEMENT_PACK_ACCESS and STATISTICS_LEVEL initialization parameters.
ADDM analysis of I/O performance partially depends on a single argument, DBIO_EXPECTED, that describes the expected performance of the I/O subsystem. The value of DBIO_EXPECTED is the average time it takes to read a single database block, in microseconds. Oracle Database uses a default value of 10 milliseconds, which is an appropriate value for most modern hard drives. If your hardware is significantly different, such as very old hardware or very fast RAM disks, consider using a different value. To determine the correct setting for the DBIO_EXPECTED parameter:
1. Measure the average read time of a single database block read for your hardware. Note that this measurement is for random I/O, which includes seek time if you use standard hard drives. Typical values for hard drives are between 5000 and 20000 microseconds.
2. Set the value one time for all subsequent ADDM executions. For example, if the measured value is 8000 microseconds, you should execute the following command as the SYS user:
EXECUTE DBMS_ADVISOR.SET_DEFAULT_TASK_PARAMETER('ADDM', 'DBIO_EXPECTED', 8000);
6.3 Diagnosing Database Performance Problems with ADDM
To diagnose database performance problems, first review the ADDM analysis results that are automatically created each time an AWR snapshot is taken. If a different analysis is required (such as a longer analysis period, using a different DBIO_EXPECTED setting, or changing the analysis mode), you can run ADDM manually as described in this section. ADDM can analyze any two AWR snapshots (on the same database), as long as both snapshots are still stored in the AWR (have not been purged). ADDM can only analyze instances that are started before the beginning snapshot and remain running until the ending snapshot. Additionally, ADDM will not analyze instances that experience significant errors when generating the AWR snapshots. In such cases, ADDM will analyze the largest subset of instances that did not experience these problems.
The primary interface for diagnostic monitoring is Oracle Enterprise Manager. Whenever possible, you should run ADDM using Oracle Enterprise Manager, as described in Oracle Database 2 Day + Performance Tuning Guide. If Oracle Enterprise Manager is unavailable, you can run ADDM using the DBMS_ADDM package. In order to run the DBMS_ADDM APIs, the user must be granted the ADVISOR privilege.
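For example (the user name is illustrative):
GRANT ADVISOR TO scott;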
6.3.1 Running ADDM in Database Mode
For Oracle RAC configurations, you can run ADDM in Database mode to analyze all instances of the database. For single-instance configurations, you can still run ADDM in Database mode; ADDM will simply behave as if running in Instance mode. To run ADDM in Database mode, use the DBMS_ADDM.ANALYZE_DB procedure:
BEGIN
DBMS_ADDM.ANALYZE_DB (
task_name IN OUT VARCHAR2,
begin_snapshot IN NUMBER,
end_snapshot IN NUMBER,
db_id IN NUMBER := NULL);
END;
/
The task_name parameter specifies the name of the analysis task that will be created. The begin_snapshot parameter specifies the snapshot number of the beginning snapshot in the analysis period. The end_snapshot parameter specifies the snapshot number of the ending snapshot in the analysis period. The db_id parameter specifies the database identifier of the database that will be analyzed. If unspecified, this parameter defaults to the database identifier of the database to which you are currently connected.
The following example creates an ADDM task in database analysis mode, and executes it to diagnose the performance of the entire database during the time period defined by snapshots 137 and 145:
VAR tname VARCHAR2(30);
BEGIN
:tname := 'ADDM for 7PM to 9PM';
DBMS_ADDM.ANALYZE_DB(:tname, 137, 145);
END;
/
6.3.2 Running ADDM in Instance Mode
To analyze a particular instance of the database, you can run ADDM in Instance mode. To run ADDM in Instance mode, use the DBMS_ADDM.ANALYZE_INST procedure:
BEGIN
DBMS_ADDM.ANALYZE_INST (
task_name IN OUT VARCHAR2,
begin_snapshot IN NUMBER,
end_snapshot IN NUMBER,
instance_number IN NUMBER := NULL,
db_id IN NUMBER := NULL);
END;
/
The task_name parameter specifies the name of the analysis task that will be created. The begin_snapshot parameter specifies the snapshot number of the beginning snapshot in the analysis period. The end_snapshot parameter specifies the snapshot number of the ending snapshot in the analysis period. The instance_number parameter specifies the instance number of the instance that will be analyzed. If unspecified, this parameter defaults to the instance number of the instance to which you are currently connected. The db_id parameter specifies the database identifier of the database that will be analyzed. If unspecified, this parameter defaults to the database identifier of the database to which you are currently connected.
The following example creates an ADDM task in instance analysis mode, and executes it to diagnose the performance of instance number 1 during the time period defined by snapshots 137 and 145:
VAR tname VARCHAR2(30);
BEGIN
:tname := 'my ADDM for 7PM to 9PM';
DBMS_ADDM.ANALYZE_INST(:tname, 137, 145, 1);
END;
/
6.3.3 Running ADDM in Partial Mode
To analyze a subset of all database instances, you can run ADDM in Partial mode. To run ADDM in Partial mode, use the DBMS_ADDM.ANALYZE_PARTIAL procedure:
BEGIN
DBMS_ADDM.ANALYZE_PARTIAL (
task_name IN OUT VARCHAR2,
instance_numbers IN VARCHAR2,
begin_snapshot IN NUMBER,
end_snapshot IN NUMBER,
db_id IN NUMBER := NULL);
END;
/
The task_name parameter specifies the name of the analysis task that will be created. The instance_numbers parameter specifies a comma-delimited list of instance numbers of the instances that will be analyzed. The begin_snapshot parameter specifies the snapshot number of the beginning snapshot in the analysis period. The end_snapshot parameter specifies the snapshot number of the ending snapshot in the analysis period. The db_id parameter specifies the database identifier of the database that will be analyzed. If unspecified, this parameter defaults to the database identifier of the database to which you are currently connected.
The following example creates an ADDM task in partial analysis mode, and executes it to diagnose the performance of instance numbers 1, 2, and 4, during the time period defined by snapshots 137 and 145:
VAR tname VARCHAR2(30);
BEGIN
:tname := 'my ADDM for 7PM to 9PM';
DBMS_ADDM.ANALYZE_PARTIAL(:tname, '1,2,4', 137, 145);
END;
/
6.3.4 Displaying an ADDM Report
To display a text report of an executed ADDM task, use the DBMS_ADDM.GET_REPORT function:
DBMS_ADDM.GET_REPORT (
task_name IN VARCHAR2)
RETURN CLOB;
The following example displays a text report of the ADDM task specified by its task name using the tname variable:
SET LONG 1000000 PAGESIZE 0;
SELECT DBMS_ADDM.GET_REPORT(:tname) FROM DUAL;
Note that the return type of a report is a CLOB, formatted to fit a line size of 80. For information about reviewing the ADDM analysis results in an ADDM report, see "ADDM Analysis Results".

6.4 Views with ADDM Information
Typically, you should view output and information from ADDM using Oracle Enterprise Manager or ADDM reports. However, you can display ADDM information through the DBA_ADVISOR views. This group of views includes the following (a sample query follows the list):
- DBA_ADVISOR_FINDINGS: This view displays all the findings discovered by all advisors. Each finding is displayed with an associated finding ID, name, and type. For tasks with multiple executions, the name of each task execution associated with each finding is also listed.
- DBA_ADDM_FINDINGS: This view contains a subset of the findings displayed in the related DBA_ADVISOR_FINDINGS view. This view only displays the ADDM findings discovered by all advisors. Each ADDM finding is displayed with an associated finding ID, name, and type.
- DBA_ADVISOR_FINDING_NAMES: List of all finding names registered with the advisor framework.
- DBA_ADVISOR_RECOMMENDATIONS: This view displays the results of completed diagnostic tasks with recommendations for the problems identified in each execution. The recommendations should be reviewed in the order of the RANK column, as this relays the magnitude of the problem for the recommendation. The BENEFIT column displays the benefit to the system you can expect after the recommendation is performed. For tasks with multiple executions, the name of each task execution associated with each advisor task is also listed.
- DBA_ADVISOR_TASKS: This view provides basic information about existing tasks, such as the task ID, task name, and when the task was created. For tasks with multiple executions, the name and type of the last or current execution associated with each advisor task is also listed.
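For instance, a simple sweep of the findings across all advisor tasks might look like this (a sketch):
SELECT task_name, type, impact, message
FROM dba_advisor_findings
ORDER BY impact DESC NULLS LAST;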
What is Oracle Exadata IO Resource Manager Limit Clause
The Exadata IO Resource Manager (IORM) has been a fundamental piece of the Exadata database machine since the beginning. The function of IORM is to guarantee that a database and/or category (set via the database resource manager) gets the share of IO defined in the resource plan at each storage server. The "share" here means the minimal amount of IO. This is widely known and documented in the Exadata documentation. Having a minimal amount of IO means you can guarantee a service level agreement (SLA). But what if you want to define the maximal amount of IO a database may get? The point of capping the IO a database can get is that its performance does not change (or changes only a little) once more databases are placed on the same database machine. So instead of guaranteeing the minimal amount of IO a database must get (which is what is documented in the Exadata documentation), why not put a limit on the maximal amount a database can get?
This is possible through the 'limit' clause of the database resource plan (this option is not available for the category plan clause). This clause is not found in the documentation available to me, but it is listed when calling the help function in cellcli for the 'alter iormplan' command.
What does such a plan look like?
CellCLI> list iormplan detail
name: dm01cel01_IORMPLAN
catPlan:
dbPlan: name=vxone,level=1,allocation=50,limit=10
        name=other,level=1,allocation=50
status: active
This is the IO resource manager plan on one of the cells in a database machine. What we see here is a database plan which says that the database 'vxone' must always get at least 50% of all IO capacity, and all others (name=other) also get 50%. But now the purpose of this blog: I've limited the database 'vxone' to get only 10%!
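For reference, such a plan is set with the 'alter iormplan' command on each cell; a sketch of the command behind the listing above:
CellCLI> ALTER IORMPLAN dbPlan=((name=vxone,level=1,allocation=50,limit=10),(name=other,level=1,allocation=50))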
Does it really work? A good practice is to check all things you are told. Especially if it’s something which is widely unknown, and not in the documentation.
Let's scan a large table without any limit set:
SQL> select count(*) from cg_var;

  COUNT(*)
----------
1596587000

Elapsed: 00:00:45.97
This is the same SQL, but now with the limit set to 10% (as seen in the iormplan listed above):
SQL> select count(*) from cg_var;

  COUNT(*)
----------
1596587000

Elapsed: 00:02:27.28
It is clear the limit clause limits the IO capability of this database: scanning the same table with the same SQL in the same database now takes roughly three times as long! I've run the tests a few times to see if both timings are consistent, and they are!
What is Parallel Query FAQ
Oracle Parallel Query (OPQ) FAQ. Oracle Parallel Query was previously called the Parallel Query Option.
What is Oracle Parallel Query?
Oracle Parallel Query (formerly Oracle Parallel Query Option or PQO) allows one to break up a given SQL statement so that its parts can run simultaneously on different processors in a multi-processor machine. Typical operations that can run in parallel are: full table scans, sorts, sub-queries, data loading etc. Parallel Query can improve performance of certain types of operations dramatically and is commonly used in Decision Support and Data Warehousing applications.
What is the difference between Parallel Query and Parallel Server?
Parallel Query allows one to break a SELECT or DML statement into multiple smaller chunks and have PQ slaves execute those smaller chunks on separate CPUs in a single box. Oracle Real Application Clusters (RAC) allows one to take advantage of a multi-node clustered environment for availability and performance reasons. It is commonly used to access a very large database from different nodes of a cluster. More information about Oracle RAC can be obtained from the RAC FAQ.
If both RAC and OPQ are available one can split operations across multiple CPUs and multiple nodes in a cluster for even further performance improvements.
How does one invoke Parallel Query?
After setting the INIT.ORA parameters necessary for Parallel Query to work, do the following:
- Alter the table (or index) to indicate that Oracle should try to parallelize operations performed against it:
ALTER TABLE table_name PARALLEL (DEGREE 8);
- Put hints in SQL statements to indicate that Oracle should try to execute them in parallel:
SELECT --+ PARALLEL(table_alias, degree, nodes)
       *
FROM   table_name ...

SELECT /*+ PARALLEL(table_alias, degree, nodes) */ *
FROM   table_name ...
How does one disable Parallel Query?
Per table, execute the following DDL commands:
ALTER TABLE table_name PARALLEL (DEGREE 1 INSTANCES 1);
ALTER TABLE table_name NOPARALLEL;
ALTER INDEX index_name PARALLEL (DEGREE 1 INSTANCES 1);
ALTER INDEX index_name NOPARALLEL;
You can also remove the INIT.ORA parameters that allow Parallel Query to work.
What parameters can be set to control Parallel Query?
The following INIT.ORA initialization parameters can be set to control Parallel Query execution:
- PARALLEL_MIN_SERVERS - Minimum number of parallel server processes
- PARALLEL_MAX_SERVERS - Maximum number of parallel server processes
- PARALLEL_AUTOMATIC_TUNING = TRUE - To enable intelligent defaults for parallel execution parameters (8i and above)
How does one monitor Parallel Query Execution?
Use the following Oracle data dictionary views to monitor parallel queries.
Prior to Oracle 8i:
SELECT * FROM sys.v$pq_sysstat;
Oracle 8i and above:
SELECT * FROM v$pq_sysstat;
SELECT * FROM v$px_process;
SELECT * FROM v$px_sesstat;
SELECT * FROM v$px_process_sysstat;
Can single-processor systems benefit from Parallel Query?
The more CPUs you have, the bigger the benefit you can expect to get from using Parallel Query. Nevertheless, even a single-CPU system can benefit from using it. The reason for this is that the system can continue processing a second thread while waiting for I/O or network activity in another thread. Before enabling it on a single-CPU system, ensure the system is not too busy and start with a low value for PARALLEL_MAX_SERVERS (e.g. 2 or 4), as shown below.
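For example, a conservative INIT.ORA starting point (the values are illustrative):
PARALLEL_MIN_SERVERS = 0
PARALLEL_MAX_SERVERS = 4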
Can Parallel Query operations also cause performance problems?
If not carefully watched over, Parallel Query can completely saturate a machine. It is not difficult to bring some of the most powerful machines to their knees. One needs to carefully balance the number of people executing Parallel Query operations, and the degree of parallelism, with the number of CPUs in the system. Also note that executing things in parallel is not always faster. In fact, you need to test all queries to ensure that they will benefit from the increased parallelism before permanently enabling it.
What is Oracle Parallel Server
This chapter introduces parallel processing and the parallel database technology features of Oracle Parallel Server. Oracle Parallel Server offers significant advantages for Online Transaction Processing (OLTP), Electronic Commerce, Decision Support Systems (DSS), and hybrid system types. With the functionality of Oracle Parallel Server, such systems can effectively exploit the redundancy of parallel environments.
You can also use Oracle Parallel Server to deliver high performance, throughput, and high availability. Whatever your goal, your challenge is to successfully deploy these technologies to take full advantage of their multiprocessing powers. To do this, you must understand how Oracle Parallel Server works, what resources it requires, and how to effectively use it.
Oracle Parallel Server is not the same as the parallel execution feature. Oracle Parallel Server refers to multiple computers accessing a shared database, while parallel execution refers to multiple processes executing an operation in parallel. Parallel execution is available on both single-instance and Oracle Parallel Server installations.
See Oracle8i Concepts for additional information on Oracle Parallel Server.
What Is Oracle Parallel Server?
Oracle Parallel Server is a robust computing environment that harnesses the processing power of multiple, interconnected computers. Oracle Parallel Server software and a collection of hardware known as a "cluster" unite the processing power of each component to become a single, robust computing environment. A cluster generally comprises two or more computers, or "nodes".
In Oracle Parallel Server environments, all nodes concurrently execute transactions against the same database. Oracle Parallel Server coordinates each node's access to the shared data to provide consistency and integrity.
Harnessing the power of multiple nodes offers obvious advantages. If you divide a large task into sub-tasks and distribute the sub-tasks among multiple nodes, you can complete the task faster than if only one node did the work. This type of parallel processing is clearly more efficient than sequential processing. It also provides increased performance to process larger workloads and accommodate growing user populations.
If you establish high node-to-data affinity with accurate partitioning, you can effectively scale your applications to meet increasing data processing demands. As you add resources, Oracle Parallel Server can exploit them and extend their processing powers beyond the limits of the individual components.
You can use Oracle Parallel Server for many system types. For example, data warehousing applications accessing read-only data are prime candidates for Oracle Parallel Server. In addition, Oracle Parallel Server successfully manages increasing numbers of online transaction processing systems as well as hybrid systems that combine the characteristics of both read-only and read/write applications.
Oracle Parallel Server also serves as an important component of robust High Availability solutions. A properly configured Oracle Parallel Server environment can tolerate failures with minimal or no downtime.
Benefits of Oracle Parallel Server
Some of the most important benefits beyond the obvious advantages of parallel processing are described in the following sections. These benefits include improved throughput and scalability over single-instance systems and improved response time. Oracle Parallel Server also provides an ideal High Availability solution by resolving node failure in a clustered environment.
Oracle Parallel Server environments are functionally transparent when compared to single-instance environments because they are functionally identical to single-instance environments.
Scalability
Scalability is the ability to add additional nodes to properly deployed Oracle Parallel Server applications and achieve markedly improved performance. Oracle Parallel Server can take advantage of additional equipment and harness the processing power of multiple systems.
High Availability
High availability refers to systems with redundant components that provide consistent, uninterrupted service, even in the event of hardware or software failures. In most high availability configurations, nodes are isolated from each other so a failure at one node does not affect the entire system. In such a case, surviving nodes recover the failed node and the system continues to provide data access to users. This means data is consistently available, more so than it would be with a single node upon node failure. High availability also implies increased database availability.
Transparency
Transparency means that single-instance exclusive Oracle and shared configurations that use Oracle Parallel Server are functionally equivalent. Applications that run on single-instance Oracle execute with the same results using Oracle Parallel Server. An Oracle database can be configured to execute in different modes, most notably exclusive mode (a single instance) and shared mode (Oracle Parallel Server).
Installation of the Oracle Parallel Server option is required if you want to execute transactions from multiple nodes in shared mode. Oracle Parallel Server offers many performance features beyond those available in a single-instance environment.
High Performance Features of Oracle Parallel Server
Oracle
Parallel Server takes advantage of the parallel processing in a computer
cluster without sacrificing Oracle's inherent transaction processing features.
The following sections discuss certain features in Oracle, both in exclusive
and shared modes, that result in improved application performance when these
applications run using Oracle Parallel Server.
Buffer Cache Management
Within
a single instance, Oracle stores resources, such as data block and lock
information, in a buffer cache that resides in memory. Storing this information
locally reduces the amount of disk I/O necessary for database operations. Since
each node in the Parallel Server has its own memory that is not shared with
other nodes, Oracle Parallel Server must coordinate the buffer caches of
different nodes while minimizing additional disk I/O that could reduce
performance. The Oracle parallel cache management technology maintains the
high-performance features of Oracle while coordinating multiple buffer caches.
See Oracle8i Concepts for detailed information about the buffer cache.
Fast Commits, Group Commits, and Deferred Writes
Fast
commits, group commits, and deferred writes operate on a per-instance basis in
Oracle and work the same whether in exclusive or shared mode.
Oracle only reads data blocks from disk if they are not already in the buffer cache of the instance requesting the data. Because data block writes are deferred, they often contain modifications from multiple transactions. Oracle writes modified data blocks to disk in the following situations:
- When the blocks have not been used recently and new data requires buffer cache space (in shared or exclusive mode)
- During checkpoints (shared or exclusive mode)
- When another instance needs the blocks (only in shared mode)
Row Locking and Multiversion Read Consistency
Oracle's
row locking feature allows multiple transactions from separate nodes to lock
and update different rows of the same data block. This is done without any of
the transactions waiting for the others to commit. If a row has been modified
but not yet committed, the original row values are available to all instances
for read access. This is called multiversion read consistency.
Online Backup and Archiving
Oracle
Parallel Server supports all Oracle backup features that are available in
exclusive mode, including both online and offline backups of either an entire
database or individual tablespaces.
If
you operate Oracle in ARCHIVELOG mode, online redo log files are archived
before they are overwritten. In Oracle Parallel Server, each instance can
automatically archive its own redo log files or one or more instances can
manually archive the redo log files for all instances.
In ARCHIVELOG mode, you can make both online and offline backups. If you operate Oracle in NOARCHIVELOG mode, you can only make offline backups. If you cannot afford any data loss, Oracle strongly recommends that you operate your production databases in ARCHIVELOG mode.
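As a minimal sketch of enabling ARCHIVELOG mode on a single-instance database (assumes SYSDBA access and that a brief shutdown is acceptable):
CONN sys/password AS SYSDBA
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
-- Confirm the new log mode
ARCHIVE LOG LIST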
What is Parallel Server and Oracle RAC
2010-2011 | Oracle RAC Basic | Author: Gitesh Trivedi
This article explains the basic differences between, and offers a comparison of, the Oracle Parallel Server and Oracle RAC database servers. We have received many requests to publish this article as soon as possible.
What is CREATE_PENDING_AREA Procedure
All
changes to the plan schema must be done within a pending area. The pending area
can be thought of as a "scratch" area for plan schema changes. The
administrator creates this pending area, makes changes as necessary, possibly
validates these changes, and only when the submit is completed do these changes
become active.
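As a rough sketch of that workflow using the DBMS_RESOURCE_MANAGER package (the plan and consumer group names are illustrative assumptions):
BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.CREATE_PLAN(
    plan    => 'day_plan',
    comment => 'Example plan created in the pending area');
  DBMS_RESOURCE_MANAGER.CREATE_CONSUMER_GROUP(
    consumer_group => 'batch_group',
    comment        => 'Example consumer group');
  -- Optional explicit check; also performed implicitly on submit
  DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
  -- Only now do the changes become active
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/
-- To abandon the changes instead:
-- EXEC DBMS_RESOURCE_MANAGER.CLEAR_PENDING_AREA();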
You
may, at any time while the pending area is active, view the current plan schema
with your changes by selecting from the appropriate user views.
At
any time, you may clear the pending area if you want to stop the current
changes. You may also call the VALIDATE procedure to confirm whether the changes you have made are valid. You do not have to make your changes in a given order to maintain a consistent group of entries. These checks are also performed implicitly when the pending area is submitted.
What is Transparent Data Encryption (TDE)
Oracle has many security features available within the database, but until now there has been no "out-of-the-box" method for protecting the data at the operating system level. The Transparent Data Encryption (TDE) feature introduced in Oracle 10g Database Release 2 allows sensitive data to be encrypted within the datafiles to prevent access to it from the operating system. This article presents some basic examples of its use.
Related articles:
- Tablespace Encryption in Oracle 11g Database Release 1
- Data Encryption - DBMS_OBFUSCATION_TOOLKIT
- Database Security Enhancements in Oracle Database 10g - DBMS_CRYPTO
- SecureFiles in Oracle 11g Database Release 1 - LOB Encryption
Setup
In order to show the encryption working we need to open a datafile in a HEX editor. Rather than trying to open a huge datafile, it makes sense to create a small file for this test.
CONN sys/password AS SYSDBA
CREATE TABLESPACE tde_test
DATAFILE '/u01/oradata/DB10G/tde_test.dbf' SIZE 128K
AUTOEXTEND ON NEXT 64K;
Next, create a user with a quota on the new tablespace.
CREATE USER test IDENTIFIED BY test DEFAULT TABLESPACE tde_test;
ALTER USER test QUOTA UNLIMITED ON tde_test;
GRANT CONNECT TO test;
GRANT CREATE TABLE TO test;
Normal Column
First we will prove that the data from a normal column can be seen from the OS. To do this create a test table and insert some data.
CONN test/test
CREATE TABLE tde_test (
id NUMBER(10),
data VARCHAR2(50)
)
TABLESPACE tde_test;
INSERT INTO tde_test (id, data) VALUES (1, 'This is a secret!');
COMMIT;
Then flush the buffer cache to make sure the data is written to the datafile.
CONN sys/password AS SYSDBA
ALTER SYSTEM FLUSH BUFFER_CACHE;
Open the datafile using a HEX editor (like UltraEdit) and the sentence "This is a secret!" is clearly visible amongst all the non-printable characters.
Encrypted Column
Before attempting to create a table with encrypted columns, a wallet must be created to hold the encryption key. The search order for finding the wallet is as follows:
- If present, the location specified by the ENCRYPTION_WALLET_LOCATION parameter in the sqlnet.ora file.
- If present, the location specified by the WALLET_LOCATION parameter in the sqlnet.ora file.
- The default location for the wallet ($ORACLE_BASE/admin/$ORACLE_SID/wallet).
For this example we will use the ENCRYPTION_WALLET_LOCATION parameter in the sqlnet.ora file. To accomplish this we add the following entry into the sqlnet.ora file on the server and make sure the specified directory has been created.
ENCRYPTION_WALLET_LOCATION=
(SOURCE=(METHOD=FILE)(METHOD_DATA=
(DIRECTORY=/u01/app/oracle/admin/DB10G/encryption_wallet/)))
The following command creates and opens the wallet.
CONN sys/password AS SYSDBA
-- 10g version
ALTER SYSTEM SET ENCRYPTION KEY AUTHENTICATED BY "myPassword";
-- 11g version
ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY "myPassword";
Wallets must be
reopened after an instance restart and can be closed to prevent access to
encrypted columns.
-- 10g version
ALTER SYSTEM SET ENCRYPTION WALLET OPEN AUTHENTICATED BY "myPassword";
-- 11g version
ALTER SYSTEM SET ENCRYPTION WALLET OPEN IDENTIFIED BY "myPassword";
ALTER SYSTEM SET ENCRYPTION WALLET CLOSE;
Create a test table with an encrypted column and insert some data. Using the ENCRYPT clause on its own is the same as using the ENCRYPT USING 'AES192' clause, as AES192 is the default encryption method.
CONN test/test
DROP TABLE tde_test;
PURGE RECYCLEBIN;
CREATE TABLE tde_test (
id NUMBER(10),
data VARCHAR2(50) ENCRYPT
)
TABLESPACE tde_test;
INSERT INTO tde_test (id, data) VALUES (1, 'This is a secret!');
COMMIT;
Flush the buffer cache to make sure the data is written to the datafile.
CONN sys/password AS SYSDBA
ALTER SYSTEM FLUSH BUFFER_CACHE;
When the file is opened using a HEX editor only non-printable characters are present. The test sentence cannot be seen anywhere, but the data is still clearly visible from a database connection.
SELECT * FROM tde_test;
ID DATA
---------- --------------------------------------------------
1 This is a secret!
1 row selected.
Performance
There is a performance overhead associated with the encryption/decryption process. The following tables are used in a performance comparison.
CONN test/test
CREATE TABLE tde_test_1 (
id NUMBER(10),
data VARCHAR2(50)
)
TABLESPACE tde_test;
CREATE TABLE tde_test_2 (
id NUMBER(10),
data VARCHAR2(50) ENCRYPT
)
TABLESPACE tde_test;
The following script uses these tables to compare the speed of regular and encrypted inserts, and of regular and decrypted queries. Each test repeats 1000 times, with the timings reported in 100ths of a second.
SET SERVEROUTPUT ON SIZE UNLIMITED
DECLARE
l_loops NUMBER := 1000;
l_data VARCHAR2(50);
l_start NUMBER;
BEGIN
EXECUTE IMMEDIATE 'TRUNCATE TABLE tde_test_1';
EXECUTE IMMEDIATE 'TRUNCATE TABLE tde_test_2';
l_start := DBMS_UTILITY.get_time;
FOR i IN 1 .. l_loops LOOP
INSERT INTO tde_test_1 (id, data)
VALUES (i, 'Data for ' || i);
END LOOP;
DBMS_OUTPUT.put_line('Normal Insert : ' || (DBMS_UTILITY.get_time - l_start));
l_start := DBMS_UTILITY.get_time;
FOR i IN 1 .. l_loops LOOP
INSERT INTO tde_test_2 (id, data)
VALUES (i, 'Data for ' || i);
END LOOP;
DBMS_OUTPUT.put_line('Encrypted Insert: ' || (DBMS_UTILITY.get_time - l_start));
l_start := DBMS_UTILITY.get_time;
FOR i IN 1 .. l_loops LOOP
SELECT data
INTO l_data
FROM tde_test_1
WHERE id = i;
END LOOP;
DBMS_OUTPUT.put_line('Normal Query : ' || (DBMS_UTILITY.get_time - l_start));
l_start := DBMS_UTILITY.get_time;
FOR i IN 1 .. l_loops LOOP
SELECT data
INTO l_data
FROM tde_test_2
WHERE id = i;
END LOOP;
DBMS_OUTPUT.put_line('Decrypted Query : ' || (DBMS_UTILITY.get_time - l_start));
END;
/
Normal Insert : 31
Encrypted Insert: 45
Normal Query : 42
Decrypted Query : 58
PL/SQL procedure successfully completed.
SQL>
The results clearly demonstrate that encrypted inserts and decrypted queries are slower than their normal counterparts.
External Tables
External tables can be encrypted in a similar way to regular tables. First, we make sure the default data pump directory is available to the test user.
CONN sys/password AS SYSDBA
GRANT READ, WRITE ON DIRECTORY data_pump_dir TO test;
Next, we create the external table as a copy of an existing table, using the ENCRYPT clause.
CONN test/test
CREATE TABLE tde_test_1_ext (
id,
data ENCRYPT IDENTIFIED BY "myPassword"
)
ORGANIZATION EXTERNAL
(
TYPE ORACLE_DATAPUMP
DEFAULT DIRECTORY data_pump_dir
LOCATION ('tde_test_1_ext.dmp')
)
AS
SELECT id,
data
FROM tde_test_1;
Views
The %_ENCRYPTED_COLUMNS views are used to display information about encrypted columns.
SET LINESIZE 100
COLUMN owner FORMAT A15
COLUMN table_name FORMAT A15
COLUMN column_name FORMAT A15
SELECT * FROM dba_encrypted_columns;
OWNER TABLE_NAME COLUMN_NAME ENCRYPTION_ALG SAL
--------------- ------------------------------ --------------- ----------------------------- ---
TEST TDE_TEST_2 DATA AES 192 bits key YES
TEST TDE_TEST_1_EXT DATA AES 192 bits key YES
2 rows selected.
SQL>
Miscellaneous Information
- The master key can be reset in a similar way to how it is set initially. The instructions for this are shown here.
- The wallet password can be changed using the Oracle Wallet Manager. The instructions for this are shown here.
- The encryption algorithm used for a column can be changed using the REKEY command. The instructions for this are shown here.
What Is Oracle Database Vault?
Oracle Database Vault restricts access to specific areas in an Oracle database from any user, including users who have administrative access. For example, you can restrict administrative access to employee salaries, customer medical records, or other sensitive information. This enables you to apply fine-grained access control to your sensitive data in a variety of ways. It hardens your Oracle Database instance and enforces industry standard best practices in terms of separating duties from traditionally powerful users. Most importantly, it protects your data from super-privileged users but still allows them to maintain your Oracle databases. Oracle Database Vault is an integral component of your enterprise.
With Oracle Database Vault, you address the most difficult security problems remaining today: protecting against insider threats, meeting regulatory compliance requirements, and enforcing separation of duty.
You configure Oracle Database Vault to manage the security of an individual Oracle Database instance. You can install Oracle Database Vault on standalone Oracle Database installations, in multiple Oracle homes, and in Oracle Real Application Clusters (Oracle RAC) environments.
For frequently asked questions about Oracle Database Vault, visit
http://www.oracle.com/technetwork/database/focus-areas/security/dbv-faq-083210.html
For Oracle Technology Network (OTN) information specific to Oracle Database Vault, visit
http://www.oracle.com/technetwork/database/options/database-vault/index-085211.html
Components of Oracle Database Vault
Oracle Database Vault has the following components:
Oracle Database Vault Access Control Components
Oracle Database Vault enables you to create the following components to manage security for your database instance:
·
Realms. A realm is a functional grouping of
database schemas, objects, and roles that must be secured. For example, you can
group a set of schemas, objects, and roles that are related to accounting,
sales, or human resources. After you have grouped these into a realm, you can
use the realm to control the use of system privileges to specific accounts or
roles. This enables you to provide fine-grained access controls for anyone who
wants to use these schemas, objects, and roles. Chapter
4, "Configuring Realms" discusses realms in detail.
·
Command rules. A command rule is a
special rule that you can create to control how users can execute almost any
SQL statement, including
SELECT
, ALTER SYSTEM
, data definition
language (DDL), and data manipulation language (DML) statements. Command rules
must work with rule sets to determine whether the statement is allowed. Chapter
6, "Configuring Command Rules" discusses command rules in detail.
·
Factors. A factor is a named
variable or attribute, such as a user location, database IP address, or session
user, which Oracle Database Vault can recognize and secure. You can use factors
for activities such as authorizing database accounts to connect to the database
or creating filtering logic to restrict the visibility and manageability of
data. Each factor can have one or more identities. An identity is the actual
value of a factor. A factor can have several identities depending on the factor
retrieval method or its identity mapping logic. Chapter
7, "Configuring Factors" discusses factors in detail.
·
Rule sets. A rule set is a collection
of one or more rules that you can associate with a realm authorization, command
rule, factor assignment, or secure application role. The rule set evaluates to
true or false based on the evaluation of each rule it contains and the
evaluation type (All True or Any
True). The rule within a rule set is a PL/SQL expression that evaluates
to true or false. You can have the same rule in multiple rule sets. Chapter
5, "Configuring Rule Sets" discusses rule sets in detail.
·
Secure application roles. A secure application role
is a special Oracle Database role that can be enabled based on the evaluation
of an Oracle Database Vault rule set. Chapter
8, "Configuring Secure Application Roles for Oracle Database
Vault" discusses secure application roles in detail.
To augment these
components, Oracle Database Vault provides a set of PL/SQL interfaces and
packages. "Oracle
Database Vault PL/SQL Interfaces and Packages" provides an overview.
In general, the first step you take is to create a realm composed of the database schemas or database objects that you want to secure. You can further secure the realm by creating rules, command rules, factors, identities, rule sets, and secure application roles. In addition, you can run reports on the activities these components monitor and protect. Chapter 3, "Getting Started with Oracle Database Vault" provides a simple tutorial that will familiarize you with basic Oracle Database Vault functionality. Chapter 17, "Oracle Database Vault Reports" provides more information about how you can run reports to check the configuration and other activities that Oracle Database Vault performs.
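As a hedged sketch of that first step, the following DBMS_MACADM calls create a realm around the HR schema (the realm name and audit option are illustrative assumptions, and the calls require a Database Vault administrative role such as DV_OWNER):
BEGIN
  DBMS_MACADM.CREATE_REALM(
    realm_name    => 'HR Realm',
    description   => 'Protects the HR schema',
    enabled       => DBMS_MACUTL.G_YES,
    audit_options => DBMS_MACUTL.G_REALM_AUDIT_FAIL);
  -- Place all HR objects inside the realm
  DBMS_MACADM.ADD_OBJECT_TO_REALM(
    realm_name   => 'HR Realm',
    object_owner => 'HR',
    object_name  => '%',
    object_type  => '%');
END;
/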
Oracle Database Vault Administrator (DVA)
Oracle Database Vault Administrator is a Java application that is built on top of the Oracle Database Vault PL/SQL application programming interfaces (API). This application allows security managers who may not be proficient in PL/SQL to configure the access control policy through a user-friendly interface. Oracle Database Vault Administrator provides an extensive collection of security-related reports that assist in understanding the baseline security configuration. These reports also help point out deviations from this baseline.
Chapter 4 through Chapter 9 explain how to use Oracle Database Vault Administrator to configure access control policy defined in realms, command rules, factors, rule sets, secure application roles, and how to integrate Oracle Database Vault with other Oracle products. Chapter 17, "Oracle Database Vault Reports" explains Oracle Database Vault reporting.
Oracle Database Vault Configuration Assistant (DVCA)
To perform maintenance tasks on your Oracle Database Vault installation, use the command-line utility Oracle Database Vault Configuration Assistant (DVCA). For more information, see Appendix C, "Postinstallation Oracle Database Vault Procedures".
Oracle Database Vault DVSYS and DVF Schemas
Oracle Database Vault provides a schema, DVSYS
, which stores the database objects
needed to process Oracle data for Oracle Database Vault. This schema contains
the roles, views, accounts, functions, and other database objects that Oracle
Database Vault uses. The DVF
schema contains public functions to
retrieve (at run time) the factor values set in the Oracle Database Vault
access control configuration.
Chapter 11, "Oracle Database Vault Objects" describes these schemas in detail.
Oracle Database Vault PL/SQL Interfaces and Packages
Oracle Database Vault provides a collection of PL/SQL interfaces and packages that allow security managers or application developers to configure the access control policy as required. The PL/SQL procedures and functions allow the general database account to operate within the boundaries of access control policy in the context of a given database session.
See Chapter 15, "Using the Oracle Database Vault PL/SQL Interfaces" and Chapter 12, "Using the DBMS_MACADM Package" for more information.
Oracle Database Vault and Oracle Label Security PL/SQL APIs
Oracle Database Vault provides access control capabilities that can be integrated with Oracle Label Security. The Oracle Label Security database option is integrated with Oracle Enterprise Manager Database Control, which enables the security manager to define label security policy and apply it to database objects. Oracle Label Security also provides a collection of PL/SQL APIs that can be used by a database application developer to provide label security policy and protections.
See "Integrating Oracle Database Vault with Oracle Label Security" for more information on how Oracle Database Vault works with Oracle Label Security. See also Oracle Label Security Administrator's Guide for more information about Oracle Policy Manager.
Oracle Database Vault Reporting and Monitoring Tools
You can generate reports on the various activities that Oracle Database Vault monitors. In addition, you can monitor policy changes, security violation attempts, and database configuration and structural changes.
See Chapter 17, "Oracle Database Vault Reports" for more information about the reports that you can generate. Chapter 16, "Monitoring Oracle Database Vault" explains how to monitor Oracle Database Vault.
How Oracle Database Vault Addresses Compliance Regulations
One of the biggest side benefits resulting from regulatory compliance has been security awareness. Historically, the focus of the information technology (IT) department has been on high availability and performance. The focus on regulatory compliance has required everyone to take a step back and look at their IT infrastructure, databases, and applications from a security angle. Common questions include:
·
Who
has access to this information?
·
Where
is the sensitive information stored?
Regulations such
as the Sarbanes-Oxley Act, Health Insurance Portability and Accountability Act
(HIPAA), International Convergence of Capital Measurement and Capital
Standards: a Revised Framework (Basel II), Japan Privacy Law, Payment Card
Industry Data Security Standard (PCI DSS), and the European Union Directive on
Privacy and Electronic Communications have common themes that include internal
controls, separation of duty, and access control.
While most changes required by regulations such as Sarbanes-Oxley and HIPAA are procedural, the remainder may require technology investments. A common security requirement found in regulations is stringent internal controls. The degree to which Oracle Database Vault helps an organization achieve compliance varies with the regulation. In general, Oracle Database Vault realms, separation of duty features, command rules, and factors help reduce the overall security risks that regulation provisions worldwide address.
Table 1-1 lists regulations that address potential security threats.
Regulation | Potential Security Threat |
Sarbanes-Oxley Section 302 | Unauthorized changes to data |
Sarbanes-Oxley Section 404 | Modification to data, unauthorized access |
Sarbanes-Oxley Section 409 | Denial of service, unauthorized access |
Gramm-Leach-Bliley | Unauthorized access, modification, or disclosure |
Health Insurance Portability and Accountability Act (HIPAA) 164.306 | Unauthorized access to data |
HIPAA 164.312 | Unauthorized access to data |
Basel II – Internal Risk Management | Unauthorized access to data |
CFR Part 11 | Unauthorized access to data |
Japan Privacy Law | Unauthorized access to data |
EU Directive on Privacy and Electronic Communications | Unauthorized access to data |
Payment Card Industry Data Security Standard (PCI DSS) | Unauthorized changes to data |
How Oracle Database Vault Addresses Insider Threats
For many years, worms, viruses, and the external intruder (hacker) have been perceived as the biggest threats to computer systems. Unfortunately, what is often overlooked is the potential for trusted users or privileged users to steal or modify data.
Oracle Database Vault protects against insider threats by using realms, factors, and command rules. Combined, these provide powerful security tools to help secure access to databases, applications, and sensitive information. You can combine rules and factors to control the conditions under which commands in the database are allowed to execute, and to control access to data protected by a realm. For example, you can create rules and factors to control access to data based on IP addresses, the time of day, and specific programs. These can limit access to only those connections that pass these conditions. This can prevent unauthorized access to the application data and access to the database by unauthorized applications.
Oracle Database Vault provides built-in factors that you can use in combination with rules to control access to the database, realm-protected applications, and commands within the database.
You can associate rules and factors with dozens of commands within the database to provide stronger internal controls within the database. You can customize these to meet the operational policies for your site. For example, you could define a rule to limit execution of the
ALTER SYSTEM
statement to a specific IP address
and host name.
How Oracle Database Vault Allows for Flexible Security Policies
Oracle Database Vault helps you design flexible security policies for your database. For example, any database user, such as SYSTEM
, who has the DBA
role, can make
modifications to basic parameters in a database. Suppose an inexperienced
administrator who has system privileges decides to start a new redo log file
but does not realize that doing so at a particular time may cause problems for
the database. With Oracle Database Vault, you can create a command rule to
prevent this user from making such modifications by limiting his or her usage
of the ALTER
SYSTEM SWITCH LOGFILE
statement. Furthermore, you can attach rules to the command rule to restrict
activity further, such as limiting the statement's execution in the following
ways:
·
By
time (for example, only during 4 p.m. and 5 p.m. on Friday afternoons)
·
By
local access only, that is, not remotely
·
By
IP address (for example, allowing the action to only a specified range of IP
addresses)
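A rough sketch of such a command rule using the DBMS_MACADM package (the rule set name is an illustrative assumption and must already exist; the call requires a Database Vault administrative role):
BEGIN
  DBMS_MACADM.CREATE_COMMAND_RULE(
    command       => 'ALTER SYSTEM',
    rule_set_name => 'Limit Log Switches',
    object_owner  => '%',
    object_name   => '%',
    enabled       => DBMS_MACUTL.G_YES);
END;
/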
In this way, you
can carefully control and protect your system. You can disable and reenable
command rules when you need to, and easily maintain them from one central
location using Oracle Database Vault Administrator.
How Oracle Database Vault Addresses Database Consolidation Concerns
Oracle customers today still have hundreds and even thousands of databases distributed throughout the enterprise and around the world. However, database consolidation will continue as a cost-saving strategy in the coming years. The physical security provided by the distributed database architecture must be available in the consolidated environment. Oracle Database Vault addresses the primary security concerns of database consolidation.
Figure 1-1 illustrates how Oracle Database Vault addresses the following database security concerns:
·
Administrative privileged account
access to application data: In this case, Oracle Database Vault prevents the DBA
from accessing the schemas that are protected by the FIN Realm. Although the
DBA is the most powerful and trusted user, the DBA does not need access to
application data residing within the database.
·
Separation of duties for application
data access:
In this case, the FIN Realm Owner, created in Oracle Database Vault, has access
to the FIN Realm schemas.
Database
consolidation can result in multiple powerful user accounts residing in a
single database. This means that in addition to the overall database DBA,
individual application schema owners also may have powerful privileges.
Revoking some privileges may adversely affect existing applications. Using Oracle Database Vault realms, you can enforce access to applications through a trusted path, preventing database users who have not been specifically authorized from using powerful privileges to look at application data.
For example, a DBA who has the SELECT ANY TABLE
privilege can be prevented
from using that privilege to view application data.
What is V$BH
V$BH displays the status and number of pings for every buffer in the SGA. This is a Real Application Clusters view.
Column | Datatype | Description |
FILE# | NUMBER | Datafile identifier number (to find the filename, query DBA_DATA_FILES or V$DBFILE) |
BLOCK# | NUMBER | Block number |
CLASS# | NUMBER | Class number |
STATUS | VARCHAR2(6) | Status of the buffer |
XNC | NUMBER | Number of PCM x to null lock conversions due to contention with another instance. This column is obsolete and maintained for backward compatibility. |
LOCK_ELEMENT_ADDR | RAW(4|8) | Address of the lock element that contains the PCM lock that is covering the buffer. If more than one buffer has the same address, then these buffers are covered by the same PCM lock. |
LOCK_ELEMENT_NAME | NUMBER | The name of the lock element that contains the PCM lock that is covering the buffer. If more than one buffer has the same address, then these buffers are covered by the same PCM lock. |
LOCK_ELEMENT_CLASS | NUMBER | The class of the lock element that contains the PCM lock that is covering the buffer. If more than one buffer has the same address, then these buffers are covered by the same PCM lock. |
FORCED_READS | NUMBER | Number of times the block had to be reread from the cache because another instance has forced it out of this instance's cache by requesting the lock on the block in exclusive mode |
FORCED_WRITES | NUMBER | Number of times GCS had to write this block to disk because this instance had used the block and another instance had requested the lock on the block in a conflicting mode |
DIRTY | VARCHAR2(1) | Y - block modified |
TEMP | VARCHAR2(1) | Y - temporary block |
PING | VARCHAR2(1) | Y - block pinged |
STALE | VARCHAR2(1) | Y - block is stale |
DIRECT | VARCHAR2(1) | Y - direct block |
NEW | VARCHAR2(1) | Always set to N. This column is obsolete and maintained for backward compatibility. |
OBJD | NUMBER | Database object number of the block that the buffer represents |
TS# | NUMBER | Tablespace number of block |
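For example, a query along the following lines maps buffers in V$BH back to object names (a sketch; it requires SELECT privileges on both views):
SELECT o.object_name, COUNT(*) AS buffers
FROM v$bh b
JOIN dba_objects o ON b.objd = o.data_object_id
GROUP BY o.object_name
ORDER BY buffers DESC;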
What is locked_mode in v$locked_objects?
LMODE | NUMBER | Lock mode in which the session holds the lock: 0 - none; 1 - null (NULL); 2 - row-S (SS); 3 - row-X (SX); 4 - share (S); 5 - S/Row-X (SSX); 6 - exclusive (X) |
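A typical query joins V$LOCKED_OBJECTS to DBA_OBJECTS to show who is locking what (a sketch):
SELECT lo.session_id,
       lo.oracle_username,
       o.owner,
       o.object_name,
       lo.locked_mode
FROM v$locked_objects lo
JOIN dba_objects o ON lo.object_id = o.object_id;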
DDL statement in alert.log
Enable all DDL Commands to be Logged in Alert.log file - Oracle Database 12c
With this simple command you can enable all DDL commands to be logged in the alert.log:
ALTER SYSTEM SET enable_ddl_logging = TRUE;
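To confirm the setting afterwards, something like this should work from SQL*Plus:
SHOW PARAMETER enable_ddl_logging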
What is Endianness
To
transport a tablespace from one platform to another, the datafiles on the two platforms must be in the same endian format (byte ordering). The pattern for
byte ordering in native types is called endianness. There are only two
main patterns, big endian and little endian. Big endian means the most
significant byte comes first, and little endian means the least significant
byte comes first. If the source platform and the target platform are of
different endianness, an additional step must be taken on either the source or
target platform to convert the tablespace being transported to the target
format. If they are of the same endianness, no conversion is necessary
and tablespaces can be transported as if they were on the same platform.
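You can check the endian format of the source and target platforms in V$TRANSPORTABLE_PLATFORM, for example:
SELECT d.platform_name, tp.endian_format
FROM v$transportable_platform tp
JOIN v$database d ON tp.platform_name = d.platform_name;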
However,
there are certain restrictions in using the transportable tablespace:
*
The source and target database must use the same character set and national
character set.
*
A tablespace cannot be transported to a target database in which a tablespace
with the same name already exists. However, the tablespace to be transported or
the destination tablespace can be renamed before the transport operation.
Renaming a tablespace is a new feature in 10g.
*
The tablespace set must be self-contained.
Transporting Tablespaces between Databases: A General
Procedure
Perform
the following steps to move or copy a set of tablespaces.
1.
Pick a self-contained set of tablespaces. Verify this using the DBMS_TTS.TRANSPORT_SET_CHECK procedure.
2.
Next, generate a transportable tablespace set, using the Export utility.
3.
A transportable tablespace set consists of the set of datafiles for the set of tablespaces being transported and an export file containing metadata information for the set of tablespaces and corresponding datafiles.
4.
Transporting a tablespace set to a platform different from the source platform
will require connection to the Recovery Manager (RMAN) and invoking the CONVERT
command. An alternative is to do the conversion on the target platform after
the tablespace datafiles have been transported.
5.
Next, the tablespace datafile set is transported. Copy the datafiles and
the export file containing the metadata to the target database. This can be
accomplished using any facility for copying flat files, such as the operating
system copy utility, ftp, or publishing on CDs or DVDs.
6.
The final step is to plug in the tablespaces. Use the Import utility to plug the tablespace metadata, and hence the tablespaces themselves, into the target database.
7.
If these tablespaces are being transported to a different platform, the
Recovery Manager CONVERT command can be used to perform the conversion.
Note:
As an alternative to conversion before transport, the CONVERT command can be
used for the conversion on the target platform after the tablespace set has
been transported.
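A condensed sketch of the whole procedure (the tablespace, directory, and file names are illustrative assumptions):
-- 1. Verify the set is self-contained
EXEC DBMS_TTS.TRANSPORT_SET_CHECK('my_ts', TRUE);
SELECT * FROM transport_set_violations;
-- 2. Make the tablespace read-only, then export its metadata from the OS shell:
ALTER TABLESPACE my_ts READ ONLY;
--   expdp system/password directory=data_pump_dir dumpfile=my_ts.dmp transport_tablespaces=my_ts
-- 3. If the platforms differ in endianness, convert with RMAN, for example:
--   RMAN> CONVERT TABLESPACE my_ts TO PLATFORM 'Linux IA (32-bit)' FORMAT '/tmp/%U';
-- 4. Copy the datafile(s) and dump file to the target, then plug the set in:
--   impdp system/password directory=data_pump_dir dumpfile=my_ts.dmp transport_datafiles='/u01/oradata/TRG/my_ts01.dbf'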
What is Definer's Rights and Invoker's Rights
This chapter contains:
About Definer's Rights and Invoker's Rights
Both definer's rights and invoker's rights are a way to control access to the privileges necessary to run a user-created procedure, or program unit. In a definer's rights procedure, the procedure executes with the privileges of the owner. The privileges are bound to the schema in which they were created. An invoker's rights procedure executes with the privileges of the current user, that is, the user who invokes the procedure.
For example, suppose user
bixby
creates a procedure that is designed to modify table cust_records
and then he grants the EXECUTE
privilege on this
procedure to user rlayton
. If bixby
had created the procedure
with definer's rights, then the procedure would look for table cust_records
in bixby
's schema. Had the
procedure been created with invoker's rights, then when rlayton
runs it, the procedure
would look for table cust_records
in rlayton
's schema.
By default, all procedures are considered definer's rights. You can designate a procedure to be an invoker's rights procedure by using the
AUTHID CURRENT_USER
clause when you create or
modify it, or you can use the AUTHID DEFINER
clause to make it a definer's
rights procedure.
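To illustrate, here is a minimal sketch using the names from the example above (the last_updated column is an assumption):
-- Invoker's rights: cust_records resolves in the caller's schema at run time
CREATE OR REPLACE PROCEDURE touch_cust_records AUTHID CURRENT_USER AS
BEGIN
  UPDATE cust_records SET last_updated = SYSDATE;  -- assumed column
END;
/
GRANT EXECUTE ON touch_cust_records TO rlayton;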
See
Also:
Oracle Database PL/SQL Language Reference for more
details about definer's rights and invoker's rights procedures
How Procedure Privileges Affect Definer's Rights
The owner of a procedure, called the definer, must have all the necessary object privileges for objects that the procedure references. If the procedure owner grants to another user the right to use the procedure, then the privileges of the procedure owner (on the objects the procedure references) apply to the grantee's exercise of the procedure. The privileges of the procedure's definer must be granted directly to the user, not granted through roles. These are called definer's rights.
The user of a procedure who is not its owner is called the invoker. Additional privileges on referenced objects are required for an invoker's rights procedure, but not for a definer's rights procedure.
A user of a definer's rights procedure requires only the privilege to execute the procedure and no privileges on the underlying objects that the procedure accesses. This is because a definer's rights procedure operates under the security domain of the user who owns the procedure, regardless of who is executing it. The owner of the procedure must have all the necessary object privileges for referenced objects. Fewer privileges need to be granted to users of a definer's rights procedure. This results in stronger control of database access.
You can use definer's rights procedures to control access to private database objects and add a level of database security. By writing a definer's rights procedure and granting only the
EXECUTE
privilege to a user, this
user can be forced to access the referenced objects only through the procedure.
At run time, Oracle Database checks whether the privileges of the owner of a definer's rights procedure allow access to that procedure's referenced objects, before the procedure is executed. If a necessary privilege on a referenced object was revoked from the owner of a definer's rights procedure, then no user, including the owner, can run the procedure.
An example of when you may want to use a definer's rights procedure is as follows: Suppose that you must create an API whose procedures have unrestricted access to its tables, but you want to prevent ordinary users from selecting table data directly, and from changing it with
INSERT
, UPDATE
, and DELETE
statements. To accomplish
this, in a separate, low-privileged schema, create the tables and the
procedures that comprise the API. By default, each procedure is a definer's
rights unit, so you do not need to specify AUTHID DEFINER
when you create it. Then
grant the EXECUTE
privilege to the users who must use this API, but do not
grant any privileges that allow data access. This solution gives you complete
control over your API behavior and how users have access to its underlying
objects.
Oracle recommends that you create your definer's rights procedures, and views that access these procedures, in their own schema. Grant this schema very low privileges, or no privileges at all. This way, when other users run these procedures or views, they will not have access to any unnecessarily high privileges from this schema.
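A minimal sketch of this pattern, with illustrative schema and object names:
-- In the low-privileged schema api_owner:
CREATE OR REPLACE PROCEDURE api_owner.set_order_status (
  p_id     NUMBER,
  p_status VARCHAR2
) AUTHID DEFINER AS  -- definer's rights (also the default)
BEGIN
  UPDATE api_owner.orders SET status = p_status WHERE id = p_id;
END;
/
-- Users receive the procedure only, with no privileges on the table itself:
GRANT EXECUTE ON api_owner.set_order_status TO app_user;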
Note:
Trigger processing follows
the same patterns as definer's rights procedures. The user runs a SQL
statement, which that user is privileged to run. As a result of the SQL
statement, a trigger is fired. The statements within the triggered action
temporarily execute under the security domain of the user that owns the trigger.
For more information, see "Overview of Triggers" in Oracle Database Concepts.
How Procedure Privileges Affect Invoker's Rights
An invoker's rights procedure executes with all of the invoker's privileges. Oracle Database enables the privileges that were granted to the invoker through any of the invoker's enabled roles to take effect, unless a definer's rights procedure calls the invoker's rights procedure directly or indirectly. A user of an invoker's rights procedure must have privileges (granted to the user either directly or through a role) on objects that the procedure accesses through external references that are resolved in the schema of the invoker. When the invoker runs an invoker's rights procedure, this user temporarily has all of the privileges of the definer. (See "Controlling Invoker's Rights Privileges for Procedure Calls and View Access" for more on this aspect of invoker's rights procedures.)
The invoker must have privileges at run time to access program references embedded in DML statements or dynamic SQL statements, because they are effectively recompiled at run time.
For all other external references, such as direct PL/SQL function calls, Oracle Database checks the privileges of the owner at compile time, but does not perform a run-time check. Therefore, the user of an invoker's rights procedure does not need privileges on external references outside DML or dynamic SQL statements. Alternatively, the developer of an invoker's rights procedure must only grant privileges on the procedure itself, not on all objects directly referenced by the invoker's rights procedure.
You can create a software bundle that consists of multiple program units, some with definer's rights and others with invoker's rights, and restrict the program entry points (controlled step-in). A user who has the privilege to run an entry-point procedure can also execute internal program units indirectly, but cannot directly call the internal programs. For very precise control over query processing, you can create a PL/SQL package specification with explicit cursors.
When You Should Create Invoker's Rights Procedures
Oracle recommends that you create invoker's rights procedures in these situations:
·
When creating a PL/SQL procedure in a
high-privileged schema.
When lower-privileged users invoke the procedure, then it can do no more than
those users are allowed to do. In other words, the invoker's rights procedure
runs with the privileges of the invoking user.
·
When the PL/SQL procedure contains no
SQL and is available to other users. The
DBMS_OUTPUT
PL/SQL package is an
example of a PL/SQL subprogram that contains no SQL and is available to all
users. The reason you should use an invoker's rights procedure in this
situation is because the unit issues no SQL statements at run time, so the
run-time system does not need to check their privileges. Specifying AUTHID CURRENT_USER
makes invocations of the
procedure more efficient, because when an invoker's right procedure is pushed
onto, or comes from, the call stack, the values of CURRENT_USER
and CURRENT_SCHEMA
, and the currently enabled
roles do not change. For more information about the differences between
invoker's rights and definer's rights units, see Oracle Database PL/SQL Language Reference.
See
Also:
·
Oracle Database PL/SQL Language Reference for
information about how Oracle Database handles name resolution and privilege
checking at runtime using invoker's and definer's rights
·
Oracle Database PL/SQL Language Reference for
information about defining explicit cursors in the
CREATE
PACKAGE
statement
Controlling Invoker's Rights Privileges for Procedure Calls and View Access
This section contains:
How the Privileges of a Schema Affect the Use of Invoker's Rights Procedures
When a user runs an invoker's rights procedure (or any PL/SQL program unit that has been created with the AUTHID CURRENT_USER
clause), the procedure
temporarily inherits all of the privileges of the invoking user while the
procedure runs. During that time, the procedure owner has, through the
procedure, access to this invoking user's privileges. Consider the following
scenario:
1.
User
ebrown
creates the check_syntax
invoker's rights procedure
and then grants user jward
the EXECUTE
privilege on it.
2.
User
ebrown
, who is a junior
programmer, has only the minimum set of privileges necessary for his job. The check_syntax
procedure resides in ebrown
's schema.
3.
User
jward
, who is a manager, has a
far more powerful set of privileges than user ebrown
.
4.
When
user
jward
runs the check_syntax
invoker's rights
procedure, the procedure inherits user jward
's higher privileges while
it runs.
5.
Because user ebrown owns the check_syntax procedure, he has access to user jward's privileges whenever jward runs the check_syntax procedure.
The danger in this type of situation, in which the procedure of the lower-privileged user ebrown has access to jward's higher privileges whenever jward runs the procedure, lies in the risk that the procedure owner can misuse the higher privileges of the invoking user. For example, user ebrown could make use of jward's higher privileges by rewriting the check_syntax procedure to give ebrown a raise or delete ebrown's bad performance appraisal record. Or, ebrown could originally have created the procedure as a definer's rights procedure, granted its EXECUTE privilege to jward, and then later changed it to a potentially malicious invoker's rights procedure without letting jward know. These types of risks increase when random users, such as application users, have access to a database that uses invoker's rights procedures.
When user jward runs ebrown's invoker's rights procedure, there is an element of trust involved. He must be assured that ebrown will not use the check_syntax procedure in a malicious way when it accesses jward's privileges. The INHERIT PRIVILEGES and INHERIT ANY PRIVILEGES privileges can help user jward control whether user ebrown's procedure can have access to his (jward's) privileges. Any user can grant or revoke the INHERIT PRIVILEGES privilege on themselves to the user whose invoker's rights procedures they want to run. SYS users manage the INHERIT ANY PRIVILEGES privilege.
How the INHERIT [ANY] PRIVILEGES Privileges Control Privilege Access
The INHERIT PRIVILEGES and INHERIT ANY PRIVILEGES privileges regulate the privileges used when a user runs an invoker's rights procedure or queries a BEQUEATH CURRENT_USER view that references an invoker's rights procedure. When a user runs an invoker's rights procedure, Oracle Database checks that the procedure owner either has the INHERIT PRIVILEGES privilege on the invoking user or has been granted the INHERIT ANY PRIVILEGES privilege. If the privilege check fails, then Oracle Database returns an ORA-06598: insufficient INHERIT PRIVILEGES privilege error.
The benefit of these two privileges is that they give invoking users control over who can access their privileges when they run an invoker's rights procedure or query a BEQUEATH CURRENT_USER view.
Granting the INHERIT PRIVILEGES Privilege to Other Users
By default, all users are granted INHERIT PRIVILEGES ON USER newuser TO PUBLIC when their accounts are created or when accounts that were created earlier are upgraded to the current release. The invoking user can revoke the INHERIT PRIVILEGES privilege from other users on himself and then grant it only to users that he trusts.
The syntax for the INHERIT PRIVILEGES privilege grant is as follows:
GRANT INHERIT PRIVILEGES ON USER invoking_user TO procedure_owner;
In this
specification:
·
invoking_user is the user who runs the invoker's rights procedure.
This user must be a database user account.
·
procedure_owner is the user who owns the invoker's rights procedure. This value must be a database user account. As an alternative to granting the INHERIT PRIVILEGES privilege to the procedure's owner, you can grant the privilege to a role that is in turn granted to the procedure owner.
The following users or roles must have the INHERIT PRIVILEGES privilege granted to them by users who will run their invoker's rights procedures:
·
Users or roles who own the invoker's rights procedures
·
Users or roles who own BEQUEATH CURRENT_USER views
Example 5-1 shows how the invoking user jward can grant user ebrown the INHERIT PRIVILEGES privilege.
GRANT INHERIT PRIVILEGES ON USER jward TO ebrown;
The statement enables any invoker's rights procedure that ebrown writes, or will write in the future, to access jward's privileges when jward runs it.
Example 5-2 shows how user jward can revoke his privileges from ebrown.
REVOKE INHERIT PRIVILEGES ON USER jward FROM ebrown;
Granting the INHERIT ANY PRIVILEGES Privilege to Other Users
By default, user SYS has the INHERIT ANY PRIVILEGES system privilege and can grant this privilege to other database users or roles. As with all ANY privileges, only grant this privilege to trusted users or roles. Once a user or role has been granted the INHERIT ANY PRIVILEGES privilege, this user's invoker's rights procedures have access to the privileges of the invoking user. You can find the users who have been granted the INHERIT ANY PRIVILEGES privilege by querying the DBA_SYS_PRIVS data dictionary view.
Example 5-3 shows how to grant the INHERIT ANY PRIVILEGES privilege to user ebrown.
GRANT INHERIT ANY PRIVILEGES TO ebrown;
Be careful about revoking the INHERIT ANY PRIVILEGES privilege from powerful users. For example, suppose user SYSTEM has created a set of invoker's rights procedures. If you revoke INHERIT ANY PRIVILEGES from SYSTEM, then other users cannot run his procedures, unless they have specifically granted him the INHERIT PRIVILEGES privilege.
Managing the INHERIT PRIVILEGES and INHERIT ANY PRIVILEGES Privileges
By default, PUBLIC has the INHERIT PRIVILEGES privilege on all new and upgraded user accounts, and the SYS user has the INHERIT ANY PRIVILEGES privilege. Oracle by default configures a set of grants of INHERIT PRIVILEGES that are designed to help protect against misuse of the privileges of various Oracle-defined users. You can choose to revoke the default grant of INHERIT PRIVILEGES ON USER user_name TO PUBLIC for a customer-defined user and grant more specific grants of INHERIT PRIVILEGES as appropriate for that particular user. To find the users who have been granted the INHERIT ANY PRIVILEGES privilege, query the DBA_SYS_PRIVS data dictionary view.
1.
Revoke
the
INHERIT
PRIVILEGES
privilege from PUBLIC
.
For
example:
REVOKE INHERIT PRIVILEGES ON invoking_user FROM PUBLIC;
Be aware that at this point, any users who run invoker's rights procedures cannot do so, due to run-time errors from failed INHERIT PRIVILEGES checks.
2.
Selectively grant the INHERIT PRIVILEGES privilege to trusted users or roles.
3.
Similarly, selectively grant the INHERIT ANY PRIVILEGES privilege only to trusted users or roles.
You can create an audit policy to audit the granting and revoking of these two privileges, but you cannot audit run-time errors that result from failed INHERIT PRIVILEGES privilege checks.
See
Also:
·
Oracle Database PL/SQL Language Reference for
information about SQL injection attacks
·
Oracle Database SQL Language Reference for more
information about the
GRANT
statement and default privileges
Controlling Definer's Rights and Invoker's Rights in Views
This section contains:
About Controlling Definer's Rights and Invoker's Rights in Views
You can configure user-defined views to accommodate invoker's rights functions that are referenced in the view. When a user invokes an identity- or privilege-sensitive SQL function or an invoker's rights PL/SQL or Java function, then the current schema, current user, and currently enabled roles within the operation's execution can be inherited from the querying user's environment, rather than being set to the owner of the view.
This configuration does not turn the view itself into an invoker's rights object. Name resolution within the view is still handled using the view owner's schema, and privilege checking for the view is done using the view owner's privileges. However, at runtime, the function referenced by the view runs under the invoking user's privileges rather than those of the view owner.
The benefit of this feature is that it enables functions such as
SYS_CONTEXT
and USERENV, which must return information accurate for the invoking user, to return consistent results when these functions are referenced in a view.
Using the BEQUEATH Clause in the CREATE VIEW Statement
To enable an invoker's rights function to be executed using the rights of the user issuing SQL that references the view, in the CREATE VIEW statement, set the BEQUEATH clause to CURRENT_USER.
If you plan to issue a SQL query or DML statement against the view, then the view owner must be granted the INHERIT PRIVILEGES privilege on the invoking user, or the view owner must have the INHERIT ANY PRIVILEGES privilege. If not, when a SELECT or DML statement involves a BEQUEATH CURRENT_USER view, the run-time system will raise error ORA-06598: insufficient INHERIT PRIVILEGES privilege.
Example 5-4 shows how to use BEQUEATH CURRENT_USER to set the view's function to be executed using invoker's rights.
CREATE VIEW MY_OBJECTS_VIEW BEQUEATH CURRENT_USER AS
SELECT GET_OBJS_FUNCTION;
If you want the
function within the view to be executed using the view owner's rights, then
either omit the BEQUEATH
clause or set it to DEFINER
. For example:
CREATE VIEW my_objects_view BEQUEATH DEFINER AS
SELECT OBJECT_NAME FROM USER_OBJECTS;
See
Also:
·
"Controlling
Invoker's Rights Privileges for Procedure Calls and View Access" for
more information about how the
INHERIT PRIVILEGES
privilege works
·
Oracle Database SQL Language Reference for additional
information about granting the
INHERIT PRIVILEGES and INHERIT ANY PRIVILEGES
privileges
·
Oracle Database Real Application Security Administrator's and
Developer's Guide for information about how to use
BEQUEATH
CURRENT_USER
views with Oracle Database Real Application Security applications
Finding the User Name or User ID of the Invoking User
You can use the following functions to find the invoking user based on whether invoker's rights or definer's rights are used:
·
ORA_INVOKING_USER
: Use this function to
return the name of the user who is invoking the current statement or view. This
function treats the intervening views as specified by their BEQUEATH
clauses. If the invoking
user is an Oracle Database Real Application Security-defined user, then this
function returns XS$NULL
.
·
ORA_INVOKING_USERID
: Use this function to
return the identifier (ID) of the user who is invoking the current statement or
view. This function treats the intervening views as specified by their BEQUEATH
clauses. If the invoking
user is an Oracle Database Real Application Security-defined user, then this
function returns an ID that is common to all Real Application Security sessions
but is different from the ID of any database user.
For
example:
CONNECT HR
Enter password: password
SELECT ORA_INVOKING_USER FROM DUAL;
ORA_INVOKING_USER
--------------------
HR
See
Also:
Oracle Database Real Application Security Administrator's and
Developer's Guide for information about similar functions that are
used for Oracle Database Real Application Security applications
Finding BEQUEATH DEFINER and BEQUEATH CURRENT_USER Views
To find out if a view is BEQUEATH DEFINER or BEQUEATH CURRENT_USER, select the BEQUEATH column of a *_VIEWS or *_VIEWS_AE static data dictionary view for that view. For example:
SELECT BEQUEATH FROM USER_VIEWS WHERE VIEW_NAME = 'MY_OBJECTS';
BEQUEATH
------------
CURRENT_USER
Using Code Based Access Control for Definer's Rights and Invoker's Rights
This section contains:
About Using Code Based Access Control for Applications
Applications must often run program units in the caller's environment, while requiring elevated privileges. PL/SQL programs traditionally make use of definer's rights to temporarily elevate the privileges of the program. However, definer's rights based program units run in the context of the definer or the owner of the program unit, as opposed to the invoker's context. Also, using definer's rights based programs often leads to the program unit getting more privileges than required.
Code based access control (CBAC) provides the solution by enabling you to attach database roles to a PL/SQL function, procedure, or package. These database roles are enabled at run time, enabling the program unit to execute with the required privileges in the calling user's environment.
Who Can Grant Code Based Access Control Roles to a Program Unit?
Code based access control roles can be granted to a program unit if all of the following conditions are met:
·
The
grantor is user
SYS
or owns the program unit.
·
The
roles to be granted are directly granted roles to the owner.
·
The
roles to be granted are standard database roles.
If these conditions are not met, an error is raised: ORA-28702: Program unit string is not owned by the grantor if the first condition is not met, or ORA-1924: role 'string' not granted or does not exist if the second or third condition is not met.
How Code Based Access Control Works with Invoker's Rights Program Units
Consider a scenario where there are two application users, 1 and 2. Application user 2 creates the invoker's rights program unit, grants database role 2 to the invoker's rights unit, and then grants execute privileges on the invoker's rights unit to application user 1.
Figure 5-1 shows the database roles 1 and 2 granted to application users 1 and 2, and an invoker's rights program unit.
The grants are as follows:
·
Application
user
1
is directly granted
database roles 1
and 4
.
·
Application
user
2
is directly granted
database role 2
,
which includes application roles 3
and 4
.
·
The invoker's rights program unit is granted database role 2.
When application user 1 logs in and executes the invoker's rights program unit, the invoker's rights unit executes with the combined database roles of user 1 and the database roles attached to the invoker's rights unit.
Figure 5-2 shows the security context in which the invoker's rights unit is executed. When application user
1
first logs on, application user 1
has the database PUBLIC
role (by default), and the
database roles 1
and 4
, which have been granted
to it. Application user 1
next executes the invoker's rights
program unit created by application user 2
. The invoker's rights unit executes in application user
1
's context, and has the
additional database role 2
attached to it. Database roles 3
and 4
are included, as they are
a part of database role 2
. After the invoker's rights unit
exits, application user 1 only has the database roles that have been granted to it (PUBLIC, role 1, and role 4).
How Code Based Access Control Works with Definer's Rights Program Units
Consider a scenario where application user 2 creates a definer's rights program unit, grants role 2 to the definer's rights program unit, and then grants the EXECUTE privilege on the definer's rights program unit to application user 1.
Figure 5-3 shows the database roles granted to application users 1 and 2, and a definer's rights program unit.
The grants are as follows:
·
Application
user
1
is directly granted
database roles 1
and 4
.
·
Application
user
2
is directly granted
database role 2
,
which includes database roles 3
and 4
.
·
The definer's rights program unit is granted database role 2.
When application user 1 logs in and executes the definer's rights program unit, the definer's rights unit executes with the combined database roles of application user 2 and the database roles attached to the definer's rights unit (roles 2, 3, and 4).
Figure 5-4 shows the security context in which the definer's rights program unit is executed. When application user
1
first logs on, application user 1
has the database PUBLIC
role (by default), and the
database roles 1
and4
, which have been granted
to it. Application user 1
next executes the definer's rights
program unit created by application user 2
.The definer's rights program unit executes in application user
2
's context, and has the
additional database role 2
attached to it. Database roles 3
and 4
are included, as they are
a part of database role 2
. After the definer's rights unit
exits, application user 1
only has the database roles that have
been granted to it (PUBLIC
, role 1
, and role 4
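In source code, the difference between the two models is the AUTHID clause on the program unit. A minimal sketch with hypothetical procedure names:
-- Invoker's rights: runs in the caller's context; roles attached through
-- code based access control combine with the caller's roles at run time.
CREATE OR REPLACE PROCEDURE ir_proc AUTHID CURRENT_USER AS
BEGIN
  NULL;  -- body omitted
END;
/

-- Definer's rights (also the default if AUTHID is omitted): runs in the
-- owner's context; attached roles combine with the owner's roles instead.
CREATE OR REPLACE PROCEDURE dr_proc AUTHID DEFINER AS
BEGIN
  NULL;  -- body omitted
END;
/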
Granting and Revoking Database Roles to a Program Unit
Use the following syntax to grant or revoke database roles for a PL/SQL function, procedure, or package:
GRANT role_list TO code_list
REVOKE {role_list | ALL} FROM code_list
In this specification:
role_list ::= code-based_role_name[, role_list]
code_list ::= {
  {FUNCTION [schema.]function_name}
| {PROCEDURE [schema.]procedure_name}
| {PACKAGE [schema.]package_name}
}[, code_list]
For example:
GRANT cb1 TO FUNCTION func1, PACKAGE pack1;
GRANT cb2, cb3 TO FUNCTION SCOTT.func2, PACKAGE SYS.pack2;
REVOKE cb1 FROM FUNCTION func1, PACKAGE pack1;
REVOKE ALL FROM FUNCTION SCOTT.func2, PACKAGE SYS.pack2;
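If you need to audit which roles are attached to which program units, the data dictionary exposes this information. The view names below are my recollection of the Oracle Database 12c code based access control views; verify them against your release's reference documentation:
-- Roles attached to program units in your own schema (assumed view name):
SELECT object_name, role FROM user_code_role_privs;

-- All code based role grants in the database (assumed view name; requires
-- access to DBA views):
SELECT owner, object_name, role FROM dba_code_role_privs;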
Tutorial: Controlling Access to Sensitive Data Using Code Based Access Control
About This Tutorial
In this tutorial, you will create a user who must have access to specific employee information for his department. However, the table HR.EMPLOYEES contains sensitive information, such as employee salaries, which must not be accessible to the user. You will implement access control using code based access control. The employee data will be shown to the user through an invoker's rights procedure. Instead of granting the SELECT privilege directly to the user, you will grant the SELECT privilege to the invoker's rights procedure through a database role. In the procedure, you will hide the sensitive information, such as salaries. Because the procedure is an invoker's rights procedure, you know the caller's context inside the procedure. In this case, the caller's context is for the Finance department. The user is named "Finance", so that only data for employees who work in the Finance department is accessible to the user.
Step 1: Create the User and Grant HR the CREATE ROLE Privilege
1. Log into the database instance as an administrator who has privileges to create user accounts and roles. For example:
sqlplus sec_admin
Enter password: password
2. Create the "Finance" user account.
GRANT CONNECT TO "Finance" IDENTIFIED BY password;
Ensure that you enter "Finance" in the case shown, enclosed by double quotation marks. Replace password with a password that is secure. See "Minimum Requirements for Passwords" for more information.
3. Grant the CREATE ROLE privilege to user HR.
GRANT CREATE ROLE TO HR;
Step 2: Create the print_employees Invoker's Rights Procedure
The print_employees invoker's rights procedure shows employee information in the current user's department. You must create this procedure as an invoker's rights procedure because you must know who the caller is when inside the procedure.
1. Connect as user HR.
CONNECT HR
Enter password: password
2. Create the print_employees procedure as follows. (You can copy and paste this text by positioning the cursor at the start of create or replace in the first line.)
 1   create or replace procedure print_employees
 2     authid current_user
 3   as
 4   begin
 5     dbms_output.put_line(rpad('ID', 10) ||
 6                          rpad('First Name', 15) ||
 7                          rpad('Last Name', 15) ||
 8                          rpad('Email', 15) ||
 9                          rpad('Phone Number', 20));
10     for rec in (select e.employee_id, e.first_name, e.last_name,
11                        e.email, e.phone_number
12                   from hr.employees e, hr.departments d
13                  where e.department_id = d.department_id
14                    and d.department_name =
15                        sys_context('userenv', 'current_user'))
16     loop
17       dbms_output.put_line(rpad(rec.employee_id, 10) ||
18                            rpad(rec.first_name, 15) ||
19                            rpad(rec.last_name, 15) ||
20                            rpad(rec.email, 15) ||
21                            rpad(rec.phone_number, 20));
22     end loop;
23   end;
24   /
In this example:
o Lines 5–9: Print the table header.
o Lines 10–15: Find the employee information for the caller's department, which for this tutorial is the Finance department for user "Finance". Had you created a user named "Marketing" (which is also listed in the DEPARTMENT_NAME column of the HR.DEPARTMENTS table), then the procedure would capture information for Marketing employees.
o Lines 16–22: Populate the output with the employee data from the Finance department.
Step 3: Create the hr_clerk Role and Grant Privileges for It
1. Create the hr_clerk role.
CREATE ROLE hr_clerk;
2. Grant the EXECUTE privilege on the print_employees procedure to the hr_clerk role.
GRANT EXECUTE ON print_employees TO hr_clerk;
3. Grant the hr_clerk role to "Finance".
GRANT hr_clerk TO "Finance";
Step 4: Test the Code Based Access Control HR.print_employees Procedure
1. Connect to the database instance as user "Finance".
CONNECT "Finance"
Enter password: password
2. Try to directly query the HR.EMPLOYEES table.
SELECT EMPLOYEE_ID, FIRST_NAME, LAST_NAME, SALARY FROM HR.EMPLOYEES;
The query fails because user "Finance" does not have the SELECT privilege for HR.EMPLOYEES.
ERROR at line 1:
ORA-00942: table or view does not exist
3. Execute the HR.print_employees procedure.
EXEC HR.print_employees;
The call fails because user "Finance" does not yet have the appropriate privileges: the procedure is an invoker's rights unit, so it runs with the caller's privileges, and no role yet gives the caller access to the tables.
ERROR at line 1:
ORA-00942: table or view does not exist
ORA-06512: at "HR.PRINT_EMPLOYEES", line 13
ORA-06512: at line 1
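If you want to see why both attempts fail, you can list the roles active in the "Finance" session; the SESSION_ROLES view shows them. At this point hr_clerk is present, but no role supplies SELECT on the HR tables:
-- Run while connected as "Finance"; expect to see hr_clerk but no role
-- carrying SELECT on HR.EMPLOYEES or HR.DEPARTMENTS.
SELECT role FROM session_roles ORDER BY role;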
Step 5: Create the view_emp_role Role and Grant Privileges for It
1. Connect as user HR.
CONNECT HR
Enter password: password
2. Create the view_emp_role role.
CREATE ROLE view_emp_role;
3. Grant the SELECT privilege on HR.EMPLOYEES and HR.DEPARTMENTS to the view_emp_role role.
GRANT SELECT ON HR.EMPLOYEES TO view_emp_role;
GRANT SELECT ON HR.DEPARTMENTS TO view_emp_role;
4. Grant the view_emp_role role to the HR.print_employees invoker's rights procedure.
GRANT view_emp_role TO PROCEDURE HR.print_employees;
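Before re-testing, it can help to confirm that the role actually carries the table privileges; the ROLE_TAB_PRIVS view lists object privileges granted to roles:
-- Run while connected as HR; expect one row per GRANT SELECT above.
SELECT role, owner, table_name, privilege
  FROM role_tab_privs
 WHERE role = 'VIEW_EMP_ROLE';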
Step 6: Test the HR.print_employees Procedure Again
1. Connect as user "Finance".
CONNECT "Finance"
Enter password: password
2. Set the server output to display.
SET SERVEROUTPUT ON;
3. Try to directly query the HR.EMPLOYEES table.
SELECT EMPLOYEE_ID, FIRST_NAME, LAST_NAME, SALARY FROM HR.EMPLOYEES;
The query fails.
ERROR at line 1:
ORA-00942: table or view does not exist
4. Execute the HR.print_employees procedure to show the employee information.
EXEC HR.print_employees;
The call succeeds.
ID First Name Last Name Email Phone Number
108 Nancy Greenberg NGREENBE 515.124.4569
109 Daniel Faviet DFAVIET 515.124.4169
110 John Chen JCHEN 515.124.4269
111 Ismael Sciarra ISCIARRA 515.124.4369
112 Jose Manuel Urman JMURMAN 515.124.4469
113 Luis Popp LPOPP 515.124.4567
PL/SQL procedure successfully completed.
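One property worth verifying: view_emp_role is enabled only while HR.print_employees executes, so the direct query still fails even after the successful call. Still connected as "Finance":
-- The role was granted to the procedure, not to the user or the session,
-- so this continues to raise ORA-00942 after the procedure call succeeds.
SELECT EMPLOYEE_ID, FIRST_NAME, LAST_NAME, SALARY FROM HR.EMPLOYEES;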
Step 7: Remove the Components for This Tutorial
1. Connect as a user with administrative privileges. For example:
CONNECT sec_admin
Enter password: password
2. Drop the user "Finance".
DROP USER "Finance";
3. Drop the hr_clerk role.
DROP ROLE hr_clerk;
4. Connect as user HR.
CONNECT HR
Enter password: password
5. Drop the view_emp_role role and the HR.print_employees procedure.
DROP ROLE view_emp_role;
DROP PROCEDURE print_employees;
6. Connect as the administrative user.
CONNECT sec_admin
Enter password: password
7. Revoke the CREATE ROLE privilege from HR.
REVOKE CREATE ROLE FROM HR;