Thursday, February 27, 2014

Oracle 12c New Features - SQL*Loader Express

Oracle 12c introduces SQL*Loader Express mode, which allows users to run sqlldr with minimal configuration.

The following example shows loading records into the table EMPLOYEE from the CSV file EMPLOYEE.dat without having to create a control file.


SQL> create table EMPLOYEE (id integer primary key, name varchar2(10));

Table created.

SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics, Real Application Testing
and Unified Auditing options

[oracle@rac1 admin]$ more EMPLOYEE.dat
1,Adam
2,Ben
3,Colin
4,Dean
5,Evan
6,Frank
7,Greg
8,Hank
9,Ian
10,Jack
[oracle@rac1 admin]$ sqlldr test/test TABLE=EMPLOYEE

SQL*Loader: Release 12.1.0.1.0 - Production on Fri Jun 28 11:58:11 2013

Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.

Express Mode Load, Table: EMPLOYEE
Path used:      External Table, DEGREE_OF_PARALLELISM=AUTO
SQL*Loader-816: error creating temporary directory object SYS_SQLLDR_XT_TMPDIR_00000 for file EMPLOYEE.dat
ORA-01031: insufficient privileges
SQL*Loader-579: switching to direct path for the load
SQL*Loader-583: ignoring trim setting with direct path, using value of LDRTRIM
SQL*Loader-584: ignoring DEGREE_OF_PARALLELISM setting with direct path, using value of NONE
Express Mode Load, Table: EMPLOYEE
Path used:      Direct

Load completed - logical record count 10.

Table EMPLOYEE:
  10 Rows successfully loaded.

Check the log file:
  EMPLOYEE.log
for more information about the load.

[oracle@rac1 admin]$ ls EMPLOYEE*
EMPLOYEE.dat  EMPLOYEE.log
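
Note the SQL*Loader-816 / ORA-01031 messages in the run above: the external table path needs to create a directory object on the fly, and the TEST user lacks the privilege to do so, so SQL*Loader transparently falls back to a direct path load. If you want the external table path to be used, granting the missing privilege as a DBA should avoid the fallback (a sketch, assuming CREATE ANY DIRECTORY is the privilege in question):

SQL> grant create any directory to test;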

[oracle@rac1 admin]$ more EMPLOYEE.log

SQL*Loader: Release 12.1.0.1.0 - Production on Fri Jun 28 11:58:11 2013

Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.

Express Mode Load, Table: EMPLOYEE
Data File:      EMPLOYEE.dat
  Bad File:     EMPLOYEE_%p.bad
  Discard File:  none specified

 (Allow all discards)

Number to load: ALL
Number to skip: 0
Errors allowed: 50
Continuation:    none specified
Path used:      External Table

Table EMPLOYEE, loaded from every logical record.
Insert option in effect for this table: APPEND

Column Name                Position   Len   Term Encl Datatype
-------------------------- ---------- ----- ---- ---- ---------
ID                         FIRST      *     ,         CHARACTER
NAME                       NEXT       *     ,         CHARACTER

Generated control file for possible reuse:
OPTIONS(EXTERNAL_TABLE=EXECUTE, TRIM=LRTRIM)
LOAD DATA
INFILE 'EMPLOYEE'
APPEND
INTO TABLE EMPLOYEE
FIELDS TERMINATED BY ","
(
  ID,
  NAME
)
End of generated control file for possible reuse.

SQL*Loader-816: error creating temporary directory object SYS_SQLLDR_XT_TMPDIR_00000 for file EMPLOYEE.dat
ORA-01031: insufficient privileges

----------------------------------------------------------------
SQL*Loader-579: switching to direct path for the load
SQL*Loader-583: ignoring trim setting with direct path, using value of LDRTRIM
SQL*Loader-584: ignoring DEGREE_OF_PARALLELISM setting with direct path, using value of NONE
----------------------------------------------------------------

Express Mode Load, Table: EMPLOYEE
Data File:      EMPLOYEE.dat
  Bad File:     EMPLOYEE.bad
  Discard File:  none specified

 (Allow all discards)

Number to load: ALL
Number to skip: 0
Errors allowed: 50
Continuation:    none specified
Path used:      Direct

Table EMPLOYEE, loaded from every logical record.
Insert option in effect for this table: APPEND

   Column Name        Position   Len   Term Encl Datatype
--------------------- ---------- ----- ---- ---- ----------------
ID                    FIRST      *     ,         CHARACTER
NAME                  NEXT       *     ,         CHARACTER

Generated control file for possible reuse:
OPTIONS(DIRECT=TRUE)
LOAD DATA
INFILE 'EMPLOYEE'
APPEND
INTO TABLE EMPLOYEE
FIELDS TERMINATED BY ","
(
  ID,
  NAME
)
End of generated control file for possible reuse.

The following index(es) on table EMPLOYEE were processed:
index TEST.SYS_C009860 loaded successfully with 10 keys

Table EMPLOYEE:
  10 Rows successfully loaded.
  0 Rows not loaded due to data errors.
  0 Rows not loaded because all WHEN clauses were failed.
  0 Rows not loaded because all fields were null.

Bind array size not used in direct path.
Column array  rows :    5000
Stream buffer bytes:  256000
Read   buffer bytes: 1048576

Total logical records skipped:          0
Total logical records read:            10
Total logical records rejected:         0
Total logical records discarded:        0
Total stream buffers loaded by SQL*Loader main thread:        1
Total stream buffers loaded by SQL*Loader load thread:        0

Run began on Fri Jun 28 11:58:11 2013
Run ended on Fri Jun 28 11:58:12 2013

Elapsed time was:     00:00:01.27
CPU time was:         00:00:00.02


[oracle@rac1 admin]$ sqlplus test/test

SQL*Plus: Release 12.1.0.1.0 Production on Fri Jun 28 12:05:49 2013

Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Last Successful login time: Fri Jun 28 2013 11:58:11 +08:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics, Real Application Testing
and Unified Auditing options

SQL> select * from employee;

        ID NAME
---------- ----------
         1 Adam
         2 Ben
         3 Colin
         4 Dean
         5 Evan
         6 Frank
         7 Greg
         8 Hank
         9 Ian
        10 Jack


10 rows selected.
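
Express mode defaults can also be overridden on the command line, still without writing a control file. A quick sketch (the parameter values here are illustrative; see the SQL*Loader documentation for the full express mode parameter list):

[oracle@rac1 admin]$ sqlldr test/test TABLE=EMPLOYEE TERMINATED_BY=',' DIRECT=TRUE LOAD=5

Here TERMINATED_BY sets the field delimiter, DIRECT=TRUE requests the direct path up front, and LOAD=5 stops after five records.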

RMAN Incremental Backups

RMAN incremental backups back up only those datafile blocks that have changed since a specified previous backup. You can make incremental backups of databases, individual tablespaces, or datafiles.
The primary reasons for making incremental backups part of your strategy are:
  • For use in a strategy based on incrementally updated backups, where these incremental backups are used to periodically roll forward an image copy of the database
  • To reduce the amount of time needed for daily backups
  • To save network bandwidth when backing up over a network
  • To get adequate backup performance when the aggregate tape bandwidth available for tape write I/Os is much less than the aggregate disk bandwidth for disk read I/Os
  • To be able to recover changes to objects created with the NOLOGGING option. For example, direct load inserts do not create redo log entries and their changes cannot be reproduced with media recovery. They do, however, change data blocks and so are captured by incremental backups.
  • To reduce backup sizes for NOARCHIVELOG databases. Instead of making a whole database backup every time, you can make incremental backups.
    As with full backups, if you are in ARCHIVELOG mode, you can make incremental backups if the database is open; if the database is in NOARCHIVELOG mode, then you can only make incremental backups after a consistent shutdown.
    See Also:
    Oracle Database Concepts for more information about NOLOGGING mode
One effective strategy is to make incremental backups to disk, and then back up the resulting backup sets to a media manager with BACKUP AS BACKUPSET. The incremental backups are generally smaller than full backups, which limits the space required to store them until they are moved to tape. Then, when the incremental backups on disk are backed up to tape, it is more likely that tape streaming can be sustained because all blocks of the incremental backup are copied to tape. There is no possibility of delay due to time required for RMAN to locate changed blocks in the datafiles.

4.4.1 Incremental Backup Algorithm

Each data block in a datafile contains a system change number (SCN), which is the SCN at which the most recent change was made to the block. During an incremental backup, RMAN reads the SCN of each data block in the input file and compares it to the checkpoint SCN of the parent incremental backup. If the SCN in the input data block is greater than or equal to the checkpoint SCN of the parent, then RMAN copies the block.
Note that if you enable the block change tracking feature, RMAN can refer to the change tracking file to identify changed blocks in datafiles without scanning the full contents of the datafile. Once enabled, block change tracking does not alter how you take or use incremental backups, other than offering increased performance. See "Improving Incremental Backup Performance: Change Tracking" for more details about enabling block change tracking.

4.4.1.1 Level 0 and Level 1 Incremental Backups

Incremental backups can be either level 0 or level 1. A level 0 incremental backup, which is the base for subsequent incremental backups, copies all blocks containing data, backing the datafile up into a backup set just as a full backup would. The only difference between a level 0 incremental backup and a full backup is that a full backup is never included in an incremental strategy.
A level 1 incremental backup can be either of the following types:
  • A differential backup, which backs up all blocks changed after the most recent incremental backup at level 1 or 0
  • A cumulative backup, which backs up all blocks changed after the most recent incremental backup at level 0
Incremental backups are differential by default.
Note:
Cumulative backups are preferable to differential backups when recovery time is more important than disk space, because during recovery each differential backup must be applied in succession. Use cumulative incremental backups instead of differential, if enough disk space is available to store cumulative incremental backups.
The size of the backup file depends solely upon the number of blocks modified and the incremental backup level.

4.4.1.2 Differential Incremental Backups

In a differential level 1 backup, RMAN backs up all blocks that have changed since the most recent incremental backup at level 1 (cumulative or differential) or level 0. RMAN determines which level 1 backup occurred most recently and backs up all blocks modified after that backup. If no level 1 backup is available, RMAN copies all blocks changed since the level 0 backup.
The following command performs a level 1 differential incremental backup of the database:
RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE;

If no level 0 backup is available, then the behavior depends upon the compatibility mode setting. If compatibility is >=10.0.0, RMAN copies all blocks changed since the file was created, and stores the results as a level 1 backup. In other words, the SCN at the time the incremental backup is taken is the file creation SCN. If compatibility <10.0.0, RMAN generates a level 0 backup of the file contents at the time of the backup, to be consistent with the behavior in previous releases.
Figure 4-1 Differential Incremental Backups (Default)
In the example shown in Figure 4-1, the following occurs:
  • Sunday
    An incremental level 0 backup backs up all blocks that have ever been in use in this database.
  • Monday - Saturday
    On each day from Monday through Saturday, a differential incremental level 1 backup backs up all blocks that have changed since the most recent incremental backup at level 1 or 0. So, the Monday backup copies blocks changed since the Sunday level 0 backup, the Tuesday backup copies blocks changed since the Monday level 1 backup, and so forth.
  • The cycle is repeated for the next week.

4.4.1.3 Cumulative Incremental Backups

In a cumulative level 1 backup, RMAN backs up all the blocks used since the most recent level 0 incremental backup. Cumulative incremental backups reduce the work needed for a restore by ensuring that you only need one incremental backup from any particular level. Cumulative backups require more space and time than differential backups, however, because they duplicate the work done by previous backups at the same level.
The following command performs a cumulative level 1 incremental backup of the database:
BACKUP INCREMENTAL LEVEL 1 CUMULATIVE DATABASE; # blocks changed since level 0

Figure 4-2 Cumulative Incremental Backups
In the example shown in Figure 4-2, the following occurs:
  • Sunday
    An incremental level 0 backup backs up all blocks that have ever been in use in this database.
  • Monday - Saturday
    A cumulative incremental level 1 backup copies all blocks changed since the most recent level 0 backup. Because the most recent level 0 backup was created on Sunday, the level 1 backup on each day Monday through Saturday backs up all blocks changed since the Sunday backup.
  • The cycle is repeated for the next week.

4.4.1.4 Basic Incremental Backup Strategy

Choose a backup scheme according to an acceptable MTTR (mean time to recover). For example, you can implement a three-level backup scheme so that a full or level 0 backup is taken monthly, a cumulative level 1 is taken weekly, and a differential level 1 is taken daily. In this scheme, you never have to apply more than a day's worth of redo for complete recovery.
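
As a sketch, the three-level scheme above maps onto three RMAN commands, each fired by your scheduler on its own cadence (the scheduling itself happens outside RMAN):

BACKUP INCREMENTAL LEVEL 0 DATABASE;             # monthly: new base
BACKUP INCREMENTAL LEVEL 1 CUMULATIVE DATABASE;  # weekly: changes since level 0
BACKUP INCREMENTAL LEVEL 1 DATABASE;             # daily: changes since most recent level 1 or 0
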
When deciding how often to take full or level 0 backups, a good rule of thumb is to take a new level 0 whenever 50% or more of the data has changed. If the rate of change to your database is predictable, then you can observe the size of your incremental backups to determine when a new level 0 is appropriate. The following query displays the number of blocks written to a backup set for each datafile with at least 50% of its blocks backed up:
SELECT FILE#, INCREMENTAL_LEVEL, COMPLETION_TIME, BLOCKS, DATAFILE_BLOCKS 
  FROM V$BACKUP_DATAFILE 
  WHERE INCREMENTAL_LEVEL > 0 
  AND BLOCKS / DATAFILE_BLOCKS > .5 
  ORDER BY COMPLETION_TIME;

Compare the number of blocks in differential or cumulative backups to a base level 0 backup. For example, if you only create level 1 cumulative backups, then when the most recent level 1 backup is about half of the size of the base level 0 backup, take a new level 0.

4.4.2 Making Incremental Backups: BACKUP INCREMENTAL

After starting RMAN, run the BACKUP INCREMENTAL command at the RMAN prompt. This example makes a level 0 incremental backup of the database:
BACKUP INCREMENTAL LEVEL 0 DATABASE;

This example makes a differential level 1 backup of the SYSTEM tablespace and datafile tools01.dbf. It will only back up those data blocks changed since the most recent level 1 or level 0 backup:
BACKUP INCREMENTAL LEVEL 1
  TABLESPACE SYSTEM
  DATAFILE 'ora_home/oradata/trgt/tools01.dbf';

This example makes a cumulative level 1 backup of the tablespace users, backing up all blocks changed since the most recent level 0 backup.
BACKUP INCREMENTAL LEVEL = 1 CUMULATIVE
  TABLESPACE users;

4.4.3 Incrementally Updated Backups: Rolling Forward Image Copy Backups

Oracle's Incrementally Updated Backups feature lets you avoid the overhead of taking full image copy backups of datafiles, while providing the same recovery advantages as image copy backups.
At the beginning of a backup strategy, RMAN creates an image copy backup of the datafile. Then, at regular intervals, such as daily, level 1 incremental backups are taken, and applied to the image copy backup, rolling it forward to the point in time when the level 1 incremental was created.
During restore and recovery of the database, RMAN can restore from this incrementally updated copy and then apply changes from the redo log, with the same results as restoring the database from a full backup taken at the SCN of the most recently applied incremental level 1 backup.
A backup strategy based on incrementally updated backups can help minimize time required for media recovery of your database. For example, if you run scripts to implement this strategy daily, then at recovery time, you never have more than one day of redo to apply.

4.4.3.1 Incrementally Updated Backups: A Basic Example

To create incremental backups for use in an incrementally updated backups strategy, you must use the BACKUP... FOR RECOVER OF COPY WITH TAG form of the BACKUP command. How the command works is best understood in the context of an example script that would implement the strategy.
This script, run on a regular basis, is all that is required to implement a strategy based on incrementally updated backups:
RUN {
   RECOVER COPY OF DATABASE WITH TAG 'incr_update';
   BACKUP INCREMENTAL LEVEL 1 FOR RECOVER OF COPY WITH TAG 'incr_update'
       DATABASE;
   }

The syntax used in the script does not, however, make it clear how the strategy works. To understand the script and the strategy, it is necessary to understand the effects of these two commands when no datafile copies or incremental backups exist.
  • The BACKUP INCREMENTAL LEVEL 1... FOR RECOVER OF COPY WITH TAG... command does not always actually create a level 1 incremental backup. If there is no level 0 image copy backup of a particular datafile, then executing this command creates an image copy backup of the datafile on disk with the specified tag instead of creating the level 1 backup.
    Note:
    Even when the BACKUP INCREMENTAL LEVEL 1 ... FOR RECOVER OF COPY command is used with DEVICE TYPE SBT to create a backup on tape, the first time it is used it creates the image copy on disk, and does not write any backup on tape. Subsequent incremental level 1 backups can be created on tape once the image copy is on disk.
    Thus, the first time the script runs, it creates the image copy of the datafile needed to begin the cycle of incremental updates. In the second run and all subsequent runs, it produces level 1 incremental backups of the datafile.
  • The RECOVER COPY OF DATABASE WITH TAG... command causes RMAN to apply any available incremental level 1 backups to a set of datafile copies with the specified tag.
    If there is no incremental backup or no datafile copy, the command generates a message but does not generate an error.
    The first time the script runs, this command has no effect, because there is neither a datafile copy nor a level 1 incremental backup.
    The second time the script runs, there is a datafile copy (created by the first BACKUP command), but no incremental level 1 backup, so again, the command has no effect.
    On the third run and all subsequent runs, there is a datafile copy and a level 1 incremental from the previous run, so the level 1 incremental is applied to the datafile copy, bringing the datafile copy up to the checkpoint SCN of the level 1 incremental.
Note also the following details about how this example works:
  • Each time a datafile is added to the database, an image copy of the new datafile is created the next time the script runs. The time after that, the first level 1 incremental for that datafile is created, and on all subsequent runs the new datafile is processed like any other datafile.
  • Tags must be used to identify the incremental level 0 datafile copies created for use in this strategy, so that they do not interfere with other backup strategies you implement. If you have multiple incremental backup strategies in effect, RMAN cannot unambiguously create incremental level 1 backups unless you tag level 0 backups.
    The incremental level 1 backups to apply to those image copies are selected based upon the checkpoint SCNs of the image copy datafiles and the available incremental level 1 backups. (The tag used on the image copy being recovered is not a factor in the selection of the level 1 incremental backups.)
In practice, you would schedule the example script to run once each day, possibly at midnight. On a typical night (that is, after the first two nights), when the script completed, the following files would be available for a point-in-time recovery:
  • An image copy of the database, as of the checkpoint SCN of the preceding run of the script, 24 hours earlier
  • An incremental backup for the changes since the checkpoint SCN of the preceding run
  • Archived redo logs including all changes between the checkpoint SCN of the image copy and the current time
If, at some point during the following 24 hours, you need to restore and recover your database from this backup, for either complete or point-in-time recovery, you can restore the datafiles from the incrementally updated datafile copies, and apply changes from the most recent incremental level 1 and the redo logs to reach the desired SCN. At most, you will have 24 hours of redo to apply, which limits how long point-in-time recovery will take.
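
No special syntax is needed at recovery time to take advantage of the incrementally updated copies; a plain restore and recover works, because RMAN automatically selects the rolled-forward image copies as the most recent available backups (a minimal sketch):

RMAN> STARTUP MOUNT;
RMAN> RESTORE DATABASE;
RMAN> RECOVER DATABASE;
RMAN> ALTER DATABASE OPEN;
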
See Also:
Oracle Database 2 Day DBA to see how this technique is used in the Oracle-suggested backup strategy in Enterprise Manager.

4.4.3.2 Incrementally Updated Backups: A One Week Example

The basic example can be extended to provide fast recoverability to a window greater than 24 hours. Alter the RECOVER COPY... WITH TAG command to perform incomplete recovery of the datafile copies to the point in time in the past where you want your window of recoverability to begin. This example shows how to maintain a seven-day window:
RUN {
   RECOVER COPY OF DATABASE WITH TAG 'incr_update' 
       UNTIL TIME 'SYSDATE - 7';
   BACKUP INCREMENTAL LEVEL 1 FOR RECOVER OF COPY WITH TAG 'incr_update'
       DATABASE;
   }

The effect of the script is as follows:
  • On the first night the RECOVER COPY... UNTIL TIME statement has no effect, and the BACKUP INCREMENTAL... FOR RECOVER OF COPY statement creates the incremental level 0 copy.
  • On the second through seventh nights, the RECOVER COPY... UNTIL TIME statement has no effect because TIME 'SYSDATE - 7' is still a time in the future. The BACKUP INCREMENTAL... FOR RECOVER OF COPY statement creates differential incremental level 1 backups containing the block changes for the previous day.
  • On the eighth and all subsequent nights, the RECOVER COPY... UNTIL TIME statement applies the level 1 incremental from seven days ago to the copy of the database. The BACKUP INCREMENTAL... FOR RECOVER OF COPY statement creates an incremental backup containing the changes for the previous day.
As with the basic example, you have fast recoverability to any point in time between the SCN of the datafile copies and the present, using block changes from the incremental backups and individual changes from the redo logs. Because you have the daily level 1 incrementals, you still never need to apply more than one day of redo.

4.4.4 Improving Incremental Backup Performance: Change Tracking

RMAN's change tracking feature for incremental backups improves incremental backup performance by recording changed blocks in each datafile in a change tracking file. If change tracking is enabled, RMAN uses the change tracking file to identify changed blocks for incremental backup, thus avoiding the need to scan every block in the datafile.
After enabling change tracking, the first level 0 incremental backup still has to scan the entire datafile, as the change tracking file does not yet reflect the status of the blocks. Subsequent incremental backups that use this level 0 backup as a parent will take advantage of the change tracking file.
Using change tracking in no way changes the commands used to perform incremental backups, and the change tracking files themselves generally require little maintenance after initial configuration.
Change tracking is disabled by default, because it does introduce some minimal performance overhead on your database during normal operations. However, the benefits of avoiding full datafile scans during backup are considerable, especially if only a small percentage of data blocks are changed between backups. If your backup strategy involves incremental backups, then you should enable change tracking.
One change tracking file is created for the whole database. By default, the change tracking file is created as an Oracle managed file in DB_CREATE_FILE_DEST. You can also specify the name of the block change tracking file, placing it in any location you choose.
Note:
In a Real Application Clusters (RAC) environment, the change tracking file must be located on shared storage accessible from all nodes in the cluster.
Oracle saves enough change-tracking information to enable incremental backups to be taken using any of the 8 most recent incremental backups as its parent.
Although RMAN does not support backup and recovery of the change tracking file itself, this does not affect your ability to restore and recover the database. If the whole database or a subset needs to be restored and recovered, recovery has no user-visible effect on change tracking: after the restore and recovery, the change tracking file is cleared and starts recording block changes again. The next incremental backup after any recovery is able to use change-tracking data.

4.4.4.1 Enabling and Disabling Change Tracking

You can enable or disable change tracking when the database is either open or mounted. To alter the change tracking setting, you must use SQL*Plus to connect to the target database with administrator privileges.
To store the change tracking file in the database area, set DB_CREATE_FILE_DEST in the target database. Then issue the following SQL statement to enable change tracking:
SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING;

You can also create the change tracking file in a location you choose yourself, using the following SQL statement:
SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING 
  USING FILE '/mydir/rman_change_track.f' REUSE;

The REUSE option tells Oracle to overwrite any existing file with the specified name.
To disable change tracking, use this SQL statement:
SQL> ALTER DATABASE DISABLE BLOCK CHANGE TRACKING;

If the change tracking file was stored in the database area, then it is deleted when you disable change tracking.

4.4.4.2 Checking Whether Change Tracking is Enabled

From SQL*Plus, you can query V$BLOCK_CHANGE_TRACKING.STATUS to determine whether change tracking is enabled, and if it is, query V$BLOCK_CHANGE_TRACKING.FILENAME to display the filename.
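
A minimal sketch (the output shown is illustrative, reusing the file location from the enabling example above):

SQL> SELECT status, filename FROM V$BLOCK_CHANGE_TRACKING;

STATUS     FILENAME
---------- ----------------------------------------
ENABLED    /mydir/rman_change_track.f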

4.4.4.3 Moving the Change Tracking File

If you need to move the change tracking file, the ALTER DATABASE RENAME FILE command updates the control file to refer to the new location. The process outlined in this section describes how to change the location of the change tracking file while preserving its contents.
To relocate the change tracking file:
  1. If necessary, determine the current name of the change tracking file:
    SELECT filename 
    FROM V$BLOCK_CHANGE_TRACKING;
    
    
  2. Shut down the database. For example:
    SHUTDOWN IMMEDIATE
    
    
  3. Using host operating system commands, move the change tracking file to its new location.
  4. Mount the database and update the control file with the new location of the change tracking file. For example:
    ALTER DATABASE RENAME FILE 'ora_home/dbs/change_trk.f'
      TO '/new_disk/change_trk.f';
    
    
  5. Open the database:
    ALTER DATABASE OPEN;
    
    
If you cannot shut down the database, then you must disable change tracking and re-enable it at the new location, as in the following example:
ALTER DATABASE DISABLE BLOCK CHANGE TRACKING;
ALTER DATABASE ENABLE BLOCK CHANGE TRACKING USING FILE 'new_location';

If you choose this method, you will lose the contents of the change tracking file. Until the next time you complete a level 0 incremental backup, RMAN will have to scan the entire file.

4.4.4.4 Estimating Size of the Change Tracking File on Disk

The size of the change tracking file is proportional to the size of the database and the number of enabled threads of redo. The size is not related to the frequency of updates to the database. Typically, the space required for block change tracking is approximately 1/30,000 the size of the data blocks to be tracked. Note, however, the following two factors that may cause the file to be larger than this estimate suggests:
  • To avoid the overhead of allocating space as your database grows, the change tracking file size starts at 10MB, and new space is allocated in 10MB increments. Thus, for any database up to approximately 300GB the file size is no smaller than 10MB, for up to approximately 600GB the file size is no smaller than 20MB, and so on.
  • For each datafile, a minimum of 320K of space is allocated in the change tracking file, regardless of the size of the file. Thus, if you have a large number of relatively small datafiles, the change tracking file is larger than for databases with a smaller number of larger datafiles containing the same data.
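
As a rough worked example under these rules: a 1 TB database with a single thread of redo needs about 1 TB / 30,000 ≈ 35 MB of tracking data, which the 10 MB allocation increments round up to a 40 MB file, before counting the 320 KB minimum for each datafile.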

How to multiplex Control Files in Oracle

The control file is a crucial physical component of every Oracle database: a small binary file that records the database structure. It includes information such as the database name, the names and locations of datafiles and redo log files, the database creation timestamp, the current log sequence number, and checkpoint information. Additionally, if we use RMAN for backups, information about those backups is stored in the control file (whether or not we also use an RMAN recovery catalog). Oracle Database requires at least one control file to operate, and the control files must be accessible the whole time the database is open.
Multiplexing control files across several different file systems decreases the probability of losing all of them at once. The procedure used to multiplex control files can also be used to move or rename them. It is worth remembering that each multiplexed control file is an exact copy of the primary control file. The locations and names of the control files are stored in the instance parameter CONTROL_FILES.
The first step in this procedure is to identify the current CONTROL_FILES.

SQL> column is_recovery_dest_file format a25
SQL> column name format a60
SQL> set linesize 160
SQL>
SQL> select status, name, is_recovery_dest_file
  2  from v$controlfile
  3  /

STATUS  NAME                                                         IS_RECOVERY_DEST_FILE
------- ------------------------------------------------------------ -------------------------
        /u01/app/oracle/oradata/orcl/control01.ctl                   NO
        /u01/app/oracle/oradata/orcl/control02.ctl                   NO
        /u01/app/oracle/oradata/orcl/control03.ctl                   NO

SQL> show parameter control_files

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
control_files                        string      /u01/app/oracle/oradata/orcl/c
                                                 ontrol01.ctl, /u01/app/oracle/
                                                 oradata/orcl/control02.ctl, /u
                                                 01/app/oracle/oradata/orcl/con
                                                 trol03.ctl

As we can see, I have three control files; however, they all reside on a single disk, which means that in case of a hardware failure I would lose all of them. To prevent this, I will multiplex the control files into two more locations, which should be placed on separate hard drives. For presentation purposes I will simulate separate hard drives by creating directories under /. The suggested location for storing additional control files is /u[n]/app/oracle/oradata/instance_name/, where [n] is a two-digit string like 01 or 02.
[root@localhost ~]# mkdir -p /u02/app/oracle/oradata/orcl
[root@localhost ~]# mkdir -p /u03/app/oracle/oradata/orcl
[root@localhost ~]# chown -R oracle:oinstall /u02/app/
[root@localhost ~]# chown -R oracle:oinstall /u03/app
[root@localhost ~]# chmod -R 775 /u02/app
[root@localhost ~]# chmod -R 775 /u03/app
[root@localhost ~]# ls -l /u02/app/
total 4
drwxrwxr-x 3 oracle oinstall 4096 Jul 17 12:45 oracle
[root@localhost ~]# ls -l /u03/app/
total 4
drwxrwxr-x 3 oracle oinstall 4096 Jul 17 12:45 oracle


After the locations for the multiplexed control files have been created, we need to modify the CONTROL_FILES instance parameter using the ALTER SYSTEM command. We use SCOPE=SPFILE to indicate that the change will be applied after a database restart.

SQL> alter system set control_files = '/u01/app/oracle/oradata/orcl/control01.ctl',
  2  '/u02/app/oracle/oradata/orcl/control02.ctl', '/u03/app/oracle/oradata/orcl/control03.ctl'
  3  scope=spfile
  4  /

System altered.
Then we shut down the database using SHUTDOWN IMMEDIATE and move the control files to their new locations.
SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.

[oracle@localhost ~]$ mv /u01/app/oracle/oradata/orcl/control02.ctl /u02/app/oracle/oradata/orcl/
[oracle@localhost ~]$ mv /u01/app/oracle/oradata/orcl/control03.ctl /u03/app/oracle/oradata/orcl/
[oracle@localhost ~]$ ls -l /u02/app/oracle/oradata/orcl/
total 9536
-rw-r----- 1 oracle oinstall 9748480 Jul 17 13:02 control02.ctl
[oracle@localhost ~]$ ls -l /u03/app/oracle/oradata/orcl/
total 9536
-rw-r----- 1 oracle oinstall 9748480 Jul 17 13:02 control03.ctl

The last thing to do is to start the database and check the locations of the control files.

[oracle@localhost ~]$ sqlplus / as sysdba

SQL*Plus: Release 11.1.0.6.0 - Production on Fri Jul 17 13:10:29 2009

Copyright (c) 1982, 2007, Oracle.  All rights reserved.

Connected to an idle instance.

SQL> startup
ORACLE instance started.

Total System Global Area  477073408 bytes
Fixed Size                  1300716 bytes
Variable Size             201328404 bytes
Database Buffers          268435456 bytes
Redo Buffers                6008832 bytes
Database mounted.
Database opened.

SQL> set linesize 160
SQL> column name format a60
SQL> column is_recovery_dest_file format a25
SQL> select status, name, is_recovery_dest_file
  2  from v$controlfile
  3  /

STATUS  NAME                                                         IS_RECOVERY_DEST_FILE
------- ------------------------------------------------------------ -------------------------
        /u01/app/oracle/oradata/orcl/control01.ctl                   NO
        /u02/app/oracle/oradata/orcl/control02.ctl                   NO
        /u03/app/oracle/oradata/orcl/control03.ctl                   NO

HAPPY LEARNING :)

Wednesday, February 26, 2014

12c New Feature: Unified Auditing

Oracle Database 12c provides many new security features. One of them is Unified Auditing. It replaces the old audit views and database parameters with a refreshing new central implementation.
I have checked the license requirements but did not find any usable information. Unified Auditing does not seem to be mentioned in the Oracle Price List or the Oracle Licensing Guide; however, I have checked against an Oracle Standard Edition and was not able to activate it.
So why use Unified Auditing?
  • Consolidate all audit information into a single audit trail table
  • Improve audit performance
  • Simple configuration
  • Secure audit data for all RDBMS Options and other components like RMAN and Data Pump
Consolidate all audit information into a single audit trail table
Pre-12c, there are many objects where audit information is stored:
  • SYS.AUD$ / DBA_AUDIT_TRAIL
  • SYS.FGA_LOG$/DBA_FGA_AUDIT_TRAIL
  • DBA_COMMON_AUDIT_TRAIL
  • V$XML_AUDIT_TRAIL
  • DVSYS.AUDIT_TRAIL$
  • OS audit files
In Oracle 12c, all information is stored in a single view named SYS.UNIFIED_AUDIT_TRAIL. Since different RDBMS components have different audit actions, this view is also extensible: for components such as Real Application Security, OLS, or a normal RMAN operation, columns are added to or removed from the view.
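A quick way to see what has been captured is to query a few of the common columns (a sketch; the exact column list varies with the components in use):

SQL> SELECT event_timestamp, dbusername, action_name, object_name
  2  FROM unified_audit_trail
  3  ORDER BY event_timestamp;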
Improve audit performance
Oracle 12c stores audit information in SGA memory queues. When a queue is full, it is flushed by a new background process, GEN0. Because the storing of audit information is not a synchronous process, there is less overhead on your sessions. OK, I know what you want to say: audit records in the SGA, and what happens when the instance crashes? This is correct; in that case you can lose some audit records. But I believe many customers can tolerate some loss of audit data in case of an instance crash. You can configure two queuing modes, which control performance versus data loss.
In immediate-write mode, the audit information is written immediately, which costs more performance.
EXECUTE DBMS_AUDIT_MGMT.SET_AUDIT_TRAIL_PROPERTY(-
 DBMS_AUDIT_MGMT.AUDIT_TRAIL_UNIFIED, -
 DBMS_AUDIT_MGMT.AUDIT_TRAIL_WRITE_MODE, -
 DBMS_AUDIT_MGMT.AUDIT_TRAIL_IMMEDIATE_WRITE);
In queued-write mode, the audit information is not written to disk immediately. This is the default mode.
EXECUTE DBMS_AUDIT_MGMT.SET_AUDIT_TRAIL_PROPERTY(-
 DBMS_AUDIT_MGMT.AUDIT_TRAIL_UNIFIED, -
 DBMS_AUDIT_MGMT.AUDIT_TRAIL_WRITE_MODE, -
 DBMS_AUDIT_MGMT.AUDIT_TRAIL_QUEUED_WRITE);
Simple configuration
You can now create an audit policy: a named group of audit actions. The fact that you can now group audit operations makes several configurations easier.
Sample:
SQL> CREATE AUDIT POLICY dppol1 ACTIONS COMPONENT=datapump export;
SQL> AUDIT POLICY dppol1;
Check your policy:
SQL> col user_name format A15
SQL> col policy_name format A15
SQL> SELECT * FROM AUDIT_UNIFIED_ENABLED_POLICIES
  2  where POLICY_NAME like '%DP%';

USER_NAME       POLICY_NAME     ENABLED_ SUC FAI
--------------- --------------- -------- --- ---
ALL USERS       DPPOL1          BY       YES YES
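
To stop auditing with the policy, or to remove it altogether, the reverse operations are (a sketch):

SQL> NOAUDIT POLICY dppol1;
SQL> DROP AUDIT POLICY dppol1;
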
Secure audit data for all RDBMS Options and other components like RMAN and Data Pump
So let us export a table and check what we get.
[oracle@server1 uniaud]$ expdp system/welcome1 dumpfile=employees.dmp tables=HR.EMPLOYEES

Export: Release 12.1.0.1.0 - Production on Sat Jul 20 20:12:17 2013

Copyright (c) 1982, 2013, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics, Real Application Testing and Unified Auditing options
Starting "SYSTEM"."SYS_EXPORT_TABLE_01":  system/******** dumpfile=employees.dmp tables=HR.EMPLOYEES 
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 64 KB
(...)
After the export, we check what the unified audit trail captured:
select DBUSERNAME, DP_TEXT_PARAMETERS1,
 DP_BOOLEAN_PARAMETERS1
 from UNIFIED_AUDIT_TRAIL
 where DP_TEXT_PARAMETERS1 is not null;

DBUSERNAME
------------------------------
DP_TEXT_PARAMETERS1
--------------------------------------------------------------------------------
DP_BOOLEAN_PARAMETERS1
--------------------------------------------------------------------------------
SYSTEM
MASTER TABLE:  "SYSTEM"."SYS_EXPORT_TABLE_01" , JOB_TYPE: EXPORT, METADATA_JOB_M
ODE: TABLE_EXPORT, JOB VERSION: 12.1.0.0.0, ACCESS METHOD: AUTOMATIC, DATA OPTIO
NS: 0, DUMPER DIRECTORY: NULL  REMOTE LINK: NULL, TABLE EXISTS: NULL, PARTITION
OPTIONS: NONE
MASTER_ONLY: FALSE, DATA_ONLY: FALSE, METADATA_ONLY: FALSE, DUMPFILE_PRESENT: TR
UE, JOB_RESTARTED: FALSE
Also, don't forget: if you do not see your records, you may need to flush them to disk. In that case, use this statement:
SQL> EXEC SYS.DBMS_AUDIT_MGMT.FLUSH_UNIFIED_AUDIT_TRAIL

PL/SQL procedure successfully completed.
So how do you configure Unified Auditing? This is done at the OS level by relinking the Oracle executable. My first thought was that this could be done with the chopt command. But no. You can read MOS Note 1567006.1, How To Enable The New Unified Auditing In 12c, for more information.
Here is a short example of turning on Unified Auditing.
First we check the current status of this option.
select con_id,parameter,value from v$option where parameter = 'Unified Auditing'

    CON_ID PARAMETER            VALUE
---------- -------------------- --------
         0 Unified Auditing     FALSE
Next, shut down all running processes of your Oracle home (database and listener), then relink the oracle executable. The directory here is $ORACLE_HOME/rdbms/lib.
[oracle@server1 lib]$ make -f ins_rdbms.mk uniaud_on ioracle
/usr/bin/ar d /u01/app/oracle/product/12.1.0/db_ee_1/rdbms/lib/libknlopt.a kzanang.o
/usr/bin/ar cr /u01/app/oracle/product/12.1.0/db_ee_1/rdbms/lib/libknlopt.a /u01/app/oracle/product/12.1.0/db_ee_1/rdbms/lib/kzaiang.o 
chmod 755 /u01/app/oracle/product/12.1.0/db_ee_1/bin

 - Linking Oracle 
rm -f /u01/app/oracle/product/12.1.0/db_ee_1/rdbms/lib/oracle
/u01/app/oracle/product/12.1.0/db_ee_1/bin/orald  -o /u01/app/oracle/product/12.1.0/db_ee_1/rdbms/lib/oracle -m64 -z noexecstack -Wl,--disable-new-dtags -L/u01/app/oracle/product/12.1.0/db_ee_1/rdbms/lib/ -L/u01/app/oracle/product/12.1.0/db_ee_1/lib/ -L/u01/app/oracle/product/12.1.0/db_ee_1/lib/stubs/   -Wl,-E /u01/app/oracle/product/12.1.0/db_ee_1/rdbms/lib/opimai.o /u01/app/oracle/product/12.1.0/db_ee_1/rdbms/lib/ssoraed.o /u01/app/oracle/product/12.1.0/db_ee_1/rdbms/lib/ttcsoi.o -Wl,--whole-archive -lperfsrv12 -Wl,--no-whole-archive /u01/app/oracle/product/12.1.0/db_ee_1/lib/nautab.o /u01/app/oracle/product/12.1.0/db_ee_1/lib/naeet.o /u01/app/oracle/product/12.1.0/db_ee_1/lib/naect.o /u01/app/oracle/product/12.1.0/db_ee_1/lib/naedhs.o /u01/app/oracle/product/12.1.0/db_ee_1/rdbms/lib/config.o  -lserver12 -lodm12 -lcell12 -lnnet12 -lskgxp12 -lsnls12 -lnls12  -lcore12 -lsnls12 -lnls12 -lcore12 -lsnls12 -lnls12 -lxml12 -lcore12 -lunls12 -lsnls12 -lnls12 -lcore12 -lnls12 -lclient12  -lvsn12 -lcommon12 -lgeneric12 -lknlopt `if /usr/bin/ar tv /u01/app/oracle/product/12.1.0/db_ee_1/rdbms/lib/libknlopt.a | grep xsyeolap.o > /dev/null 2>&1 ; then echo "-loraolap12" ; fi` -lskjcx12 -lslax12 -lpls12  -lrt -lplp12 -lserver12 -lclient12  -lvsn12 -lcommon12 -lgeneric12 `if [ -f /u01/app/oracle/product/12.1.0/db_ee_1/lib/libavserver12.a ] ; then echo "-lavserver12" ; else echo "-lavstub12"; fi` `if [ -f /u01/app/oracle/product/12.1.0/db_ee_1/lib/libavclient12.a ] ; then echo "-lavclient12" ; fi` -lknlopt -lslax12 -lpls12  -lrt -lplp12 -ljavavm12 -lserver12  -lwwg  `cat /u01/app/oracle/product/12.1.0/db_ee_1/lib/ldflags`    -lncrypt12 -lnsgr12 -lnzjs12 -ln12 -lnl12 -lnro12 `cat /u01/app/oracle/product/12.1.0/db_ee_1/lib/ldflags`    -lncrypt12 -lnsgr12 -lnzjs12 -ln12 -lnl12 -lnnzst12 -lzt12 -lztkg12 -lmm -lsnls12 -lnls12  -lcore12 -lsnls12 -lnls12 -lcore12 -lsnls12 -lnls12 -lxml12 -lcore12 -lunls12 -lsnls12 -lnls12 -lcore12 -lnls12 -lztkg12 `cat /u01/app/oracle/product/12.1.0/db_ee_1/lib/ldflags`    -lncrypt12 -lnsgr12 -lnzjs12 -ln12 -lnl12 -lnro12 `cat /u01/app/oracle/product/12.1.0/db_ee_1/lib/ldflags`    -lncrypt12 -lnsgr12 -lnzjs12 -ln12 -lnl12 -lnnzst12 -lzt12 -lztkg12   -lsnls12 -lnls12  -lcore12 -lsnls12 -lnls12 -lcore12 -lsnls12 -lnls12 -lxml12 -lcore12 -lunls12 -lsnls12 -lnls12 -lcore12 -lnls12 `if /usr/bin/ar tv /u01/app/oracle/product/12.1.0/db_ee_1/rdbms/lib/libknlopt.a | grep "kxmnsd.o" > /dev/null 2>&1 ; then echo " " ; else echo "-lordsdo12"; fi` -L/u01/app/oracle/product/12.1.0/db_ee_1/ctx/lib/ -lctxc12 -lctx12 -lzx12 -lgx12 -lctx12 -lzx12 -lgx12 -lordimt12 -lclsra12 -ldbcfg12 -lhasgen12 -lskgxn2 -lnnzst12 -lzt12 -lxml12 -locr12 -locrb12 -locrutl12 -lhasgen12 -lskgxn2 -lnnzst12 -lzt12 -lxml12  -lgeneric12 -loraz -llzopro -lorabz2 -lipp_z -lipp_bz2 -lippdcemerged -lippsemerged -lippdcmerged  -lippsmerged -lippcore  -lippcpemerged -lippcpmerged  -lsnls12 -lnls12  -lcore12 -lsnls12 -lnls12 -lcore12 -lsnls12 -lnls12 -lxml12 -lcore12 -lunls12 -lsnls12 -lnls12 -lcore12 -lnls12 -lsnls12 -lunls12  -lsnls12 -lnls12  -lcore12 -lsnls12 -lnls12 -lcore12 -lsnls12 -lnls12 -lxml12 -lcore12 -lunls12 -lsnls12 -lnls12 -lcore12 -lnls12 -lasmclnt12 -lcommon12 -lcore12  -laio -lons    `cat /u01/app/oracle/product/12.1.0/db_ee_1/lib/sysliblist` -Wl,-rpath,/u01/app/oracle/product/12.1.0/db_ee_1/lib -lm    `cat /u01/app/oracle/product/12.1.0/db_ee_1/lib/sysliblist` -ldl -lm   -L/u01/app/oracle/product/12.1.0/db_ee_1/lib
test ! -f /u01/app/oracle/product/12.1.0/db_ee_1/bin/oracle ||\
           mv -f /u01/app/oracle/product/12.1.0/db_ee_1/bin/oracle /u01/app/oracle/product/12.1.0/db_ee_1/bin/oracleO
mv /u01/app/oracle/product/12.1.0/db_ee_1/rdbms/lib/oracle /u01/app/oracle/product/12.1.0/db_ee_1/bin/oracle
chmod 6751 /u01/app/oracle/product/12.1.0/db_ee_1/bin/oracle
After you start the database, the Unified Auditing option should be set to TRUE. Don't forget that this appears to work only on Oracle Enterprise Edition.
select con_id,parameter,value from v$option where parameter = 'Unified Auditing'

    CON_ID PARAMETER            VALUE
---------- -------------------- --------
         0 Unified Auditing     TRUE
Also, as I said before, the audit information is read-only and is placed under the AUDSYS user. This account is special because you cannot log in to it.
SQL> alter user AUDSYS identified by welcome1 account unlock;

User altered.

SQL> conn audsys/welcome1
ERROR:
ORA-46370: cannot connect as AUDSYS user

Warning: You are no longer connected to ORACLE.
Check the table, which is read-only:
SQL> select table_name from dba_tables where owner = 'AUDSYS';

TABLE_NAME
--------------------------------------------------------------------------------
CLI_SWP$7cf2fd15$1$1
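
Because the table is read-only for normal DML, housekeeping goes through the DBMS_AUDIT_MGMT package. For example, a full purge of the unified trail might look like this (a sketch; here everything is purged rather than only records up to an archive timestamp):

BEGIN
  DBMS_AUDIT_MGMT.CLEAN_AUDIT_TRAIL(
    audit_trail_type        => DBMS_AUDIT_MGMT.AUDIT_TRAIL_UNIFIED,
    use_last_arch_timestamp => FALSE);
END;
/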

Space Management in Oracle Database

EXTENT MANAGEMENT and SEGMENT SPACE MANAGEMENT


Many people get confused between segment space management and extent management. The two terms are totally different and can be understood in the following way.

EXTENT MANAGEMENT - Whenever a segment is full and needs extra space, Oracle automatically allocates space in the form of extents (collections of contiguous blocks). How the extent will be allocated, and what the size of the newly allocated extent will be, is decided by extent management according to the storage parameters. You can define storage parameters (INITIAL, NEXT, MINEXTENTS, MAXEXTENTS, PCTINCREASE, ...) for a dictionary-managed tablespace (DMTS) but not for a locally managed tablespace (LMTS). In an LMTS we have the options AUTOALLOCATE and UNIFORM. If we choose AUTOALLOCATE, Oracle takes default values for the storage parameters, such as 64 KB for INITIAL, 1 for MINEXTENTS, and so on. In the case of UNIFORM, a uniform size is always given to a newly allocated extent (1 MB by default for a temporary tablespace, which always takes a UNIFORM extent size).
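
A minimal sketch of the two LMTS options (the tablespace names and file paths are illustrative):

CREATE TABLESPACE demo_auto
  DATAFILE '/u01/app/oracle/oradata/orcl/demo_auto01.dbf' SIZE 100M
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE;

CREATE TABLESPACE demo_uniform
  DATAFILE '/u01/app/oracle/oradata/orcl/demo_uniform01.dbf' SIZE 100M
  EXTENT MANAGEMENT LOCAL UNIFORM SIZE 1M;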

SEGMENT SPACE MANAGEMENT - SSM relates to the management of free blocks within segments. It can be AUTO (recommended) or MANUAL for an LMTS, and only MANUAL for a DMTS. With MANUAL, Oracle maintains a list of free blocks in the form of a linked list and has to scan the blocks (costly) to keep this list up to date. With AUTO, Oracle maintains a bitmap to record the information about free blocks: if a bit is 1 the block is not free, and if it is 0 the block is free. The default is AUTO. In this way AUTO is more beneficial than the MANUAL method of segment space management. If SSM is AUTO, we call it ASSM (automatic segment space management).
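
Segment space management is likewise declared when the tablespace is created, and both settings can be verified from the data dictionary (a sketch, reusing the illustrative names from above):

CREATE TABLESPACE demo_assm
  DATAFILE '/u01/app/oracle/oradata/orcl/demo_assm01.dbf' SIZE 100M
  EXTENT MANAGEMENT LOCAL AUTOALLOCATE
  SEGMENT SPACE MANAGEMENT AUTO;

SELECT tablespace_name, extent_management, allocation_type, segment_space_management
FROM dba_tablespaces
WHERE tablespace_name LIKE 'DEMO%';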

Tuesday, February 25, 2014

What are PCTFREE and PCTUSED in an Oracle Block?


   PCTFREE and PCTUSED are block-level storage parameters.


1. PCTFREE - This parameter specifies how much space should be left free in the block for updates.
For example, if the PCTFREE for a block is 30, then Oracle keeps adding new rows to the block until the block is 70% full, leaving 30% for future updates. That is, if any row is updated and requires more space, the space is taken from the 30% remaining in the block. You should specify a HIGH value for this parameter if future updates to the rows of the table will need more space. If your table is less prone to updates, this value can be set LOW.

2. PCTUSED - As mentioned above, Oracle stops inserting new rows into a block when the space usage reaches the PCTFREE limit.
Now consider this: when should Oracle start inserting new rows into the block again?
The PCTUSED parameter answers that question.
Suppose you have specified PCTUSED as 40% and PCTFREE as 20%.

1. Oracle will keep inserting new rows until the block is 80% used. It reserves the remaining 20% for future updates.
2. To start adding new rows to the block again, Oracle checks the used space in the block against the PCTUSED parameter.
3. When the used space falls below 40%, Oracle starts adding new rows to the block again.
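
Both parameters are set per segment, typically at table creation. A minimal sketch matching the numbers above (note that in an ASSM tablespace PCTUSED is ignored, so this only takes effect under MANUAL segment space management):

CREATE TABLE emp_history (
  id   NUMBER,
  note VARCHAR2(100)
) PCTFREE 20 PCTUSED 40;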

Wednesday, February 19, 2014

Are you confused about Oracle SQL and PL/SQL syntax? Just download the reference sheet.

Click here to download: Oracle SQL / PL/SQL reference sheet

Codd's 12 Rules

Codd's 12 Rules for an RDBMS


Rule 1: The Information Rule - All data should be presented to the user in table form.

Rule 2: Guaranteed Access Rule - All data should be accessible without ambiguity. This can be accomplished through a combination of the table name, primary key, and column name.

Rule 3: Systematic Treatment of Null Values - A field should be allowed to remain empty. This involves the support of a null value, which is distinct from an empty string or a number with a value of zero. Of course this can't apply to primary keys. In addition, most database implementations support the concept of a non-null field constraint that prevents null values in a specific table column.

Rule 4: Dynamic On-Line Catalog Based on the Relational Model - A relational database must provide access to its structure through the same tools that are used to access the data. This is usually accomplished by storing the structure definition within special system tables.

Rule 5: Comprehensive Data Sublanguage Rule - The database must support at least one clearly defined language that includes functionality for data definition, data manipulation, data integrity, and database transaction control. All commercial relational databases use forms of the standard SQL (Structured Query Language) as their supported comprehensive language.

Rule 6: View Updating Rule - Data can be presented to the user in different logical combinations called views. Each view should support the same full range of data manipulation that direct access to a table has available. In practice, providing update and delete access to logical views is difficult and is not fully supported by any current database.

Rule 7: High-Level Insert, Update, and Delete - Data can be retrieved from a relational database in sets constructed of data from multiple rows and/or multiple tables. This rule states that insert, update, and delete operations should be supported for any retrievable set, rather than just for a single row in a single table.

Rule 8: Physical Data Independence - The user is isolated from the physical method of storing and retrieving information from the database. Changes can be made to the underlying architecture (hardware, disk storage methods) without affecting how the user accesses it.

Rule 9: Logical Data Independence - How a user views data should not change when the logical structure (table structure) of the database changes. This rule is particularly difficult to satisfy. Most databases rely on strong ties between the user view of the data and the actual structure of the underlying tables.

Rule 10: Integrity Independence - The database language (like SQL) should support constraints on user input that maintain database integrity. This rule is not fully implemented by most major vendors. At a minimum, all databases do preserve two constraints through SQL: no component of a primary key can have a null value (see Rule 3), and if a foreign key is defined in one table, any value in it must exist as a primary key in another table.

Rule 11: Distribution Independence - A user should be totally unaware of whether or not the database is distributed (whether parts of the database exist in multiple locations). A variety of reasons make this rule difficult to implement; I will spend time addressing these reasons when we discuss distributed databases.

Rule 12: Nonsubversion Rule - There should be no way to modify the database structure other than through the multiple-row database language (like SQL). Most databases today support administrative tools that allow some direct manipulation of the data structure.