Saturday, July 13, 2013

Moving Spfile From File-System To ASM

1. Create spfile in ASM "+DATA" disk group :

SQL> connect / as sysdba

SQL> show parameter spfile

NAME      TYPE     VALUE
--------- -------- ------------------------------
spfile    string   /u01/app/oracle/product/11.2.0/db_1/dbs/spfileracdb11.ora

SQL> create pfile='/u01/app/oracle/product/11.2.0/db_1/dbs/initracdb11.ora' from spfile='/u01/app/oracle/product/11.2.0/db_1/dbs/spfileracdb11.ora';

File created.

SQL> create spfile='+DATA' from pfile='/u01/app/oracle/product/11.2.0/db_1/dbs/initracdb11.ora';
File created.

SQL> exit

2. Modify initracdb11.ora on node1 and initracdb12.ora on node2 so they point to the new spfile location in ASM

[oracle@node1 dbs]$ echo "SPFILE='+DATA/racdb1/spfileracdb1.ora'" > /u01/app/oracle/product/11.2.0/db_1/dbs/initracdb11.ora
[oracle@node1 dbs]$ ssh node2 "echo \"SPFILE='+DATA/racdb1/spfileracdb1.ora'\" > /u01/app/oracle/product/11.2.0/db_1/dbs/initracdb12.ora"

3. Update OCR with new SPFILE location

[oracle@node1 dbs]$ srvctl modify database -d racdb1 -p +DATA/racdb1/spfileracdb1.ora

4. Rename any existing spfiles in $ORACLE_HOME/dbs so they cannot be picked up by mistake.
5. Restart all instances to switch to the new spfile.
6. Check the new location of the spfile, as shown below.
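A minimal sketch of steps 5 and 6 (the database name and ASM path follow the example above; adjust to your environment):

[oracle@node1 dbs]$ srvctl stop database -d racdb1
[oracle@node1 dbs]$ srvctl start database -d racdb1

SQL> show parameter spfile

NAME      TYPE     VALUE
--------- -------- ------------------------------
spfile    string   +DATA/racdb1/spfileracdb1.ora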

Difference: Oracle on Windows vs Linux

For the past six years I have supported Oracle in Unix and Windows environments.


1- Installing Oracle on Windows / Linux

Installing Oracle on Windows does not require creating any separate user; the installation can be performed as the Windows "administrator" superuser. Installing Oracle on Unix/Linux requires creating a separate operating-system user account; we should not perform the installation as the "root" superuser.

On Windows, if we do create a separate operating-system user for the installation, it must be a member of the administrators group. On Unix/Linux, the installation user must not be part of the superuser group.

2- Default Locations on Windows, Linux

The default location of the password file and parameter file on Windows is the ORACLE_HOME\database folder. The default location of the password file and parameter file on Unix/Linux is the ORACLE_HOME/dbs folder.

ORACLE_BASE, ORACLE_HOME and ORACLE_SID are defined in the Windows registry under HKEY_LOCAL_MACHINE\SOFTWARE\ORACLE\HOME0. On Unix/Linux they are defined as the user's environment variables.

3- Symbolic Links on Windows, Linux

Symbolic links are not supported for the registry parameters in Windows. Symbolic links are supported in the paths referenced by a user's environment variables on Unix/Linux.

4- Environment Variables on Windows, Linux

On Windows we set environment variables with the "set" command, and they are not saved in a user profile. On Unix/Linux we set environment variables with the "export" command, and they can be saved in .profile (Unix) or .bash_profile (Linux).
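A quick illustration (the SID and paths are examples only):

Windows:
C:\> set ORACLE_SID=orcl

Linux, persisted in ~/.bash_profile:
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1
export ORACLE_SID=orcl
export PATH=$ORACLE_HOME/bin:$PATH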

5- Shared Libraries (DLLs) on Windows, Linux

On Windows, Oracle's shared libraries are DLL files. On Unix/Linux, Oracle's shared libraries are shared object (.so) files under $ORACLE_HOME/lib.

6- Relinking on Windows, Linux

Relinking of the Oracle executables is not available on Windows. Relinking of the Oracle executables is available on Unix/Linux.
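For example, on Unix/Linux all executables can be relinked with the script shipped in the Oracle home:

$ $ORACLE_HOME/bin/relink all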

7- Shared Memory Segments on Windows, Linux

Shared memory, shared segments and semaphores are not tunable on Windows. Shared memory (SHMMAX), shared memory segments (SHMMNI) and semaphores (SEMMNS) are adjustable through kernel parameters on Unix/Linux.
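On Linux these parameters typically live in /etc/sysctl.conf; the values below are illustrative only, not a recommendation for your system:

# /etc/sysctl.conf
kernel.shmmax = 4294967296
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128

Apply them without a reboot:
# sysctl -p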

8- Memory and Processes on Windows, Linux

Locking Oracle's SGA in real memory is not possible on Windows. Locking the SGA in real memory is possible on Unix/Linux.

On Windows, each Oracle background process is implemented as a thread inside a single process. On Unix/Linux, each Oracle background process is a separate operating-system process.

9- GUI, CLUI on Windows, Linux

Windows is built around a Graphical User Interface (GUI). Unix and Linux are built around a Command Line User Interface (CLUI). For this reason Unix and Linux tend to deliver better performance than Windows, since fewer resources are spent on the interface.

10- File System, Security on Windows, Linux

Windows presents a flat namespace of drive letters, while Unix and Linux use a single hierarchical file system rooted at /. The Windows kernel configuration is stored in a couple of files, such as the registry hives; Unix and Linux configuration lives in many files arranged in a hierarchy, which makes the layout easy to understand in any version.

The older FAT and FAT32 file systems had no security on Windows; with NTFS, Windows uses file-permission-based security. Unix and Linux have traditional file-permission security with owner, group and other users. Unix has strong built-in security and permission features, and Linux follows the same security and permission model as Unix.

11- Monitoring Oracle on Windows, Linux

There are very few utilities available on Windows for performance monitoring and administration. On Unix/Linux there are many command-line utilities (top, vmstat, iostat, sar and so on) available for performance monitoring and administration.

12- Source Code on Windows, Linux

The operating-system source code is not available on Windows. The operating-system source code is available for some Linux flavors, which means the operating system itself can be modified.

13- Skills on Windows, Linux

Oracle on Windows attracts people because it is easy to understand, easy to maintain and easy to develop on, with good resource availability and support. Oracle on Unix/Linux is not as easy to understand, maintain or develop on, because it requires a deeper skill set and more in-depth knowledge.

Oracle deployment is very easy on Windows because it does not need much special knowledge or skills. Oracle deployment on Unix/Linux is harder because it requires specialized skill sets.

Windows is a user-friendly operating system; Unix and Linux are not as user-friendly.

14- Viruses on Windows, Linux

There is a high risk of virus attacks on Windows, because the majority of Windows users run as Administrator, and through that superuser account a virus can affect any file of the kernel. The risk is much lower on Unix and Linux, because most Unix or Linux boxes are operated from ordinary user accounts rather than the "root" superuser, so a virus cannot modify the kernel of the operating system.

Default Users

SYS - the support user, or "DB root" user.
The SYS user owns all base tables and user-accessible views of the data dictionary (Oracle configuration information). No Oracle user should ever alter (update, delete, or insert) any rows or schema objects contained in the SYS schema, because such activity can compromise data integrity. The security administrator must keep strict control of this central account.

SYSTEM
The SYSTEM user is used to create additional tables and views that display administrative information, and internal tables and views used by various Oracle options and tools.

SYSMAN
The SYSMAN user represents the Enterprise Manager super admin account. This EM admin can create and modify other EM admin accounts, as well as administer the database instance itself.

DBSNMP
The DBSNMP user is used by EM to monitor the database. EM uses this account to access performance stats about the database. The DBSNMP credentials are sometimes referred to as the monitoring credentials.

Difference between the SYSDBA and SYSOPER roles

The big difference between the SYSDBA and SYSOPER privileges is that SYSDBA can do anything (just like root). The SYSOPER privilege allows just about the same amount of control but won't let you look at user data. Both privileges allow you to ALTER DATABASE, CREATE SPFILE, STARTUP or SHUTDOWN, and ALTER DATABASE ARCHIVELOG, and both include the RESTRICTED SESSION privilege. However, only SYSDBA can CREATE or DROP DATABASE, and the ALTER DATABASE RECOVER options for SYSOPER are limited to complete recovery only.

Compare Oracle Data Between Two Schemas

There are several tools that can be used for comparing two database schemas:

1- Open-source applications for comparing two tables of data.
2- TOAD for Oracle.
3- Oracle SQL Developer.
4- Oracle Enterprise Manager (OEM).
5- Red Gate tools to compare data, schemas and databases.

Difference between V$SESSION_WAIT and V$SESSION_EVENT

A simple topic, but useful and good to know.


V$SESSION_WAIT
Displays the current or last wait for each session; that is, it shows the events for which sessions have just completed waiting or are currently waiting.

V$SESSION_EVENT
Lists cumulative information on waits for each event by a session. It is similar to V$SYSTEM_EVENT, but breaks the waits down per session.


In other words, a session is running and waiting to acquire a resource. Because V$SESSION_WAIT is a current-state view, it also contains finer-grained information than V$SESSION_EVENT or V$SYSTEM_EVENT: it includes additional identifying data for the current event in three parameter columns, P1, P2, and P3.

For example, V$SESSION_EVENT can show that session 124 (SID=124) had many waits on the db file scattered read event, but it does not show which file and block number. However, V$SESSION_WAIT shows the file number in P1, the block number read in P2, and the number of blocks read in P3 (P1 and P2 let you determine for which segments the wait event is occurring).
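A quick sketch of putting this to use (SID 124 is just the example above; the second query assumes P1/P2 come from a db file scattered read wait):

SQL> select sid, event, p1, p2, p3
     from v$session_wait
     where sid = 124;

SQL> select owner, segment_name, segment_type
     from dba_extents
     where file_id = &p1
     and &p2 between block_id and block_id + blocks - 1;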

What Each Oracle Version Digit Means

[oracle@boost ~]$ sqlplus "/as sysdba"

SQL*Plus: Release 12.1.0.1.0 Production on Sat Jul 13 08:52:25 2013

Copyright (c) 1982, 2013, Oracle.  All rights reserved.


Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options

SQL>


Major Database Release Number

The first digit is the most general identifier. It represents a major new version of the software that contains significant new functionality.
Database Maintenance Release Number

The second digit represents a maintenance release level. Some new features may also be included.
Application Server Release Number

The third digit reflects the release level of the Oracle Application Server (OracleAS).
Component-Specific Release Number

The fourth digit identifies a release level specific to a component. Different components can have different numbers in this position depending upon, for example, component patch sets or interim releases.
Platform-Specific Release Number

The fifth digit identifies a platform-specific release. Usually this is a patch set. When different platforms require the equivalent patch set, this digit will be the same across the affected platforms.
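Reading the banner above as a worked example, in 12.1.0.1.0: 12 is the major database release, 1 the maintenance release, 0 the Application Server release, 1 the component-specific release, and 0 the platform-specific release.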


Spool File With Date/Time Name

I use the following to create a spool file with the date in its name:

SET TERMOUT OFF
COLUMN today_ddmmyyyy_col NEW_VALUE today_ddmmyyyy
SELECT TO_CHAR ( SYSDATE, 'DDMMYYYY') AS today_ddmmyyyy_col
FROM    dual;
SET TERMOUT ON
SPOOL ticket_&today_ddmmyyyy..log


ORA-01438: value larger than specified precision allowed for this column

You receive this error when trying to insert a value too large for a column's defined precision.

Below are some options to trace this issue.

Option 1:
Enable auditing to capture the statement that caused the error.

SQL> create table error_ora as select * from scott.dept;
SQL> desc error_ora
 Name                                Null?    Type
 ----------------------------------- -------- ------------------------
 DEPTNO                              NOT NULL NUMBER(2)
 DNAME                                        VARCHAR2(14)
 LOC                                          VARCHAR2(13)

SQL> audit insert on joshiv.error_ora whenever not successful;

Audit succeeded.

SQL> insert into joshiv.error_ora values(2000,'joshiv','joshiv');
insert into joshiv.error_ora values(2000,'joshiv','joshiv')
                              *
ERROR at line 1:
ORA-01438: value larger than specified precision allowed for this
column


SQL> select sql_text,returncode from dba_audit_trail
  2  where owner='JOSHIV' and obj_name='ERROR_ORA';

SQL_TEXT
----------------------------------------------------------------------
RETURNCODE
----------
insert into joshiv.error_ora values(2000,'joshiv','joshiv')
      1438
Option 2:

Enable an errorstack trace on event 1438.


SQL> conn joshiv/joshiv
SQL> create table error_ora as select * from scott.dept ;

SQL> select * from error_ora ;

    DEPTNO DNAME          LOC
---------- -------------- -------------
        10 ACCOUNTING     NEW YORK
        20 RESEARCH       DALLAS
        30 SALES          CHICAGO
        40 OPERATIONS     BOSTON


SQL> alter system set events='1438 trace name Errorstack forever,level 10';


SQL> insert into error_ora values (100000000000000000,'joshiv','vinodj');                        
insert into error_ora values (100000000000000000,'joshiv','vinodj')
                         *
ERROR at line 1:
ORA-01438: value larger than specified precision allowed for this column

In Trace File :

ORA-01438: value larger than specified precision allowed for this column
Current SQL statement for this session:
insert into error_ora values (100000000000000000,'joshiv','vinodj')
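Once you have the trace, remember to switch the event off again:

SQL> alter system set events='1438 trace name errorstack off';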


Database Reports

There are three Oracle database reports that help us manage the database and dig into issues.

AWR: Automatic Workload Repository

The AWR is used to collect performance statistics including:
Wait events used to identify performance problems.
Time model statistics indicating the amount of DB time associated with a process from the V$SESS_TIME_MODEL and V$SYS_TIME_MODEL views.
Active Session History (ASH) statistics from the V$ACTIVE_SESSION_HISTORY view.
Some system and session statistics from the V$SYSSTAT and V$SESSTAT views.
Object usage statistics.
Resource intensive SQL statements.
I will not go into the details of how to generate an AWR report, since I have covered it before on this blog.
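For completeness, the text report is generated with the standard script:

SQL> @$ORACLE_HOME/rdbms/admin/awrrpt.sql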


ADDM: Automatic Database Diagnostic Monitor

ADDM analyzes data in the Automatic Workload Repository (AWR) to identify potential performance bottlenecks. We use it for the following cases:

     CPU bottlenecks
     Undersized memory structures
     I/O capacity issues
     High load SQL statements
     RAC specific issues
     Database configuration issues
     It also provides recommendations on hardware changes, database configuration and schema changes.

Generate an ADDM report:
Log in to SQL*Plus and run:
@$ORACLE_HOME/rdbms/admin/addmrpt.sql
Enter the SYSTEM password when asked for it.
Specify a begin_snap from the list and press Enter.
Specify the end_snap from the list and press Enter.
Accept or enter a report name.
ASH: Active Session History

ASH samples statistics from the in-memory performance monitoring tables; it is used to track session activity and simplify performance tuning.

ASH reports give the following information:

Top User Events (frequent wait events)
Details of the wait events
Top Queries
Top Sessions
Top Blocking Sessions
Top DB Objects
Activity Over Time
Generate an ASH report:
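The standard script for this is:

SQL> @$ORACLE_HOME/rdbms/admin/ashrpt.sql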



Useful Linux Commands For DBA

Show Routing Table :

netstat -r
Kernel IP routing table
Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
172.16.0.0      *               255.255.255.0   U         0 0          0 eth0
169.254.0.0     *               255.255.0.0     U         0 0          0 eth0
default         172.16.0.150    0.0.0.0         UG        0 0          0 eth0

List Unique ORA- Errors From The Alert Log :

First find the background dump destination, then grep the logs in that directory:

SQL> show parameter background_dump_dest

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
background_dump_dest                 string      /oracle/product/admin/CRMPRD/b
                 
 grep ^ORA- *log |cut -f2 -d"-"|cut -f1 -d:|awk '{print "ORA-" $1}'|sort -u
ORA-00001
ORA-00018
ORA-00060
ORA-00108
ORA-00130
ORA-00270
ORA-00308
ORA-00312
ORA-00600
ORA-00604
ORA-01013
ORA-01034
ORA-01089
ORA-01151
ORA-01172
ORA-01555
ORA-03113
ORA-03135
ORA-06512
ORA-07217
ORA-07445
ORA-1013
ORA-1089
ORA-1109
ORA-1113
ORA-1116
ORA-1119



Sort Files By Size :
ls -l |sort -k 5

Find archive files and move them to another folder :
find ./ -name "*.arch" -mtime +1 -exec mv {} /u01/ \;

Find archive files and remove them :
find ./ -name "*.ARC" -mtime +1 -exec rm {} \;

Find archive files and gzip them :
find ./ -name "*.ARC" -mtime +1 -exec gzip {} \;

Find archive files and list them :
find ./ -name "*.ARC" -mtime +1 -ls

Zombie process

Today the sysadmin team told us our DB server had many zombie processes.
After doing some googling I found that a zombie process is an inactive computer process: "On Unix operating systems, a zombie process or defunct process is a process that has completed execution but still has an entry in the process table, allowing the process that started it to read its exit status. In the term's colorful metaphor, the child process has died but has not yet been reaped..."


Find zombies with:

# ps aux | awk '{ print $8 " " $2 }' | grep -w Z

Note that kill -9 on the zombie's own PID has no effect, because the process is already dead; the entry remains only because the parent has not reaped it. Signal or restart the parent instead.
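A minimal sketch (the PID 12345 is hypothetical):

# find the zombie's parent
ps -o ppid= -p 12345
# ask the parent to reap its dead children; if that fails, restart the parent
kill -s SIGCHLD <parent-pid>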

Linux RunLevel

During boot, the init process reads a file called "/etc/inittab"; from this file Linux decides which runlevel the system should boot into. After the OS starts, you can inspect "/etc/inittab" with an editor (for example vim).

There are different runlevels in Linux that you should know about:

0 - halt (Do NOT set initdefault to this).
1 - Single user mode.
2 - Multiuser, without NFS (The same as 3, if you do not have networking).
3 - Full multiuser mode.
4 - unused.
5 - X11.
6 - reboot (Do NOT set initdefault to this).
The above modes are listed in /etc/inittab and you can check them; when you open the file you will see a line such as:


 id:5:initdefault:
which indicates the default runlevel, and you can change it.

Short Description for the RunLevels :

Runlevel 0:

Causes the system to shut down; you can't set this as the default, and there is no reason to do so.

RunLevel 1:

At this level the system starts in what is called Single User Mode, which means only the root user can log in to the system. Note there is no networking in this mode; it is useful for repair and maintenance.

RunLevel 2:

The system boots into multiuser mode, which means any user can log in, but without networking.

RunLevel 3:

It is the same as runlevel 2 but with networking; this level is the common default for most Linux servers.

RunLevel 4:

A custom boot level (undefined by default).

RunLevel 5:

Multiuser mode with networking and the X Window System, which means that when the OS finishes booting, a graphical "welcome" login screen appears and users can log in; this is what you see on desktop Linux distributions such as Red Hat.

RunLevel 6:

Reboots your operating system; you surely don't want to set this as the default either.

You can switch to any runlevel with the command: init <run-level-number>
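For example, on a SysV-init system:

# show previous and current runlevel
runlevel
# switch to full multiuser mode
init 3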

Connect to Oracle without Tnsnames.ora

Sometimes you can bypass tnsnames.ora and connect through SQL*Plus without even creating a new alias; all you have to do is put all of the connectivity information in your connection string. This type of connection is called "EZCONNECT".

sqlplus username/password@[//]host[:port][/service_name]
To enable EZCONNECT, add the line below to sqlnet.ora, which is located in $ORACLE_HOME/network/admin. Then check the examples that follow:

NAMES.DIRECTORY_PATH=(ezconnect, tnsnames)

The example below uses the default listener port 1521:

1- sqlplus scott/tiger@myservername/orcl1

With a different listener port, or spelling out the full //host:port/service syntax, it looks like this:

2- sqlplus scott/tiger@myservername:1522/orcl1

3- sqlplus USER/PASSWORD@//myservername:1521/orcl1

If you want to disable EZCONNECT for security reasons, change the line in sqlnet.ora to:

NAMES.DIRECTORY_PATH=(tnsnames)

Hack oracle Password


If you have any user with the DBA role, then you can do the following (this works on older releases such as 10g, where DBA_USERS still exposes the hash in its PASSWORD column):

SQL> select utl_inaddr.get_host_name((select username||'='||password from dba_users where rownum=1)) from dual;
select utl_inaddr.get_host_name((select username||'='||password from dba_users where rownum=1)) from dual
       *
ERROR at line 1:
ORA-29257: host SYS=ABB9BAD22B911985 unknown
ORA-06512: at "SYS.UTL_INADDR", line 4
ORA-06512: at "SYS.UTL_INADDR", line 35
ORA-06512: at line 1


SQL> 

Now you can use any cracking software or online site to attack this password hash.


Kinds of Oracle Patches :

Let's discuss Oracle patches.

There are four kinds of Oracle patches:

One-off patches (bug fixes)
CPU (security patches)
Upgrade patches (bug fixes)
PSU (bug fixes plus security patches)
There is another kind called Bundle Patches, for Windows and Exadata.
The two kinds that people get a little confused about are CPU and PSU: what are they? When should each be used? Is there any difference between them?

The first thing to know is that since they have different names, they are of course different things.

CPU: contains only the security fixes for each quarter.

PSU: released on the same quarterly schedule as CPU patches, but includes both the security fixes and priority bug fixes. Note that once a PSU is applied, only PSUs can be applied in future quarters until the database is upgraded to a new base version.
This means you can't alternate CPU and PSU on the same database.

Some MOS note that could be Useful :

Introduction to Oracle Patch Set Updates (PSU) [ID 854428.1]
Quick Reference to Patchset Patch Numbers [ID 753736.1]
Quick Reference to Patch Numbers for Database PSU, SPU(CPU), Bundle Patches and Patchsets [ID 1454618.1]
New Patch Nomenclature for Oracle Products [ID 1430923.1]

PSUs contain fixes for bugs that cause instance crashes, wrong results and data corruption; on the other hand, dictionary changes, major algorithm changes and optimizer plan changes are not fixed by a PSU.

To check applied PSU patches you need to run:
opatch lsinventory -bugs_fixed | grep -i 'DATABASE PSU'
And if you need to check CPUs:
select * from registry$history;

Something about the Oracle inventory

The inventory is created when you install the database software, and it is used for upgrades and patches. There are two kinds of Oracle inventory: one called the central inventory, per server, and another called the local inventory, one per ORACLE_HOME. The difference between them: the central inventory basically contains a high-level list of components installed on the server. It is updated each time components are installed or uninstalled, but it does not have detailed information such as the patch level of each ORACLE_HOME. The local inventory, on the other hand, contains component information together with patch-level information.

Another difference: the Central Inventory contains the information related to all Oracle products installed on a host, and consists of a file called inventory.xml, which lists all the Oracle Homes installed on the node.
The local inventory ($ORACLE_HOME/inventory) contains the detail of the patch level for that ORACLE_HOME. The Local Inventory gets updated whenever a patch is applied to the ORACLE_HOME using OUI. If the Local Inventory becomes corrupt or is lost, it is very difficult to recover, and it may be necessary to reinstall the ORACLE_HOME and re-apply all patchsets and patches.
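On Linux, a quick way to locate the central inventory is the oraInst.loc pointer file (the paths shown are typical, not universal):

$ cat /etc/oraInst.loc
inventory_loc=/u01/app/oraInventory
inst_group=oinstall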

For More Information about this topic refer to MOS :
 Global and Local Inventory explained [ID 360079.1]
FAQs on Central Inventory and Oracle Home Inventory (Local Inventory) in Oracle RDBMS [ID 564192.1]

Oracle patch applied details

The easiest way to get this information is from the OPatch utility, which can be found under $ORACLE_HOME/OPatch.
Example:
$ORACLE_HOME/OPatch/opatch lsinventory
The above command lists all the patches currently applied to your Oracle environment, using the files in the oraInventory. Today I did some googling to find the same information from inside the database; there are two views that can be used.
Example:
SQL> select * from v$version;
SQL> select * from sys.registry$history;
I really like sys.registry$history because it gives you a running history of your applied patches and upgrades.


Import Data Pump job status

Often I find myself having to do imports of data, and during the import process we watch the job status using the options below.
  1. Use the UNIX "ps -ef" command to track the import process. A good way to make sure that the process hasn't errored out and quit.
  2. From the UNIX command prompt, use "tail -f" against the import log file. This will give you updates as the log file records the import's progress.
  3. Set the "status" parameter either on the command line or in the parameter file for the import job. This will display the status of the job on your standard output.
  4. Use the database view "dba_datapump_jobs" to monitor the job. This view will tell you a few key items about the job. The important column in the view is STATE; if this column says "EXECUTING" then the job is currently running.
  5. Lastly, a good way to watch this process is the "v$session_longops" view, which gives you a way to calculate the percentage completed.
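A minimal sketch of options 4 and 5 (standard views; the totalwork > 0 filter just skips rows with no progress data):

SQL> select owner_name, job_name, operation, job_mode, state
     from dba_datapump_jobs;

SQL> select sid, opname, target, sofar, totalwork,
            round(sofar/totalwork*100, 2) pct_done
     from v$session_longops
     where totalwork > 0
     and sofar <> totalwork;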

Unreachable/metric collection error…

One of the most confusing (and frustrating) things with Oracle Enterprise Manager is figuring out why agents are not uploading from time to time.

An unreachable agent can be caused by almost anything: a firewall blocking the required upload port, invalid DNS entries, hostname configuration issues, or network-related issues. In most of these cases, except for agent configuration issues, the DBA doesn't have access to resolve the problem and requires the assistance of different departments within IT. Sometimes these external resources are not available to help troubleshoot and resolve network issues.




ENABLING DATABASE ARCHIVELOG MODE

Today I had to put one of my databases into archivelog mode. Let's take a look at the steps needed to place the database into archivelog mode.


First we want to connect to the database as SYSDBA.

sqlplus / as sysdba

Let's see what mode we are currently in. The archive log list command will provide feedback on the current mode of the database.

SQL> archive log list
Database log mode No Archive Mode
Automatic archival Disabled
Archive destination USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence 393
Current log sequence 395
Now that we know the database is in noarchivelog mode, we need to identify a location for the archive logs. In current releases of the database we can store archive logs in the fast_recovery_area. For example purposes we are going to put the archive logs in a different area.

SQL> alter system set log_archive_dest_1='LOCATION=/oracle/archivelogs' scope=spfile;

Once we set our destination for archive logs, we need to shut down the database.

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.

Before we can switch the mode of the database, we need to mount the database.

SQL> startup mount
ORACLE instance started.
Total System Global Area 801701888 bytes
Fixed Size 2232640 bytes
Variable Size 524291776 bytes
Database Buffers 268435456 bytes
Redo Buffers 6742016 bytes
Database mounted.
Now we can switch the mode of the database by using the alter database command.

SQL> alter database archivelog;

Database altered.
Once the mode has been switched to archivelog, we can open the database.

SQL> alter database open;

Database altered.
Let's verify that the database is in archivelog mode now.

SQL> archive log list 
Database log mode Archive Mode
Automatic archival Enabled
Archive destination /oracle/archivelogs
Oldest online log sequence 393
Next log sequence to archive 395
Current log sequence 395
As an additional test, to make sure that archive logs are going to the correct location, we can switch the log file.

SQL> alter system switch logfile;
System altered.
SQL> !ls -ltr /oracle/archivelogs
total 7432
-rw-rw----. 1 oracle oracle 7607296 May 22 09:15 1_395_809518082.dbf


TABLESPACE SPACE USED (%) IN OEM 12C

Daily, people ask the OEM team why the "Tablespace Space Used (%)" metric is giving wrong data. After doing some research and googling, I found the following.
For as many OEMs as I've set up, the one metric that has always amazed me has been the "Tablespace Space Used (%)" metric. This metric is often misunderstood, although it "should" be quite simple to understand. What is so hard to understand about percentage (%) used?
In reviewing the documentation for OEM 11g and OEM 12c, the explanation for this metric has not changed much between releases. The calculation that is performed to trigger this metric is really simple math:
Tablespace Space Used (%) = (TotalUsedSpace / MaximumSize) * 100
Once this metric has been triggered, most DBAs start scrambling to perform one of the following tasks:
- Increase the size of the tablespace
- Reorganize the entire tablespace (fragmentation issues)
- Relocate segments to another tablespace
- Run Segment Advisor on the tablespace
What I have come to find out is that sometimes OEM will trigger this metric even though the data files may not need any adjustment. To get a clearer understanding of what caused this metric to trigger, we need to look at the "fulTbsp.pl" script. This script is located in the $AGENT_HOME/sysman/admin/scripts directory.
In reviewing the "fulTbsp.pl" script, Oracle is not only looking at the current size of the data files and the maxsize of the data files; it is looking at the file-system space as well. The reason for this is to ensure that the data files have enough space to expand if needed.
Now, here is where it can become misleading. By setting the Tablespace Space Used (%) metric critical threshold to 95, we think the metric will trigger when the tablespace reaches 95% used, correct? Before rushing to perform the tasks above, let's check what space is actually used in the tablespace. Oracle provides us with a DBA view (DBA_TABLESPACE_USAGE_METRICS) to review the percentage of the tablespace used. Below is a sample query for getting the usage of a tablespace:
-- used_space and tablespace_size are reported in blocks,
-- so convert to MB via the tablespace block size
select
           m.tablespace_name,
           round(m.used_space * t.block_size / (1024*1024), 2) used_mb,
           round(m.tablespace_size * t.block_size / (1024*1024), 2) size_mb,
           round(m.used_percent, 2) pct_used
from
           dba_tablespace_usage_metrics m
           join dba_tablespaces t on t.tablespace_name = m.tablespace_name
where
           round(m.used_percent, 2) > 90;
Often I have found that when an alert is triggered for the Tablespace Space Used (%) metric, the data files are less than 90% full. This is because the alert is triggered when OEM determines that there is not enough space on the file system to expand the data files if needed. If you keep this in mind, you'll be able to keep a firm grasp on what is going on with OEM and your tablespaces.

Saturday, July 6, 2013

DB Health Check

Thanks to The Human Fly via Twitter @sjaffarhussain, I see that Oracle Corporation has published a note on How to Perform a Database Health Check (Note 122669.1). I read this with some interest, as this is something I do quite frequently as part of my day job. Here is the contents list of that note.
1. Parameter file
2. Controlfiles
3. Redolog files
4. Archiving
5. Datafiles
5.1 Autoextend
5.2 Location
6. Tablespaces
6.1 SYSTEM Tablespace
6.2 SYSAUX Tablespace (10g Release and above)
6.3 Locally vs Dictionary Managed Tablespaces
6.4 Temporary Tablespace
6.5 Tablespace Fragmentation
7. Objects
7.1 Number of Extents
7.2 Next extent
7.3 Indexes
8. AUTO vs MANUAL undo
8.1 AUTO UNDO
8.2 MANUAL UNDO
9. Memory Management
9.1 Pre Oracle 9i
9.2 Oracle 9i
9.3 Oracle 10g
9.4 Oracle 11g
10. Logging & Tracing
10.1 Alert File
10.2 Max_dump_file_size
10.3 User and core dump size parameters
10.4 Audit files



The difference between Delete and Truncate in Oracle

  1. “Truncate is fast, delete is slow”. Hmmm, well that’s mostly true I guess. The most basic of all answers.
  2. “Truncate is DDL, delete is DML”. Is this just so obvious that people don’t mention it?
  3. “You can’t rollback a truncate”. Indeed. See 2 above, of course. And 1.
  4. “Truncate is implicitly a commit”. A better answer than 3 I think. I seem to recall that someone once mentioned that there are actually two commits in a truncate, one before and one after … but maybe I dreamed that. I should test it really.
  5. “You can’t grant permission to truncate a table”. Ah, practical experience shines through. If you don’t like your ETL process to connect as the owner of the schema then this is a challenge that has to be overcome with stored procedures or something sophisticated like that. You really don’t want to grant “DROP ANY TABLE” to your ETL user.
  6. “You can delete any subset of rows, but you can only truncate the complete table, or a partition or subpartition of it”. Is this also so obvious that nobody mentions it?
  7. “Truncate makes unusable indexes usable again”. A real gotcha for the unwary. If you attempt to optimise a data load by rendering indexes unusable and truncating a table (possibly followed by an index rebuild and a partition exchange) then be careful of the order.
  8. “Truncate can’t maintain foreign keys”. It’s “cascading delete”, not “cascading truncate”. That would be an interesting feature though — point 4 above would make it a little trickier. Truncating an index cluster is pretty close to a “cascading truncate” to a limited extent though. In any case no truncate is permitted on a table referenced by foreign keys.
  9. “You can’t flashback a truncate”. This is an oddity to me. We can flashback a “drop table”, rollback uncommitted deletes, or use flashback to recover pre-commit deleted data, but a truncate is a barrier across which we cannot flashback.
  10. “Truncate deallocates space, delete doesn’t”. Unless you want it not to, using the “reuse storage” clause. However the high water mark is reset in either case so maybe that’s a better answer …
  11. “Truncate resets the high water mark, delete doesn’t”. And on the indexes, also. See 7 above also.
  12. “Truncate allows special treatment of materialized view logs”. More of that practical experience and/or documentation-reading skill shows through.
  13. “DML triggers do not fire on a truncate”. Because … um … it’s DDL not DML.

    I suppose there are other issues as well, but this shows that it’s a complex business.  I really wish we could grant “Truncate table” to a user or role … enhancement request time I think.
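A tiny illustration of points 3, 4 and 10, using a hypothetical table t:

SQL> delete from t;       -- DML: can be rolled back
SQL> rollback;            -- the rows come back

SQL> truncate table t;                 -- DDL: implicit commit, no rollback
SQL> truncate table t reuse storage;   -- keep allocated extents; the high water mark is still reset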

ORA-01422..

There was an interesting issue at one of the customer sites. A few tables in the database were altered and the dependent objects became invalid. But the attempts to compile the objects, using utlrp.sql or manually, were failing. In all cases it was giving the same error:
SQL> alter function SCOTT.SOME_FUNCTION compile;
 alter function SCOTT.SOME_FUNCTION compile
*
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-01422: exact fetch returns more than requested number of rows
ORA-06512: at line 27

SQL>

At first look it sounded like some issue with the dictionary, as the error was the same for every object (be it a view, function or package).
Everybody was trying to compile the invalid objects, and surprisingly a few VIEWs (that would not compile from SQL*Plus) got compiled from Toad! But that didn't explain anything. In fact it was more confusing.
Finally I enabled errorstack for event 1422 and tried to compile a view. Here is the relevant content from the trace file
----- Error Stack Dump -----
ORA-01422: exact fetch returns more than requested number of rows
----- Current SQL Statement for this session (sql_id=7kb01v7t6s054) -----
SELECT SQL_TEXT FROM V$OPEN_CURSOR VOC, V$SESSION VS WHERE VOC.SADDR = VS.SADDR AND AUDSID=USERENV('sessionid') AND UPPER(SQL_TEXT) LIKE 'ALTER%'

I took it to be some system SQL and started searching in that direction and obviously that was of no use.
In the meantime another guy almost shouted… "oh, there is a trigger to capture DDL operations in the database; it must be that". And indeed it was. Here is the code that was creating the problem:
 select sql_text into vsql_text
           from v$open_cursor voc, v$session vs
           where voc.saddr = vs.saddr
           and audsid=userenv('sessionid')
           and upper(sql_text) like 'ALTER%';
 
As v$open_cursor was returning multiple rows into a single-row SELECT ... INTO, hence the problem!
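One possible fix, as a sketch (vsql_text is the trigger's own variable from above; rownum = 1 guarantees a single row):

select sql_text into vsql_text
from v$open_cursor voc, v$session vs
where voc.saddr = vs.saddr
and audsid = userenv('sessionid')
and upper(sql_text) like 'ALTER%'
and rownum = 1;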

Thursday, July 4, 2013

Turn off redo log generation


The new TRANSFORM option introduced in Oracle 12c Data Pump import provides the flexibility to turn off redo generation for objects during the course of an import. When the DISABLE_ARCHIVE_LOGGING value is specified with the TRANSFORM option, redo generation for the objects in context is turned off for the entire import. This feature provides great relief when importing large tables, reducing excessive redo generation and resulting in quicker imports. This attribute applies to tables and indexes. Note that if the database is in FORCE LOGGING mode, the transform has no effect and redo is still generated.
This example demonstrates this feature:
$ ./impdp directory=dpump dumpfile=abcd.dmp logfile=abcd.log TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y