Saturday, July 13, 2013

Unreachable/metric collection error…

One of the most confusing (and frustrating) things with Oracle Enterprise Manager is figuring out why agents stop uploading from time to time.

An unreachable agent can be caused by almost anything: a firewall blocking the required upload port, invalid DNS entries, hostname configuration problems, or other network-related issues.  In most of these cases, apart from agent configuration issues, the DBA doesn’t have the access needed to resolve the problem and requires the assistance of other departments within IT.  Sometimes those external resources are not available to help troubleshoot and resolve network issues.
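
Before escalating, a quick first check from the agent host can rule out the agent itself.  A minimal sketch using the standard emctl commands (the agent home path is an assumption for your install, and on 11g the upload command is just "emctl upload"):

$ $AGENT_HOME/bin/emctl status agent    # shows last successful upload and any pending upload files
$ $AGENT_HOME/bin/emctl upload agent    # forces an immediate upload attempt to the OMS
$ $AGENT_HOME/bin/emctl pingOMS         # verifies basic connectivity from the agent to the OMS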

ENABLING DATABASE ARCHIVELOG MODE

Today I had to put one of my databases into archivelog mode.  Let’s take a look at the steps needed to place the database into archivelog mode.


First we want to connect to the database as SYSDBA.

sqlplus / as sysdba

Let’s see what mode we are currently in.  The archive log list command will provide feedback on the current mode of the database.

SQL> archive log list
Database log mode              No Archive Mode
Automatic archival             Disabled
Archive destination            USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence     393
Current log sequence           395
Now that we know the database is in noarchivelog mode, we need to identify a location for the archive logs.  In current releases of the database we can store archive logs in the fast recovery area.  For example purposes we are going to put the archive logs in a different location.

SQL> alter system set log_archive_dest_1='LOCATION=/oracle/archivelogs' scope=spfile;
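
Alternatively, if we wanted to use the fast recovery area instead of a dedicated directory, a minimal sketch would look like this (the size and path are assumptions; adjust them for your environment):

SQL> alter system set db_recovery_file_dest_size=10G scope=both;
SQL> alter system set db_recovery_file_dest='/oracle/fra' scope=both;

With db_recovery_file_dest set and no log_archive_dest_n parameters defined, archived logs would then default to USE_DB_RECOVERY_FILE_DEST.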

Once we set our destination for archive logs, we need to shut down the database.

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.

Before we can switch the mode of the database, we need to mount the database.

SQL> startup mount
ORACLE instance started.
Total System Global Area 801701888 bytes
Fixed Size 2232640 bytes
Variable Size 524291776 bytes
Database Buffers 268435456 bytes
Redo Buffers 6742016 bytes
Database mounted.
Now we can switch the mode of the database by using the alter database command.

SQL> alter database archivelog;

Database altered.
Once the mode has been switched to archivelog, we can open the database.

SQL> alter database open;

Database altered.
Let’s verify that the database is in archivelog mode now.

SQL> archive log list
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            /oracle/archivelogs
Oldest online log sequence     393
Next log sequence to archive   395
Current log sequence           395
As an additional test to make sure that archive logs are going to the correct location, we can switch the log file.

SQL> alter system switch logfile;
System altered.
SQL> !ls -ltr /oracle/archivelogs
total 7432
-rw-rw—-. 1 oracle oracle 7607296 May 22 09:15 1_395_809518082.dbf


TABLESPACE SPACE USED (%) IN OEM 12c

Daily we used to ask the OEM team why the “Tablespace Space Used (%)” metric was giving us wrong data.  After doing some research and googling, I found the following.
For as many OEMs as I’ve set up, the one metric that has always amazed me has been the “Tablespace Space Used (%)” metric.  This metric is often misunderstood, although it “should” be quite simple to understand.  What is so hard to understand about percentage (%) used?
In reviewing the documentation for OEM 11g and OEM 12c, the explanation for this metric has not changed much between releases.  The calculation that is performed to trigger this metric is really simple math:
Tablespace Space Used (%) = (TotalUsedSpace / MaximumSize) * 100
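For example, a tablespace holding 9.5 GB of data with a 10 GB maximum size reports (9.5 / 10) * 100 = 95% used, which would trip a critical threshold set at 95.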
Once this metric has been triggered, most DBAs start scrambling to perform one of the following tasks:
- Increase the size of the tablespace
- Reorganize the entire tablespace (fragmentation issues)
- Relocate segments to another tablespace
- Run Segment Advisor on the tablespace
What I have come to find out is that sometimes OEM will trigger this metric even though the data files may not need any adjustment.  In order to get a clearer understanding of what caused this metric to trigger, we need to look at the “fulTbsp.pl” script.  This script is located in the $AGENT_HOME/sysman/admin/scripts directory.
In reviewing the “fulTbsp.pl” script, Oracle is not only looking at the current size of the data files and the maxsize of the data files; it is looking at the file system space as well.  The reason for this is to ensure that the data files have enough space to expand if needed.
Now, here is where it can become misleading.  By setting the critical threshold for the Tablespace Space Used (%) metric to 95, we assume the metric will trigger when the tablespace reaches 95% used, correct?  Before rushing to perform the tasks above, let’s check and see what space is actually used in the tablespace.  In order to do this, Oracle provides us with a DBA view (DBA_TABLESPACE_USAGE_METRICS) to review the percentage of tablespace used.  Below I have provided a sample query for getting the usage of a tablespace:
select
           m.tablespace_name,
           round(m.used_space * t.block_size / (1024*1024), 2) used_mb,      -- used_space is reported in blocks
           round(m.tablespace_size * t.block_size / (1024*1024), 2) max_mb,  -- tablespace_size is reported in blocks
           round(m.used_percent, 2) used_pct
from
           dba_tablespace_usage_metrics m
           join dba_tablespaces t on t.tablespace_name = m.tablespace_name
where
           round(m.used_percent, 2) > 90;
Often, I have found that when an alert is triggered for the Tablespace Space Used (%) metric, the data files are less than 90% full.  This is because OEM determined that there is not enough space on the file system to expand the data files if needed.  If you keep this in mind, you’ll be able to keep a firm grasp on what is going on with both OEM and your tablespaces.
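
To cross-check whether file system headroom really is the constraint, compare each data file’s current size against its autoextend ceiling, and then compare that gap with the free space df reports on the mount point.  A sketch against the standard DBA_DATA_FILES view (the tablespace name here is an assumption):

select
           file_name,
           round(bytes/(1024*1024), 2) current_mb,
           round(maxbytes/(1024*1024), 2) max_mb,   -- autoextend ceiling; 0 when autoextend is off
           autoextensible
from
           dba_data_files
where
           tablespace_name = 'USERS';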

Saturday, July 6, 2013

DB Health Check

Thanks to The Human Fly via Twitter @sjaffarhussain, I see that Oracle Corporation has published a note on How to Perform a Database Health Check (Note 122669.1). I read this with some interest as this is something that I do quite frequently as part of my day job. Here is the contents list of that note.
1. Parameter file
2. Controlfiles
3. Redolog files
4. Archiving
5. Datafiles
5.1 Autoextend
5.2 Location
6. Tablespaces
6.1 SYSTEM Tablespace
6.2 SYSAUX Tablespace (10g Release and above)
6.3 Locally vs Dictionary Managed Tablespaces
6.4 Temporary Tablespace
6.5 Tablespace Fragmentation
7. Objects
7.1 Number of Extents
7.2 Next extent
7.3 Indexes
8. AUTO vs MANUAL undo
8.1 AUTO UNDO
8.2 MANUAL UNDO
9. Memory Management
9.1 Pre Oracle 9i
9.2 Oracle 9i
9.3 Oracle 10g
9.4 Oracle 11g
10. Logging & Tracing
10.1 Alert File
10.2 Max_dump_file_size
10.3 User and core dump size parameters
10.4 Audit files
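
Several of these checks are one-liners in SQL*Plus.  As a rough sketch, items 2 through 4 (controlfiles, redo logs, and archiving) could start with queries like the following:

SQL> select name from v$controlfile;
SQL> select group#, members, bytes/1024/1024 mb, status from v$log;
SQL> archive log list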



The difference between Delete and Truncate in Oracle

  1. “Truncate is fast, delete is slow”. Hmmm, well that’s mostly true I guess. The most basic of all answers.
  2. “Truncate is DDL, delete is DML”. Is this just so obvious that people don’t mention it?
  3. “You can’t rollback a truncate”. Indeed. See 2 above, of course. And 1.
  4. “Truncate is implicitly a commit”. A better answer than 3 I think. I seem to recall that someone once mentioned that there are actually two commits in a truncate, one before and one after … but maybe I dreamed that. I should test it really.
  5. “You can’t grant permission to truncate a table”. Ah, practical experience shines through. If you don’t want your ETL process to connect as the owner of the schema then this is a challenge that has to be overcome with stored procedures or something sophisticated like that. You really don’t want to grant “DROP ANY TABLE” to your ETL user. (A sketch of the stored-procedure workaround follows below.)
  6. “You can delete any subset of rows, but you can only truncate the complete table, or a partition or subpartition of it”. Is this also so obvious that nobody mentions it?
  7. “Truncate makes unusable indexes usable again”. A real gotcha for the unwary. If you attempt to optimise a data load by rendering indexes unusable and truncating a table (possibly followed by an index rebuild and a partition exchange) then be careful of the order.
  8. “Truncate can’t maintain foreign keys”. It’s “cascading delete”, not “cascading truncate”. That would be an interesting feature though — point 4 above would make it a little trickier. Truncating an index cluster is pretty close to a “cascading truncate” to a limited extent though. In any case no truncate is permitted on a table referenced by foreign keys.
  9. “You can’t flashback a truncate”. This is an oddity to me. We can flashback a “drop table”, rollback uncommitted deletes, or use flashback to recover pre-commit deleted data, but a truncate is a barrier across which we cannot flashback.
  10. “Truncate deallocates space, delete doesn’t”. Unless you want it not to, using the “reuse storage” clause. However the high water mark is reset in either case so maybe that’s a better answer …
  11. “Truncate resets the high water mark, delete doesn’t”. And on the indexes, also. See 7 above also.
  12. “Truncate allows special treatment of materialized view logs”. More of that practical experience and/or documentation-reading skill shows through.
  13. “DML triggers do not fire on a truncate”. Because … um … it’s DDL not DML.

    I suppose there are other issues as well, but this shows that it’s a complex business.  I really wish we could grant “Truncate table” to a user or role … enhancement request time I think.
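
On point 5, the usual workaround is a definer-rights procedure owned by the schema owner, so the ETL account only needs EXECUTE on the procedure.  A minimal sketch, assuming a hypothetical schema APP_OWNER and ETL account ETL_USER:

create or replace procedure app_owner.truncate_stage_table (p_table in varchar2)
authid definer
as
begin
   -- dbms_assert.simple_sql_name rejects anything that is not a plain
   -- identifier, guarding against SQL injection via the table name
   execute immediate 'truncate table app_owner.' ||
                     dbms_assert.simple_sql_name(p_table);
end;
/
grant execute on app_owner.truncate_stage_table to etl_user;

ETL_USER can then call app_owner.truncate_stage_table('STAGE_SALES') without holding DROP ANY TABLE or owning the schema.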

ORA-01422…

There was an interesting issue at one of the customer sites. A few tables in the database were altered and the dependent objects became invalid. But attempts to compile the objects using utlrp.sql or manually kept failing. In every case the same error was raised:
SQL> alter function SCOTT.SOME_FUNCTION compile;
 alter function SCOTT.SOME_FUNCTION compile
*
ERROR at line 1:
ORA-00604: error occurred at recursive SQL level 1
ORA-01422: exact fetch returns more than requested number of rows
ORA-06512: at line 27

SQL>

At first look it sounded like some issue with the dictionary, as the error for every object (be it a view, function, or package) was the same.
Everybody was trying to compile the invalid objects and, surprisingly, a few VIEWs (that were not getting compiled from SQL*Plus) got compiled from Toad! But that didn’t explain anything. In fact it was more confusing.
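For reference, an errorstack dump for a specific ORA- error can be switched on with the standard diagnostic event syntax below (level 3 includes the process state; treat it as a sketch and test it outside production first):

SQL> alter system set events '1422 trace name errorstack level 3';
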
Finally I enabled errorstack for event 1422 and tried to compile a view. Here is the relevant content from the trace file:
----- Error Stack Dump -----
ORA-01422: exact fetch returns more than requested number of rows
----- Current SQL Statement for this session (sql_id=7kb01v7t6s054) -----
SELECT SQL_TEXT FROM V$OPEN_CURSOR VOC, V$SESSION VS WHERE VOC.SADDR = VS.SADDR AND AUDSID=USERENV('sessionid') AND UPPER(SQL_TEXT) LIKE 'ALTER%'

I took it to be some internal system SQL and started searching in that direction, which obviously was of no use.
In the meantime another guy almost shouted, “oh, there is a trigger to capture DDL operations in the database; it must be that”. And indeed it was. Here is the code that was creating the problem:
 select sql_text into vsql_text
           from v$open_cursor voc, v$session vs
           where voc.saddr = vs.saddr
           and audsid=userenv('sessionid')
           and upper(sql_text) like 'ALTER%';
 
As v$open_cursor was returning multiple rows for the session, the SELECT ... INTO raised ORA-01422; hence the problem!
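
Since v$open_cursor can hold more than one matching cursor for the same session, one defensive rewrite (a sketch of a possible fix, not necessarily what the site ended up using) is to cap the fetch at a single row:

select sql_text into vsql_text
  from (select sql_text
          from v$open_cursor voc, v$session vs
         where voc.saddr = vs.saddr
           and audsid = userenv('sessionid')
           and upper(sql_text) like 'ALTER%')
 where rownum = 1;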

Thursday, July 4, 2013

Turn off redo log generation


The new TRANSFORM option introduced in Data Pump import provides the flexibility to turn off redo generation for objects during the course of an import. When the DISABLE_ARCHIVE_LOGGING value is specified with the TRANSFORM option, redo generation for the objects in context is turned off for the entire import duration. This feature provides great relief when importing large tables and reduces excessive redo generation, which results in quicker imports. The attribute applies to tables and indexes. Note that if the database is in FORCE LOGGING mode, this transform has no effect and redo is generated anyway.
This example demonstrates this feature:
$ ./impdp directory=dpump dumpfile=abcd.dmp logfile=abcd.log TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y