Monday, December 12, 2011

How to Tune a Query That Cannot Be Modified?



Sometimes a query cannot be modified because it is hardcoded in an application or is generated by application code.

Query tuning issues can often be alleviated by modifying the query in question, either to add hints or to change the query structure to avoid an issue. However, in some cases the query cannot be modified because it is hardcoded in an application or is generated by application code.

Oracle provides a way to change the execution plan of a query without changing the code itself. This feature, known as Stored Outlines, also lets us insulate our application execution plans against structural changes or upgrades. The idea is to store the execution plan in Oracle-provided tables, which the optimizer later refers to when executing the query, thus providing optimizer plan stability.

Stored outlines are global and apply to all identical statements, regardless of the user issuing them. As user SYS, run $ORACLE_HOME/rdbms/admin/dbmsol.sql.

PARAMETERS TO BE SET

CREATE_STORED_OUTLINES: Setting this to TRUE, or to a category name, automatically enables the creation and storing of outlines for every subsequent query. The DEFAULT category is used unless one is specified.

USE_STORED_OUTLINES: Enables the use of public stored outlines. Setting USE_STORED_OUTLINES to TRUE causes the execution plans stored in the DEFAULT category to take effect; set the parameter to a category name to explicitly use a different category.

QUERY_REWRITE_ENABLED: Allows you to enable or disable query rewriting globally for the database.
EXAMPLE 1

select * from Joshi;

Execution Plan
----------------------------------------------------------
0      SELECT STATEMENT Optimizer=CHOOSE (Cost=1 Card=15 Bytes=852)
1    0   TABLE ACCESS (FULL) OF 'Joshi' (Cost=1 Card=15 Bytes=852)

CREATE THE OUTLINES AND THEN SWAP THEIR HINTS BY UPDATING THE OUTLN.OL$HINTS TABLE:

CREATE OR REPLACE OUTLINE ORIGINALSQL ON
select * from Joshi;

CREATE OR REPLACE OUTLINE HINTSQL ON
select /*+ index(Joshi) */ * from Joshi;

UPDATE OUTLN.OL$HINTS
SET OL_NAME = DECODE(OL_NAME, 'HINTSQL', 'ORIGINALSQL', 'ORIGINALSQL', 'HINTSQL')
WHERE OL_NAME IN ('ORIGINALSQL', 'HINTSQL');

SET THE FOLLOWING PARAMETERS:

alter session set query_rewrite_enabled = true;
alter session set use_stored_outlines = true;

NOTE THE INDEX SCAN WITHOUT THE ADDITION OF ANY HINT:

select * from Joshi;


Friday, December 9, 2011

RMAN Catalog: Useful Info


Q. What is Catalog database and How to Configure it ?

A. This is a separate database which contains the catalog schema. You can use the target database itself as the catalog database, but this is not recommended at all.

Q. How many catalog databases can I have?

A. You can have multiple catalog databases for the same target database, but RMAN can connect to only one catalog database at a time. Having multiple catalog databases is not recommended.

Q. Is it mandatory to use a catalog database?

A. No, it is optional.

Q. What is the advantage of a catalog database?

A. The catalog database is a secondary store of backup metadata. It is very useful if you lose the current control file, because all the backup information is preserved in the catalog schema. Also, older backup records age out of the control file depending on CONTROL_FILE_RECORD_KEEP_TIME, whereas the RMAN catalog database maintains the full backup history.

Q. What is the difference between catalog database & catalog schema ?

A. The catalog database is like any other database; it is simply the database that contains the RMAN catalog user's schema.

Q. What happens if the catalog database is lost?

A. Since the catalog database is optional, there is no direct effect from its loss. Create a new catalog database and register the target database with the newly created catalog. All backup information from the target database's current control file will be uploaded to the catalog schema. Any backup information that has already aged out of the target database's control file must be cataloged manually (catalog those backup pieces).

Saturday, November 19, 2011

IIT Entrance Exam 2012: Online Application Form

Hi,
The online IIT entrance form can be filled in using the link below.

http://www.iitg.ac.in/jee/

Let me know if you need more details.


Thanks
Vinod Joshi

Synchronize a Gap on the Standby Database When Archived Redo Logs Are Missing


Synchronize a Gap on the Standby DB when the Archived Logs are Lost

Summary
Case description: The archived logs are not present on the standby database and are no longer available on the production server, so the standby has lagged far behind the primary; many of the archived logs required to close the gap were removed and no backup of them was available.
To close the gap, we will perform an incremental backup that contains all transactions since the last SCN recorded on the standby database.
This technique is convenient when the total volume of transactions that make up the gap is smaller than the database itself.

Action Items
To prevent this problem from recurring, we recommend the following action items:
- Set up a script for daily backup and cleanup of the recovery area.
- Set up a job that checks free space in the archivelog destination and runs the backup and cleanup script before the scheduled run if required.

Implementation Steps

Cancel Recovery on Standby
SQL> alter database recover managed standby database cancel;

Trying to recover on Standby
SQL> recover standby database;
ORA-00279: change 11021986 generated at 01/09/2011 11:39:03 needed for thread 1
ORA-00289: suggestion : /arch/ORACLE/ORADATA/SATI/ARCHIVE/1_205_674755717.ARC
ORA-00280: change 11021986 for thread 1 is in sequence #205

Check current_scn on Standby
SQL> select current_scn from v$database;
CURRENT_SCN
-----------
11021980

Create an Incremental Backup from this SCN on the Primary DB
Z:\backup> rman target sys@PROD_PRIMARY
RMAN> backup incremental from scn 11021980 database tag 'FORSTANDBY' FORMAT '/BACKUP/FOR_STANDBY_%U';


Backup the Controlfile for 'Standby' on the Primary DB
RMAN> backup current controlfile for standby format '/BACKUP/FORSTDBYCTRL.bkp';

Transfer the Incremental Backup Sets from Primary DB to the Standby Server (Location = /Temp/Incr_Backup)
cd /BACKUP
copy FOR_STANDBY_*.*  \\STANDBY\ /TEMP/INCR_BACKUP
copy FORSTDBYCTRL.bkp  \\STANDBY\ /TEMP/INCR_BACKUP

Restore controlfile on the Standby
RMAN> RESTORE STANDBY CONTROLFILE FROM '/TEMP/INCR_BACKUP/FORSTDBYCTRL.BKP';

Catalog the Incremental Backups on the Standby Server
Note that for the catalog command to succeed you will need to move the backups to be within the Flash Recovery Area.
RMAN> catalog start with '/FRA/SATISTD/BACKUPSET';

Recover the Database and Cleanup Redologs on the Standby Server
RMAN> recover database noredo;


SQL> alter database flashback off;
SQL> alter database flashback on;
SQL> alter database recover managed standby database disconnect from session;

If more archived logs were created on the primary after the SCN-based incremental backup finished, you can copy them over and recover the standby database using the command: recover standby database;

Enable the broker at both sites and check
When enabling the broker again it will take over the responsibility of managing the site and will resynchronize both sites
SQL> alter system set dg_broker_start=true scope=both;

Wednesday, October 12, 2011

07 Interesting Parameters in UNIX


07 Interesting Parameters in UNIX env..

$?  -> exit status of the last command.

$#  -> number of arguments.

$*  -> list of all arguments.

$$  -> PID of the current shell.

$!  -> PID of the last background process.

$-  -> current shell option flags.

$0  -> name of the script or command being executed ($1 to $9 are the positional arguments).
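The parameters above can be seen in action in a small script. This is a sketch: the three arguments are simulated with `set --`, and the failing command is run deliberately so `$?` has something non-zero to capture.

```shell
#!/bin/sh
# Demonstrate the special shell parameters with simulated arguments.
set -- alpha beta gamma   # pretend the script was called with 3 arguments

argc=$#          # number of positional arguments
args=$*          # all arguments as a single string
pid=$$           # PID of the current shell
false && :      # a command that fails (list status 1, safe under set -e)
status=$?        # exit status of the last command
sleep 1 &        # start a background job
bgpid=$!         # PID of that background process
flags=$-         # current shell option flags
wait             # reap the background job before exiting

echo "argc=$argc args=($args) status=$status pid=$pid bgpid=$bgpid flags=$flags"
```

Running it prints one line summarizing each parameter; note that `$?` reflects the command immediately before it, which is why it is captured straight after the failing command.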

Saturday, September 10, 2011

Difference between /bin, /sbin, /usr/bin and /usr/sbin

/bin contains the essential binaries that can be executed by local and other users.
/sbin contains system administration binaries, such as useradd, that are executed by root;
a local user has to use sudo to run these.
/usr/bin and /usr/sbin contain non-essential binaries, e.g. office tools;
these are kept separate to minimize the size of the root filesystem.
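A quick way to see which of these directories a given command actually lives in is `command -v`. A minimal sketch (`ls` and `sh` are just example commands):

```shell
#!/bin/sh
# Print where some common commands are installed.
for cmd in ls sh; do
    path=$(command -v "$cmd")
    echo "$cmd is installed at $path"
done
```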

Starting and Stopping Daemons with runlevel

The startup scripts I have been mentioning in the /etc/init.d directory govern the activation of daemons that were installed with some of your Linux packages. The commands to start and stop them are universal.
Starting a Daemon
If a startup script exists in the /etc/init.d directory, then its daemon can be started by specifying its filename followed by the keyword "start" as seen here:
root@u-joshi:~# /etc/init.d/apache start
 * Starting apache 1.3 web server...
   ...done.
root@u-joshi:~#

Stopping a Daemon

Daemons can be stopped by specifying their script filename followed by the keyword "stop":
root@u-joshi:~# /etc/init.d/apache stop
 * Stopping apache 1.3 web server...
   ...done.
root@u-joshi:~#

Restarting a Daemon

Daemons usually read their configuration files only when they are started; therefore, if you edit the file, you have to restart the daemon for the new settings to become active. This can be done with the keyword "restart":
root@u-joshi:~# /etc/init.d/apache restart
 * Restarting apache 1.3 web server...
   ...done.
root@u-joshi:~#
chkconfig command can be used to adjust which applications start at each runlevel.
[root@joshi joshi ]# chkconfig --list
keytable 0:off 1:on  2:on  3:on 4:on  5:on 6:off
atd      0:off 1:off 2:off 3:on 4:on  5:on 6:off
syslog   0:off 1:off 2:on  3:on 4:on  5:on 6:off
gpm      0:off 1:off 2:on  3:on 4:on  5:on 6:off
kudzu    0:off 1:off 2:off 3:on 4:on  5:on 6:off
wlan     0:off 1:off 2:on  3:on 4:on  5:on 6:off
sendmail 0:off 1:off 2:off 3:on 4:off 5:on 6:off
...
...
[root@joshi  joshi ]# chkconfig --list | grep mail
sendmail 0:off 1:off 2:off 3:on 4:off 5:on 6:off
[root@joshi joshi ]#
The chkconfig command can be used with grep to determine the run levels in which sendmail will run. Here we see it will run at levels 3 and 5. 
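The same filtering can be scripted with awk instead of grep. Below is a sketch that lists every service enabled at runlevel 3; the sample data is hard-coded so the snippet runs even without chkconfig installed — in real use you would pipe `chkconfig --list` in instead.

```shell
#!/bin/sh
# List services that are "on" at runlevel 3 from chkconfig-style output.
sample='keytable 0:off 1:on 2:on 3:on 4:on 5:on 6:off
atd 0:off 1:off 2:off 3:on 4:on 5:on 6:off
sendmail 0:off 1:off 2:off 3:on 4:off 5:on 6:off'

# Field 1 is the service name; field 5 holds the runlevel-3 state.
on_at_3=$(printf '%s\n' "$sample" | awk '$5 == "3:on" { print $1 }')
echo "$on_at_3"
```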


Linux System Boot Runlevels Detail


Run Level   Directory         Description
0           /etc/rc.d/rc0.d   Halt
1           /etc/rc.d/rc1.d   Single-user mode
2           /etc/rc.d/rc2.d   Not used (user-definable)
3           /etc/rc.d/rc3.d   Full multi-user mode (no GUI interface)
4           /etc/rc.d/rc4.d   Not used (user-definable)
5           /etc/rc.d/rc5.d   Full multi-user mode (with GUI interface)
6           /etc/rc.d/rc6.d   Reboot


Based on the selected runlevel, the init process then executes startup scripts located in subdirectories of the /etc/rc.d directory. Scripts used for runlevels 0 to 6 are located in subdirectories /etc/rc.d/rc0.d through /etc/rc.d/rc6.d, respectively.

Determining the Default Boot runlevel

The default boot runlevel is set in the file /etc/inittab with the initdefault variable. When set to 3, the system boots up with the text interface on the VGA console; when set to 5, you get the GUI. Here is a snippet of the file (delete the initdefault line you don't need):
# Default runlevel. The runlevels used by RHS are:
# 0 - halt (Do NOT set initdefault to this)
# 1 - Single user mode
# 2 - Multiuser, without NFS (The same as 3, if you do not have networking)
# 3 - Full multiuser mode
# 4 - unused
# 5 - X11
# 6 - reboot (Do NOT set initdefault to this)
# 
id:3:initdefault:                         # Console Text Mode
id:5:initdefault:                         # Console GUI Mode

Note the following:
  • Most home users boot up with a Windows like GUI (runlevel 5)
  • Most techies will tend to boot up with a plain text-based command-line-type interface (runlevel 3)
  • Changing initdefault from 3 to 5, or vice-versa, has an effect upon your next reboot. See the following section on how to get a GUI login all the time until the next reboot.
  • Of course, don't set the initdefault value to 6 or your system will constantly reboot. Setting it to 0 will never allow it to start!
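Checking the default runlevel can also be scripted: the runlevel is the second colon-delimited field of the initdefault line. A sketch (the inittab line is hard-coded so it runs anywhere; in real use you would grep ':initdefault:' /etc/inittab):

```shell
#!/bin/sh
# Extract the default runlevel from an inittab-style initdefault line.
line='id:3:initdefault:'
runlevel=$(printf '%s' "$line" | cut -d: -f2)
echo "default runlevel is $runlevel"
```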

Troubleshooting : Known RMAN Performance Problems

This note has workarounds for the following bugs:


Bug 5219484 - CATALOG RESYNCS ARE VERY SLOW - ROUT TABLE HAS 6 MILLION ROWS +
Bug 5047758 - RMAN SLOW PERFORMANCE ON EXIT (UPDATING STATUS STATISTICS)
Bug 5247609 RMAN SLOW PERFORMANCE DURING REGISTER DATABASE/OPEN RESETLOGS
Bug 1551773 RMAN RESTORE RUNS VERY SLOWLY WHEN THERE ARE MANY ROWS IN THE CKP TABLE
Bug 2803823 RESYNC CATALOG AND MAINTENANCE COMMANDS ARE SLOW
Bug 4013855 - RMAN CROSSCHECK IS TAKING A LONG TIME TO COMPLETE
Note 339964.1 RMAN 8.1.7 Crosscheck Command Is Very Slow
Bug 4548861 RMAN RESTORE ARCHIVELOG UNTIL SEQUENCE TAKES A LONG TIME TO COMPLETE
Note 342999.1 First Resync on Production Host After Running RMAN on
Bug 2876717 RMAN backup starts SLOW if READ-ONLY device is used
Bug 1887868 RMAN RESYNC takes a long time with many tablespaces
Internal Bug 1712720 RMAN MAINTENANCE COMMANDS ARE VERY SLOW
Bug 2174697 RMAN incremental backup performance degrades over time in NOCATALOG mode
Bug 2385857 RMAN backup performance degrades over time for large DB_FILES
Bug 2484250 RMAN CUMULATIVE BACKUP / MAINTENENCE may appear to hang (spin)
Bug 2610596 Huge CPU usage from RMAN server when TAPE not mounted with BACKUP_TAPE_IO_SLAVE=TRUE
Bug 2968394 RESYNC TAKES A LONG TIME TO COMPLETE : LARGE BP TABLE
Bug 1407413 NO INDEX TO ON RI CONSTRAINT BETWEEN BS AND BDF
Bug 4110489 RMAN SKIP READ ONLY TABLESPACE CLAUSE TAKING TOO MUCH TIME TO SKIP
Bug 3966722 CONCURRENT RMAN BACKUPS WAIT ON TM ENQUEUE ON CKP TABLE
Bug 2710321 X$KCCFN / V$DATAFILE much slower in 9i than 8i with many datafiles

Win Up to $5,000 in Prizes From MetaLink


You have the opportunity to showcase your expertise and knowledge of Oracle products and technologies with your peers in the industry. Contribute to Oracle's Customer Knowledge Exchange, the online customer channel that enables you to showcase your expertise, build your own customer network, and share news and information with your peers, starting April 1 through May 15, 2007, and become eligible to win a $5,000.00 Oracle University Scholarship and much more.

First to Contribute, First to Win: gift certificates of $100 go to the first 20 contributors to have their article published in Oracle MetaLink's Customer Knowledge Exchange.

Best of Luck,

Vinod Joshi

Tuesday, September 6, 2011

Database Health Check

OS level:
1. Check physical memory: free -m
2. Check virtual memory: vmstat 5
3. Check the top processes: top
4. Check running processes: ps aux
5. Check free volume space: df -h, du -csh
6. Check that filesystem space is under the normal threshold.
7. Check the filesystems on the OS side to make sure
sufficient space is available at all mount points.
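The filesystem checks in items 5 to 7 can be folded into one small sweep script. This is a sketch only; the 90% threshold is an illustrative assumption, not a recommendation from this post.

```shell
#!/bin/sh
# Flag any mount point whose usage exceeds a threshold (illustrative: 90%).
threshold=90
full=$(df -P | awk -v t="$threshold" \
    'NR > 1 { sub(/%/, "", $5); if ($5 + 0 > t) print $6 " (" $5 "%)" }')
if [ -n "$full" ]; then
    echo "Mount points over ${threshold}%:"
    echo "$full"
else
    echo "All mount points are under ${threshold}%."
fi
```

Scheduled from cron, a script like this covers the "proactive space" part of the daily health check before users notice a full filesystem.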

Database level: 


1) Check extents / proactive space addition:
Check each of the data, index, and temporary tablespaces for extent and block allocation details.

SET LINES 500
SELECT SEGMENT_NAME, TABLESPACE_NAME, EXTENTS, BLOCKS
FROM DBA_SEGMENTS;

SELECT SEGMENT_NAME, TABLESPACE_NAME, EXTENTS, BLOCKS
FROM DBA_SEGMENTS
WHERE TABLESPACE_NAME = 'STAR01D';

2) Check the alert log for ORA- errors and warning messages:
Check the alert log file on a regular basis. Look for any Oracle-related errors; the log gives the error details and the time of occurrence.

Look for database-level or tablespace-level changes:
Monitor the alert log file and search it for each day's activities in the database, whether that is a bounce of the database, an increase in the size of the tablespaces, or a change to database parameters. In an 11g database we can also look for TNS errors in the alert log file.
1. Wait events
2. Long-running jobs from v$session_longops
3. Invalid objects (if found, compile them)
4. Temp usage / rollback segment / PGA usage
5. Redo generation / archive log generation details

Make sure there are no overly frequent log switches in the database.

Sunday, July 10, 2011

Common Wait Event Names and What They Mean

buffer busy waits: The session wants to access a data block that is either (1) not currently in memory, but another process has already issued an I/O request to read the block into memory, or (2) in memory but held in an incompatible mode (current versus consistent, for example), e.g. another session is changing that block via an insert, update, or delete.

control file parallel write: The session has issued multiple I/O requests in parallel to write blocks to all control files, and is waiting for all of the writes to complete.

control file sequential read: The session is waiting for blocks to be read from a control file.

db file parallel read: The session has issued multiple I/O requests in parallel to read blocks from data files into memory and is waiting for all requests to complete. This may occur during recovery, or during regular activity when a session batches many single-block I/O requests together and issues them in parallel.

db file parallel write: The process, typically DBWR, has issued multiple I/O requests in parallel to write dirty blocks from the buffer cache to disk and is waiting for all requests to complete.

db file scattered read: The session has issued an I/O request to read a series of contiguous blocks from a data file into the buffer cache and is waiting for the operation to complete. This typically happens during a full table scan or a fast full index scan. Oracle reads up to DB_FILE_MULTIBLOCK_READ_COUNT consecutive blocks at a time and scatters them into buffers in the buffer cache.

db file sequential read: The session has issued an I/O request to read one block from a data file into the buffer cache and is waiting for the operation to complete. This typically happens during an index lookup or a fetch from a table by ROWID when the required data block is not already in memory. It differs from "db file scattered read" in that a sequential read reads data into contiguous memory, whereas a scattered read scatters multiple blocks into different buffers in the SGA.

direct path read, direct path write: The session has issued asynchronous I/O requests that bypass the buffer cache and is waiting for them to complete. These wait events often involve temporary segments, sorting activity, parallel query, or hash joins.

enqueue: The session is waiting on an enqueue (a lock you can see in v$lock). This commonly occurs when one user is trying to update a row in a table that is currently being updated by another user.

free buffer waits: The session needs a free buffer so it can bring a data block into the buffer cache and is waiting for a non-dirty buffer to become available. This can occur if DBWR is not writing dirty buffers to disk fast enough.

latch free: The session is waiting for a latch held by another session. (This event does not apply to processes that are spinning while waiting for a latch; when a process is spinning, it is not waiting.)

library cache load lock: The session is waiting for the opportunity to load an object, or a piece of an object, into the library cache. (Only one process can load an object or a piece of an object at a time.)

library cache pin: The session wants to pin an object in memory in the library cache for examination, ensuring no other process can update the object at the same time. This happens when you are compiling or parsing a PL/SQL object or a view.

log buffer space: The session is waiting for space in the log buffer. (Space becomes available only after LGWR has written the current contents of the log buffer to disk.) This typically happens when applications generate redo faster than LGWR can write it to disk.

log file parallel write: The session is waiting for blocks to be written to all online redo log members in one group. LGWR is typically the only process to see this wait event; it waits until all blocks have been written to all members.

log file sequential read: The session is waiting for blocks to be read from the online redo log into memory. This primarily occurs at instance startup and when the ARCH process archives filled online redo logs.

log file switch completion: The session is waiting for a log file switch to complete, typically so more redo can be generated.

log file sync: The session is waiting for LGWR to finish flushing the log buffer to disk. This occurs when a user commits a transaction. (A transaction is not considered committed until all of its redo has been successfully written to disk.) If this wait is prominent, you are probably committing too often.

undo segment extension: The session is waiting for an undo segment to be extended or shrunk.

write complete waits: The session is waiting for a requested buffer to be written to disk; the buffer cannot be used while it is being written.

Important Initialization parameters that affect the CBO

Apart from generating statistics, the setup of the parameters below plays a very crucial role in dictating how your system will work. Setting them depends a lot on the kind of environment you want to create: online (OLTP), batch processing, data warehousing, or a combination of more than one.

1) OPTIMIZER_MODE
This decides the mode in which the optimizer engine runs. Valid values are RULE, CHOOSE, ALL_ROWS, and FIRST_ROWS(_n).
Optionally, OPTIMIZER_MODE can be set to CHOOSE, a kind of intermediate option between RBO and CBO: the query runs under either CBO or RBO depending on the availability of statistics. If the tables present in the query have statistics generated on them, CBO (ALL_ROWS only) is preferred; otherwise RBO is used.
e.g.: optimizer_mode = first_rows
For an OLTP system, FIRST_ROWS would be the ideal option for fast response time; ALL_ROWS is meant for batch-oriented applications. Note that a plan producing the first n rows with the fastest response time might not be optimal if the requirement is to obtain the entire result, so decide according to the needs of the application.

2) OPTIMIZER_INDEX_COST_ADJ
This parameter takes a value between 1 and 10000 (default 100). Setting it makes indexes more or less attractive to the optimizer. For example, setting it to 50 tells the optimizer that an execution plan using indexes is half as expensive as one that does not use them. The lower the value (below 100), the fewer full table scans will take place in the system.
Use of indexes in OLTP is a crucial factor for deciding response time, but sometimes full table scans provide better throughput. Nested loop joins are influenced by this parameter value. Set it to a value that is optimal for your application; I recommend setting it between 10 and 20 for OLTP and around 50 for DSS systems.
If you are seeing slow performance because the CBO FIRST_ROWS optimizer mode is favoring too many full table scans, you can reset OPTIMIZER_INDEX_COST_ADJ to immediately tune all of the SQL in your database to favor index scans over full table scans. This is a "silver bullet" that can improve the performance of an entire database in cases where the database is OLTP and you have verified that the full table scan costing is too low.
3) OPTIMIZER_INDEX_CACHING
This tells the optimizer to favor nested loops and IN-list iterators over sort-merge and hash joins. The value indicates the percentage of index blocks that the optimizer should assume are present in the cache (the default is 0); higher values make nested loops and IN-list operators look less expensive and more appealing to the optimizer.
I recommend setting this parameter to 85.

4) DB_FILE_MULTIBLOCK_READ_COUNT
This parameter determines the number of database blocks read in one input/output operation during a full table scan, and the value set is used in computing the cost of full table scans. OLTP systems will not benefit from a high value, since it reduces the computed cost of full-table-scan execution plans. The maximum size is operating-system dependent.
e.g.: db_file_multiblock_read_count = 1 (may be set to a higher value)

5) SORT_AREA_SIZE
This parameter defines the maximum memory space allocated per user process that requires sorting of data and for insert and updates to bitmap indexes. Optimizer uses the set value for determining the cost of sorting in a query. This defaults to 64K. Normally a size of 64K to 1M is appropriate for OLTP systems.
Sort-merge joins are influenced by this parameter value. The bigger the size the more appealing will be sort-merge joins over nested-loop and hash joins. The lower the size the more sorting will take place in temporary segments. So assign a value that has been evaluated for your setup.
Oracle recommends the use of PGA_AGGREGATE_TARGET instead of this parameter from Oracle 9i.

6) SORT_MULTIBLOCK_READ_COUNT
This specifies the number of database blocks to be read each time a sort performs a read from a temporary segment. The default value is 2. General recommendation is to keep the default value. However, if you intend to hike up the SORT_AREA_SIZE considerably, you may evaluate the improvement by changing this parameter.
e.g.: sort_multiblock_read_count = 2

7) HASH_JOIN_ENABLED
Hash joins are available only in CBO. In a hash join, a hash table is created on the join key of the smallest table; the other tables are then joined against it to find matches. Hash joins may prove faster than other types of joins in some conditions, especially when an index is missing or the search criteria are not very selective. Hash joins require a large amount of memory as the hash tables are retained; this may sometimes result in memory swapping.
Nested-loop joins return the first row faster than sort-merge and hash joins and are preferred for OLTP, but other types of joins cannot be ignored for running other aspects of the applications.
e.g.: hash_join_enabled = true

8) HASH_AREA_SIZE
This specifies the maximum amount of memory in bytes to be used for a hash join per process. It is defaulted to 2 times SORT_AREA_SIZE.
Oracle recommends the use of PGA_AGGREGATE_TARGET instead of this parameter from Oracle 9i.
e.g.: hash_area_size = 2097152
Setting this to a very low number may sometimes result in the following error.
ORA-6580: Hash Join ran out of memory while keeping large rows in memory.

9) HASH_MULTIBLOCK_IO_COUNT
This specifies how many sequential blocks a hash join reads and writes in one input/output activity. Oracle recommends not changing or assigning a value to this parameter; this lets Oracle decide the appropriate value for each individual query. In such cases, the value of the parameter will appear as 0 in the V$PARAMETER view.
This parameter is renamed to _HASH_MULTIBLOCK_IO_COUNT as of Oracle 9i.
e.g.: hash_multi_block_io_count = 0

10) BITMAP_MERGE_AREA_SIZE
This parameter is relevant for systems using bitmap indexes. It specifies the amount of memory Oracle uses to merge bitmaps retrieved from a range scan of a bitmap index. The default value is 1 MB, which is considered sufficient for most setups.
Oracle recommends use of PGA_AGGREGATE_TARGET instead of this parameter from Oracle 9i.
e.g.: bitmap_merge_area_size = 1048576

11) QUERY_REWRITE_ENABLED
This parameter is relevant for systems using Materialized views, Function based indexes and stored outlines. Setting this parameter enables query rewrite for the database. The materialized views should also be query rewrite enabled to allow the optimizer to redirect a query to it rather than the actual tables listed in the FROM clause. Query rewriting is internally done by the optimizer based on what results are expected and whether these are already present in materialized view form.
e.g.: query_rewrite_enabled = true
12) QUERY_REWRITE_INTEGRITY
This parameter determines the extent to which the optimizer must enforce query rewriting to materialized views, which determines the accuracy of the rewrite. It can be set to ENFORCED, TRUSTED or STALE_TOLERATED. ENFORCED uses materialized views only if they contain fresh data, thus guaranteeing the integrity and consistency of the data. TRUSTED uses relationships that have been defined in the system for rewriting the query. STALE_TOLERATED allows the use of relationships that are not enforced and may use stale data. For OLTP systems, ENFORCED would be the ideal setup, as reports need to be up to date.
e.g.: query_rewrite_integrity = enforced
13) ALWAYS_ANTI_JOIN
This parameter specifies the join method for anti-joins, for example when a NOT IN operator is present in your query. It can be set to NESTED_LOOPS, MERGE or HASH. It is defaulted to NESTED_LOOPS in Oracle 8i and CHOOSE in Oracle 9i.
This parameter is renamed to _ALWAYS_ANTI_JOIN as of Oracle 9i.
e.g.: always_anti_join = nested_loops

14) ALWAYS_SEMI_JOIN
This parameter specifies the join method for semi-joins. These types of joins are carried out by Optimizer after transforming a query. In such joins, duplicate values from the inner table are removed and then the type of join specified in the parameter is used to perform a semi-join. It can be set to NESTED_LOOPS, MERGE or HASH. In Oracle 8i, it is defaulted to STANDARD and in Oracle 9i it is defaulted to CHOOSE, to pick up an appropriate join.
This parameter is renamed to _ALWAYS_SEMI_JOIN as of Oracle 9i.
e.g.: always_semi_join = nested_loops

15) STAR_TRANSFORMATION_ENABLED
This specifies whether query transformation will be applied to star queries. It can be set to TRUE, FALSE or TEMP_DISABLE (transformation will take place but will not use temporary tables). I presently set it to FALSE due to some known issues of sub-optimal queries being generated. If you intend to use this, please upgrade your version to 8.1.7.4 and above.
e.g.: star_transformation_enabled = false

16) PARALLEL_BROADCAST_ENABLED
This parameter refers to parallel executions in cluster databases. It is meant for improving hash and sort-merge join operations where a very large result set is joined with a very small result set. When this option is enabled, the optimizer broadcasts a copy of all rows in the smaller result set to all cluster databases that are processing some rows of the larger result set.
It is obsolete in release 9.2.0.
e.g.: parallel_broadcast_enabled = false

17) OPTIMIZER_DYNAMIC_SAMPLING
This parameter was introduced in release 9i and is meant for situations where tables are not analyzed. As the CBO depends heavily on statistics, this parameter tells the optimizer to sample the unanalyzed tables used in a query. A level from 0 to 10 can be specified; the higher the value, the more time the optimizer spends sampling.
e.g.: optimizer_dynamic_sampling = 1
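The level can also be set per session, or per statement with a hint. A sketch (the table name T is illustrative):

```sql
-- Session level
ALTER SESSION SET optimizer_dynamic_sampling = 4;

-- Statement level, via a hint on the unanalyzed table
SELECT /*+ dynamic_sampling(t 4) */ * FROM t;
```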
18) PARTITION_VIEW_ENABLED
This parameter is meant for backward compatibility to support partition views. Oracle recommends use of partition tables rather than partition views. If you are migrating to CBO, chances are that you may not be using partition views.
e.g.: partition_view_enabled = false

19) CURSOR_SHARING
This parameter determines which SQL statements can share the same cursor. It can be set to FORCE, SIMILAR or EXACT. FORCE allows statements that differ only in literal values to share a cursor. SIMILAR does the same, but only when the differing literals do not change the optimized plan. EXACT allows only statements with exactly identical text to share a cursor.
Using FORCE may sometimes produce unexpected results.
e.g.: cursor_sharing = exact
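For illustration, with FORCE the two statements below (against the hypothetical EMP table) differ only in a literal, so Oracle rewrites them to share one cursor with a system-generated bind variable:

```sql
ALTER SESSION SET cursor_sharing = FORCE;

SELECT * FROM emp WHERE deptno = 10;
SELECT * FROM emp WHERE deptno = 20;
-- Both match a single cursor of the form:
--   SELECT * FROM emp WHERE deptno = :"SYS_B_0"
```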
20) PGA_AGGREGATE_TARGET
Introduced in Oracle 9i, this parameter specifies the aggregate PGA memory available to all server processes attached to an instance. This parameter can be set for automatic sizing of SQL working areas. It replaces other existing parameters like SORT_AREA_SIZE, BITMAP_MERGE_AREA_SIZE and HASH_AREA_SIZE.
It can be set to a value between 10 MB and 4000 GB, depending on the requirements of the setup.
1. Make a first estimate for PGA_AGGREGATE_TARGET based on the following rule of thumb:
    PGA_AGGREGATE_TARGET = (Total Physical Memory * 0.16)   - for OLTP systems (16% of the server's physical memory)
    PGA_AGGREGATE_TARGET = (Total Physical Memory * 0.4)    - for DSS systems  (40% of the server's physical memory)
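As a worked example of this rule of thumb, assume a server with 8 GB (8192 MB) of physical memory (a hypothetical figure):

```sql
-- 8192 MB * 0.16 is about 1311 MB for OLTP; 8192 MB * 0.40 is about 3277 MB for DSS
SELECT ROUND(8192 * 0.16) AS oltp_target_mb,
       ROUND(8192 * 0.40) AS dss_target_mb
FROM   dual;

-- Then, for an OLTP system:
ALTER SYSTEM SET pga_aggregate_target = 1311M;
```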

Saturday, July 2, 2011

DBA Daily Activity


The following activities are monitored on a database:

Connect to Database (Hourly): To monitor the availability of the database server and the Oracle listener.
Check Alert Logs (Hourly): To check the alert log file periodically for errors and warnings; an SMS is sent to the administrator automatically.
Check Tablespace Size (Hourly): To check tablespace usage and the free space available; if a tablespace reaches its threshold limit, an SMS is sent to the administrator automatically.
Verify Rollback Segments (Shift): To check the status of the rollback segments.
Verify Locked Resources (Shift): To identify any locked resources.
Verify Archive Logs (Daily): To check that the archive logs are created properly in the current directory and, if a standby configuration is enabled, that they are backed up, transferred and applied properly.
Hit Ratio (Daily): To check the overall buffer cache hit ratio for the entire instance since it was started; if the ratio reaches the threshold limit, an SMS is sent to the administrator automatically.
Check Listener Log File Size (Daily): To check the size of the listener log file on each database instance; if it reaches the threshold limit, a notification is sent to the administrator.
Size of Extents (Daily): To check the segments of each tablespace: number of extents, initial extent size, next extent size, minimum extents, maximum extents and status.
Check Disk Space (Daily): To check the disk space available for data files, log files and temp space; if the available space reaches the threshold limit, an SMS is sent to the administrator automatically.
Check CPU Usage (Daily): To monitor CPU performance and usage.
Check Memory Usage (Daily): To monitor memory usage, memory page faults, etc.
Check System Log Files (Daily): To check the Event Viewer / system log files and watch for any OS-related warnings or errors.
Check for Invalid Objects (Weekly): To identify invalid objects in the database.
Check for Chained Rows (Weekly): To identify and manage chained rows.

·         The Oracle database shall be monitored using the Capacity Planner and Performance Manager applications. This includes special Performance Manager diagnostics such as Performance Overview, TopSessions, TopSQL and Lock Monitor.
·         Administrators will be able to investigate an event directly from the console's event viewer, through direct access to Performance Manager charts, diagnostic help and historical data for the metrics being monitored by the event.
·         This provides a logical, step-by-step methodology for discovering and investigating performance problems.
·         In addition, events and Performance Manager charts can share common metric thresholds, allowing administrators to apply consistent performance thresholds whether monitoring in lights-out mode or in real time. With 9i, events can also be registered directly from Performance Manager charts.

Friday, June 24, 2011

CTET Detail

You can find your admission card at the links given below.
Provisional Admit Card Information 
http://www.cbseonline.ernet.in/ctet/adminfo.aspx

General Information
http://hyd.tcs-itontap.com/dotcom/CBSE/


Tuesday, June 21, 2011

What is the best backup strategy for RMAN backup on production database?


The right strategy really depends on your environment. If your database is small, I recommend a full backup every day. If you have a larger database, use incremental backups during the week and a full backup on the weekend. If you have a Data Guard standby and want to be a little safer, you can also take the backup from the standby instead of the production database, though this approach needs a bit more attention.
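The weekly incremental strategy can be sketched as two RMAN scripts (the tags and the DELETE OBSOLETE housekeeping are illustrative assumptions, not from the original post):

```sql
-- Weekend: level 0 (full) incremental backup
RUN {
  BACKUP INCREMENTAL LEVEL 0 TAG 'weekly_full' DATABASE PLUS ARCHIVELOG;
  DELETE NOPROMPT OBSOLETE;
}

-- Weekdays: level 1 incremental backup
RUN {
  BACKUP INCREMENTAL LEVEL 1 TAG 'daily_incr' DATABASE PLUS ARCHIVELOG;
}
```

Schedule the level 0 script for the weekend and the level 1 script for weekdays, for example via cron.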



Thursday, June 16, 2011

Unlocking the locked table

Sometimes we get an error while running a DDL statement on a table, something like the one below:

SQL> drop table aa;
drop table aa
*
ERROR at line 1:
ORA-14452: attempt to create, alter or drop an index on temporary table already
in use
This happens because some other session is using the table or holding a lock on it.
The following is a simple procedure to kill the session holding the lock and then drop the table.
Note that this should be done only if you are sure the table is no longer required and that all sessions holding locks on it can safely be killed.
1. Get the object ID of the table to be dropped
SQL> select object_id from dba_objects where object_name = 'AA';
 OBJECT_ID
----------
   3735492
2. Get the session ID which is holding lock on this object from v$locked_object view
SQL> select OBJECT_ID, SESSION_ID, ORACLE_USERNAME, PROCESS from v$locked_object where OBJECT_ID = 3735492;
 OBJECT_ID SESSION_ID ORACLE_USERNAME                PROCESS
---------- ---------- ------------------------------ ------------------------
   3735492       1124 MSC                            4092@AKPRADH-LAP

3. Get the serial# of the SID using v$session
SQL> select sid, serial# from v$session where sid = 1124;
       SID    SERIAL#
---------- ----------
      1124      51189

4. Kill the session by connecting as SYSDBA, then try dropping the table
SQL> alter system kill session '1124, 51189';
System altered.
SQL> drop table aa;
Table dropped.

Once the locks are removed, you should be able to drop the table.

Transparent Application Failover (TAF)

1. BASIC METHOD
Add the following entry to the tnsnames.ora file on RAC database node1.
CDBS_TAF =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip.localdomain)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip.localdomain)(PORT = 1521))
    (LOAD_BALANCE = yes)
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = cdbs)
      (FAILOVER_MODE=
      (TYPE=SELECT)
      (METHOD=BASIC)
     )
    )
  )
Make sure all the resources are up and running.
[root@node1 bin]# ./crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora....s1.inst application    ONLINE    ONLINE    node1
ora....s2.inst application    ONLINE    ONLINE   node2
ora....bs1.srv application    ONLINE    ONLINE    node1
ora....bs2.srv application    ONLINE    ONLINE   node2
ora...._srv.cs application    ONLINE    ONLINE   node2
ora.cdbs.db    application    ONLINE    ONLINE   node2
ora....SM1.asm application    ONLINE    ONLINE    node1
ora....E1.lsnr application    ONLINE    ONLINE    node1
ora....de1.gsd application    ONLINE    ONLINE    node1
ora....de1.ons application    ONLINE    ONLINE    node1
ora....de1.vip application    ONLINE    ONLINE    node1
ora....SM2.asm application    ONLINE    ONLINE   node2
ora....E2.lsnr application    ONLINE    ONLINE   node2
ora....de2.gsd application    ONLINE    ONLINE   node2
ora....de2.ons application    ONLINE    ONLINE   node2
ora....de2.vip application    ONLINE    ONLINE   node2
[oracle@node1 admin]$ sqlplus system/oracle@CDBS_TAF
SQL*Plus: Release 10.2.0.1.0 - Production on Thu Aug 26 17:36:45 2010
Copyright (c) 1982, 2005, Oracle.  All rights reserved.

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, Real Application Clusters, OLAP and Data Mining options
SQL> show user;
USER is "SYSTEM"
SQL> col osuser format a10
SQL> col username format a10
SQL> col failover_type format a15
SQL> col failover_method format a18
SQL> select username,osuser,failover_method,failover_type from v$session
  2  where username='SYSTEM';
USERNAME   OSUSER     FAILOVER_METHOD    FAILOVER_TYPE
---------- ---------- ------------------ ---------------
SYSTEM     oracle     BASIC              SELECT

SQL> select instance_name from v$instance;
INSTANCE_NAME
----------------
cdbs1
SQL> select count(*) from demotb;
  COUNT(*)
----------
       954
SQL> select * from demotb;
This runs a long-running query.

Simultaneously, open another terminal and issue SHUTDOWN ABORT on instance cdbs1:
SQL> shu abort
ORACLE instance shut down.
The query continues to run smoothly, without any interruption. After it completes, verify the instance name:
SQL>select instance_name from v$instance;
INSTANCE_NAME
----------------
cdbs2
The session automatically switches over to instance cdbs2, so users can keep accessing the database without interruption and no queries fail.
The user does not receive any error message while the query is running.
2. PRECONNECT METHOD
Add the entries to the tnsnames.ora file on both nodes.
CDBS1_TAF =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip.localdomain)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = cdbs)
      (INSTANCE_NAME = cdbs1)
      (FAILOVER_MODE=
      (BACKUP=CDBS2_TAF)
      (TYPE=SELECT)
      (METHOD=PRECONNECT)
     )
    )
  )

CDBS2_TAF =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip.localdomain)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = cdbs)
      (INSTANCE_NAME = cdbs2)
      (FAILOVER_MODE=
      (BACKUP=CDBS1_TAF)
      (TYPE=SELECT)
      (METHOD=PRECONNECT)
     )
    )
  )
connect to the database on node1
[oracle@node1 ~]$ sqlplus system/oracle@CDBS1_TAF
SQL*Plus: Release 10.2.0.1.0 - Production on Fri June 16 12:23:11 2010
Copyright (c) 1982, 2005, Oracle.  All rights reserved.

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, Real Application Clusters, OLAP and Data Mining options
SQL> SELECT INSTANCE_NAME FROM V$INSTANCE;
INSTANCE_NAME
----------------
cdbs1
SQL> COL USERNAME FOR a10
SQL> col osuser for a10
SQL> col failover_type format a15
SQL> col failover_method for a18
SQL> select username,failover_type,failover_method from v$session
  2  where username='SYSTEM';
USERNAME   FAILOVER_TYPE   FAILOVER_METHOD
---------- --------------- ------------------
SYSTEM     SELECT          PRECONNECT
SQL> select count(*) from demo;
  COUNT(*)
----------
     49783
Run the long-running query:
SQL> select * from demo;
Simultaneously, issue SHUTDOWN ABORT in a new terminal:
SQL> shu abort;
ORACLE instance shut down.
SQL>
The query runs smoothly and the user does not receive any error messages.
Then check the instance name on the node1 machine:
SQL> select instance_name from v$instance;
INSTANCE_NAME
----------------
cdbs2
SQL> select username,osuser,failover_type,failover_method from v$session
  2  where username='SYSTEM';
USERNAME   OSUSER     FAILOVER_TYPE   FAILOVER_METHOD
---------- ---------- --------------- ------------------
SYSTEM     oracle     SELECT          PRECONNECT
SQL>
checking node2 machine before and after TAF.
Before TAF
SQL> select username,osuser,failover_type,failover_method from v$session
  2  where username='SYSTEM';
USERNAME                       OSUSER                         FAILOVER_TYPE
------------------------------ ------------------------------ -------------
FAILOVER_M
----------
SYSTEM                         oracle                         NONE
NONE

SQL> select instance_name from v$instance;
INSTANCE_NAME
----------------
cdbs2
After TAF.
SQL> select username,osuser,failover_type,failover_method from v$session
  2  where username='SYSTEM';
USERNAME                       OSUSER                         FAILOVER_TYPE
------------------------------ ------------------------------ -------------
FAILOVER_M
----------
SYSTEM                         oracle                         SELECT
PRECONNECT




Connecting to RAC using SQLDeveloper

The following are two definitions in tnsnames.ora pointing to the same RAC database. The first (racora1) uses host names;
the second (racora2) uses actual IP addresses:

racora1 =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = db1.testoracle.COM)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = db2.testoracle.COM)(PORT = 1521))
      (LOAD_BALANCE = yes)
    )
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = ORCL.testoracle.COM)
      (FAILOVER_MODE =
        (TYPE = SELECT)
        (METHOD = BASIC)
        (RETRIES = 180)
        (DELAY = 5)
      )
    )
  )

racora2 =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = 192.155.1.1)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = 192.155.1.2)(PORT = 1521))
      (LOAD_BALANCE = yes)
    )
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = ORCL.testoracle.COM)
      (FAILOVER_MODE =
        (TYPE = SELECT)
        (METHOD = BASIC)
        (RETRIES = 180)
        (DELAY = 5)
      )
    )
  )

If you use SQL Developer to connect to racora1 using the TNS alias, you will encounter the following error:
Status: Failure - Test failed: (null).

It works fine if you connect to racora2. Apparently SQL Developer is not able to perform a name lookup to obtain the IP addresses when using TNS. Strangely, if you choose Advanced and use the following thin-client connect string, it also works:
jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS_LIST=(ADDRESS = (PROTOCOL = TCP)(HOST = db1.testoracle.COM)(PORT = 1521))(ADDRESS = (PROTOCOL = TCP)(HOST = db2.testoracle.COM)(PORT = 1521))(LOAD_BALANCE = yes))(CONNECT_DATA=(SERVER = DEDICATED)(SERVICE_NAME = ORCL.testoracle.COM)(FAILOVER_MODE=(TYPE = SELECT)(METHOD = BASIC)(RETRIES = 180)(DELAY = 5))))

Wednesday, June 15, 2011

How to find what is locking a table.


       1. Query DBA_OBJECTS to find the OBJECT_ID of the table being locked.
       2. Query V$LOCKED_OBJECT (or V$LOCK, where ID1 holds that object ID) to get the SID of the locking session.
       3. Join V$SESSION on that SID, and V$PROCESS via V$SESSION.PADDR, for details on what is locking the table.
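As a sketch, the three steps can be combined into a single query (the table name 'AA' is illustrative; run this as a privileged user):

```sql
SELECT o.object_name,
       lo.session_id,
       s.serial#,
       s.username,
       p.spid AS os_process
FROM   v$locked_object lo
JOIN   dba_objects o ON o.object_id = lo.object_id
JOIN   v$session   s ON s.sid       = lo.session_id
JOIN   v$process   p ON p.addr      = s.paddr
WHERE  o.object_name = 'AA';
```

The SID and SERIAL# returned can then be passed to ALTER SYSTEM KILL SESSION if the lock must be cleared.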

Select Randomly from Table
This feature allows you to randomly "sample" rows from a table, and it has many great uses. The syntax is as follows:
SELECT COUNT(*) * 100
FROM EMP SAMPLE (1);
This randomly samples 1% of the rows; multiplying the count by 100 gives a rough estimate of the number of rows in the table.
You can also randomly sample by blocks for better performance but possibly less random:
SELECT *
FROM EMP SAMPLE BLOCK (1);
Again, this samples roughly 1% of the table by blocks, which may not be 1% of the actual rows. Fewer blocks are visited, which decreases the elapsed time, but if the data is clustered within the table the sample may not be very random.
This tool can be used to get a rough idea of the data in a table, or to produce good estimates with group functions. For example, a great use of this would be on a 40-million-row table:
SELECT AVG(number_of_children)
FROM dependants SAMPLE (5);
This gives you the average number of dependants while sampling only 5% of the table, visiting about 2 million rows instead of 40 million. (Note that an average, unlike a count, does not need to be scaled up by the sampling factor.)