Friday, March 21, 2014

Background Processes


The relationship between the physical and memory structures is maintained and enforced by Oracle’s background processes.

• Mandatory background processes:
DBWn, PMON, CKPT, LGWR, SMON

• Optional background processes:
RECO, ARCn, LMON, Snnn, QMNn, LMDn, CJQ0, Pnnn, LCKn, Dnnn


Background Processes

The Oracle architecture has five mandatory background processes that are discussed further in this lesson. In addition to the mandatory list, Oracle has many optional background processes that are started when the corresponding option is in use. These optional processes are not within the scope of this course, with the exception of the ARCn background process. Following is a list of some optional background processes:

• RECO: Recoverer
• QMNn: Advanced Queuing
• ARCn: Archiver
• LCKn: RAC Lock Manager—Instance Locks
• LMON: RAC DLM Monitor—Global Locks
• LMDn: RAC DLM Monitor—Remote Locks
• CJQ0: Snapshot Refresh
• Dnnn: Dispatcher
• Snnn: Shared Server
• Pnnn: Parallel Query Slaves
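
The background processes actually running on an instance can be listed from the V$BGPROCESS view (a quick sketch; requires SELECT privilege on the view):

SQL> SELECT name, description FROM v$bgprocess WHERE paddr <> '00';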



Database Writer (DBWn)


DBWn writes when:
• Checkpoint
• Dirty buffers threshold reached
• No free buffers
• Timeout
• RAC ping request
• Tablespace offline
• Tablespace read only
• Table DROP or TRUNCATE
• Tablespace BEGIN BACKUP


Database Writer:

The server process records changes to rollback and data blocks in the buffer cache. Database Writer (DBWn) writes the dirty buffers from the database buffer cache to the data files. It ensures that a sufficient number of free buffers—buffers that can be overwritten when server processes need to read in blocks from the data files—are available in the database buffer cache. Database performance is improved because server processes make changes only in the buffer cache.

DBWn defers writing to the data files until one of the following events occurs:

• An incremental or normal checkpoint occurs.
• The number of dirty buffers reaches a threshold value.
• A process scans a specified number of blocks when searching for free buffers and cannot find any.
• A timeout occurs.
• A ping request is made in a Real Application Clusters environment.
• A normal or temporary tablespace is placed offline.
• A tablespace is placed in read-only mode.
• A table is dropped or truncated.
• ALTER TABLESPACE tablespace_name BEGIN BACKUP is issued (see the sketch after this list).
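
As an illustration of the last case, placing a tablespace in backup mode forces DBWn to flush the dirty buffers for that tablespace first (a minimal sketch, assuming a tablespace named USERS exists):

SQL> ALTER TABLESPACE users BEGIN BACKUP;
SQL> -- copy the tablespace's data files at the OS level here
SQL> ALTER TABLESPACE users END BACKUP;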


Log Writer (LGWR)


LGWR writes:
• At commit
• When one-third full
• When there is 1 MB of redo
• Every 3 seconds
• Before DBWn writes

Log Writer

LGWR performs sequential writes from the redo log buffer cache to the redo log file in the following situations:
• When a transaction commits
• When the redo log buffer cache is one-third full
• When there is more than a megabyte of change records in the redo log buffer cache
• Before DBWn writes modified blocks in the database buffer cache to the data files
• Every 3 seconds
Because the redo is needed for recovery, LGWR confirms the commit only after the redo is written to disk.
LGWR can also call on DBWn to write to the data files.

Note: DBWn does not write to the online redo logs.
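
One way to observe LGWR activity is through the redo statistics in V$SYSSTAT (a quick sketch; 'redo writes' counts LGWR write calls and 'redo size' the amount of redo generated):

SQL> SELECT name, value FROM v$sysstat WHERE name IN ('redo writes', 'redo size');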


System Monitor (SMON)

Responsibilities:
• Instance recovery:
– Rolls forward changes in the redo logs
– Opens the database for user access
– Rolls back uncommitted transactions
• Coalesces free space every 3 sec
• Deallocates temporary segments

System Monitor

If the Oracle instance fails, any information in the SGA that has not been written to disk is lost. For example, the failure of the operating system causes an instance failure. After the loss of the instance, the background process SMON automatically performs instance recovery when the database is reopened. Instance recovery consists of the following steps:

1. Rolling forward to recover data that has not been recorded in the data files but that has been recorded in the online redo log. This data has not been written to disk because of the loss of the SGA during instance failure. During this process, SMON reads the redo log files and applies the changes recorded in the redo log to the data blocks. Because all committed transactions have been written to the redo logs, this process completely recovers these transactions.
2. Opening the database so that users can log on. Any data that is not locked by unrecovered transactions is immediately available.
3. Rolling back uncommitted transactions. They are rolled back by SMON or by the individual server processes as they access locked data.

SMON also performs some space maintenance functions:

• It combines, or coalesces, adjacent areas of free space in the data files.
• It deallocates temporary segments to return them as free space in the data files. Temporary segments are used to store data during SQL statement processing.
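
The free extents that SMON coalesces can be inspected in DBA_FREE_SPACE (a sketch; adjacent rows with contiguous block ranges in the same file are coalescing candidates):

SQL> SELECT tablespace_name, file_id, block_id, blocks FROM dba_free_space ORDER BY file_id, block_id;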


Process Monitor (PMON)


Cleans up after failed processes by:
• Rolling back the transaction
• Releasing locks
• Releasing other resources
• Restarting dead dispatchers

Process Monitor
The background process PMON cleans up after failed processes by:
• Rolling back the user’s current transaction
• Releasing all currently held table or row locks
• Freeing other resources currently reserved by the user
• Restarting dead dispatchers
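
For example, when a DBA kills a session, PMON performs this cleanup (a sketch; the SID and SERIAL# are placeholders to be taken from V$SESSION, and CORPORATE is the sample user from this blog):

SQL> SELECT sid, serial# FROM v$session WHERE username = 'CORPORATE';
SQL> ALTER SYSTEM KILL SESSION '123,456';

PMON then rolls back the session's open transaction and releases its locks.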

Checkpoint (CKPT)

Responsible for:
• Signalling DBWn at checkpoints
• Updating data file headers with checkpoint information
• Updating control files with checkpoint information

Checkpoint:

An event called a checkpoint occurs when the Oracle background process DBWn writes all the modified database buffers in the SGA, including both committed and uncommitted data, to the data files.

Checkpoints are implemented for the following reasons:
• Checkpoints ensure that data blocks in memory that change frequently are written to data files regularly. Because of the least recently used algorithm of DBWn, a data block that changes frequently might never qualify as the least recently used block and thus might never be written to disk if checkpoints did not occur.
• Because all database changes up to the checkpoint have been recorded in the data files, redo log entries before the checkpoint no longer need to be applied to the data files if instance recovery is required. Therefore, checkpoints are useful because they can expedite instance recovery.

Checkpoint (continued):
At a checkpoint, the following information is written:
• Checkpoint number into the data file headers
• Checkpoint number, log sequence number, archived log names, and system change numbers into the control file
CKPT does not write data blocks to disk or redo blocks to the online redo logs.
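
A checkpoint can also be requested manually (a minimal sketch), which makes DBWn write all dirty buffers and CKPT record the checkpoint in the data file headers and control files:

SQL> ALTER SYSTEM CHECKPOINT;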


Archiver (ARCn)

• Optional background process
• Automatically archives online redo logs when 
ARCHIVELOG mode is set
• Preserves the record of all changes made to the 
database

The Archiver Process:

All other background processes are optional, depending on the configuration of the database; however, one of them, ARCn, is crucial to recovering a database after the loss of a disk. As online redo log files fill, the Oracle server begins writing to the next online redo log file. The process of switching from one redo log to another is called a log switch. The ARCn process initiates backing up, or archiving, of the filled log group at every log switch. It automatically archives the online redo log before the log can be reused, so that all of the changes made to the database are preserved. This enables the DBA to recover the database to the point of failure, even if a disk drive is damaged.

Archiving Redo Log Files
One of the important decisions that a DBA has to make is whether to configure the database
to operate in ARCHIVELOG or in NOARCHIVELOG mode.

NOARCHIVELOG Mode: In NOARCHIVELOG mode, the online redo log files are overwritten each time a log switch occurs. LGWR does not overwrite a redo log group until the checkpoint for that group is complete. This ensures that committed data can be recovered after an instance crash, in which only the SGA is lost and the disks remain intact. For example, an operating system crash causes an instance crash.

Archiving Redo Log Files (continued):

ARCHIVELOG Mode: If the database is configured to run in ARCHIVELOG mode, inactive groups of filled online redo log files must be archived before they can be used again. Since changes made to the database are recorded in the online redo log files, the database administrator can use the physical backup of the data files and the archived online redo log files to recover the database without losing any committed data because of any single point of failure, including the loss of a disk. Usually, a production database is configured to run in ARCHIVELOG mode.
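
For reference, the mode can be checked and changed as follows (a sketch; switching modes requires the database to be mounted but not open):

SQL> ARCHIVE LOG LIST
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP MOUNT
SQL> ALTER DATABASE ARCHIVELOG;
SQL> ALTER DATABASE OPEN;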


Thursday, February 27, 2014

ORA-14402: updating partition key column would cause a partition change


SQL> conn corporate/log
Connected.

Update script:
SQL> update STOCKVALUE set username=:username,modifiedon=to_date(:modifiedon,'dd/mm/yyyy hh24:mi:ss '),app_desc=:app_desc,app_level=:app_level,isrejected=:isrejected,qty= :qty,reservedforbranch= :reservedforbranch,batch=null,isreserved=:isreserved,amount= :amount,partyname=:partyname,gdocid=null,stock_value= :stock_value,rate= :rate,stock_qty= :stock_qty,trans_type= :trans_type,docdate=to_date(:docdate,'dd/mm/yyyy hh24:mi:ss '),docid=:docid,branch= :branch,location= :location,stocktrans_type= :stocktrans_type,postaccountflag=:postaccountflag,plusorminus=:plusorminus,expiry_date=null,packsize= :packsize,itemid= :itemid
 where STOCKVALUEid=200003000050517

status=ORA-14402: updating partition key column would cause a partition change


Solution:

The UPDATE changes the partition key column, so the row must move to a different partition; Oracle allows this only when row movement is enabled on the table.

SQL> alter table STOCKVALUE enable row movement;

Table altered.

SQL> update STOCKVALUE set username=:username,modifiedon=to_date(:modifiedon,'dd/mm/yyyy hh24:mi:ss '),app_desc=:app_desc,app_level=:app_level,isrejected=:isrejected,qty= :qty,reservedforbranch= :reservedforbranch,batch=null,isreserved=:isreserved,amount= :amount,partyname=:partyname,gdocid=null,stock_value= :stock_value,rate= :rate,stock_qty= :stock_qty,trans_type= :trans_type,docdate=to_date(:docdate,'dd/mm/yyyy hh24:mi:ss '),docid=:docid,branch= :branch,location= :location,stocktrans_type= :stocktrans_type,postaccountflag=:postaccountflag,plusorminus=:plusorminus,expiry_date=null,packsize= :packsize,itemid= :itemid
 where STOCKVALUEid=200003000050517

1 row updated.
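
Optionally, row movement can be disabled again afterwards (a judgment call; note that moving a row across partitions changes its ROWID, which matters to applications that store ROWIDs):

SQL> alter table STOCKVALUE disable row movement;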



Friday, February 7, 2014

Oracle Data Pump in Oracle 11g (expdp and impdp)

Introduction

Oracle's export utility allows you to extract data from the database and write that data to an operating system file. The file to which you extract data when you use Oracle8i’s export utility is referred to as a dump file. The export dump file contains both metadata and data. Metadata refers to the Data Definition Language (DDL) statements necessary to recreate the objects that have been exported.

To migrate data between Oracle databases, between tablespaces within a database, or to change the ownership of objects, Oracle provides utilities called Export and Import. Export and Import are mainly used for data reorganization, which leads to greater performance. They are also used as a logical backup tool.

New features in Oracle 8i make it possible to migrate data between different tablespaces. The utility is used to migrate data between higher versions and releases of Oracle, to repeat test runs with large sample data in a development environment, and to move data from testing to development. It is also used to archive large historical data sets and to migrate data from one operating system platform to another.

Some of the important uses of the export utility include the following:

    1. Copying tables or entire schemas, from one database to another.
    2. Reorganizing a table by exporting the data, recreating the table with different storage parameters, and reloading the data-all in the same database.
    3. Storing data as secondary backup
    4. Creating a logical backup that you can use to restore specific tables rather than the entire database.
    5. Creating the temporary backup of objects that you are going to delete.

The operations you need to know in order to use the utility effectively are:

  • Starting the export utility
  • Passing parameters to it
  • Running it interactively
  • Getting help when you need it
  • Meeting its prerequisites

Export Prerequisites

To use the Export utility, a user must have the CREATE SESSION privilege on the target database. That’s all you need as long as you are exporting objects that you own. To export tables owned by another user or to export the entire database, you must have the EXP_FULL_DATABASE role, and you must have it enabled. Typically, you will have the DBA role, which includes the EXP_FULL_DATABASE role, so you can export pretty much anything that you want to export.

Before using Export against a database, you must run the CATEXP.SQL script once to create views and tables that the Export utility requires. The EXP_FULL_DATABASE role is one of the items that CATEXP.SQL creates. The CATEXP.SQL script is run by CATALOG.SQL, so if you ran CATALOG.SQL when you first created the database, you are all set. If you find that you do need to run either of these scripts, you’ll find them in the  $ORACLE_HOME/RDBMS/ADMIN directory.
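
Whether the role has been granted can be checked from the data dictionary (a quick sketch):

SQL> SELECT grantee FROM dba_role_privs WHERE granted_role = 'EXP_FULL_DATABASE';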

Create Directory

CONN SYS AS SYSDBA

SQL> CREATE OR REPLACE DIRECTORY EXPORTDR AS 'E:\AXPDUMP';

SQL> GRANT READ, WRITE ON DIRECTORY EXPORTDR TO CORPORATE;
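
The mapping can be verified afterwards (a quick sketch):

SQL> SELECT directory_name, directory_path FROM dba_directories;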



Schema Export and Import

  • Below is the syntax to export and import an individual schema.

expdp system/admin@orcl schemas=CORPORATE directory=EXPORTDR dumpfile=CORPORATE.DMP logfile=corporate.log


impdp system/admin@orcl schemas=CORPORATE directory=EXPORTDR dumpfile=CORPORATE.DMP logfile=corporate.log



Tables Export and Import

  • Below is the syntax to export and import individual tables.



expdp corporate/corporate@orcl tables=stockvalue,saleinvoices directory=EXPORTDR dumpfile=stockvaluesaleinvoices.dmp logfile=stockvaluesaleinvoices.log

impdp corporate/corporate@orcl tables=stockvalue,saleinvoices directory=EXPORTDR dumpfile=stockvaluesaleinvoices.dmp logfile=stockvaluesaleinvoices.log


Table Export and Import From Full Backup With include and exclude

  • The INCLUDE and EXCLUDE parameters can be used to limit the export/import to specific objects. When the INCLUDE parameter is used, only those objects specified by it will be included in the export/import. When the EXCLUDE parameter is used, all objects except those specified by it will be included in the export/import.

impdp SYSTEM/admin@db schemas=CORPORATE include=TABLE:"IN ('STOCKVALUE', 'SALESINVOICE')" directory=EXPORTDR dumpfile=backupdumpfilename.dmp logfile=logfilenameforimp.log

impdp SYSTEM/admin@db schemas=CORPORATE exclude=TABLE:"= 'EMPLOYEE'" directory=EXPORTDR dumpfile=CORPORATE.dmp logfile=CORPORATE.log
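
Depending on the operating system shell, the quotation marks inside INCLUDE and EXCLUDE may need escaping; placing the parameters in a parameter file avoids this (a sketch using a hypothetical file name include.par):

schemas=CORPORATE
include=TABLE:"IN ('STOCKVALUE', 'SALESINVOICE')"
directory=EXPORTDR
dumpfile=backupdumpfilename.dmp
logfile=logfilenameforimp.log

impdp SYSTEM/admin@db parfile=include.par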

Full Database Export and Import

  • Below is the syntax to export and import the full database.


expdp system/admin@orcl full=y directory=EXPORTDR dumpfile=fulldatabase.dmp logfile=fulldatabase.log

impdp system/admin@orcl full=y directory=EXPORTDR dumpfile=fulldatabase.dmp logfile=fulldatabase.log

Database Export and Import with Network_Link

Below are the steps for import and export with NETWORK_LINK. Make sure the network remains available until the job finishes.

SQL> create public database link "DEV" connect to SYSTEM
identified by "admin"
using 'DEV';

Database link created.
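
The link can be tested before starting the job (a quick sketch):

SQL> SELECT * FROM dual@DEV;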

Import with Network_Link

impdp system/admin@test LOGFILE=15102013_2120-system.txt NETWORK_LINK=DEV schemas=CORPORATE directory=DATA_PUMP_DIR


;;; 
Import: Release 11.2.0.3.0 - Production on Tue Oct 15 21:20:05 2013

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
;;; 
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_IMPORT_SCHEMA_01":  system/********@test LOGFILE=15102013_2120-system.txt NETWORK_LINK=PROD schemas=CORPORATE directory=DATA_PUMP_DIR 
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 46.12 GB
Processing object type SCHEMA_EXPORT/USER
ORA-31684: Object type USER:"CORPORATE" already exists
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TYPE/TYPE_SPEC
Processing object type SCHEMA_EXPORT/DB_LINK
Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE
Processing object type SCHEMA_EXPORT/TABLE/TABLE
. . imported "CORPORATE"."MVW_SALESREGISTER"            21857513 rows
. . imported "CORPORATE"."STOCKVALUE"                   29229785 rows
. . imported "CORPORATE"."MVW_COSTOFGOODS_V2"           27764400 rows
. . imported "CORPORATE"."SALESINVOICEHDR"              12856752 rows
. . imported "CORPORATE"."SALESINVOICEDTL"              27722234 rows
. . imported "CORPORATE"."TB_COSTOFGOODS2_2011"         5650656 rows
. . imported "CORPORATE"."MVW_COSTOFGOODS_V3"           12301907 rows
. . imported "CORPORATE"."TB_COSTOFGOODS2_2012"         12301893 rows
. . imported "CORPORATE"."STOCKBATCHVALUE_V2"           15365374 rows
. . imported "CORPORATE"."MVW_STOCKBATCHVALUE_2012"     12248286 rows
. . imported "CORPORATE"."STOCKBATCHVALUE_2012"         12248288 rows
. . imported "CORPORATE"."SALESSUMMARY"                 5050365 rows
. . imported "CORPORATE"."STOCKBATCHVALUE"              9779245 rows
. . imported "CORPORATE"."MVW_STOCKBATCHVALUE"          9779245 rows
. . imported "CORPORATE"."SALESCLOSINGDTL"              12207626 rows
. . imported "CORPORATE"."STOCKBATCHVALUE_V2_2011"      5596522 rows
. . imported "CORPORATE"."RECVPKTS"                     1991476 rows
Processing object type SCHEMA_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/PACKAGE/PACKAGE_SPEC
Processing object type SCHEMA_EXPORT/FUNCTION/FUNCTION
Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE
Processing object type SCHEMA_EXPORT/PACKAGE/COMPILE_PACKAGE/PACKAGE_SPEC/ALTER_PACKAGE_SPEC
Processing object type SCHEMA_EXPORT/FUNCTION/ALTER_FUNCTION
Processing object type SCHEMA_EXPORT/PROCEDURE/ALTER_PROCEDURE

Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/INDEX/FUNCTIONAL_INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/FUNCTIONAL_INDEX/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/VIEW/VIEW
Processing object type SCHEMA_EXPORT/PACKAGE/PACKAGE_BODY
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/TRIGGER
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/MATERIALIZED_VIEW
Processing object type SCHEMA_EXPORT/JOB
Processing object type SCHEMA_EXPORT/REFRESH_GROUP
Job "SYSTEM"."SYS_IMPORT_SCHEMA_01" completed with 0 error(s) at 00:31:12


Export with Network_Link

expdp system/admin@test LOGFILE=15102013_2120-system.txt NETWORK_LINK=DEV schemas=CORPORATE directory=DATA_PUMP_DIR

;;; 
Export: Release 11.2.0.3.0 - Production on Thu Feb 6 20:00:01 2014

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.
;;; 
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SYSTEM"."SYS_EXPORT_SCHEMA_01":  SYSTEM/********@TEST version=11.1.0 directory=DATA_PUMP_DIR dumpfile=06022014_2000-CORPORATE.dmp logfile=06022014_2000-CORPORATE.txt 
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 39.61 GB
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TYPE/TYPE_SPEC
Processing object type SCHEMA_EXPORT/DB_LINK
Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/PACKAGE/PACKAGE_SPEC
Processing object type SCHEMA_EXPORT/FUNCTION/FUNCTION
Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE
Processing object type SCHEMA_EXPORT/PACKAGE/COMPILE_PACKAGE/PACKAGE_SPEC/ALTER_PACKAGE_SPEC
Processing object type SCHEMA_EXPORT/FUNCTION/ALTER_FUNCTION
Processing object type SCHEMA_EXPORT/PROCEDURE/ALTER_PROCEDURE
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/INDEX/FUNCTIONAL_INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/FUNCTIONAL_INDEX/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/VIEW/VIEW
Processing object type SCHEMA_EXPORT/PACKAGE/PACKAGE_BODY
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/TRIGGER
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type SCHEMA_EXPORT/MATERIALIZED_VIEW
Processing object type SCHEMA_EXPORT/JOB
Processing object type SCHEMA_EXPORT/REFRESH_GROUP
. . exported "CORPORATE"."MVW_COSTOFGOODS_V2"            4.576 GB 31590861 rows
. . exported "CORPORATE"."TB_COSTOFGOODS2_2011"          817.7 MB 5650656 rows
. . exported "CORPORATE"."SALESINVOICEHDR"               1.918 GB 6789553 rows
. . exported "CORPORATE"."MVW_SALES13"                   1.795 GB 13058630 rows
. . exported "CORPORATE"."SALESINVOICEDTL"               1.812 GB 14800698 rows
. . exported "CORPORATE"."TB_COSTOFGOODS2_2012"          1.765 GB 12301893 rows
. . exported "CORPORATE"."STOCKBATCHVALUE_2013"          1.305 GB 13628992 rows
. . exported "CORPORATE"."MVW_STOCKBATCHVALUE"           1.305 GB 13628992 rows
. . exported "CORPORATE"."STOCKBATCHVALUE"               1.305 GB 13628992 rows
. . exported "CORPORATE"."MVW_STOCKBATCHVALUE_2012"      1.171 GB 12248286 rows
. . exported "CORPORATE"."SALESSUMMARY_BKUP30122013"     1.047 GB 5636356 rows
. . exported "CORPORATE"."SALESSUMMARY_TEMP"             1.029 GB 5535489 rows
. . exported "CORPORATE"."SALESCLOSINGDTL"               437.1 MB 6599208 rows
. . exported "CORPORATE"."RECVPKTS"                      272.1 MB 2759249 rows
. . exported "CORPORATE"."MVW_SALESSUMMARY"              505.7 MB 5804784 rows
. . exported "CORPORATE"."MVW_STOCKVALUESUM"             444.1 MB 2720793 rows
. . exported "CORPORATE"."STOCKSUMMARY"                  404.1 MB 7191161 rows
. . exported "CORPORATE"."STOCKSUMMARY_OLD"              375.0 MB 6676708 rows
. . exported "CORPORATE"."MVW_STOCKSUMMARY"              339.4 MB 6653335 rows
. . exported "CORPORATE"."MVW_PENDINGCRFSTATUS"          304.3 MB  506551 rows
. . exported "CORPORATE"."SENDPKTS"                      85.88 MB  975402 rows
. . exported "CORPORATE"."ACCOUNTSHDR"                   123.6 MB  465284 rows
. . exported "CORPORATE"."TB_COSTOFGOODS2_2013"          268.4 MB 1930988 rows
. . exported "CORPORATE"."STOCKVALUE":"P0_2014"          221.8 MB 1398622 rows
. . exported "CORPORATE"."PRICELIST"                     217.7 MB 1370344 rows
.
.
.
.
.
.

Master table "CORPORATE"."SYS_EXPORT_SCHEMA_01" successfully loaded/unloaded
******************************************************************************
Dump file set for CORPORATE.SYS_EXPORT_SCHEMA_01 is:
Job "CORPORATE"."SYS_EXPORT_SCHEMA_01" successfully completed at 20:33:17




Please share your valuable feedback...

Monday, February 3, 2014

ORA-00322: log 3 of thread 1 is not current copy

Below are the steps to resolve the issue.

Error details:

SQL> startup
ORACLE instance started.

Total System Global Area 701046856 bytes
Fixed Size 454728 bytes
Variable Size 377487360 bytes
Database Buffers 321912832 bytes
Redo Buffers 1191936 bytes
Database mounted.
Errors in file e:\app\cabinda02\diag\rdbms\orcl\orcl\trace\orcl_m000_3660.trc:
ORA-00322: log 3 of thread 1 is not current copy
ORA-00312: online log 3 thread 1: 'E:\APP\CABINDA02\ORADATA\ORCL\REDO03.LOG'


Solution:

SQL> shut immediate

SQL> startup mount

ORACLE instance started.

Total System Global Area 2538741760 bytes
Fixed Size                  2257872 bytes
Variable Size             889195568 bytes
Database Buffers         1627389952 bytes
Redo Buffers               19898368 bytes

Database mounted.

SQL> recover database using backup controlfile;

Apply all the redo log files (specify the full path of each redo log file) until media recovery succeeds. After media recovery completes, open the database with the RESETLOGS option.
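
An illustrative exchange (the path is the online log reported in the error above; output abbreviated):

Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
E:\APP\CABINDA02\ORADATA\ORCL\REDO03.LOG
Log applied.
Media recovery complete.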

SQL> ALTER DATABASE OPEN RESETLOGS;




Please share your feedback.


Friday, January 31, 2014

Database Manual Creation.


           Dear friends, follow the steps below to create a database manually, that is, without using DBCA. Follow each step in order, and change the file-system paths as per your requirements.

           Install the Oracle Home.
For installation of the Oracle Home, follow the steps below.

Ø  Go to the DUMP location and click on setup.exe, maintaining the required directory structure (screenshots omitted).
Ø  Click Next.
Ø  Click Next.
Ø  Click Install.
Ø  Wait for the installation to finish.
Ø  Click Exit.

Now the Oracle Home is ready to use. Based on this Oracle Home we are going to create the database manually. Below are the steps for manual database creation.

Manual DB Creation

Don't forget to open the cmd prompt as Administrator.
Click on Start ->cmd ->Run as Administrator

ü  C:\>set ORACLE_HOME=E:\app\Administrator\product\11.1.0\db_1
ü  C:\>set PATH=%ORACLE_HOME%\bin;%PATH%
ü  C:\>set ORACLE_SID=ORACLE

Create the required DUMP directories:

ü  C:\>mkdir E:\app\Administrator\admin\adump
ü  C:\>mkdir E:\app\Administrator\admin\dpdump
ü  C:\>mkdir E:\app\Administrator\admin\pfile
ü   C:\>mkdir E:\app\Administrator\diag
ü   C:\>mkdir E:\app\Administrator\flash_recovery_area
ü   C:\>mkdir E:\app\Administrator\oradata
ü  C:\>mkdir E:\app\Administrator\oradata\ORACLE




=========================================
Create the parameter file (Pfile)
=========================================
Below are the minimum parameters required for database creation. Change compatible='11.1.0' to match your Oracle version; in this scenario the Oracle version is 11.1.0. Make sure the parameter file is named initoracle.ora.


db_name='ORACLE'
db_block_size=8192
memory_target=500m
processes=100
open_cursors=300
remote_login_passwordfile='EXCLUSIVE'
undo_tablespace='UNDOTBS1'
compatible ='11.1.0'
audit_trail ='db'
db_recovery_file_dest_size=5g
db_recovery_file_dest='E:\APP\ADMINISTRATOR\flash_recovery_area'
audit_file_dest='E:\APP\ADMINISTRATOR\admin\adump'
diagnostic_dest='E:\APP\ADMINISTRATOR\diag'
control_files = ('E:\APP\ADMINISTRATOR\oradata\control1.ctl', 'E:\APP\ADMINISTRATOR\oradata\control2.ctl', 'E:\APP\ADMINISTRATOR\oradata\control3.ctl')

========================================================================

=========================================
Create a Windows service
=========================================


C:\>oradim -new -sid ORACLE -startmode auto

Instance created.

C:\>sc query oracleserviceORACLE




  •  Connect to the instance and create the SPFILE

C:\>sqlplus / as sysdba



SQL> create spfile from pfile='E:\app\Administrator\admin\pfile\initoracle.ora';

SQL> startup nomount;

ORACLE instance started.


Total System Global Area  523108352 bytes
Fixed Size                  1375704 bytes
Variable Size             314573352 bytes
Database Buffers          201326592 bytes
Redo Buffers                5832704 bytes

=========================================
Execute the CREATE DATABASE Command
=========================================

CREATE DATABASE oracle
    USER sys IDENTIFIED BY cloud12c@
    USER system IDENTIFIED BY cloud12c@
    MAXLOGFILES 5
    MAXLOGMEMBERS 3
    MAXDATAFILES 200
    MAXINSTANCES 1
    MAXLOGHISTORY 500
LOGFILE
GROUP 1 ('E:\app\Administrator\oradata\oracle\redo01.log','E:\app\Administrator\oradata\oracle\redo02.log') SIZE 50M,
GROUP 2 ('E:\app\Administrator\oradata\oracle\redo03.log','E:\app\Administrator\oradata\oracle\redo04.log') SIZE 50M,
GROUP 3 ('E:\app\Administrator\oradata\oracle\redo05.log','E:\app\Administrator\oradata\oracle\redo06.log') SIZE 50M
DATAFILE 'E:\app\Administrator\oradata\oracle\system01.dbf' SIZE 300M EXTENT MANAGEMENT LOCAL
SYSAUX DATAFILE 'E:\app\Administrator\oradata\oracle\sysaux01.dbf' SIZE 200M
UNDO TABLESPACE UNDOTBS1 DATAFILE 'E:\app\Administrator\oradata\oracle\undotbs01.dbf' SIZE 300M AUTOEXTEND OFF
DEFAULT TEMPORARY TABLESPACE TEMP TEMPFILE 'E:\app\Administrator\oradata\oracle\temp01.dbf' SIZE 200M REUSE AUTOEXTEND OFF
CHARACTER SET WE8ISO8859P1
NATIONAL CHARACTER SET UTF8;
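
After the statement completes, the result can be verified (a quick sketch):

SQL> SELECT name FROM v$database;
SQL> SELECT instance_name, status FROM v$instance;

STATUS should show OPEN.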


========================
Create data dictionary objects
========================


In SQL*Plus, ? expands to the Oracle Home on all platforms (on Windows, $ORACLE_HOME is not expanded), and pupbld.sql is run as SYSTEM using the password set in the CREATE DATABASE statement:

SQL> @?/rdbms/admin/catalog.sql

SQL> @?/rdbms/admin/catproc.sql

SQL> connect system/"cloud12c@"

SQL> @?/sqlplus/admin/pupbld.sql



Please share your valuable feedback....






Create New Database Using DBCA With Silent Option


Below are the prerequisite steps for creating a new database using DBCA in silent mode.

  • Make sure there is no existing database entry in the /etc/oratab file for the SID you intend to create.
  • Make sure there is no initSID.ora under $ORACLE_HOME/dbs.
  • Make sure there are no existing Listener and TNS entries for the database.
  • Make sure all the required directories have been created.
  • Construct the dbca command line (see the sketch below).
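
A typical silent-mode command line looks like the following (a sketch based on the stock General_Purpose template; the global database name, SID, passwords, and memory figure are placeholders to adjust):

dbca -silent -createDatabase -templateName General_Purpose.dbc -gdbname ORCL -sid ORCL -responseFile NO_VALUE -characterSet WE8ISO8859P1 -sysPassword change_me -systemPassword change_me -totalMemory 1024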