sys.utl_recomp.recomp_parallel(20); is failing with "ORA-00955: name is already used by an existing object"
Issue
The utl_recomp package errors out while trying to compile invalid objects:
SQL> exec sys.utl_recomp.recomp_parallel(20);
BEGIN SYS.UTL_RECOMP.RECOMP_PARALLEL;END;
*
ERROR at line 1:
ORA-00955: name is already used by an existing object
ORA-06512: at "SYS.UTL_RECOMP", line 662
ORA-06512: at "SYS.UTL_RECOMP", line 715
ORA-06512: at line 1
Cause
Bug 9881143: an index is left behind by a previous failed execution of utl_recomp. The following commands can be used to generate a trace file to verify:
alter session set events '955 trace name errorstack level 3';
exec sys.utl_recomp.recomp_parallel(20);
Looking in the trace file will show the problem index.
*** 2010-12-16 12:42:24.026
ksedmp: internal or fatal error
ORA-00955: name is already used by an existing object
Current SQL statement for this session:
CREATE INDEX utl_recomp_comp_idx1 ON utl_recomp_compiled(obj#) NOPARALLEL
Solution
Drop the index utl_recomp_comp_idx1 left behind by utl_recomp and then rerun the package.
sqlplus / as sysdba
drop index utl_recomp_comp_idx1;
exec sys.utl_recomp.recomp_parallel(20);
Bug:9881143 is fixed in 11.1.0.6.
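To confirm that the leftover index from the trace actually exists (and is owned by SYS) before dropping it, a quick check along these lines can be used (a sketch; the index name is taken from the trace above):
SQL> select owner, index_name, table_name, status from dba_indexes where index_name = 'UTL_RECOMP_COMP_IDX1';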
Ref: Error Running Utl_Recomp.Recomp_Serial [ID 1274504.1]
How to map shared memory and semaphore IDs in Unix to an Oracle instance and clean them up
How to map a shared memory segment ID in Unix to an Oracle database:
eg: ps -ef | grep pmon | grep -v grep
oracle 23398 1 0 2011 ? 00:06:05 ora_pmon_DB1
oracle 24488 1 0 Jan10 ? 00:01:51 ora_pmon_DB2
oracle 31682 1 0 Jan23 ? 00:00:23 ora_pmon_DB3
ipcs -map
------ Shared Memory Creator/Last-op --------
shmid owner cpid lpid
80642052 oracle 31653 32595
79790085 oracle 23371 29972 <-- this shmid appears in the DB1 trace file below
80510982 oracle 24226 30222
Each shmid belongs to one of these databases; the next step is to find out which shmid belongs to which database.
1. Login to DB1
2. sqlplus '/ as sysdba'
SQL> oradebug setmypid
Statement processed.
SQL> oradebug ipc
Information written to trace file.
3. Go to the dump file location (user_dump_dest) and open the most recently generated trace file
Dump file /oracle/admin/DB1/udump/db1_ora_15433.trc
Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bit Production
With the Partitioning, Real Application Clusters, Oracle Label Security, OLAP,
Data Mining Scoring Engine and Real Application Testing options
ORACLE_HOME = /oracle/product/10.2.0.5/DB1
System name: Linux
Node name: db1-linux
Release: 2.6.18-238.9.1.el5
Version: #1 SMP Fri Mar 18 12:42:39 EDT 2011
Machine: x86_64
Instance name: DB1
Redo thread mounted by this instance: 1
Oracle process number: 33
Unix process pid: 15433, image: oracle@db1-linux (TNS V1-V3)
*** 2012-01-24 12:51:31.038
*** ACTION NAME:() 2012-01-24 12:51:31.038
*** MODULE NAME:(sqlplus@ldb1-linux (TNS V1-V3)) 2012-01-24 12:51:31.038
*** SERVICE NAME:(SYS$USERS) 2012-01-24 12:51:31.038
*** SESSION ID:(2769.41080) 2012-01-24 12:51:31.038
Dump of unix-generic skgm context
areaflags 000000e7
realmflags 0000000f
mapsize 00000800
protectsize 00001000
lcmsize 00001000
seglen 00200000
largestsize 0000001f7d2fb800
smallestsize 0000000001000000
stacklimit 0x7fffd5de24c0
stackdir -1
mode 640
magic acc01ade
Handle: 0x1965f470 `/oracle/product/10.2.0.5/DB1'
Dump of unix-generic realm handle `/oracle/product/10.2.0.5/DB1', flags = 00000000
Area #0 `Fixed Size' containing Subareas 0-0
Total size 0000000000209ab8 Minimum Subarea size 00000000
Area Subarea Shmid Stable Addr Actual Addr
0 0 79790085 0x00000060000000 0x00000060000000
Subarea size Segment size
000000000020a000 0000000540200000
Area #1 `Variable Size' containing Subareas 2-2
4. Map the Shmid shown in the trace (79790085) to the shmid in the ipcs output above; that shared memory segment belongs to DB1.
Clean up the leftover shared memory segment using the ipcrm command:
ipcrm -m 79790085
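Note: instead of hunting for the most recent file in udump, SQL*Plus can usually print the exact trace file name for the current session with oradebug tracefile_name, for example:
SQL> oradebug setmypid
SQL> oradebug ipc
SQL> oradebug tracefile_name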
Oracle 10g Dataguard Best Practice and setup steps
How it works:
As the primary database processes transactions, the ARCH process on the primary database is configured to transmit the redo data to the standby. On the standby, one or more RFS processes receive the redo data and the MRP process applies it to the standby database. The primary database also has the Fetch Archive Log (FAL) mechanism, a client-server mechanism for transmitting archived logs to the standby after a communication loss between the primary and standby, providing automatic gap resolution and resynchronization.
Configuration Steps:
This section provides step-by-step instructions on how to setup 10g Dataguard. Additional information has been provided as notes to elaborate or clarify a point. All steps shown were tested on linux.
Primary host : prod-lnx-01
Standby host : stdby-linx-01
ORACLE_SID=SILVER
Kernel :Linux x86 2.4.21-27.EL (CEL 5)
Service names : SILVER_primary, SILVER_stdby
Initial Setup:
Initialization Parameters:
Primary database init.ora:
instance_name = SILVER
DB_UNIQUE_NAME = 'SILVER_primary'
_LOG_ARCHIVE_CALLOUT='LOCAL_FIRST=TRUE'
##COMMON TO BOTH PRIMARY AND STANDBY ROLES
log_archive_config ='DG_CONFIG=(SILVER_primary, SILVER_stdby)'
log_archive_dest_1='LOCATION=/oracle/archive/SILVER/SILVER.arch VALID_FOR=(ALL_LOGFILES, ALL_ROLES) DB_UNIQUE_NAME=SILVER_primary'
log_archive_dest_2='SERVICE=SILVER_stdby ARCH VALID_FOR=(ONLINE_LOGFILES, PRIMARY_ROLE) DB_UNIQUE_NAME=SILVER_stdby max_connections=4 delay=30'
log_archive_dest_state_1=enable
log_archive_dest_state_2=enable
log_archive_max_processes=20
log_archive_format='%t_%s_%r.dbf'
remote_login_passwordfile=EXCLUSIVE
#SPECIFIC TO STANDBY ROLE
FAL_CLIENT=SILVER_primary
FAL_SERVER=SILVER_stdby
standby_archive_dest=/oracle/archive/SILVER/SILVER.arch
standby_file_management=auto
NOTE: Values for log_archive_max_processes, max_connections and delay are just recommended values and can be changed as applicable to the env based on the amount of generated redo. You can refer to the test cases to get some idea on what the values should be.
Standby database init.ora:
instance_name = SILVER
DB_UNIQUE_NAME = 'SILVER_stdby'
log_archive_config ='DG_CONFIG=(SILVER_primary, SILVER_stdby)'
log_archive_dest_1='LOCATION=/oracle/archive/SILVER/SILVER.arch VALID_FOR=(ALL_LOGFILES, ALL_ROLES) DB_UNIQUE_NAME=SILVER_stdby'
log_archive_dest_2='SERVICE=SILVER_primary VALID_FOR=(ONLINE_LOGFILES, PRIMARY_ROLE) DB_UNIQUE_NAME=SILVER_primary max_connections=4 delay=30'
log_archive_dest_state_1=enable
log_archive_dest_state_2=enable
log_archive_max_processes=20
log_archive_format='%t_%s_%r.dbf'
remote_login_passwordfile=EXCLUSIVE
#SPECIFIC TO STANDBY ROLE
FAL_CLIENT='SILVER_stdby'
FAL_SERVER='SILVER_primary'
standby_archive_dest='/oracle/archive/SILVER/SILVER.arch'
standby_file_management='auto'
# SPECIFIC TO RECOVERY
parallel_max_servers=16
parallel_execution_message_size=16384
recovery_parallelism=4
Note:
Create just one initialization parameter file containing the parameters for both roles (primary/standby). Only the parameters relevant to the current role are used; for example, the FAL_* parameters take effect only when the database is in the standby role. For log_archive_dest_n, the VALID_FOR attribute differentiates the roles; if not specified, the default is (ALL_LOGFILES,ALL_ROLES). Using VALID_FOR allows the same initialization parameter file to be used for both the primary and standby roles, so a failover requires no parameter changes.
Note:
- The recovery-specific parameters should be set only on the standby database; they improve recovery performance and are not recommended on the primary.
- The max_connections setting has a known issue; refer to the issues mentioned at the start of the document.
- Settings such as log_archive_max_processes and delay can vary based on the environment's needs.
Configure Net Services:
Note:
The recommendation is to use ports between 1910 and 1920 for the Data Guard listener, because asynchronous traffic over the WAN must be marked for the Scavenger class and these ports are included in the Scavenger-class QoS list.
What is Scavenger class?
Scavenger class traffic is defined as a limited percentage of the total bandwidth available. When bandwidth is available, the Scavenger class can exceed its allocation and is in no way policed or shaped. The QoS policies kick in only when no bandwidth is available, and in that scenario the traffic is limited to its allocation (which isn't often). In other words, during periods of congestion the Scavenger-class policies kick in and traffic marked for this class gets limited bandwidth, packets may be dropped, etc., before the critical transactional data is impacted.
Verify that the TCP send/receive buffer sizes are set to 4665000 (for improved efficiency in log shipping).
cat /proc/sys/net/core/wmem_max
cat /proc/sys/net/core/rmem_max
Add SDU = 32768 in tnsnames.ora on both the Primary and the DR.
Set up a separate listener for Data Guard.
On Primary database host:
/etc/listener.ora
LISTENER_SILVER_DG =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST =erpdb-prd-06)(PORT = 1523))
)
)
)
SID_LIST_LISTENER_SILVER_DG =
(SID_LIST =
(SID_DESC =
(SDU=32768)
(SID_NAME = SILVER)
(ORACLE_HOME = /oracle/product/10.2.0.2-64)
)
)
/etc/tnsnames.ora
SILVER_stdby =
(DESCRIPTION =
(SDU = 32768)
(ADDRESS_LIST =
(ADDRESS =
(PROTOCOL = TCP)
(HOST =hpquadopt)
(PORT = 1523)
)
)
(CONNECT_DATA =
(SERVICE_NAME = SILVER)
)
)
On standby host:
/etc/listener.ora
LISTENER_SILVER_DG =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST =hpquadopt)(PORT = 1523))
)
)
)
SID_LIST_LISTENER_SILVER_DG =
(SID_LIST =
(SID_DESC =
(SDU=32768)
(SID_NAME = SILVER)
(ORACLE_HOME = /oracle/product/10.2.0.2-64)
)
)
SILVER_primary =
(DESCRIPTION =
(SDU = 32768)
(ADDRESS_LIST =
(ADDRESS =
(COMMUNITY = portal.com)
(PROTOCOL = TCP)
(HOST =erpdb-prd-06)
(PORT = 1523)
)
)
(CONNECT_DATA =
(SERVICE_NAME = SILVER)
)
)
Configure Primary Database
Ensure initialization parameters and net services configuration in place (refer initial setup)
Shutdown the Database
Setup Remote OS authentication
Create the password file if not in place already
orapwd file=/oracle/admin/<SID>/pfile/orapw<SID> password=<sys password>
Ensure sym link is created from $ORACLE_HOME/dbs to /oracle/admin/<SID>/pfile/orapw<SID>
Copy the orapw<sid> from primary to same location on standby
Ensure the Database is in archivelog mode
Startup the Database
Configure Standby Database
1. Ensure initialization parameters and net services configuration in place (refer initial setup)
2. Add entry in /etc/oratab. Put automatic startup as N
3. Copy the datafiles and required archives from cold/hot backup to DR server
4. Create the standby controlfile on Primary host
SQL>ALTER DATABASE CREATE STANDBY CONTROLFILE AS '/tmp/stdby.ctl';
Copy the stdby.ctl file to the standby host and make sure the control_files parameter reflects the correct name/path
Startup the standby database
SQL>STARTUP MOUNT
(Optional) Create standby redo logs (SRLs). The number of SRLs should be one more than the number of online redo log groups on the primary, and the log size should be the same as on the primary.
SQL>ALTER DATABASE ADD STANDBY LOGFILE GROUP 4 size 50 M;
SQL>ALTER DATABASE ADD STANDBY LOGFILE GROUP 5 size 50 M;
SQL>ALTER DATABASE ADD STANDBY LOGFILE GROUP 6 size 50 M;
Note:
The archiver (ARCn) process on the primary database can optionally transmit redo data to remote standby redo logs (SRLs) if SRLs are created. In a failover situation you will have access to the redo already written to the standby redo logs rather than having to wait until it is completely archived (as in the no-SRL case), which reduces the risk of losing data.
MAXLOGFILES defaults to 16. To create more online redo logs plus standby redo logs than this, recreate the control file with a larger MAXLOGFILES value to accommodate the total number.
The POC testing was done with and without SRLs; no issues were noticed with either option. But if you cannot afford to lose any data in a failover and you need the standby recovered in real time, you have to use SRLs.
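Once the SRLs are added, a quick query along these lines (a sketch) confirms their size and status on the standby:
SQL> SELECT GROUP#, THREAD#, BYTES/1024/1024 MB, STATUS FROM V$STANDBY_LOG;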
Verify log shipping
Check to see if the redo from the primary is being shipped to the standby
On the primary do the following:
SQL>ALTER SYSTEM SWITCH LOGFILE;
SQL>SELECT SEQUENCE#, FIRST_TIME, NEXT_TIME, ARCHIVED, DEST_ID FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;
SEQUENCE# FIRST_TIME NEXT_TIME ARC DEST_ID
---------- ----------- ----------- --- ----------
1243 01-OCT-2006 07-OCT-2006 YES 1
1243 01-OCT-2006 07-OCT-2006 YES 2
1244 01-OCT-2006 07-OCT-2006 YES 1
1244 01-OCT-2006 07-OCT-2006 YES 2
Note
In this configuration, the ARCH process is used for log_archive_dest_2 and SRLs are created on the standby. On the primary database, after the ARC0 process successfully archives the local online redo log to the local destination (LOG_ARCHIVE_DEST_1), the ARC1 process transmits redo from the local archived redo log files (instead of the online redo log files) to the remote standby destination (LOG_ARCHIVE_DEST_2). On the remote destination, the remote file server (RFS) process writes the redo data into a standby redo log file, which is then archived into an archived redo log file on the standby.
Verify log apply
Start managed recovery
SQL>ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT;
Verify if the archived logs are applied successfully on the standby
SQL>SELECT SEQUENCE#,APPLIED FROM V$ARCHIVED_LOG ORDER BY SEQUENCE#;
Note:
In our setup, the log apply services on the standby are delayed by 90 minutes. If real-time apply is needed, the log apply services can apply redo data as it is received, without waiting for the current standby redo log file to be archived. On the standby, cancel the current managed recovery and place it back in recovery with real-time apply:
SQL>ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
SQL>ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;
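To confirm that managed recovery is running, a query like the following can be run on the standby (a sketch; with real-time apply the MRP0 process typically shows APPLYING_LOG):
SQL> SELECT PROCESS, STATUS, THREAD#, SEQUENCE#, BLOCK# FROM V$MANAGED_STANDBY;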
Detailed and simple steps for Schema Migration using exp/imp,pipe,gz between two databases
Start of Migration
On the Source Database
==================
SQL> select name,open_mode from v$database;
NAME OPEN_MODE
--------- ----------
SOURCE READ WRITE
1. SQL> select owner,sum(bytes)/1024/1024 from dba_segments where owner in ('SCHEMA1','SCHEMA2','SCHEMA3','SCHEMA4','SCHEMA5') group by owner;
2. SQL> select tablespace_name,sum(bytes)/1024/1024 from dba_segments where owner in ('SCHEMA1','SCHEMA2','SCHEMA3','SCHEMA4','SCHEMA5') group by tablespace_name;
3. SQL> select default_tablespace,username from dba_users where username in ('SCHEMA1','SCHEMA2','SCHEMA3','SCHEMA4','SCHEMA5');
4. SQL> select owner,status,object_type,count(*) from dba_objects where owner in ('SCHEMA1','SCHEMA2','SCHEMA3','SCHEMA4','SCHEMA5') group by object_type,owner,status;
5. SQL> select object_type,count(*) from dba_objects where owner in ('SCHEMA1','SCHEMA2','SCHEMA3','SCHEMA4','SCHEMA5') group by object_type;
6. SQL> select count(*),status from dba_objects where owner in ('SCHEMA1','SCHEMA2','SCHEMA3','SCHEMA4','SCHEMA5') group by status;
7. SQL> select object_type,count(*),status from dba_objects where owner in ('SCHEMA1','SCHEMA2','SCHEMA3','SCHEMA4','SCHEMA5') group by status,object_type;
8. SQL> select tablespace_name,sum(bytes)/1024/1024/1000 from dba_data_files where tablespace_name in('TS1','TS2','TS3','TS4','TS5','TS6','TS7') group by tablespace_name;
9. Export Schemas:
vi exp_SOURCE_schemas.sh
#!/bin/ksh
export NLS_LANG=AMERICAN_AMERICA.UTF8
mknod /tmp/exp_pipe1 p
gzip -cNf < /tmp/exp_pipe1 > SOURCE_schemas.dmp.gz &
exp system/sysmic32 file=/tmp/exp_pipe1 buffer=2097152 log=exp_SOURCE_schemas.log statistics=none owner=SCHEMA1,SCHEMA2,SCHEMA3,SCHEMA4,SCHEMA5
rm -f /tmp/exp_pipe1
unset NLS_LANG
10. nohup ./exp_SOURCE_schemas.sh > exp_SOURCE_schemas.out 2>&1 &
11. Copy the SOURCE_schemas.dmp.gz to Target Database server.
scp -p SOURCE_schemas.dmp.gz ganesh@target_server:/oracle/export
On the Target Database
==================
SQL> select name,open_mode from v$database;
NAME OPEN_MODE
--------- ----------
TARGET READ WRITE
1. Create tablespaces on the target as per the tablespace names and sizes you got on the source database (Steps 2 and 8)
2. Create users on Target database
Run the following script on the source database and execute the resultant script on the Target database
set linesize 300
set pagesize 300
spool create_users.sql
select 'create user '||username||' identified by values '''||password||''' default tablespace '||default_tablespace||' temporary tablespace temp profile '||profile||';' from dba_users where username in ('SCHEMA1','SCHEMA2','SCHEMA3','SCHEMA4','SCHEMA5');
spool off
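Note: so that the spooled file contains only the generated statements, it may also help to turn off headings, feedback and substitution echoing before spooling (this applies to the other spool scripts below as well); for example:
set heading off
set feedback off
set verify off
set trimspool on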
3. Grant Quotas on Target database
Run the following script on the source database and execute the resultant script on the Target database
spool create_tablespace_quota.sql
select 'alter user '||username||' quota unlimited on '||tablespace_name||';' from dba_ts_quotas where MAX_BYTES='-1' and username in ('SCHEMA1','SCHEMA2','SCHEMA3','SCHEMA4','SCHEMA5');
select 'alter user '||username||' quota '||max_bytes||' on '||tablespace_name||';' from dba_ts_quotas where MAX_BYTES!='-1' and username in ('SCHEMA1','SCHEMA2','SCHEMA3','SCHEMA4','SCHEMA5');
spool off
4. Grant Roles on Target database
Run the following script on the source database and execute the resultant script on the Target database
spool create_grant_roles.sql
select 'grant '||GRANTED_ROLE||' to '||GRANTEE||';' from dba_role_privs where ADMIN_OPTION='NO' and grantee in ('SCHEMA1','SCHEMA2','SCHEMA3','SCHEMA4','SCHEMA5');
select 'grant '||GRANTED_ROLE||' to '||GRANTEE||' with admin option;' from dba_role_privs where ADMIN_OPTION='YES' and grantee in ('SCHEMA1','SCHEMA2','SCHEMA3','SCHEMA4','SCHEMA5');
spool off
5. Grant System privs on Target database
Run the following script on the source database and execute the resultant script on the Target database
spool create_sys_privs.sql
select 'grant '||PRIVILEGE||' to '||GRANTEE||';' from dba_sys_privs where ADMIN_OPTION='NO' and grantee in ('SCHEMA1','SCHEMA2','SCHEMA3','SCHEMA4','SCHEMA5');
select 'grant '||PRIVILEGE||' to '||GRANTEE||' with admin option;' from dba_sys_privs where ADMIN_OPTION='YES' and grantee in ('SCHEMA1','SCHEMA2','SCHEMA3','SCHEMA4','SCHEMA5');
spool off
6. Import Schemas
vi imp_TARGET_schemas.sh
#!/bin/ksh
export NLS_LANG=AMERICAN_AMERICA.AL32UTF8
mknod /tmp/imp_pipe1 p
gunzip -c SOURCE_schemas.dmp.gz >/tmp/imp_pipe1 &
imp system/manager99 file=/tmp/imp_pipe1 buffer=20971520 log=imp_TARGET_schemas.log full=y
rm -f /tmp/imp_pipe1
unset NLS_LANG
7. Verify the logs
8. Re-run Steps 1 to 7 (the verification queries you ran on the source database) on the target and compare the results.
9. Grant Table privs on the Target
Run the following script on the source database and execute the resultant script on the Target database
spool create_tab_privs.sql
select 'grant '||privilege||' on '||owner||'.'||table_name||' to '||grantee||';' from dba_tab_privs where GRANTABLE='NO' and grantee in ('SCHEMA1','SCHEMA2','SCHEMA3','SCHEMA4','SCHEMA5');
select 'grant '||privilege||' on '||owner||'.'||table_name||' to '||grantee||' with grant option;' from dba_tab_privs where GRANTABLE='YES' and grantee in ('SCHEMA1','SCHEMA2','SCHEMA3','SCHEMA4','SCHEMA5');
spool off
10. EXEC DBMS_UTILITY.COMPILE_SCHEMA ('SCHEMA1'); -> Do this for every schema that you have imported
11. exec dbms_stats.gather_schema_stats('SCHEMA1', cascade=>TRUE); -> Do this for every schema that you have imported
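As a final check, a query along these lines (a sketch) lists any objects that remain invalid in the imported schemas after compilation:
SQL> select owner, object_type, object_name from dba_objects where status='INVALID' and owner in ('SCHEMA1','SCHEMA2','SCHEMA3','SCHEMA4','SCHEMA5');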
End of Migration
How to drop and recreate TEMP Tablespace in Oracle 9i/10g/11g
1. Create Temporary Tablespace Temp
CREATE TEMPORARY TABLESPACE TEMP2 TEMPFILE '/u01/app/oradata/temp01' SIZE 2000M, '/u01/app/oradata/temp02' SIZE 2000M;
2. Move Default Database temp tablespace
ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp2;
3. Make sure No sessions are using your Old Temp tablespace
a. Find Session Number from V$SORT_USAGE:
SELECT USERNAME, SESSION_NUM, SESSION_ADDR FROM V$SORT_USAGE;
b. Find Session ID from V$SESSION:
If the result set contains any rows, then your next step is to find the SID from the V$SESSION view. You can find the session ID by using SESSION_NUM or SESSION_ADDR from the previous result set.
SELECT SID, SERIAL#, STATUS FROM V$SESSION WHERE SERIAL#=SESSION_NUM;
OR
SELECT SID, SERIAL#, STATUS FROM V$SESSION WHERE SADDR=SESSION_ADDR;
c. Kill Session:
Now kill the session with IMMEDIATE.
ALTER SYSTEM KILL SESSION 'SID,SERIAL#' IMMEDIATE;
4. Drop temp tablespace
DROP TABLESPACE temp INCLUDING CONTENTS AND DATAFILES;
5. Recreate Tablespace Temp
CREATE TEMPORARY TABLESPACE TEMP TEMPFILE '/u01/app/temp/temp01' SIZE 2000M;
6. Move the default temporary tablespace back to TEMP
ALTER DATABASE DEFAULT TEMPORARY TABLESPACE temp;
7. Drop the intermediate temporary tablespace temp2
DROP TABLESPACE temp2 INCLUDING CONTENTS AND DATAFILES;
There is no need to shut down the database to drop and recreate the temp tablespace. If something happens to the temp tablespace (e.g. a crash or corruption), the Oracle database will ignore the error, but DML (insert, update, delete) and SELECT queries will suffer.
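To verify the end state, queries along these lines (a sketch) show the current default temporary tablespace and its tempfiles:
SQL> select property_value from database_properties where property_name = 'DEFAULT_TEMP_TABLESPACE';
SQL> select file_name, tablespace_name, bytes/1024/1024 MB from dba_temp_files;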
How to drop and recreate UNDO Tablespace in Oracle 9i/10g/11g
Solution:
1. Determine the size of your undo tablespace
SQL> select sum(bytes)/1024/1024/1024 from dba_data_files where tablespace_name='UNDOTBS1';
SUM(BYTES)/1024/1024/1024
-------------------------
12.09375
2. Create a new undo tablespace of the same size (larger or smaller) depending on your database requirements.
SQL> create undo tablespace UNDOTBS2 datafile 'D:\ORACLE\PRODUCT\10.2.0\ORADATA\ORCL2\UNDOTBS02.DBF' size 400M;
Tablespace created.
3. Edit your init.ora file and change the parameter "undo_tablespace=UNDOTBS2" so it points to the newly created tablespace.
Change undo_tablespace=UNDOTBS2
Change undo_management=MANUAL
Setting undo_management now ensures the old rollback segments can be taken offline and avoids editing the pfile and restarting the instance again in Step 7.
4. Arrange a time when the database can be shutdown cleanly and perform a shutdown immediate.
5. Startup the database
6. Confirm the new tablespace is in use:
SQL> show parameter undo_tablespace
NAME TYPE VALUE
------------- ----------- ------------
undo_tablespace string UNDOTBS2
7. Check the status of the undo segments and determine if all the segments in the old undo tablespace are offline. The segments in the new tablespace may also show offline.
SQL>select owner, segment_name, tablespace_name, status from dba_rollback_segs order by 3;
OWNER SEGMENT_NAME TABLESPACE_NAME STATUS
------ --------------------------- ------------------------------ -----------
PUBLIC _SYSSMU3$ UNDOTBS1 OFFLINE
PUBLIC _SYSSMU2$ UNDOTBS1 OFFLINE
PUBLIC _SYSSMU19$ UNDOTBS2 OFFLINE
....etc.
If the old segments are online, then they must be taken offline:
SQL>alter rollback segment "_SYSSMU3$" offline;
SQL>alter rollback segment "_SYSSMU2$" offline;
This should be executed for all online rollback segments in the old tablespace.
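To generate the offline statements for every online segment in the old undo tablespace in one go, a helper query along these lines can be used (a sketch, in the same spirit as the grant generators elsewhere in this document):
SQL> select 'alter rollback segment "'||segment_name||'" offline;' from dba_rollback_segs where tablespace_name='UNDOTBS1' and status='ONLINE';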
8. Provided all the segments in the old undo tablespace are offline, you can now drop the old undo tablespace:
SQL>drop tablespace UNDOTBS1 including contents and datafiles;
Tablespace dropped.
9. Edit your init.ora file and make the following changes:
undo_management='AUTO'
undo_tablespace='UNDOTBS2'
10. Shutdown the database (shutdown immediate) and restart it.
11. Monitor the alert log simultaneously during all steps.
Reference Metalink Note: [ID 431652.1]
Backup verification Statements/Steps
sqlplus "/as sysdba"
startup mount;
prompt The following should return zero rows
select count(*) from v$recover_file;
select count(*) from v$recovery_log;
select count(*) from v$recovery_status;
select count(*) from v$recovery_file_status;
select name,status from v$datafile where (name like '%MISS%' or status not in ('ONLINE', 'SYSTEM'))
/
select count(*) from v$backup where status != 'NOT ACTIVE' ;
col checkpoint_change# format 999999999999999
prompt The following should return one distinct number
Select distinct checkpoint_change# from v$datafile ;
select distinct to_char(CHECKPOINT_TIME,'DD-MON-YYYY HH24:MI:SS') from v$datafile_header;
prompt This should return "0" and "8192" as output
select distinct fhsta from x$kcvfh;
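In addition, a check of the datafile headers for errors or fuzziness can be useful before opening the restored database (a sketch):
prompt The following should return zero rows
select file#, status, error, fuzzy from v$datafile_header where error is not null or fuzzy = 'YES';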
Oracle Inventory Creation/Attach/Detach/Clone/Relink Code Tree
1. Point the inventory to the correct location
Make sure the files /etc/oraInst.loc, /var/opt/oracle/oraInst.loc and $ORACLE_HOME/oraInst.loc have the following entries. If not, modify the files and save.
inventory_loc=/oracle/product/oraInventory
inst_group=dba
2. Clone the code tree
+ cd $ORACLE_HOME/oui/bin
+ ./runInstaller -silent -clone -invPtrLoc $ORACLE_HOME/oraInst.loc -ignorePreReq ORACLE_HOME=$ORACLE_HOME ORACLE_HOME_NAME=Ora10204_TEST ORACLE_HOST_NAME=test-01
!!! Please wait until the command is successful. Do not interrupt. !!!
Check for the 'cloning is successful' message; the installer will then display a message to run root.sh. Sometimes it may pause after you see this message; press Enter to get the command prompt back.
Run the following:
+ cp /oracle/product/10.2.0.4//root.sh /oracle/product/10.2.0.4
+sudo /oracle/product/10.2.0.4//root.sh
Accept the default options for the prompts and location of the files.
3. Detach the code tree from oracle central inventory
+ cd $ORACLE_HOME/oui/bin
+./runInstaller -silent -detachHome -local -noClusterEnabled -invPtrLoc $ORACLE_HOME/oraInst.loc -ignorePreReq ORACLE_HOME=$ORACLE_HOME ORACLE_HOME_NAME=Ora10204_TEST
4. Attach the code tree to the Oracle central inventory. This adds the correct host to the inventory.
+ cd $ORACLE_HOME/oui/bin
+./runInstaller -silent -attachHome -local -noClusterEnabled -invPtrLoc $ORACLE_HOME/oraInst.loc -ignorePreReq ORACLE_HOME=$ORACLE_HOME ORACLE_HOME_NAME=Ora10204_TEST CLUSTER_NODES=test-01
5. Relink the code tree
find . -type l -exec ls -l {} \; | grep -i [oldsid] | awk '{print "rm -f "$9";ln -s "$11" "$9}'
Execute the above command to check whether any soft links point to the old environment. If so, edit the generated commands in the output and run them.
Then,
oraenv (enter the SID at the prompt) and check that the environment variables below are set.
Make sure correct values are set for ORACLE_HOME, PATH & LD_LIBRARY_PATH.
Make sure the SHLIB_PATH variable is NOT set.
+ cd $ORACLE_HOME/bin
+ relink all
DMT v/s LMT
Difference between Dictionary managed tablespace (DMT) and Locally managed tablespace (LMT)
Introduction:
The CREATE TABLESPACE command has a new clause introduced in Oracle8i, the "extent management clause", that specifies how the extents of the tablespace are managed. This clause uses one of the following parameters:
- DICTIONARY:
Specifies that the tablespace is managed using dictionary tables. This is the default in Oracle8i.
- LOCAL:
Specifies that the tablespace is locally managed. This is the default in Oracle9i, with the exception of the SYSTEM tablespace.
Locally Managed Tablespaces:
A tablespace that manages its own extents maintains a bitmap in each datafile to keep track of the free or used status of blocks in that datafile. Each bit in the bitmap corresponds to a group of blocks. When an extent is allocated or freed for reuse, Oracle changes the bitmap values to show the new status of the blocks. These changes do not generate rollback information because they do not update tables (like sys.uet$, sys.fet$) in the data dictionary (except for special cases such as tablespace quota information).
When you create a locally managed tablespace, header bitmaps are created for each datafile. If more datafiles are added, new header bitmaps are created for each added file.
Local management of extents automatically tracks adjacent free space, eliminating the need to coalesce free extents. The sizes of extents that are managed locally can be determined automatically by the system. Alternatively, all extents can have the same size in a locally managed tablespace.
Dictionary Managed Tablespaces:
In a DMT, Oracle uses data dictionary tables to keep track of the free or used status of blocks. When an extent is allocated or freed for reuse, free space is recorded in the SYS.FET$ table and used space in the SYS.UET$ table. Whenever space is required in one of these tablespaces, the ST (space transaction) enqueue must be obtained to do inserts and deletes against these tables. As only one process can acquire the ST enqueue at a given time, this often leads to contention. These changes generate rollback information because they update tables (like sys.uet$, sys.fet$) in the data dictionary.
Advantages of Locally Managed Tablespaces(LMT) over Dictionary Managed Tablespaces(DMT):
1. Reduced recursive space management
2. Reduced contention on data dictionary tables
3. No rollback generated
4. No coalescing required
Converting DMT to LMT:
SQL> exec dbms_space_admin.Tablespace_Migrate_TO_Local('ts1');
PL/SQL procedure successfully completed.
Converting LMT to DMT:
SQL> exec dbms_space_admin.Tablespace_Migrate_FROM_Local('ts2');
PL/SQL procedure successfully completed.
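To check which tablespaces are currently dictionary or locally managed, a query like this can be run (a sketch):
SQL> select tablespace_name, extent_management, allocation_type from dba_tablespaces;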
Important Points:
1. LMTs can be created as
a) AUTOALLOCATE: specifies that the tablespace is system managed. Users cannot specify an extent size.
b) UNIFORM: specifies that the tablespace is managed with uniform extents of SIZE bytes. The default SIZE is 1 megabyte.
2. One cannot create a locally managed SYSTEM tablespace in 8i.
3. This is possible in 9.2.0.X, where the SYSTEM tablespace is created by DBCA as locally managed by default. With a locally managed SYSTEM tablespace, the rest of the tablespaces in such a database have to be locally managed as well.
4. Locally managed temporary tablespaces can not be of type "permanent".
Source : dbapool.com
ORACLE RAC ON/RAC OFF
Oracle's RAC: ON and OFF
In some cases, you may want to disable the RAC options for testing purposes -- perhaps to run a benchmark or convert the RAC binaries to single instance binaries. In such a case, you can use the following procedure to convert the RAC installation to non-RAC. Disabling and enabling RAC options are available only for UNIX platforms. Windows installations do not support relinking binaries with RAC ON and OFF.
Use the following steps to disable RAC (known as RAC OFF):
1. Log in as the Oracle software owner (which is typically the UNIX account oracle) in all nodes.
2. Shut down all the instances from all the nodes using a NORMAL or IMMEDIATE option.
3. Change the working directory to $ORACLE_HOME/rdbms/lib: cd $ORACLE_HOME/rdbms/lib
4. Run the following make command to relink the Oracle binaries without the RAC option: make -f ins_rdbms.mk rac_off
This normally runs for a few minutes and should not pose any errors.
5. Now relink the Oracle binaries: make -f ins_rdbms.mk ioracle
Now the Oracle binaries are relinked with the RAC OFF option. You may have to edit the init.ora or SPFILE parameters accordingly. If errors occur in step 4, you may need to contact Oracle Support and log a service request with the trace and log files.
Use the following steps to enable RAC (known as RAC ON):
1. Log in as the Oracle software owner (typically the UNIX account oracle) in all nodes.
2. Shut down all the instances from all the nodes using a NORMAL or IMMEDIATE option.
3. Change the working directory to $ORACLE_HOME/rdbms/lib: cd $ORACLE_HOME/rdbms/lib
4. Run the following make command to relink the Oracle binaries with the RAC option:
make -f ins_rdbms.mk rac_on
This normally runs for a few minutes and should not pose any errors.
5. Now relink the Oracle binaries:
make -f ins_rdbms.mk ioracle
Now the Oracle binaries are relinked with the RAC ON option. You may need to edit the init.ora or SPFILE parameters accordingly. If any errors occur in step 4, you may need to contact Oracle Support and log a service request with the trace and log files.
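To confirm whether the running binaries have the RAC option enabled, the option list can be checked from the instance (a sketch):
SQL> select value from v$option where parameter = 'Real Application Clusters';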
Tracing in Oracle 10g
Trace in Oracle 10g
SQL> EXEC DBMS_MONITOR.session_trace_enable;
SQL> EXEC DBMS_MONITOR.session_trace_enable(waits=>TRUE, binds=>FALSE);
SQL> EXEC DBMS_MONITOR.session_trace_disable;
SQL> EXEC DBMS_MONITOR.session_trace_enable(session_id=>1234, serial_num=>1234);
SQL> EXEC DBMS_MONITOR.session_trace_enable(session_id =>1234, serial_num=>1234, waits=>TRUE, binds=>FALSE);
SQL> EXEC DBMS_MONITOR.session_trace_disable(session_id=>1234, serial_num=>1234);
SQL> EXEC DBMS_MONITOR.client_id_trace_enable(client_id=>'tim_hall');
SQL> EXEC DBMS_MONITOR.client_id_trace_enable(client_id=>'tim_hall', waits=>TRUE, binds=>FALSE);
SQL> EXEC DBMS_MONITOR.client_id_trace_disable(client_id=>'tim_hall');
SQL> EXEC DBMS_MONITOR.serv_mod_act_trace_enable(service_name=>'db10g', module_name=>'test_api', action_name=>'running');
SQL> EXEC DBMS_MONITOR.serv_mod_act_trace_enable(service_name=>'db10g', module_name=>'test_api', action_name=>'running', -
> waits=>TRUE, binds=>FALSE);
SQL> EXEC DBMS_MONITOR.serv_mod_act_trace_disable(service_name=>'db10g', module_name=>'test_api', action_name=>'running');
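To see which tracing is currently active, a couple of dictionary views can be queried (a sketch; in 10g v$session exposes the sql_trace columns and dba_enabled_traces lists service/module/client traces):
SQL> SELECT sid, serial#, sql_trace, sql_trace_waits, sql_trace_binds FROM v$session WHERE sql_trace = 'ENABLED';
SQL> SELECT trace_type, primary_id, waits, binds FROM dba_enabled_traces;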
Scripts Used in Critical Production Biz Time Monitoring
=== Temp Usage ===
select sum(blocks)/1024*8 "Size in MB" FROM v$sort_usage;
SELECT ss.sid, sum(st.blocks)/1024*8
FROM v$sort_usage st, v$session ss
WHERE ss.saddr = st.session_addr
GROUP BY ss.sid
HAVING sum(st.blocks)/1024*8 > 100
ORDER BY 2;
SELECT /*+ RULE */ s.username, s.osuser, s.sid, NVL(s.machine,'N/A'),
NVL(s.module,'N/A'), NVL(s.action,'N/A'), NVL(s.program,'N/A'),
s.status ||' for '||
LPAD(((last_call_et/60)-mod((last_call_et/60),60))/60,2,'0') ||':'||
LPAD(ROUND(mod((last_call_et/60),60)),2,'0') ||' Hr' ,
u.tablespace, u.contents, u.extents, round((u.blocks*8)/1024),
s.sql_address, s.sql_hash_value
FROM v$session s, v$sort_usage u
WHERE s.saddr = u.session_addr
AND u.contents = 'TEMPORARY'
AND s.audsid != USERENV('sessionid')
AND (u.blocks*8)/1024 >= 1000
ORDER BY 1,2,3,4,5 Desc;
=== High Redo ===
SELECT s.inst_id,s.sid, s.serial#, s.username, s.program, i.block_changes
FROM gv$session s, gv$sess_io i
WHERE s.sid = i.sid
AND i.block_changes > 10000000
ORDER BY 6 desc, 1, 2, 3, 4;
=== Rollback Used ===
SELECT rn.name, ROUND(rs.rssize/1024/1024),
s.username, s.osuser, s.sid, NVL(s.machine,'N/A'),
NVL(s.module,'N/A'), NVL(s.action,'N/A'), NVL(s.program,'N/A'),
s.sql_address, s.sql_hash_value, p.spid,
s.status ||' for '||
LPAD(((last_call_et/60)-mod((last_call_et/60),60))/60,2,'0') ||':'||
LPAD(ROUND(mod((last_call_et/60),60)),2,'0') ||' Hr' ,
round(ceil((t.used_ublk*8)/1024),1)
FROM v$rollname rn, v$rollstat rs,
v$session s, v$transaction t, v$process p
WHERE rn.usn = rs.usn
AND round((t.used_ublk*8)/1024) >= 1000
AND rs.usn = t.xidusn
AND s.paddr = p.addr (+)
AND s.taddr = t.addr
ORDER BY 2 desc, s.sid, s.status;
=== Rollback Segment Information ===
select tablespace_name, status segment_status, count(extent_id) "Extent Count", sum(blocks) "Total Blocks",
sum(blocks)*8/(1024*1024) "Total Space in GB" from dba_undo_extents where tablespace_name like '%UNDO%'
group by tablespace_name, status;
=== Shared Pool Usage ===
SELECT INST_ID,ROUND(bytes/1024/1024, 2)||' MB' FROM gv$sgastat WHERE name='free memory' AND pool='shared pool';
=== Archive Generation for last 5 hours ===
SELECT TO_CHAR(first_time, 'DD-MM-YY') AS Day, TO_CHAR(first_time, 'HH24') AS Hour, COUNT(*)
FROM v$log_history
WHERE TO_CHAR(first_time, 'DD-MM-YY') = TO_CHAR(sysdate, 'DD-MM-YY')
AND TO_CHAR(first_time, 'HH24') >= TO_CHAR(sysdate, 'HH24') - 5
GROUP BY TO_CHAR(first_time, 'DD-MM-YY'), TO_CHAR(first_time, 'HH24')
ORDER BY 2;
=== High Memory ===
select s.inst_id, s.sid, name, round(value/1024/1024) Mem_in_mb, p.*
from gv$sesstat s, v$statname n, gv$session p
where n.statistic# = s.statistic#
and s.inst_id = p.inst_id
and s.sid = p.sid
and name like '%pga memory%'
and value > 1*1024*1024*512;
=== Performance ===
select sql_hash_value, count(*) from v$session where event like 'enq%' group by sql_hash_value;
select sql_hash_value, username, osuser, count(*)
from v$session
where event like 'enq%' and sql_hash_value = '&event'
group by sql_hash_value, username, osuser;
select sql_text from v$sqlarea where hash_value = '&hash_value';
select s1.sid,FETCHES,ROWS_PROCESSED from v$sql s,v$session s1 where s.HASH_VALUE=s1.SQL_HASH_VALUE and s1.sid=4885;
col EVENT for a50;
col OSUSER for a20;
col USERNAME for a20;
set linesize 175;
select s.sid,s.serial#,s.status,s.username,s.module,s.osuser,x.event from v$session s,v$session_wait x where s.paddr in
( select addr from v$process where spid in (2340,23869,13827,18261,14880,2381))
and x.sid=s.sid;
col EVENT for a50;
col OSUSER for a20;
col USERNAME for a20;
set linesize 175;
select s.sid,s.serial#,s.status,s.username,s.module,s.osuser,x.event from v$session s,v$session_wait x
where x.sid=s.sid and x.event like '&event_name';
=== Top 10 Memory Process at OS level ===
UNIX95=1 ps -eo vsz,pid,args | sort +0n -1 | grep -i `echo $ORACLE_SID` | tail -10
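The UNIX95 form above is HP-UX syntax. On Linux, a rough equivalent (a sketch only, assuming GNU ps) would be:
# list the 10 largest processes by virtual size for the current ORACLE_SID
ps -eo vsz,pid,args --sort=vsz | grep -i "$ORACLE_SID" | tail -10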
=== Other Script to update the daily report ===
select status,count(1) from gv$session group by status;
STATUS COUNT(1)
-------- ----------
ACTIVE 22
INACTIVE 155
select count(1) from dba_tables where logging='NO';
COUNT(1)
----------
919
select distinct status,count(1) from dba_indexes group by status;
STATUS COUNT(1)
-------- ----------
N/A 1639
UNUSABLE 1
VALID 6162
select count(1) from dba_objects where status='INVALID';
COUNT(1)
----------
681
What is AWR( Automatic workload repository ) and How to generate the AWR report?
About:
The Automatic Workload Repository is a collection of persistent system performance statistics owned by SYS.
It resides in the SYSAUX tablespace. By default, snapshots are generated once every 60 minutes and retained for 7 days.
You can generate AWR reports by running the following SQL scripts:
1)The awrrpt.sql SQL script generates an HTML or text report that displays
statistics for a range of snapshot Ids.
2)The awrrpti.sql SQL script generates an HTML or text report that displays
statistics for a range of snapshot Ids on a specified database and instance.
3) The awrsqrpt.sql SQL script generates an HTML or text report that displays
statistics of a particular SQL statement for a range of snapshot Ids. Run this report
to inspect or debug the performance of a SQL statement.
4) The awrsqrpi.sql SQL script generates an HTML or text report that displays
statistics of a particular SQL statement for a range of snapshot Ids on a specified database and instance.
5) The awrddrpt.sql SQL script generates an HTML or text report that compares
detailed performance attributes and configuration settings between two selected
time periods.
6) The awrddrpi.sql SQL script generates an HTML or text report that compares
detailed performance attributes and configuration settings between two selected
time periods on a specific database and instance.
Script Location: $ORACLE_HOME/rdbms/admin
To generate an HTML or text report for a range of snapshot Ids, run the awrrpt.sql
script at the SQL prompt:
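For example, from SQL*Plus (the script prompts for the report type, the number of days of snapshots to list, the begin and end snapshot Ids, and the report name):
SQL> @?/rdbms/admin/awrrpt.sql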
A few views that help while generating the AWR report (see the example query after this list):
DBA_HIST_SNAPSHOT
DBA_HIST_WR_CONTROL
DBA_HIST_BASELINE
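For example, a quick query to list recent snapshots and pick the begin/end snap Ids for a report:
select snap_id, begin_interval_time, end_interval_time
from dba_hist_snapshot
order by snap_id;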
How to Modify the AWR Snapshot Settings:
=====================================
BEGIN
DBMS_WORKLOAD_REPOSITORY.modify_snapshot_settings(
retention => 43200, -- Minutes (= 30 Days). Current value retained if NULL.
interval => 30); -- Minutes. Current value retained if NULL.
END;
/
Creating the Baseline:
======================
BEGIN
DBMS_WORKLOAD_REPOSITORY.create_baseline (
start_snap_id => 10,
end_snap_id => 100,
baseline_name => 'AWR First baseline');
END;
/
Dropping a Range of Snapshots:
==========================
BEGIN
DBMS_WORKLOAD_REPOSITORY.drop_snapshot_range(
low_snap_id => 40,
high_snap_id => 80);
END;
/
Creating SNAPSHOT Manually:
===========================
BEGIN
DBMS_WORKLOAD_REPOSITORY.create_snapshot();
END;
/
Workload Repository Views:
==========================
The following workload repository views are available:
* V$ACTIVE_SESSION_HISTORY - Displays the active session history (ASH) sampled every second.
* V$METRIC - Displays metric information.
* V$METRICNAME - Displays the metrics associated with each metric group.
* V$METRIC_HISTORY - Displays historical metrics.
* V$METRICGROUP - Displays all metrics groups.
* DBA_HIST_ACTIVE_SESS_HISTORY - Displays the history contents of the active session history.
* DBA_HIST_BASELINE - Displays baseline information.
* DBA_HIST_DATABASE_INSTANCE - Displays database environment information.
* DBA_HIST_SNAPSHOT - Displays snapshot information.
* DBA_HIST_SQL_PLAN - Displays SQL execution plans.
* DBA_HIST_WR_CONTROL - Displays AWR settings.
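To verify the snapshot interval and retention currently in effect (for example after running modify_snapshot_settings), a simple check:
select snap_interval, retention from dba_hist_wr_control;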
Export / Import a table to a zipped dump file using mknod
Export a table to a zipped dump file using mknod
export scripts
================
1. Make a named pipe with mknod
2. Gzip the pipe output in the background
3. Export to the pipe using exp
export NLS_LANG=AMERICAN_AMERICA.UTF8
mknod mknod_tmp p
gzip < mknod_tmp > exp_CMKT.dmp.gz &
exp file=mknod_tmp buffer=209715200 log=exp_CMKT.log direct=y statistics=none parfile=schema_list.par
import scripts
================
1. Make a named pipe with mknod
2. Gunzip the dump file to the pipe in the background
3. Create the user and grant the necessary permissions
4. Check for enough space for both data and indexes (see the free-space query after the import commands below)
5. Import from the pipe using imp
Import a table from a zipped dump file using mknod
import scripts
======================
export NLS_LANG=AMERICAN_AMERICA.UTF8
mknod cmkt_node p
gunzip < exp_CMKT.dmp.gz > cmkt_node &
imp file=cmkt_node buffer=209715200 log=imp_cmkt.log ignore=y destroy=n parfile=cmkt_imp_par.par commit=y
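For step 4 of the import checklist above, a simple sketch to check free space per tablespace before importing:
select tablespace_name, round(sum(bytes)/1024/1024) free_mb
from dba_free_space
group by tablespace_name
order by 1;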
How to Export and Import Statistics
How to Export and Import Statistics of tables?
Procedure with a scenario
Case Definition
A critical application suddenly seems to hang; wait events show long table scans running on the OLTP environment. It turns out that the DBA in charge of this system gathered statistics on the tables of the user that owns the application. The statistics gathering got stuck and the process was killed. From that moment the application started to perform extremely slowly.
The production database has several clones; we decide to export back statistics from one of these clones, to the production database.
Steps in Brief
1) Create a table to hold statistics on the clone database
2) Generate a script that exports table statistics on the clone database
3) Generate a script that imports statistics on the clone database
4) Export statistics on the clone database
5) Export the table containing the exported statistics from the clone database
6) Ftp the export file with the clone statistics table, and the script to import statistics, from the clone server to the production server
7) Import the table containing clone statistics into the production database
8) Import statistics on the production server using the script to import statistics generated on the clone server
1. Create a table to hold statistics on the clone database
-- On Clone Database
SQL> execute DBMS_STATS.create_stat_table('INV','OLD_STATS');
PL/SQL procedure successfully completed.
SQL> grant SELECT,INSERT,UPDATE,DELETE on INV.OLD_STATS to public;
Grant succeeded.
2. Generate a script that exports table statistics on the clone database
The purpose of this script is to generate one export statistics command per table; the export is directed into the table created in step 1.
Variables:
&tabname = the table created on the previous step to hold the statistics
&usrname = The name of the owner of &tabname
---- script to generate export table stats start here ----------
set linesize 130 pagesize 0
spool expstat.sql
select 'exec dbms_stats.export_table_stats('||chr(39)||owner||chr(39)||','||chr(39)||table_name||chr(39)||',null,'||chr(39)||'&tabname'
||chr(39)||',null,true,'||chr(39)||'INV'||chr(39)||')'
from dba_tables where owner ='&usrname'
/
spool off
---- script to generate export table stats end here ----------
Note: instead of the script, you may also use this command:
exec DBMS_STATS.export_schema_stats('&usrname','&tabname')
This syntax will run in 10g. It may fail on 8i – 9i databases with some objects. That’s why I prefer the script on these versions.
3. Generate a script that imports statistics on the clone database
The purpose of this script is to generate one import statistics command per table; the source is the table created in step 1.
&tabname = the table created on the previous step to hold the statistics
&usrname = The name of the owner of &tabname
---- script to generate import table stats start here ----------
set linesize 130 pagesize 0
spool impstats.sql
select 'exec dbms_stats.import_table_stats('||chr(39)||owner||chr(39)||','||chr(39)||table_name||chr(39)||',null,'||chr(39)||'&tabname'||chr(39)||',null,true,'||chr(39)||'&usrname'||chr(39)||')'
from dba_tables where owner ='&usrname'
/
spool off
---- script to generate import table stats end here ----------
Execute this script to generate impstats.sql that will import the statistics on the production database.
4. Export statistics on clone database
Using the script expstat.sql generated in step 2, export statistics into the statistics table created in step 1.
5. Export table containing the exported statistics from clone database
vmractest:/oradisk/av/expstats>exp avargas file=exp_stats_from_clone tables=avr.old_stats feedback=1000
Export: Release 9.2.0.5.0 - Production on Tue Feb 20 11:57:02 2007
Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
Password:
Connected to: Oracle9i Enterprise Edition Release 9.2.0.5.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.5.0 - Production
Export done in IW8ISO8859P8 character set and AL16UTF16 NCHAR character set
About to export specified tables via Conventional Path ...
Current user changed to AVR
. . exporting table OLD_STATS
....
4115 rows exported
Export terminated successfully without warnings.
6. Ftp the export file with the clone statistics table, and the script to import statistics, from the clone server to the production server
Execute an FTP session from the target server and get both the export file that contains the statistics table and the script to import them, generated in step 3:
proddb > ftp vmractest
Connected to vmractest
220 vmractest FTP server (SunOS 5.8) ready.
Name (vmractest:oracle): oracle
331 Password required for oracle.
Password:
230 User oracle logged in.
ftp> cd /oradisk/av/expstats
250 CWD command successful.
ftp> get exp_stats_from_clone.dmp
200 PORT command successful.
150 ASCII data connection for exp_stats_from_clone.dmp (10.5.180.72,64082) (473088 bytes).
226 ASCII Transfer complete.
local: exp_stats_from_clone.dmp remote: exp_stats_from_clone.dmp
478390 bytes received in 0.17 seconds (2680.69 Kbytes/s)
ftp> get impstats.sql
200 PORT command successful.
150 ASCII data connection for impstats.sql (10.5.180.72,64776) (31461 bytes).
226 ASCII Transfer complete.
local: impstats.sql remote: impstats.sql
31704 bytes received in 0.033 seconds (947.63 Kbytes/s)
ftp> bye
221 Goodbye.
7. Import table containing clone statistics into production database
On the production database import the table that contains the exported statistics.
proddb >imp avargas file= exp_stats_from_clone.dmp full =y
Import: Release 9.2.0.5.0 - Production on Tue Feb 20 12:19:11 2007
Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
Password:
Connected to: Oracle9i Enterprise Edition Release 9.2.0.5.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.5.0 - Production
Export file created by EXPORT:V09.02.00 via conventional path
import done in UTF8 character set and UTF8 NCHAR character set
export client uses IW8ISO8859P8 character set (possible charset conversion)
export server uses AL16UTF16 NCHAR character set (possible ncharset conversion)
. importing AVARGAS's objects into AVARGAS
. importing AVR's objects into AVR
. . importing table "OLD_STATS" 4115 rows imported
Import terminated successfully without warnings.
8. Import statistics on production server using the script to import statistics generated on the clone server
Using the script impstats.sql generated in step 3, import statistics into the production database.
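For example, from SQL*Plus on the production database (assuming impstats.sql is in the current directory):
SQL> @impstats.sql
On 10g, mirroring the export_schema_stats note in step 2, the whole schema could instead be imported in one call (a sketch; the positional parameters are ownname, stattab, statid, statown):
SQL> exec DBMS_STATS.import_schema_stats('&usrname','&tabname',null,'INV');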
Job Scheduling in Oracle (dba_jobs)
Solution:
Tables Used:
dba_jobs
dba_jobs_running
job_queue_processes - init parameter
Submitting a job
--exec dbms_job.submit(<job>, '<what>', <next_date>, '<interval>', TRUE);--
Remove a job
--exec dbms_job.remove(<job>);--
Reset a broken job:
--exec dbms_job.broken(<job>, FALSE);--
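A minimal end-to-end sketch (the procedure name my_refresh_proc and the hourly interval are made up for illustration):
VARIABLE jobno NUMBER
BEGIN
  -- submit a job that runs my_refresh_proc every hour, starting now
  DBMS_JOB.SUBMIT(:jobno, 'my_refresh_proc;', SYSDATE, 'SYSDATE + 1/24', TRUE);
  COMMIT;
END;
/
PRINT jobno
-- later: exec DBMS_JOB.REMOVE(:jobno);  or  exec DBMS_JOB.BROKEN(:jobno, FALSE);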
Killing an Oracle DBMS_JOB
Viewing scheduled dbms_jobs
When looking at what jobs have been scheduled, there is really only one view that you need to go to. The dba_jobs view contains all of the information you need to see what has been scheduled, when they were last run, and if they are currently running. Use the following simple script to take a look. Bear with me on the sub-select; I will build on this query as we go on in the presentation.
scheduled_dbms_jobs.sql
set linesize 250
col log_user for a10
col job for 9999999 head 'Job'
col broken for a1 head 'B'
col failures for 99 head "fail"
col last_date for a18 head 'Last|Date'
col this_date for a18 head 'This|Date'
col next_date for a18 head 'Next|Date'
col interval for 9999.000 head 'Run|Interval'
col what for a60
select j.log_user,
j.job,
j.broken,
j.failures,
j.last_date||':'||j.last_sec last_date,
j.this_date||':'||j.this_sec this_date,
j.next_date||':'||j.next_sec next_date,
j.next_date - j.last_date interval,
j.what
from (select dj.LOG_USER, dj.JOB, dj.BROKEN, dj.FAILURES,
dj.LAST_DATE, dj.LAST_SEC, dj.THIS_DATE, dj.THIS_SEC,
dj.NEXT_DATE, dj.NEXT_SEC, dj.INTERVAL, dj.WHAT
from dba_jobs dj) j;
What Jobs are Actually Running
A simple join to the dba_jobs_running view will give us a good handle on the scheduled jobs that are actually running at this time. This is done by a simple join through the job number. The new column of interest returned here is the sid, which is the identifier of the session that is currently executing the job.
running_jobs.sql
set linesize 250
col sid for 9999 head 'Session|ID'
col log_user for a10
col job for 9999999 head 'Job'
col broken for a1 head 'B'
col failures for 99 head "fail"
col last_date for a18 head 'Last|Date'
col this_date for a18 head 'This|Date'
col next_date for a18 head 'Next|Date'
col interval for 9999.000 head 'Run|Interval'
col what for a60
select j.sid,
j.log_user,
j.job,
j.broken,
j.failures,
j.last_date||':'||j.last_sec last_date,
j.this_date||':'||j.this_sec this_date,
j.next_date||':'||j.next_sec next_date,
j.next_date - j.last_date interval,
j.what
from (select djr.SID,
dj.LOG_USER, dj.JOB, dj.BROKEN, dj.FAILURES,
dj.LAST_DATE, dj.LAST_SEC, dj.THIS_DATE, dj.THIS_SEC,
dj.NEXT_DATE, dj.NEXT_SEC, dj.INTERVAL, dj.WHAT
from dba_jobs dj, dba_jobs_running djr
where dj.job = djr.job ) j;
What Sessions are Running the Jobs
Now that we have determined which jobs are currently running, we need to find which Oracle session and operating system process is accessing them. This is done by first joining v$process to v$session by way of paddr and addr, which is the address of the process that owns the session, and then joining the results back to the jobs running through the sid value. The new columns returned in our query are spid, which is the operating system process identifier, and serial#, which is the session serial number.
session_jobs.sql
set linesize 250
col sid for 9999 head 'Session|ID'
col spid head 'O/S|Process|ID'
col serial# for 9999999 head 'Session|Serial#'
col log_user for a10
col job for 9999999 head 'Job'
col broken for a1 head 'B'
col failures for 99 head "fail"
col last_date for a18 head 'Last|Date'
col this_date for a18 head 'This|Date'
col next_date for a18 head 'Next|Date'
col interval for 9999.000 head 'Run|Interval'
col what for a60
select j.sid,
s.spid,
s.serial#,
j.log_user,
j.job,
j.broken,
j.failures,
j.last_date||':'||j.last_sec last_date,
j.this_date||':'||j.this_sec this_date,
j.next_date||':'||j.next_sec next_date,
j.next_date - j.last_date interval,
j.what
from (select djr.SID,
dj.LOG_USER, dj.JOB, dj.BROKEN, dj.FAILURES,
dj.LAST_DATE, dj.LAST_SEC, dj.THIS_DATE, dj.THIS_SEC,
dj.NEXT_DATE, dj.NEXT_SEC, dj.INTERVAL, dj.WHAT
from dba_jobs dj, dba_jobs_running djr
where dj.job = djr.job ) j,
(select p.spid, s.sid, s.serial#
from v$process p, v$session s
where p.addr = s.paddr ) s
where j.sid = s.sid;
Now that we have a good handle on how we can look at the jobs and the key columns involved, let's go through the steps needed to bring down a job. The following is a 5 to 11 step process that should solve all of your problems.
Bringing Down a DBMS_JOB
1. Find the Job You Want to Bring Down
In order to do anything you first need to find the job that is giving you a headache. Go ahead and run the running_jobs.sql. This will give you the prime information, job, sid, serial#, and spid, for the following actions in bringing down the job.
2. Mark the DBMS_JOB as Broken
Use the following command for the job that you have to deal with.
SQL> EXEC DBMS_JOB.BROKEN(job#,TRUE);
All this command does is mark the job so that if we get it to stop, it won't start again. Let's make one thing perfectly clear: after executing this command the job is still running.
As a side note, if you are trying to shut down a database with jobs that run throughout the day, they may hinder your attempts to bring down the database cleanly. This is a wonderful command to make sure no jobs are executing during the shutdown process. Just be aware that you will need to mark the jobs as unbroken when the database comes back up; more on that later.
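Before the database comes back up, a quick way to list which jobs are currently flagged as broken, so they can be reset later:
select job, log_user, what from dba_jobs where broken = 'Y';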
3. Kill the Oracle Session
Since the job is still running and it isn't going to end soon, you will need to kill the Oracle session that is executing the job. Use the following command to kill the job.
ALTER SYSTEM KILL SESSION 'sid,serial#';
4. Kill the O/S Process
More often than not the previous step will still leave the job attached to the database and still running. When this happens you will need to go out to the operating system level and get rid of the process that has spawned from the running job. In order to do this you must log in to the database box and issue the following command, depending on the type of operating system you have.
For Windows, at the DOS prompt: orakill sid spid
For UNIX, at the command line: kill -9 spid
orakill is an Oracle command, while kill is a Unix command.
5. Check if the Job is Still Running
Re-run the session_jobs.sql script to see if you have gotten rid of the job. If you have, there is no reason to go further. Usually steps 1 through 4 will be sufficient to get rid of a job, but when the job is running wild you will have to continue with steps 6 through 11, which describe a process for bouncing the job queue process.
6. Determine the Current Number of Job Queue Processes
SQL> col value for a10
SQL> select name,value from v$parameter where name = 'job_queue_processes';
7. Alter the Job Queue to Zero
SQL> ALTER SYSTEM SET job_queue_processes = 0;
This will bring down the entire job queue processes.
8. Validate that No Processes are Using the Job Queue
Re-run the session_jobs.sql script to see if any jobs are still running. Since we have given a hard stop to the job queue and issued the kill commands, you can now wait until no more jobs are running. After all the jobs have quit running, you can do whatever maintenance or tuning you need to do before proceeding.
9. Mark the DBMS_JOB as Not Broken
You can now reset the broken jobs to not broken so they can run again. Just issue the command.
SQL> EXEC DBMS_JOB.BROKEN(job#,FALSE);
10. Alter the Job Queue to Original Value
Set the job queue to its original value so that the jobs can run again.
ALTER SYSTEM SET job_queue_processes = original_value;
11. Validate that DBMS_JOB Is Running
To make sure everything is back to normal, re-run the above scripts to validate that jobs are scheduled, not broken, and are executing with the next and last date columns changing.
Oracle has given us a great tool for scheduling activities within the database. As with many things inside the database, not everything goes as planned, nor are we given adequate tools to fix some of the problems we encounter. With the eleven steps outlined here, hopefully you will have increased your arsenal to handle those runaway jobs that have given the best of us a few tense moments.
Source : it-tool-box/blogs
Troubleshooting with DBMS_JOBS
Metalink Note : 313102.1
Parallelism in Oracle
A complete overview of Oracle parallelism
Solution:
The following initialization parameters are relevant to the parallelism setup in a database:
PARALLEL_SERVER
PARALLEL_SERVER_INSTANCES
PARALLEL_MIN_SERVERS
PARALLEL_MAX_SERVERS
PARALLEL_THREADS_PER_CPU
Parallel Queries and Parallel jobs execution
1. Select query with parallelism example:
Select /*+ parallel (a,32) */ * from dba_segments a;
32 is the degree here (number of parallel processes)
Gather statistics of a table using parallelism example:
exec FND_STATS.GATHER_TABLE_STATS (ownname => '&owner', tabname => '&table_name', percent => 20 ,degree => 30 , granularity => 'ALL', cascade => TRUE);
Compile objects in the database with parallelism example:
exec sys.utl_recomp.recomp_parallel(32);
Views used to check the number of parallel processes running:
v$px_session - Standard
gv$px_session - For RAC
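For example, a sketch that counts active parallel slaves per query coordinator session:
select qcsid, count(*) slaves
from v$px_session
where sid <> qcsid
group by qcsid;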
How to find out sid of a query running?
How to find out session id of a sql that is running?
SELECT s.sid FROM v$session s, v$sqlarea a,v$process p WHERE s.SQL_HASH_VALUE = a.HASH_VALUE AND s.SQL_ADDRESS = a.ADDRESS AND s.PADDR = p.ADDR and a.sql_text like '%ALTER INDEX%';
Note: Where sql_text is some unique part of the sql running
Check Redolog count hourly basis
How to find , how many redo switches happening every hour?
select (to_char(first_time,'mm/dd')), (to_char(first_time,'HH24')), count(*)
from v$log_history
group by (to_char(first_time,'HH24'))
,(to_char(first_time,'mm/dd'))
order by 1,2 asc
/
Get the 10046 trace for a Session?
1. Get the SID from the user.
2. Get the OS process ID (SPID) for that SID from v$process joined to v$session (see the query after the oradebug commands below).
3. Run the following commands connected as sysdba (sqlplus / as sysdba):
oradebug setospid 420
oradebug unlimit
oradebug EVENT 10046 trace name context forever, level 12
oradebug tracefile_name
oradebug event 10046 trace name context off
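For step 2 above, a sketch to find the SPID for a given SID (&sid is supplied at runtime):
select p.spid
from v$process p, v$session s
where p.addr = s.paddr
and s.sid = &sid;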