Tuesday, 21 January 2014

Database Health check scripts (10g, 11g).


Oracle Database Health check scripts

The health of the database can be checked in various ways, including:

Monitoring scope and current status

OS level:
1. Physical memory / load: normal (load averages: 0.35, 0.37, 0.36)
2. OS space thresholds (archive, ora_dump, etc.): sufficient space available
3. Top 10 processes consuming memory: no process using exceptionally high memory
4. Free volumes available: sufficient disk space available on the mount points
5. Filesystem space: under normal threshold

Database level:
6. Check extents / proactive space addition: space is being added regularly
7. Check alert log for ORA- and warning messages
8. Major wait events (latch/enqueue/library cache pin): no major wait events
9. Max sessions
10. Long-running jobs: 6 inactive sessions running for more than 8 hrs
11. Invalid objects: 185
12. Analyze jobs (once a week): done on 20-JAN-2008 at 06:00:06
13. Temp usage / rollback segment usage: normal
14. NOLOGGING indexes
15. Hot backup / cold backup: went fine
16. Redo generation: normal
17. PQ processes: normal
18. I/O generation: under normal threshold
19. Two-phase commit (2PC) pending transactions: 0

DR / backup:
1. Sync arch: normal
2. Purge arch: normal
3. Recovery status: normal

20. DATABASE HEALTH CHECK SCRIPT: shows locks and archive generation details

Detailed DATABASE health check:
OPERATING SYSTEM:

1)Physical memory/ Load:
1) free: The free command displays the amount of total, free, and used physical memory (RAM) in the system, as well as information on shared memory, buffers, cached memory, and swap space used by the Linux kernel.
Usage:
$ free -m

2) vmstat: vmstat reports virtual memory statistics, with information about processes, swap, free, buffer and cache memory, paging space, disk I/O activity, traps, interrupts, context switches, and CPU activity.
Usage:
$vmstat 5

3) top: The top command displays a dynamic, real-time view of the running tasks managed by the kernel on a Linux system. Its memory usage stats include the live total, used, and free physical memory and swap memory, with their buffer and cached memory sizes respectively.
Usage:
$top
4) ps: The ps command reports a snapshot of the currently active processes, including the percentage of memory used by each process or task running in the system. With this command, the top memory-hogging processes can be identified.
Usage:
$ps aux
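The four commands above can be combined into a single daily snapshot. Below is a minimal sketch (the output filename is an arbitrary choice, and it reads /proc, so it is Linux-specific) that records memory, load, and the top memory consumers into a dated file for the health-check record:

```shell
# Snapshot OS memory, load averages, and top memory-consuming processes
# into a dated file. Linux-specific: reads /proc/meminfo and /proc/loadavg.
os_snapshot() {
  out="os_health_$(date +%Y%m%d).txt"
  {
    date
    echo '--- memory (from /proc/meminfo) ---'
    head -3 /proc/meminfo
    echo '--- load averages ---'
    cat /proc/loadavg
    echo '--- top 10 processes by %MEM ---'
    ps aux | sort -nrk 4 | head -10   # column 4 of ps aux is %MEM
  } > "$out"
  echo "$out"   # print the report filename
}
```

Run it once per day (for example from cron) and keep the dated files for trend comparison.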

2) OS Space threshold ( archive, ora_dump etc.. ):
Check that OS space is available in all filesystems, especially the locations holding archive logs and Oracle database files. We can use the below OS commands:
$ df -h
$ du -csh *
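This check can be scripted. The sketch below flags mount points over a threshold by parsing `df -P` output (the 90% default is an arbitrary choice; it reads stdin so it can be piped or tested directly):

```shell
# Print mount points whose usage meets or exceeds a threshold (default 90%).
# Reads `df -P`-style output on stdin.
check_fs_usage() {
  threshold="${1:-90}"
  awk -v t="$threshold" 'NR > 1 {
    use = $5
    sub(/%/, "", use)                    # strip the % sign from Capacity
    if (use + 0 >= t) print $6, use "%"  # mount point and usage
  }'
}
```

Usage: `df -P | check_fs_usage 90`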
3) Top 10 process consuming memory:
We can display the top 10 memory-consuming processes as follows (the fourth column of ps aux is %MEM; note that sort needs a numeric key, not -m, which merges files):

ps aux | head -1; ps aux | sort -nrk 4 | head -10

Alternatively, we can use the top command and press M, which orders the process list by memory usage.

4) Free volumes available:

We have to make sure sufficient disk space is available on the mount points of each OS server where the database is up and running.

$ df -h

5)Filesystem space:

Check the filesystems on the OS side to confirm that sufficient space is available at all mount points and that usage is under the normal threshold.

DATABASE :

6)Check extents / Pro active Space addition:
Check each of the data, index, and temporary tablespaces for extent and block
allocation details.

SET LINES 1000
SELECT SEGMENT_NAME,TABLESPACE_NAME,EXTENTS,BLOCKS
FROM DBA_SEGMENTS;

SELECT SEGMENT_NAME,TABLESPACE_NAME,EXTENTS,BLOCKS
FROM DBA_SEGMENTS WHERE TABLESPACE_NAME='STAR01D';

7) Check alert log for ORA- and warn messages:

Checking the alert log file regularly is a vital task. In the alert log we have to look for the following:

1) Look for any Oracle-related errors.
Open the alert log file with the less or more command and search for ORA-.
This will give you the error details and the time of occurrence.

2) Look for database-level or tablespace-level changes.
Monitor the alert log file for each day's activities in the database, such as database bounces, increases in tablespace sizes, and changes to database parameters. In an 11g database we can also look for TNS errors in the alert log file.
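The ORA- scan can be automated. The sketch below pairs each ORA- error with the most recent timestamp line above it; it assumes the classic text alert log format where timestamp lines begin with a weekday name, and the alert log path is site-specific:

```shell
# Print each ORA- error from an alert log together with the most recent
# timestamp line seen before it. Classic alert.log timestamps start with
# a weekday name (e.g. "Mon Jan 20 06:00:06 2014"); adjust if yours differ.
scan_alert_log() {
  awk '/^(Mon|Tue|Wed|Thu|Fri|Sat|Sun) / { ts = $0; next }
       /ORA-/                            { print ts ": " $0 }' "$1"
}
```

Usage: `scan_alert_log /path/to/alert_SID.log`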

8) Major wait events (latch/enqueue/Lib cache pin):

We can check the wait events details with the help of below queries:

SELECT s.saddr, s.SID, s.serial#, s.audsid, s.paddr, s.user#, s.username,
s.command, s.ownerid, s.taddr, s.lockwait, s.status, s.server,
s.schema#, s.schemaname, s.osuser, s.process, s.machine, s.terminal,
UPPER (s.program) program, s.TYPE, s.sql_address, s.sql_hash_value,
s.sql_id, s.sql_child_number, s.sql_exec_start, s.sql_exec_id,
s.prev_sql_addr, s.prev_hash_value, s.prev_sql_id,
s.prev_child_number, s.prev_exec_start, s.prev_exec_id,
s.plsql_entry_object_id, s.plsql_entry_subprogram_id,
s.plsql_object_id, s.plsql_subprogram_id, s.module, s.module_hash,
s.action, s.action_hash, s.client_info, s.fixed_table_sequence,
s.row_wait_obj#, s.row_wait_file#, s.row_wait_block#,
s.row_wait_row#, s.logon_time, s.last_call_et, s.pdml_enabled,
s.failover_type, s.failover_method, s.failed_over,
s.resource_consumer_group, s.pdml_status, s.pddl_status, s.pq_status,
s.current_queue_duration, s.client_identifier,
s.blocking_session_status, s.blocking_instance, s.blocking_session,
s.seq#, s.event#, s.event, s.p1text, s.p1, s.p1raw, s.p2text, s.p2,
s.p2raw, s.p3text, s.p3, s.p3raw, s.wait_class_id, s.wait_class#,
s.wait_class, s.wait_time, s.seconds_in_wait, s.state,
s.wait_time_micro, s.time_remaining_micro,
s.time_since_last_wait_micro, s.service_name, s.sql_trace,
s.sql_trace_waits, s.sql_trace_binds, s.sql_trace_plan_stats,
s.session_edition_id, s.creator_addr, s.creator_serial#
FROM v$session s
WHERE ( (s.username IS NOT NULL)
AND (NVL (s.osuser, 'x') <> 'SYSTEM')
AND (s.TYPE <> 'BACKGROUND') AND STATUS='ACTIVE'
)
ORDER BY "PROGRAM";

The following query provides clues about whether Oracle has been waiting for library cache activities:

Select sid, event, p1raw, seconds_in_wait, wait_time
From v$session_wait
Where event = 'library cache pin'
And state = 'WAITING';

The query below gives details of user sessions' wait time and state:

SELECT NVL (s.username, '(oracle)') AS username, s.SID, s.serial#, sw.event,
sw.wait_time, sw.seconds_in_wait, sw.state
FROM v$session_wait sw, v$session s
WHERE s.SID = sw.SID
ORDER BY sw.seconds_in_wait DESC;

9) Max Sessions:
There should not be more than 6 inactive sessions running for more than 8 hours in a Database in order to minimize the consumption of CPU and I/O resources.

a) User and session CPU consumption can be obtained with the query below:

Set lines 1000
select ss.username, se.SID,VALUE/100 cpu_usage_seconds
from v$session ss, v$sesstat se, v$statname sn
where se.STATISTIC# = sn.STATISTIC#
and NAME like '%CPU used by this session%'
and se.SID = ss.SID and ss.status='ACTIVE'
and ss.username is not null
order by VALUE desc;


b) User and session CPU and I/O consumption can be obtained with the query below:

-- shows Day wise,User wise,Process id of server wise- CPU and I/O consumption
set linesize 140
col spid for a6
col program for a35 trunc
select p.spid SPID,to_char(s.LOGON_TIME,'DDMonYY HH24:MI') date_login,s.username,decode(nvl(p.background,0),1,bg.description, s.program ) program,
ss.value/100 CPU,physical_reads disk_io,(trunc(sysdate,'J')-trunc(logon_time,'J')) days,
round((ss.value/100)/(decode((trunc(sysdate,'J')-trunc(logon_time,'J')),0,1,(trunc(sysdate,'J')-trunc(logon_time,'J')))),2) cpu_per_day
from V$PROCESS p,V$SESSION s,V$SESSTAT ss,V$SESS_IO si,V$BGPROCESS bg
where s.paddr=p.addr and ss.sid=s.sid
and ss.statistic#=12 and si.sid=s.sid  -- 12 = 'CPU used by this session'; verify in v$statname, as statistic numbers vary by version
and bg.paddr(+)=p.addr
and round((ss.value/100),0) > 10
order by 8;

10) Long running Jobs:

We can find long-running jobs with the query below:

col username for a20
col message for a50
col remaining for 9999
select username,to_char(start_time, 'hh24:mi:ss dd/mm/yy') started,
time_remaining remaining, message
from v$session_longops
where time_remaining > 0
order by time_remaining desc;

11) Invalid objects:

We can check the invalid objects with the help of the below query:

select owner||' '||object_name||' '||created||' '||status from dba_objects where status='INVALID';

12) Analyze Jobs ( once in a week ):

As a golden rule, we need to analyze jobs once a week.
The steps below can be considered when analyzing jobs.

Analyzing a Running Job
The status of a job or a task changes several times during its life cycle. A job can have the following as its status:
Scheduled: The job is created and will run at the specified time.
Running: The job is being executed and is in progress.
Initialization Error: The job or step could not be run successfully. If a step in a job fails initialization, the job status is Initialization Error.
Failed: The job was executed but failed.
Succeeded: The job was executed completely.
Stopped: The user canceled the job.
Stop Pending: The user has stopped the job. The already running steps are completing execution.
Suspended: This indicates that the execution of the job is deferred.
Inactive: This status indicates that the target has been deleted.
Reassigned: The owner of the job has changed.
Skipped: The job was not executed at the specified time and has been omitted.
Running jobs can be found with the query below:

select sid, job,instance from dba_jobs_running;

We can find details of failed and broken jobs with the query below:

select job||' '||schema_user||' '||Broken||' '||failures||' '||what||' '||last_date||' '||last_sec from dba_jobs;

13) Temp usage / Rollback segment/PGA usage:

We can get temporary tablespace usage details with the query below:
Set lines 1000
SELECT b.tablespace,
ROUND(((b.blocks*p.value)/1024/1024),2)||'M' "SIZE",
a.sid||','||a.serial# SID_SERIAL,
a.username,
a.program
FROM sys.v_$session a,
sys.v_$sort_usage b,
sys.v_$parameter p
WHERE p.name = 'db_block_size'
AND a.saddr = b.session_addr
ORDER BY b.tablespace, b.blocks;

We can get undo tablespace usage details with the query below:
set lines 1000
SELECT TO_CHAR(s.sid)||','||TO_CHAR(s.serial#) sid_serial,
NVL(s.username, 'None') orauser,
s.program,
r.name undoseg,
t.used_ublk * TO_NUMBER(x.value)/1024||'K' "Undo"
FROM sys.v_$rollname r,
sys.v_$session s,
sys.v_$transaction t,
sys.v_$parameter x
WHERE s.taddr = t.addr
AND r.usn = t.xidusn(+)
AND x.name = 'db_block_size';

We can get PGA usage details with the query below:
select st.sid "SID", sn.name "TYPE",
ceil(st.value / 1024 / 1024/1024) "GB"
from v$sesstat st, v$statname sn
where st.statistic# = sn.statistic#
and sid in
(select sid from v$session where username like UPPER('&user'))
and upper(sn.name) like '%PGA%'
order by st.sid, st.value desc;
Enter value for user: STARTXNAPP
14) Validating the backup (hot backup / cold backup):

We have to verify that the hot backup / cold backup (or any physical or logical backup) of all production and non-production databases went fine, and make sure we have valid backups of all databases. Check the backup locations to confirm each backup completed on time with the data required for restore and recovery, should it be needed.
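A basic automated check is possible if the backup tool writes a log file. In the sketch below, both the log path argument and the 'completed successfully' marker string are placeholders; match whatever your backup tool (e.g. RMAN) actually logs on success:

```shell
# Check that a backup log exists, was written within the last 24 hours,
# and contains a success marker. The marker string is a placeholder.
backup_ok() {
  log="$1"
  [ -f "$log" ] || { echo "backup log missing"; return 1; }
  # -mtime -1 matches files modified within the last 24 hours
  find "$log" -mtime -1 | grep -q . || { echo "backup log stale"; return 1; }
  grep -q 'completed successfully' "$log" || { echo "no success marker"; return 1; }
  echo "backup OK"
}
```

This only confirms the backup job reported success; periodically test an actual restore as well.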

15) Redo generation/Archive logs generation details:
We should make sure there are not frequent log switches in the database. Frequent log switches generate more archive logs, which may decrease database performance; in a production database the number of log switches could vary between 5 and 20, depending on the server configuration.

We can get the log switch details with the query below:

Redolog switch Datewise and hourwise:
-------------------------------
set lines 120;
set pages 999;
select to_char(first_time,'DD-MON-RR') "Date",
to_char(sum(decode(to_char(first_time,'HH24'),'00',1,0)),'99') " 00",
to_char(sum(decode(to_char(first_time,'HH24'),'01',1,0)),'99') " 01",
to_char(sum(decode(to_char(first_time,'HH24'),'02',1,0)),'99') " 02",
to_char(sum(decode(to_char(first_time,'HH24'),'03',1,0)),'99') " 03",
to_char(sum(decode(to_char(first_time,'HH24'),'04',1,0)),'99') " 04",
to_char(sum(decode(to_char(first_time,'HH24'),'05',1,0)),'99') " 05",
to_char(sum(decode(to_char(first_time,'HH24'),'06',1,0)),'99') " 06",
to_char(sum(decode(to_char(first_time,'HH24'),'07',1,0)),'99') " 07",
to_char(sum(decode(to_char(first_time,'HH24'),'08',1,0)),'99') " 08",
to_char(sum(decode(to_char(first_time,'HH24'),'09',1,0)),'99') " 09",
to_char(sum(decode(to_char(first_time,'HH24'),'10',1,0)),'99') " 10",
to_char(sum(decode(to_char(first_time,'HH24'),'11',1,0)),'99') " 11",
to_char(sum(decode(to_char(first_time,'HH24'),'12',1,0)),'99') " 12",
to_char(sum(decode(to_char(first_time,'HH24'),'13',1,0)),'99') " 13",
to_char(sum(decode(to_char(first_time,'HH24'),'14',1,0)),'99') " 14",
to_char(sum(decode(to_char(first_time,'HH24'),'15',1,0)),'99') " 15",
to_char(sum(decode(to_char(first_time,'HH24'),'16',1,0)),'99') " 16",
to_char(sum(decode(to_char(first_time,'HH24'),'17',1,0)),'99') " 17",
to_char(sum(decode(to_char(first_time,'HH24'),'18',1,0)),'99') " 18",
to_char(sum(decode(to_char(first_time,'HH24'),'19',1,0)),'99') " 19",
to_char(sum(decode(to_char(first_time,'HH24'),'20',1,0)),'99') " 20",
to_char(sum(decode(to_char(first_time,'HH24'),'21',1,0)),'99') " 21",
to_char(sum(decode(to_char(first_time,'HH24'),'22',1,0)),'99') " 22",
to_char(sum(decode(to_char(first_time,'HH24'),'23',1,0)),'99') " 23"
from v$log_history
group by to_char(first_time,'DD-MON-RR')
order by 1
/
Archive log generation is directly proportional to the number of log switches in a database: frequent log switches generate more archive logs, which can affect database performance.


We can use the queries below for archive log generation details:

a)Archive logs by dates:
set lines 1000
select to_char(first_time,'DD-MON-RR') "Date",
to_char(sum(decode(to_char(first_time,'HH24'),'00',1,0)),'99') " 00",
to_char(sum(decode(to_char(first_time,'HH24'),'01',1,0)),'99') " 01",
to_char(sum(decode(to_char(first_time,'HH24'),'02',1,0)),'99') " 02",
to_char(sum(decode(to_char(first_time,'HH24'),'03',1,0)),'99') " 03",
to_char(sum(decode(to_char(first_time,'HH24'),'04',1,0)),'99') " 04",
to_char(sum(decode(to_char(first_time,'HH24'),'05',1,0)),'99') " 05",
to_char(sum(decode(to_char(first_time,'HH24'),'06',1,0)),'99') " 06",
to_char(sum(decode(to_char(first_time,'HH24'),'07',1,0)),'99') " 07",
to_char(sum(decode(to_char(first_time,'HH24'),'08',1,0)),'99') " 08",
to_char(sum(decode(to_char(first_time,'HH24'),'09',1,0)),'99') " 09",
to_char(sum(decode(to_char(first_time,'HH24'),'10',1,0)),'99') " 10",
to_char(sum(decode(to_char(first_time,'HH24'),'11',1,0)),'99') " 11",
to_char(sum(decode(to_char(first_time,'HH24'),'12',1,0)),'99') " 12",
to_char(sum(decode(to_char(first_time,'HH24'),'13',1,0)),'99') " 13",
to_char(sum(decode(to_char(first_time,'HH24'),'14',1,0)),'99') " 14",
to_char(sum(decode(to_char(first_time,'HH24'),'15',1,0)),'99') " 15",
to_char(sum(decode(to_char(first_time,'HH24'),'16',1,0)),'99') " 16",
to_char(sum(decode(to_char(first_time,'HH24'),'17',1,0)),'99') " 17",
to_char(sum(decode(to_char(first_time,'HH24'),'18',1,0)),'99') " 18",
to_char(sum(decode(to_char(first_time,'HH24'),'19',1,0)),'99') " 19",
to_char(sum(decode(to_char(first_time,'HH24'),'20',1,0)),'99') " 20",
to_char(sum(decode(to_char(first_time,'HH24'),'21',1,0)),'99') " 21",
to_char(sum(decode(to_char(first_time,'HH24'),'22',1,0)),'99') " 22",
to_char(sum(decode(to_char(first_time,'HH24'),'23',1,0)),'99') " 23"
from v$log_history
group by to_char(first_time,'DD-MON-RR')
order by 1
/
b) Archive log generation details, day-wise:

select to_char(COMPLETION_TIME,'DD-MON-YYYY'),count(*)
from v$archived_log group by to_char(COMPLETION_TIME,'DD-MON-YYYY')
order by to_char(COMPLETION_TIME,'DD-MON-YYYY');

c) Archive log count of the day:

select count(*)
from v$archived_log
where trunc(completion_time)=trunc(sysdate);
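The same count can be cross-checked against the filesystem. This sketch counts archive files in a destination directory modified in the last 24 hours; the `*.arc` pattern is an assumption, so match it to your LOG_ARCHIVE_FORMAT setting:

```shell
# Count archive log files in a destination directory modified within the
# last 24 hours. The *.arc pattern is an assumption; adjust it to your
# LOG_ARCHIVE_FORMAT.
count_todays_arch() {
  find "$1" -name '*.arc' -type f -mtime -1 | wc -l | tr -d ' '
}
```

Usage: `count_todays_arch /path/to/arch_dest` — a large mismatch against the V$ARCHIVED_LOG count can indicate logs already purged or written elsewhere.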

count of archived logs generated today on hourly basis:
-------------------------------------------------------
select to_char(first_time,'DD-MON-RR') "Date",
to_char(sum(decode(to_char(first_time,'HH24'),'00',1,0)),'99') " 00",
to_char(sum(decode(to_char(first_time,'HH24'),'01',1,0)),'99') " 01",
to_char(sum(decode(to_char(first_time,'HH24'),'02',1,0)),'99') " 02",
to_char(sum(decode(to_char(first_time,'HH24'),'03',1,0)),'99') " 03",
to_char(sum(decode(to_char(first_time,'HH24'),'04',1,0)),'99') " 04",
to_char(sum(decode(to_char(first_time,'HH24'),'05',1,0)),'99') " 05",
to_char(sum(decode(to_char(first_time,'HH24'),'06',1,0)),'99') " 06",
to_char(sum(decode(to_char(first_time,'HH24'),'07',1,0)),'99') " 07",
to_char(sum(decode(to_char(first_time,'HH24'),'08',1,0)),'99') " 08",
to_char(sum(decode(to_char(first_time,'HH24'),'09',1,0)),'99') " 09",
to_char(sum(decode(to_char(first_time,'HH24'),'10',1,0)),'99') " 10",
to_char(sum(decode(to_char(first_time,'HH24'),'11',1,0)),'99') " 11",
to_char(sum(decode(to_char(first_time,'HH24'),'12',1,0)),'99') " 12",
to_char(sum(decode(to_char(first_time,'HH24'),'13',1,0)),'99') " 13",
to_char(sum(decode(to_char(first_time,'HH24'),'14',1,0)),'99') " 14",
to_char(sum(decode(to_char(first_time,'HH24'),'15',1,0)),'99') " 15",
to_char(sum(decode(to_char(first_time,'HH24'),'16',1,0)),'99') " 16",
to_char(sum(decode(to_char(first_time,'HH24'),'17',1,0)),'99') " 17",
to_char(sum(decode(to_char(first_time,'HH24'),'18',1,0)),'99') " 18",
to_char(sum(decode(to_char(first_time,'HH24'),'19',1,0)),'99') " 19",
to_char(sum(decode(to_char(first_time,'HH24'),'20',1,0)),'99') " 20",
to_char(sum(decode(to_char(first_time,'HH24'),'21',1,0)),'99') " 21",
to_char(sum(decode(to_char(first_time,'HH24'),'22',1,0)),'99') " 22",
to_char(sum(decode(to_char(first_time,'HH24'),'23',1,0)),'99') " 23"
from v$log_history
where to_char(first_time,'DD-MON-RR')='16-AUG-10'   -- example date; substitute the date of interest
group by to_char(first_time,'DD-MON-RR')
order by 1
/

16)I/O Generation:
We can find CPU and I/O generation details for all users in the database with the query below:
-- Show IO per session,CPU in seconds, sessionIOS.
set linesize 140
col spid for a6
col program for a35 trunc
select p.spid SPID,to_char(s.LOGON_TIME,'DDMonYY HH24:MI') date_login,s.username,decode(nvl(p.background,0),1,bg.description, s.program ) program,
ss.value/100 CPU,physical_reads disk_io,(trunc(sysdate,'J')-trunc(logon_time,'J')) days,
round((ss.value/100)/(decode((trunc(sysdate,'J')-trunc(logon_time,'J')),0,1,(trunc(sysdate,'J')-trunc(logon_time,'J')))),2) cpu_per_day
from V$PROCESS p,V$SESSION s,V$SESSTAT ss,V$SESS_IO si,V$BGPROCESS bg
where s.paddr=p.addr and ss.sid=s.sid
and ss.statistic#=12 and si.sid=s.sid  -- 12 = 'CPU used by this session'; verify in v$statname, as statistic numbers vary by version
and bg.paddr(+)=p.addr
and round((ss.value/100),0) > 10
order by 8;
To know what a session is doing and what kind of SQL it is using:

-- what kind of sql a session is using
set lines 9999
set pages 9999

select s.sid, q.sql_text from v$sqltext q, v$session s
where q.address = s.sql_address
and s.sid = &sid order by piece;

eg: sid=1853

17)Sync arch:
In a Data Guard environment we have to check that the primary is in sync with the standby database. We can check this as follows:
The V$MANAGED_STANDBY view on the standby database site shows the activities performed by
both the redo transport and Redo Apply processes in a Data Guard environment:
SELECT PROCESS, CLIENT_PROCESS, SEQUENCE#, STATUS FROM V$MANAGED_STANDBY;
In some situations, automatic gap recovery may not take place and you will need to perform gap recovery manually. For example, you will need to perform gap recovery manually if you are using logical standby databases and the primary database is not available.
The following sections describe how to query the appropriate views to determine which log files are missing and perform manual recovery.
On a physical standby database
To determine if there is an archive gap on your physical standby database, query the V$ARCHIVE_GAP view as shown in the following example:
SQL> SELECT * FROM V$ARCHIVE_GAP;

If it returns no rows, the primary database is in sync with the standby database. If it returns any rows, we have to apply the archive logs manually.

After you identify the gap, issue the following SQL statement on the primary database to locate the archived redo log files on your primary database (assuming the local archive destination on the primary database is LOG_ARCHIVE_DEST_1):
Eg:
SELECT NAME FROM V$ARCHIVED_LOG WHERE THREAD#=1 AND DEST_ID=1 AND SEQUENCE# BETWEEN 7 AND 10;
Copy these log files to your physical standby database and register them using the ALTER DATABASE REGISTER LOGFILE statement on your physical standby database. For example:
SQL> ALTER DATABASE REGISTER LOGFILE
'/physical_standby1/thread1_dest/arcr_1_7.arc';
SQL> ALTER DATABASE REGISTER LOGFILE
'/physical_standby1/thread1_dest/arcr_1_8.arc';

After you register these log files on the physical standby database, you can restart Redo Apply. The V$ARCHIVE_GAP fixed view on a physical standby database only returns the next gap that is currently blocking Redo Apply from continuing. After resolving the gap and starting Redo Apply, query the V$ARCHIVE_GAP fixed view again on the physical standby database to determine the next gap sequence, if there is one. Repeat this process until there are no more gaps.

On a logical standby database:
To determine if there is an archive gap, query the DBA_LOGSTDBY_LOG view on the logical standby database. For example, the following query indicates there is a gap in the sequence of archived redo log files because it displays two files for THREAD 1 on the logical standby database. (If there are no gaps, the query will show only one file for each thread.) The output shows that the highest registered file is sequence number 10, but there is a gap at the file shown as sequence number 6:
SQL> COLUMN FILE_NAME FORMAT a55
SQL> SELECT THREAD#, SEQUENCE#, FILE_NAME FROM DBA_LOGSTDBY_LOG L
2> WHERE NEXT_CHANGE# NOT IN
3> (SELECT FIRST_CHANGE# FROM DBA_LOGSTDBY_LOG WHERE L.THREAD# = THREAD#)
4> ORDER BY THREAD#,SEQUENCE#;

THREAD# SEQUENCE# FILE_NAME
---------- ---------- -----------------------------------------------
1 6 /disk1/oracle/dbs/log-1292880008_6.arc
1 10 /disk1/oracle/dbs/log-1292880008_10.arc

Copy the missing log files, with sequence numbers 7, 8, and 9, to the logical standby system and register them using the ALTER DATABASE REGISTER LOGICAL LOGFILE statement on your logical standby database. For example:
SQL> ALTER DATABASE REGISTER LOGICAL LOGFILE '/disk1/oracle/dbs/log-1292880008_10.arc';

After you register these log files on the logical standby database, you can restart SQL Apply.

The DBA_LOGSTDBY_LOG view on a logical standby database only returns the next gap that is currently blocking SQL Apply from continuing. After resolving the identified gap and starting SQL Apply, query the DBA_LOGSTDBY_LOG view again on the logical standby database to determine the next gap sequence, if there is one. Repeat this process until there are no more gaps.
Monitoring Log File Archival Information:
Step 1 Determine the current archived redo log file sequence numbers.
Enter the following query on the primary database to determine the current archived redo log file sequence numbers:
SQL> SELECT THREAD#, SEQUENCE#, ARCHIVED, STATUS FROM V$LOG
WHERE STATUS='CURRENT';
Step 2 Determine the most recent archived redo log file.
Enter the following query at the primary database to determine which archived redo log file contains the most recently transmitted redo data:
SQL> SELECT MAX(SEQUENCE#), THREAD# FROM V$ARCHIVED_LOG GROUP BY THREAD#;
Step 3 Determine the most recent archived redo log file at each destination.
Enter the following query at the primary database to determine which archived redo log file was most recently transmitted to each of the archiving destinations:
SQL> SELECT DESTINATION, STATUS, ARCHIVED_THREAD#, ARCHIVED_SEQ#
2> FROM V$ARCHIVE_DEST_STATUS
3> WHERE STATUS <> 'DEFERRED' AND STATUS <> 'INACTIVE';

DESTINATION STATUS ARCHIVED_THREAD# ARCHIVED_SEQ#
------------------ ------ ---------------- -------------
/private1/prmy/lad VALID 1 947
standby1 VALID 1 947
The most recently written archived redo log file should be the same for each archive destination listed. If it is not, a status other than VALID might identify an error encountered during the archival operation to that destination.
Step 4 Find out if archived redo log files have been received.
You can issue a query at the primary database to find out if an archived redo log file was not received at a particular site. Each destination has an ID number associated with it. You can query the DEST_ID column of the V$ARCHIVE_DEST fixed view on the primary database to identify each destination's ID number.
Assume the current local destination is 1, and one of the remote standby destination IDs is 2. To identify which log files are missing at the standby destination, issue the following query:
SQL> SELECT LOCAL.THREAD#, LOCAL.SEQUENCE# FROM
2> (SELECT THREAD#, SEQUENCE# FROM V$ARCHIVED_LOG WHERE DEST_ID=1)
3> LOCAL WHERE
4> LOCAL.SEQUENCE# NOT IN
5> (SELECT SEQUENCE# FROM V$ARCHIVED_LOG WHERE DEST_ID=2 AND
6> THREAD# = LOCAL.THREAD#);

THREAD# SEQUENCE#
--------- ---------
1 12
1 13
1 14

18)Purge arch:
We have to make sure the archive log files are purged safely, or moved to a tape drive or another location, in order to make space for new archive log files in the archive log destination.
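A sketch of such a purge is below. The directory arguments and the `*.arc` pattern are placeholders; never purge logs that are not yet backed up, and in a Data Guard setup confirm they are applied on the standby first:

```shell
# Move archive logs older than keep_days (default 7) from the archive
# destination to a backup location. Paths and the *.arc pattern are
# placeholders; only purge logs that are backed up (and applied on any
# standby).
purge_arch() {
  arch_dir="$1"; backup_dir="$2"; keep_days="${3:-7}"
  find "$arch_dir" -name '*.arc' -type f -mtime +"$keep_days" \
    -exec mv {} "$backup_dir"/ \;
}
```

Usage: `purge_arch /path/to/arch_dest /path/to/backup 7`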

19)Recovery status:
To be able to recover, make sure you have the latest archive logs so that you can restore and perform recovery if required.


20) MY DATABASE HEALTH CHECK SCRIPT:
/* SCRIPT FOR MONITORING AND CHECKING HEALTH OF DATABASE-USEFUL FOR PRODUCTION DATABASES */

-- SHOWS RUNNING JOBS
select 'RUNNING JOBS', sid, job,instance from dba_jobs_running;
set lines 1000
-- SHOWS ARCHIVE LOG GENERATION DETAILS ON AN HOURLY AND DATE-WISE BASIS
select 'ARCHIVE LOG REPORT',to_char(first_time,'DD-MON-RR') "Date",
to_char(sum(decode(to_char(first_time,'HH24'),'00',1,0)),'99') " 00",
to_char(sum(decode(to_char(first_time,'HH24'),'01',1,0)),'99') " 01",
to_char(sum(decode(to_char(first_time,'HH24'),'02',1,0)),'99') " 02",
to_char(sum(decode(to_char(first_time,'HH24'),'03',1,0)),'99') " 03",
to_char(sum(decode(to_char(first_time,'HH24'),'04',1,0)),'99') " 04",
to_char(sum(decode(to_char(first_time,'HH24'),'05',1,0)),'99') " 05",
to_char(sum(decode(to_char(first_time,'HH24'),'06',1,0)),'99') " 06",
to_char(sum(decode(to_char(first_time,'HH24'),'07',1,0)),'99') " 07",
to_char(sum(decode(to_char(first_time,'HH24'),'08',1,0)),'99') " 08",
to_char(sum(decode(to_char(first_time,'HH24'),'09',1,0)),'99') " 09",
to_char(sum(decode(to_char(first_time,'HH24'),'10',1,0)),'99') " 10",
to_char(sum(decode(to_char(first_time,'HH24'),'11',1,0)),'99') " 11",
to_char(sum(decode(to_char(first_time,'HH24'),'12',1,0)),'99') " 12",
to_char(sum(decode(to_char(first_time,'HH24'),'13',1,0)),'99') " 13",
to_char(sum(decode(to_char(first_time,'HH24'),'14',1,0)),'99') " 14",
to_char(sum(decode(to_char(first_time,'HH24'),'15',1,0)),'99') " 15",
to_char(sum(decode(to_char(first_time,'HH24'),'16',1,0)),'99') " 16",
to_char(sum(decode(to_char(first_time,'HH24'),'17',1,0)),'99') " 17",
to_char(sum(decode(to_char(first_time,'HH24'),'18',1,0)),'99') " 18",
to_char(sum(decode(to_char(first_time,'HH24'),'19',1,0)),'99') " 19",
to_char(sum(decode(to_char(first_time,'HH24'),'20',1,0)),'99') " 20",
to_char(sum(decode(to_char(first_time,'HH24'),'21',1,0)),'99') " 21",
to_char(sum(decode(to_char(first_time,'HH24'),'22',1,0)),'99') " 22",
to_char(sum(decode(to_char(first_time,'HH24'),'23',1,0)),'99') " 23"
from v$log_history
group by to_char(first_time,'DD-MON-RR')
order by 1
/
-- WHAT ALL THE SESSIONS ARE GETTING BLOCKED
select 'SESSIONS BLOCKED',process,sid, blocking_session from v$session where blocking_session is not null;
-- WHICH SESSION IS BLOCKING WHICH SESSION
set lines 9999
set pages 9999
select s1.username || '@' || s1.machine
|| ' ( SID=' || s1.sid || ' ) is blocking '
|| s2.username || '@' || s2.machine || ' ( SID=' || s2.sid || ' ) ' AS blocking_status
from v$lock l1, v$session s1, v$lock l2, v$session s2
where s1.sid=l1.sid and s2.sid=l2.sid
and l1.BLOCK=1 and l2.request > 0
and l1.id1 = l2.id1
and l1.id2 = l2.id2 ;
-- SHOWS BLOCK CHANGES DETAILS AND PHYSICAL READS DETAIL
select a.sid,b.username,block_gets,consistent_gets,physical_reads,block_changes
from V$SESS_IO a,V$SESSION b
where a.sid=b.sid and block_changes > 10000 order by block_changes desc;
-- show IO per session / CPU in seconds. sessionIOS.sql
set linesize 140
col spid for a6
col program for a35 trunc
select p.spid SPID,to_char(s.LOGON_TIME,'DDMonYY HH24:MI') date_login,s.username,decode(nvl(p.background,0),1,bg.description, s.program ) program,ss.value/100 CPU,physical_reads disk_io,(trunc(sysdate,'J')-trunc(logon_time,'J')) days,round((ss.value/100)/(decode((trunc(sysdate,'J')-trunc(logon_time,'J')),0,1,(trunc(sysdate,'J')-trunc(logon_time,'J')))),2) cpu_per_day
from V$PROCESS p,V$SESSION s,V$SESSTAT ss,V$SESS_IO si,V$BGPROCESS bg
where s.paddr=p.addr and ss.sid=s.sid
and ss.statistic#=12 and si.sid=s.sid  -- 12 = 'CPU used by this session'; verify in v$statname, as statistic numbers vary by version
and bg.paddr(+)=p.addr
and round((ss.value/100),0) > 10
order by 8;
-- SCRIPT TO IDENTIFY LONG RUNNING STATEMENTS
rem LONGOPS.SQL
rem Long Running Statements
rem Helmut Pfau, Oracle Deutschland GmbH
set linesize 120
col opname format a20
col target format a15
col units format a10
col time_remaining format 99990 heading Remaining[s]
col bps format 9990.99 heading [Units/s]
col fertig format 90.99 heading "complete[%]"
select sid,
opname,
target,
sofar,
totalwork,
units,
(totalwork-sofar)/time_remaining bps,
time_remaining,
sofar/totalwork*100 fertig
from v$session_longops
where time_remaining > 0
/
-- ACTIVE SESSIONS IN DATABASE
select 'ACTIVE SESSION', sid, serial#,machine, osuser,username,status from v$session where username is not null and status='ACTIVE';
-- WHAT SQL A SESSION IS USING
set lines 9999
set pages 9999
select s.sid, q.sql_text from v$sqltext q, v$session s
where q.address = s.sql_address
and s.sid = &sid order by piece;

eg:SID=1844
I would like to add one more script that reports the database size (used, occupied, and available), tablespace usage details, and the hit ratios of various SGA components, which can be very helpful for monitoring the performance of the databases.

Database_monitor.sql:

ttitle "1. :============== Tablespace Usage Information ==================:" skip 2
set linesize 140
col Total format 99999.99 heading "Total space(MB)"
col Used format 99999.99 heading "Used space(MB)"
col Free format 99999.99 heading "Free space(MB)"
break on report
compute sum of Total on report
compute sum of Used on report
compute sum of Free on report
select a.tablespace_name, round(a.bytes/1024/1024,2) Total,
round( nvl( b.bytes,0)/1024/1024,2) Used,
round(nvl(c.bytes, 0)/1024/1024,2) Free ,
round(nvl(b.bytes,0)*100/nvl(a.bytes,0),2) "% Used"
from sys.sm$ts_avail a, sys.sm$ts_used b, sys.sm$ts_free c
where a.tablespace_name=b.tablespace_name(+)
and b.tablespace_name=c.tablespace_name(+);

ttitle "2. :============== Hit Ratio Information ==================:" skip 2
set linesize 80
clear columns
clear breaks
set pagesize 60 heading off termout off echo off verify off
REM
col val1 new_val lib noprint
select 100*(1-(SUM(Reloads)/SUM(Pins))) val1
from V$LIBRARYCACHE;

ttitle off
col val2 new_val dict noprint
select 100*(1-(SUM(Getmisses)/SUM(Gets))) val2
from V$ROWCACHE;

ttitle off
col val3 new_val phys_reads noprint
select Value val3
from V$SYSSTAT
where Name = 'physical reads';

ttitle off
col val4 new_val log1_reads noprint
select Value val4
from V$SYSSTAT
where Name = 'db block gets';

ttitle off
col val5 new_val log2_reads noprint
select Value val5
from V$SYSSTAT
where Name = 'consistent gets';

ttitle off
col val6 new_val chr noprint
select 100*(1-(&phys_reads / (&log1_reads + &log2_reads))) val6
from DUAL;

ttitle off
col val7 new_val avg_users_cursor noprint
col val8 new_val avg_stmts_exe noprint
select SUM(Users_Opening)/COUNT(*) val7,
SUM(Executions)/COUNT(*) val8
from V$SQLAREA;
ttitle off
set termout on
set heading off
ttitle -
center 'SGA Cache Hit Ratios' skip 2

select 'Data Block Buffer Hit Ratio : '||&chr db_hit_ratio,
' Shared SQL Pool ',
' Dictionary Hit Ratio : '||&dict dict_hit,
' Shared SQL Buffers (Library Cache) ',
' Cache Hit Ratio : '||&lib lib_hit,
' Avg. Users/Stmt : '||
&avg_users_cursor||' ',
' Avg. Executes/Stmt : '||
&avg_stmts_exe||' '
from DUAL;

ttitle "3. :============== Sort Information ==================:" skip 2

select A.Value Disk_Sorts,
B.Value Memory_Sorts,
ROUND(100*A.Value/
DECODE((A.Value+B.Value),0,1,(A.Value+B.Value)),2)
Pct_Disk_Sorts
from V$SYSSTAT A, V$SYSSTAT B
where A.Name = 'sorts (disk)'
and B.Name = 'sorts (memory)';

ttitle "4. :============== Database Size Information ==================:" skip 2


select 'Available (GB)' what, round(sum(bytes)/1024/1024/1024,2) gb from sm$ts_avail
union all
select 'Used (GB)', round(sum(bytes)/1024/1024/1024,2) from sm$ts_used
union all
select 'Free (GB)', round(sum(bytes)/1024/1024/1024,2) from sm$ts_free;


Hope this helps you in monitoring your Databases.





Goraknath Shivram Rathod

Friday, 17 January 2014

Oracle Application Scripts

Script to collect Apache, Jserv, Forms, Jinitiator and Perl version in E-Business suite R12



# +===========================================================================+
# | FILENAME
# | R12_Instance_Version_Information.sh
# |
# | DESCRIPTION
# |   This script can be used to collect Apache, Jserv, Forms, Jinitiator and  
# |   Perl version in E-Business suite R12
# |
# | NOTES
# |   Before running the script ensure that you have sourced the environment 
# |   by running APPS<SID>_host.env file from $APPL_TOP.  
# |
# +===========================================================================+
#
(
echo "Script Started "
date
## The script shows output on screen and creates a file named zz<hostname>_<date>.txt in the current directory
## Collect the Apache version
echo "*****Apache Version*****"
$IAS_ORACLE_HOME/Apache/Apache/bin/httpd -v
echo " "
## Collect perl version
echo "*****perl version******"
$IAS_ORACLE_HOME/perl/bin/perl -v|grep built
echo " "
## Collect Java version
echo "******Java Version******"
sh -c "`awk -F= '$1 ~ /^JSERVJAVA.*$/ {print $2}' $ADMIN_SCRIPTS_HOME/java.sh` -version;"
echo " "
## Collect client JRE version
echo "*****Client JRE version*****"
cat $FORMS_WEB_CONFIG_FILE|grep sun_plugin_version| cut -c 1-35
echo " "
## Collect Forms version
echo "*****Forms Version*****"
$ORACLE_HOME/bin/frmcmp_batch|grep Forms| grep Version
echo " "
## Collect PL/SQL Version
echo "*****PL/SQL Version****"
$ORACLE_HOME/bin/frmcmp_batch|grep PL/SQL|grep Version
echo " "
## Collect Forms communication mode
echo "****Forms Communication Mode****"
cat $FORMS_WEB_CONFIG_FILE|grep serverURL=
echo "If the serverURL parameter has no value, Forms is implemented in socket mode; otherwise it is servlet mode"
echo " "
echo "Script Completed Successfully; it has generated zz`hostname`_`date +%m%d%y.%H%M`.txt in the current directory"
echo "Script completed "
date
) 2>&1 | tee zz`hostname`_`date +%m%d%y.%H%M`.txt
###
### END OF SCRIPT
###

Wednesday, April 17, 2013

Query to check whether the debug option is enabled in the profile


SELECT po.profile_option_name "NAME",
po.USER_PROFILE_OPTION_NAME,
decode(to_char(pov.level_id),
'10001', 'SITE',
'10002', 'APP',
'10003', 'RESP',
'10005', 'SERVER',
'10006', 'ORG',
'10004', 'USER', '???') "LEV",
decode(to_char(pov.level_id),
'10001', '',
'10002', app.application_short_name,
'10003', rsp.responsibility_key,
'10005', svr.node_name,
'10006', org.name,
'10004', usr.user_name,
'???') "CONTEXT",
pov.profile_option_value "VALUE"
FROM FND_PROFILE_OPTIONS_VL po,
FND_PROFILE_OPTION_VALUES pov,
fnd_user usr,
fnd_application app,
fnd_responsibility rsp,
fnd_nodes svr,
hr_operating_units org
WHERE po.user_profile_option_name like '%Debug%'
AND pov.profile_option_value='Y'
AND pov.application_id = po.application_id
AND pov.profile_option_id = po.profile_option_id
AND usr.user_id (+) = pov.level_value
AND rsp.application_id (+) = pov.level_value_application_id
AND rsp.responsibility_id (+) = pov.level_value
AND app.application_id (+) = pov.level_value
AND svr.node_id (+) = pov.level_value
AND org.organization_id (+) = pov.level_value;

Query to check the values of the given profile option


The query below returns the value of the profile option you supply as input. For example, if you enter the profile name "ICX_FORMS_LAUNCHER", it shows that option's value at each level.


SELECT po.profile_option_name "NAME",
po.USER_PROFILE_OPTION_NAME,
decode(to_char(pov.level_id),
'10001', 'SITE',
'10002', 'APP',
'10003', 'RESP',
'10005', 'SERVER',
'10006', 'ORG',
'10004', 'USER', '???') "LEV",
decode(to_char(pov.level_id),
'10001', '',
'10002', app.application_short_name,
'10003', rsp.responsibility_key,
'10005', svr.node_name,
'10006', org.name,
'10004', usr.user_name,
'???') "CONTEXT",
pov.profile_option_value "VALUE"
FROM FND_PROFILE_OPTIONS_VL po,
FND_PROFILE_OPTION_VALUES pov,
fnd_user usr,
fnd_application app,
fnd_responsibility rsp,
fnd_nodes svr,
hr_operating_units org
WHERE po.profile_option_name LIKE '%&profile%'
AND pov.application_id = po.application_id
AND pov.profile_option_id = po.profile_option_id
AND usr.user_id (+) = pov.level_value
AND rsp.application_id (+) = pov.level_value_application_id
AND rsp.responsibility_id (+) = pov.level_value
AND app.application_id (+) = pov.level_value
AND svr.node_id (+) = pov.level_value
AND org.organization_id (+) = pov.level_value

Friday, May 25, 2012

Script that will list all Concurrent programs for which trace is enabled – output should be Program Name / Shortname / Application



#Script to list the concurrent programs with trace enabled
output_log=/tmp/practice/log.txt
export output_log

sqlplus -S apps/xxxxxxx << EOF

spool $output_log

set pages 1000 lines 1000

select a.CONCURRENT_PROGRAM_ID, b.USER_CONCURRENT_PROGRAM_NAME "Program_Name", a.CONCURRENT_PROGRAM_NAME "Short_Name", a.APPLICATION_ID
from fnd_concurrent_programs a, fnd_concurrent_programs_tl b
where a.CONCURRENT_PROGRAM_ID=b.CONCURRENT_PROGRAM_ID and a.ENABLE_TRACE='Y';

spool off

exit;
EOF

Wednesday, May 9, 2012

Script to disable all scheduled requests - After cloning

update fnd_concurrent_requests
set phase_code='C',
status_code='D'
where phase_code = 'P'
and (status_code = 'I' OR status_code = 'Q')
and requested_start_date >= SYSDATE
and hold_flag = 'N';

Apps Script

Query for scheduled requests that resubmit less than 5 minutes from the start of the prior run


select cs.request_id, cs.concurrent_program_id, cs.program, cs.argument_text,
       cr.resubmit_interval_type_code, cs.requestor,
       resubmit_interval, resubmit_interval_unit_code
from fnd_concurrent_requests cr, fnd_conc_req_summary_v cs
where cs.phase_code = 'P'
and cs.status_code in ('I','Q')
and resubmit_interval_type_code = 'START'
and cr.request_id = cs.request_id
and resubmit_interval_unit_code = 'MINUTES'
and resubmit_interval < 5;

Monday, June 11, 2012

Script to find who has changed the sysadmin Password

The below script helps us to find the user_name who has changed the sysadmin password

SELECT substr(d.user_name,1,30) || '~'||substr(f.user_name,1,10)||'~'||substr(d.last_update_date,1,10) key,count(*) value
from apps.fnd_user d, apps.fnd_user f
WHERE
trunc(d.last_update_date) = trunc(sysdate)
and d.user_name='SYSADMIN'
and d.last_updated_by=f.user_id GROUP BY substr(d.user_name,1,30) || '~'||
substr(f.user_name,1,10)||'~'||substr(d.last_update_date,1,10);

! Happy that it helps you :) !.

Narasimha Rao

Monday, May 28, 2012

How to find all Form Personalization through script in Oracle APPS

SELECT FORM_ID, FORM_NAME, USER_FORM_NAME, DESCRIPTION
FROM
FND_FORM_VL
WHERE FORM_NAME IN
(SELECT FORM_NAME FROM FND_FORM_CUSTOM_RULES GROUP BY FORM_NAME)

Wednesday, April 25, 2012

Number of times Purge Concurrent Request execution was run in last 30 days



set pages 1000
set line 132
set head on
select  r.requested_start_date,r.actual_start_date,r.phase_code,r.status_code,r.actual_completion_date from
apps.fnd_concurrent_programs_vl p, apps.fnd_concurrent_requests r
where p.concurrent_program_id = r.concurrent_program_id
and p.application_id = r.program_application_id
and p.user_concurrent_program_name like 'Purge Concurrent Request and/or Manager Data'
and  r.actual_start_date >= sysdate-30 order by r.requested_start_date;

Number of times gather schema stats was run in last 30 days


set pages 1000
set line 132
set head on
select  r.request_id,r.actual_start_date,r.phase_code,r.status_code,r.actual_completion_date,argument_text from
apps.fnd_concurrent_programs_vl p, apps.fnd_concurrent_requests r
where p.concurrent_program_id = r.concurrent_program_id
and p.application_id = r.program_application_id
and p.user_concurrent_program_name like '%Gather Schema Statistics%'
and  r.actual_start_date >= sysdate-30 order by r.requested_start_date;

Concurrent Requests Average/Max/Min hours runtime details


set linesize 200
col username for a10
col status for a10
col phase  for a10
col PNAME for a70
col request_id for 99999999999
col PNAME  for a40
select
b.user_name username,
a.USER_CONCURRENT_PROGRAM_NAME  as PNAME,
avg((nvl(ACTUAL_COMPLETION_DATE,sysdate)-a.REQUESTED_START_DATE)*24) avg_Hrs_running,  
max((nvl(ACTUAL_COMPLETION_DATE,sysdate)-a.REQUESTED_START_DATE)*24) Max_Hrs_running,
min((nvl(ACTUAL_COMPLETION_DATE,sysdate)-a.REQUESTED_START_DATE)*24) Min_Hrs_running
from
apps.fnd_conc_req_summary_v a,
apps.fnd_user  b
where
phase_code = 'C' and status_code = 'C' and
a.REQUESTED_START_DATE > sysdate-30 and
upper(a.USER_CONCURRENT_PROGRAM_NAME) like upper('%&str%') and
a.REQUESTED_BY=b.user_id
group by b.user_name,a.USER_CONCURRENT_PROGRAM_NAME;                

Tuesday, April 24, 2012

OS commands Very useful for DBAs and OS files management



Finding OS Version and Bit

OS Version

uname -a
AIX <ServerName> 3 5 00C8E96B4C00

uname
AIX

oslevel -r
5300-06

OS Bit

lsconf|grep -i kernel
Kernel Type: 64-bit

prtconf

/usr/bin/isainfo -kv

getconf LONG_BIT 

getconf -a | grep KERN
uname -m


1024 bytes = 1 KB (4 digits)
1024 * 1024 = 1048576 = 1 MB (7 digits)
1024 * 1024 * 1024 = 1073741824 = 1 GB (10 digits)
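The arithmetic above is easy to verify with plain shell arithmetic expansion (a quick sanity check, nothing Oracle-specific):

```shell
# Verify the KB/MB/GB byte counts with POSIX shell arithmetic.
kb=$((1024))
mb=$((1024 * 1024))
gb=$((1024 * 1024 * 1024))
echo "1 KB = $kb bytes, 1 MB = $mb bytes, 1 GB = $gb bytes"
```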



lssrc -a |grep sendmail

ps -ef | grep sendmail



ps -ef|awk '{print $1}'|sort -n|uniq -c

Process count

ps -ef|awk '{print $1 }'|sort|uniq -c |sort -n

ps -ef|wc -l

ps -ef|grep oracle|wc -l

ps -x | wc -l


                   
!!!CAUTION!!!

  1. ALL ARE EXAMPLES ONLY.
  2. The use of the rm -f command must be thoroughly checked before including it in the find command syntax.
  3. The use of the gzip -f command must be thoroughly checked before including it in the find command syntax.
  4. Please understand the commands below before executing them in production.
  5. Always edit the find commands below as per your requirement.
  6. We will not own responsibility in case the commands below are used wrongly, causing severe damage to files.
  7. These commands can cause severe damage to files if used wrongly in the wrong situation.
  8. These commands are examples only and should be used with caution and a buddy check.
  9. Understand, test, and edit the commands appropriately before use.
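The cautions above boil down to one working habit: run every find with ls -l first, review the list, and only then swap in the destructive command. A minimal sketch of that habit against a throwaway temp directory (the file names here are made up for illustration):

```shell
# Safe-purge pattern: dry-run with ls -l, review the list, then swap in rm -f.
dir=$(mktemp -d)                       # throwaway directory for the demo
touch "$dir/a.trc" "$dir/b.trc" "$dir/keep.log"

# Step 1: dry run -- list exactly what would be deleted.
find "$dir" -name "*.trc" -type f -exec ls -l {} \;

# Step 2: only after reviewing the list, run the destructive version.
find "$dir" -name "*.trc" -type f -exec rm -f {} \;

ls "$dir"                              # only keep.log remains
rm -rf "$dir"
```

The same two-step pattern applies to the gzip -f variants: list first, compress second.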
Unix commands

To find files modified between 5 and 7 days ago
find ./ -name "*" \( -mtime -7 -a -mtime +5 \)  -type f -exec ls -l {} \;

To specify displayed information
find . -type f -name "*.trc" -printf "%f   %c  %a %t  \n"

To find files owned by a specified user on the desired mountpoint (/var in this example)
find /var -xdev -user oracle -type f -exec ls -l {} \;

Files bigger than specified size in [k for kilobytes | c for bytes ]
find . -type f -size +1024k -exec ls -l {} \;

To compute size of files
ls -ltr * | awk '{ sum += $5 } { print "Size in GB " sum/1024/1024/1024 }' | tail -1
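To see that this awk one-liner simply adds up column 5 (the byte size) of ls -l, here it is against two files of known size (mktemp and dd assumed available, as on any Linux box):

```shell
# Demonstrate the ls -l | awk size summation on files of known size.
dir=$(mktemp -d)
dd if=/dev/zero of="$dir/f1" bs=1024 count=1 2>/dev/null   # 1 KB file
dd if=/dev/zero of="$dir/f2" bs=1024 count=2 2>/dev/null   # 2 KB file

# Column 5 of ls -l is the size in bytes; awk keeps a running total.
ls -l "$dir"/f* | awk '{ sum += $5 } END { print "Total bytes: " sum }'   # Total bytes: 3072
rm -rf "$dir"
```

Printing in an END block is the tidier equivalent of the print-on-every-line-then-tail -1 trick used above.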

Other commands syntax
uname -a                                               -- To find the OS of the box

df -h                                                        -- To check total, used, and available space of all filesystems
df -k

pwd                                                        -- Print name of current/working directory
                                                                    To check you are in the correct folder; if not, use the cd command to change directory

df -h .                                                      -- To check total, used, and available space of the current path's filesystem
df -k .                                                          (59419872 = around 59GB)
df -g .

df -g /opt/oracle/apps/admin/UKPROD  

df -hP | grep /SID/oracle   --To check total, used, and available space of the given path's filesystem
df -kP | grep /SID/oracle                    (59419872 = around 59GB)

du -sh .                                                   --to check total size of current directory
du -sk .                                                    (output in KB) (23695680 = around 23GB)

du -sh *                                                  --to list sizes of all files , folders
du -sk *

du -sh * | sort -h                                  --to list sizes of all files/folders sorted by size (GNU sort -h handles the human-readable units)
du -sk * | sort -n

du -sk *|sort -n|tail                             --to list the 10 biggest files/folders
du -sh *|sort -h|tail

du -sk * | sort -n | egrep 'tblsp_SID.txt|tmproot|REFRESH|clone_base|oradata|xxonline'

du -ch PROD_df_LEVEL0_04-02-2007*     --the last line shows the total size of the matching files

ls -l l779750[7-9].req

ls -lrt *LEVEL0*
ls -ltr *failed*

ls -ltr SID_arch* | head ; ls -ltr SID_arch* | tail

ls -ltrh

ls -lS                                                        -> list of files/folders sorted by size, biggest first (scroll up to see them)

ls -lSr                                                      -> list of files/folders sorted by size in reverse (biggest files appear last, visible immediately)

ls -lt                                                         -> sort by last modified date, newest first (old files visible at the bottom)

ls -ltr                                                       -> sort by last modified date in reverse, oldest first (newest files visible at the bottom)
                                                                -- columns: file/directory type, owner, size, last modified date, file name

ls -lSrh | tail -30                    -> the 30 biggest files, sorted by size (human-readable)

ls -lSr |tail

ls -lt *TMP| grep  " May  5"

fuser <filename>                                 -- to find which processes are accessing the file
lsof <filename>                                     -- to find which processes are accessing the file


Command to total the size of files matching a pattern (works on all platforms)

ls -ltr *LEVEL0* | awk '{ sum += $5 } { print "Size in GB " sum/1024/1024/1024 }' |tail -1  

ls -ltr SID_df_LEVEL0_18-10-2006* | awk '{ sum += $5 } { print "Size in GB " sum/1024/1024/1024 }' |tail -1



find ./ -name "*.zip*" -mtime +60 -exec ls -ltr {} \;  - searching for old patches

find ./ -name "*log*" -mtime +60 -exec ls -ltr {} \;   - searching for old log files

find . -size +100000000c -xdev -exec ls -l {} \;       - searching for files larger than 100 MB

find ./ -name "*" -size +30000k -exec ls -ltr {} \;

BE CAREFUL WHEN USING THESE COMMANDS
===================================================================================================
find /opt/oracle/apps/admin/SID/log -mtime +60 -type f -exec ls -l {} \; | wc -l
find ./ -name "*.trc" -mtime +60 -exec ls -ltr {} \;      -- listing 60-day-old .trc files
find ./ -name "error_log*" -mtime +10 -exec ls -ltr {} \; | wc -l
find ./ -name "*.trc" -mtime +10 -exec gzip -f {} \;      -- zipping 10-day-old .trc files
find ./ -name "*.trc*" -mtime +60 -exec ls -l {} \;       -- listing 60-day-old .trc files (swap ls -l for rm to purge)
gzip -f `ls -lt *.trc | grep " Apr " | awk '{print $9}'`  -- zipping April .trc files
gzip -f `ls -ltr | grep " Sep 29 10" | awk '{print $9}'`  -- zipping all files of Sep 29th, 10 AM
ls -l `ls -lt *.trc | grep " Jan " | awk '{print $9}'`    -- listing January .trc files (swap ls -l for rm to purge)
ls -l *trc `ls -l | grep " Aug " | awk '{print $9}'`

ls -ltr *LEVEL0* | awk '{ sum += $5 } { print "Size in GB " sum/1024/1024/1024 }' | tail -1   -- calculate size
ls -ltr PROD_df_LEVEL1_27-09-2007* | awk '{ sum += $5 } { print "Size in GB " sum/1024/1024/1024 }' | tail -1

find ./ -name "*trc" -size +300k -mtime +3 -exec ls -ltr {} \;
find ./ -name "*trc" -size +300k -mtime +3 -exec gzip {} \;
gzip -f `ls -l | grep " Sep " | awk '{print $9}'`
gzip -f `ls -l | grep " Sep 1" | awk '{print $9}'`
gzip <file name>

rm <file name>   Ex: rm ias.tar                           -- to remove a single file
ls -l SID_df_LEVEL0_30-09-2006*                           -- listing multiple files (swap ls -l for rm to remove them)

find ./ -name "*.*" -mtime +60 -exec ls -ltr {} \;        -- searching for old files
find . -mtime +15 -exec gzip -f {} \;                     -- zipping 15-day-old files
find . -mtime +30 -exec ls -l {} \;                       -- listing 30-day-old files (swap ls -l for rm to purge)

ls -lt *log* | grep " Mar " | awk '{print $9}'
ls -l `ls -lt *log* | grep " Feb " | awk '{print $9}'`
ls -l `ls -lt | grep " Jul " | awk '{print $9}'`

find ./ -name "*.out" -mtime +60 -exec ls -ltr {} \;      -- searching for old files with extension .out
find . -size +100000000c -xdev -exec ls -l {} \;          -- searching for files larger than 100 MB
find ./ -name "*" -size +30000k -exec ls -ltr {} \;

ls -ltr SID_df_LEVEL0_28-10-2006* | awk '{ sum += $5 } { print "Size in GB " sum/1024/1024/1024 }' | tail -1   -- total size of this failed LEVEL0 backup
===================================================================================================
To purge files along with the directories inside, list them first with: ls -l
ls -lr oradata_beforePMP   -- DANGEROUS: with rm -r in place of ls -lr this purges the folder and everything inside it forcefully
===================================================================================================
find . -size +50000000c -print|xargs ls -l

Finding Archivelogs applied lastly

select min(COMPLETION_TIME) last_appl from v$archived_log
where (THREAD#=1 and SEQUENCE#=28040)
or (THREAD#=2 and SEQUENCE#=24386)
or (THREAD#=3 and SEQUENCE#=24259);

LAST_APPL
---------------
08-jun-07 19:12

Removing Old Archivelog files of a date (Replace ls with rm command)

alter session set nls_date_format='dd-mon-rr hh24:mi';
set lines 180
set pagesize 9999
select 'ls -l ' || name
from v$archived_log
where applied = 'YES'
and to_char(COMPLETION_TIME,'rrrrmmdd') between '20070602' and '20070604';

'LS-L'||NAME
---------------------------------------------------
ls -l /SID/arch/SID1/SID_2_23688.arc
ls -l /SID/arch/SID1/SID_1_27359.arc
ls -l /SID/arch/SID1/SID_3_23578.arc
...
ls -l /SID/arch/SID1/SID_2_23992.arc
ls -l /SID/arch/SID1/SID_2_23993.arc
ls -l /SID/arch/SID1/SID_3_23883.arc
ls -l /SID/arch/SID1/SID_1_27663.arc

917 rows selected.



RMAN Sofar done

SELECT sid, serial#, context, sofar, totalwork,
round(sofar/totalwork*100,2) "% Complete"
FROM v$session_longops
WHERE opname LIKE 'RMAN%'
AND opname NOT LIKE '%aggregate%'
AND totalwork != 0
AND sofar <> totalwork
/

find /backup/sid -name "SID_arch_*" -mmin +6000 -exec ls -l {} \; | awk '{sum += $5} {print "Size GB:" sum/1024/1024/1024 }' | tail -1

Size GB:10.8984

find /backup/sid -name "SID_arch_*" -mmin +6030 -exec ls -l {} \;
Examples (thoroughly test and understand these yourself before use)
df -hP|grep <codetree>
du -sh *
ls -ltr|tail
ls -lSr|tail
ls -ltr|head
find . -iname *out -mtime +30 -exec ls -l {} \;
find . -mtime +3 -exec gzip -f {} \;
find . -mtime +60 -exec ls -l {} \;
find . -size +100000000c -xdev -exec ls -lrth {} \;

du -sk --->to sum up sizes (in KB)
nohup find /SID/oracle/product/102/admin/SID_Host/bdump -size +100000000c -name "*.trc*" -mtime +15 -xdev -exec ls -l {} \; &

find . -size +100000000c -name "*log*" -mtime +1 -xdev -exec gzip -f {} \;

find . -size +100000000c -xdev -exec ls -ltr {} \;

find / -size +100000000c -xdev -type f -exec ls -lh {} \;

find . ! -name . -prune -size +100000000c -xdev -exec ls -lrth {} \;

find . ! -name . -prune -size +100000000c -xdev -exec du -sh {} \;

find . ! -name . -prune -name "core-*" -mtime +30 -exec ls -l {} \;

find /tmp ! -name . -prune -name "*.t" -mtime +15 -exec ls -ltrh {} \;

find /tmp ! -name . -prune -name "*.t" -mtime +15 -exec ls -l {} \;

find /tmp -name "*.tmp.gz" -mtime +15 -exec ls -ltrh {} \;

find /tmp -name "*.t" -mtime +1 -exec gzip  -f {} \;

find . ! -name . -prune -name "*.t" -mtime +3 -exec gzip -f {} \;

find . ! -name . -prune -name "*.tmp" -mtime +15 -exec ls -ltrh {} \;

find . ! -name . -prune -name "*.tmp" -mtime +15 -exec ls -l {} \;

find . ! -name . -prune -name "*.tmp" -mtime +3 -exec gzip -f {} \;

find / -xdev -size '+5000k'|xargs ls -lh | grep -v dev |grep aptrvrbi

find . -xdev -size '+5000k'|xargs ls -lh | grep -v dev |grep iasrvrbi

find /tmp -xdev -size '+5000k'|xargs ls -l | grep -v dev|head -500

find . -xdev -size '+50000k'|xargs ls -ltr | grep -v dev

find . -xdev -size '+5000k'|xargs ls -ltrh | grep -v dev

find . ! -name . -prune -xdev -size '+5000k'|xargs ls -lh | grep -v dev

find . ! -name . -prune -xdev -size '+5000k'|xargs ls -ltrh | grep -v dev

find . ! -name . -prune -name "*.tmp" -mtime +30 -exec ls -ltr {} \;

find . ! -name . -prune -name "*.tmp" -mtime +30 -exec ls -l {} \;

find . ! -name . -prune -name "*.tmp" -mtime +30 -exec ls -l {} \;

find . -mtime +30 -size +100000000c -xdev -exec ls -ltrh {} \;

find . -size +100000000c -xdev -exec ls -ltr {} \;

find . -size +10000000c -xdev -exec ls -ltrh {} \;

find . -size +10000000c -xdev -exec ls -ltr {} \;

find . -size +10000000c -xdev -exec du -sk {} \;

find / -xdev -size '+10000k'|xargs ls -ld | grep -v dev |grep SID --->10000k files

find ./ -name "o1*.out" -size +3000k  -mtime +10 -exec gzip {} \;

du -sk *|sort -n|tail -15

*applcsf log/out/tmp

*product *iAS/Apache/Apache *806 network/admin/log

*APPLMGR *common/admin log/out

find /SID/applmgr/common/admin/log/SID_oradev -mtime +60 -type f -exec ls -l {} \;

nohup find /SID/applcsf/log/SID_ERP01 -mtime +30 -exec ls -l {} \; &


nohup find /SID/applcsf/tmp -mtime +7 -exec gzip -f {} \; &

nohup find /SID/applcsf/log -mtime +7 -exec gzip -f {} \; &


nohup find /SID/applcsf/out -mtime +7 -exec gzip -f {} \; &

nohup find /SID/applcsf/tmp -mtime +3 -exec gzip -f {} \; &

nohup find /SID/applcsf/log -mtime +3 -exec gzip -f {} \; &

nohup find /SID/applcsf/ -mtime +3 -exec gzip -f {} \; &

nohup find /SID/applcsf/out -mtime +7 -exec gzip -f {} \; &

nohup find /SID/applcsf/out/SID_Host -mtime +7 -exec gzip -f {} \; &
find . -mtime +30 -exec ls -l {} \; --> listing files more than 30 days old (swap ls -l for rm to purge)

find . -mtime +10 -exec gzip -f {} \; --> zipping files more than 10 days old

find /tmp -name "*.t" -mtime +3 -exec gzip -f {} \;

find ./ -name "*.tmp" -mtime +15 -exec ls -l {} \;

find ./ -name "*.trc" -mtime +1 -exec gzip -f {} \;

find . -name '*trw' -mmin +240 -exec gzip  {} \; 
find /SID/3rdparty/apps/jboss-4.0.4.GA/server/default/log -name "*.log.*" -mtime +30 -exec ls -l {} \;

find . -mtime +7 -exec ls -lrt {} \;

find ./ -name "*core*" -mtime +1 -exec ls -ltr {} \;

df -k | sort -n | tail

nohup find /SID/applcsf/log/SID_Host -mtime +30 -exec ls -l {} \; &

find . -name "*.t" -exec ls -l {} \;

find /tmp -name "*.t" -mtime +3 -exec gzip -f {} \;

find /tmp -name "*.t" -mtime +30 -exec ls -l {} \;

find /tmp -name "*.t" -mtime +15 -exec ls -ltrh {} \;

find /tmp -name "*.TMP" -mtime +30 -exec ls -l {} \;

find /tmp -name "O*.t" -mtime +30 -user USER -maxdepth 1 -exec ls -l {} \;
   
find /tmp -name "O*.t" -mtime +7 -user USER -exec gzip -f {} \;

find . -name "*.t.Z" -mtime +30 -exec ls -l {} \;

find . -name "*.t" -exec gzip -f {} \;

nohup find . -name "*log*" -mtime +60 -exec ls -l {} \; &

nohup find . -name "*log*" -mtime +7 -exec gzip -f {} \; &

nohup find . -name "*log*" -mtime +30 -exec ls -l {} \; &

find . -name "*log*" -mtime +10 -exec gzip -f {} \;

find ./ -name "Events*" -mtime +30 -exec ls -l {} \;

find ./ -name "Events*" -mtime +30 -exec ls -ltrh {} \;

find ./ -name "Events*" -mtime +30 -exec ls -l {} \;

find . -name "*default-web-access.log.txt*" -mtime +30 -exec gzip -f {} \;

find ./ -name "XWII_BASE_APPS_*" -exec ls -l {} \;

*** find . ! -name . -prune -name "*.t" -mtime +30 -exec ls -ltr {} \; --->To find ".t" files without descending sub directories...

find . ! -name . -prune -name "*.t" -mtime +7 -exec gzip -f {} \;

find . -name "Exaaa*" -mtime +10 -exec ls -l {} \;

find ./ -name "*.trw" -mtime +60 -exec ls -l {} \;

nohup find ./ -name "*.trc*" -mtime +3 -exec gzip -f {} \; &

nohup find ./ -name "*.trw" -mtime +3 -exec gzip -f {} \; &

find ./ -name "*.out*" -mtime +30 -exec ls -l {} \;

find ./ -name "*.out" -mtime +30 -exec ls -l {} \;

find ./ -name "*.trc" -mtime +0 -exec ls -l {} \;

nohup find ./ -name "*.trc" -mtime +0 -exec gzip -f {} \;

nohup find ./ -name "*.trc*" -mtime +60 -exec ls -l {} \; &

nohup find ./ -name "*trc" -mtime +3 -exec gzip -f {} \; &

find ./ -name "*trc" -mtime +30 -exec ls -l {} \;

find . ! -name . -prune -size +100000000c -xdev -mtime +3 -exec ls -lrth {} \;

ls -l *.trc  |grep " Dec  7" | awk '{print $9}'

gzip -f `ls -l *.trc  |grep " Dec  7" | awk '{print $9}'`

find /SID/backup/RMAN -name "data_full_*" -mtime +0 -type f -exec ls -l {} \;

zip error_log_pls.zip error_log_pls; >error_log_pls

zip stuck_rec.dbg.zip stuck_rec.dbg; >stuck_rec.dbg

zip stuck_rec.dbg.zip stuck_rec.dbg >/SID/backup/stuck_rec.dbg.zip

zip apps_pbgd1i.log.zip apps_pbgd1i.log; >apps_pbgd1i.log

zip SID.zip SID >/SID/applcsf/mailtmp/appclo1i.zip

zip mod_jserv.log_25sep2008.zip mod_jserv.log; >mod_jserv.log

zip Events148.log.zip Events148.log; >Events148.log

ls -l `ls -ltr *.dbf | grep  "Apr  3" | awk '{print $9}'`

ls -ltrh *.dbf | grep  "Apr  3"

ls -ltr *p*.zip* | awk '{ sum += $5 } { print "Size in GB " sum/1024/1024/1024}' |tail -1

nohup find /SID/applcsf/log -mtime +30 -exec ls -l {} \; &

nohup find /SID/applcsf/out -mtime +30 -exec ls -l {} \; &

nohup find /SID/applcsf/tmp -mtime +30 -exec ls -l {} \; &
find /tmp  \( -name '*.t' \) -mtime +3 -size +1000  -exec ls -l  {} \;

find ./ -name "*.trc" -mtime +1 -exec gzip -f {} \;
find ./ -name "tmp_*" -mtime +30 -exec ls -ltr {} \;

Finding directory sizes

du -k |sort  -nr |head -20
du -g |sort  -nr |head -20

find ./ -name "*.trc" -mtime +7 -exec ls -ltr {} \;

find . -size +100000000c -xdev -exec gzip {} \;
find . -size +100000000c -xdev -type f -exec ls -lh {} \;

find /appl/oracle/admin/SID/udump -name "*.trc*" -mtime +90 -exec ls -ltrh {} \;

find <path> -type f -print|xargs ls -l

Eg:

find /appl/formsA/oracle/product/dev6.0/reports60/server/cache -type f -print|xargs ls -l

find /data/a01/SID -name "*.arc" -mtime +5 -exec rm {} \;

find . -mtime +730 -type f -print -exec tar -cvf /tempspace/repservr_cache_2years.tar . \;

find /tempspace -mtime +730 -type f -print -exec tar -cvf /tempspace/repservr_cache_2years.tar {} \;

find /data/a01/SID -name "*.arc*" -mtime +5 -exec rm {} \;

find /data2/prod_export -name "comp_export_SID *.log" -mtime +30 -exec rm {} \;


Linux

ps -eo pid,user,vsz,rss,s,comm | sort -n -k 3,4
ps -eo pid,user,vsz,rss,args | sort -n -k 3,4 | tail -20

sar -W
swapon -s

SunOS

prstat -s rss
swap -l
- returns device / swaplo / blocks / free, with sizes in 512-byte blocks
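Since swap -l reports sizes in 512-byte blocks, converting a block count to megabytes is one multiply and two divides; the block count below is a made-up example, not real output:

```shell
# Convert a swap -l block count (512-byte blocks) to megabytes.
blocks=4194304                          # hypothetical value from swap -l
mb=$((blocks * 512 / 1024 / 1024))
echo "$blocks blocks = $mb MB"          # 4194304 blocks = 2048 MB
```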

ps -eo pid,user,vsz,rss,s,comm | sort -n -k 3,4  | tail -20
AIX

ps -efl | sort -n -k 10,10 | tail -50

ps -eo pid,user,vsz,rss,s,cmd | sort -n -k 3,4  | tail -20

ps -eo pid,user,vsz,comm | sort -n -k 3,3  | tail -20

SIZE

ps -eo pid,user,vsz,rss,s,comm | sort -n -k 3,4

vmstat
swapon -s

swap
/usr/sbin/lsps -a

Real memory
/usr/sbin/lsattr -HE -l sys0 -a realmem

HP-UX

ps -efl | sort -n -k 10,10 | tail -50
swapinfo

PCH

On Linux:
add a swap column, write to a file, and sort:

top -b -n 1 >top.txt
cat top.txt | egrep "M|G" | sort -r -k 5,5 | more


On Sun

ps -efl|sort -rk 10,10| head
The tenth column is the memory used, in pages. The pagesize command prints the page size; multiply the two and you have it.

prstat -s rss

top -o size

swapinfo -t

swap -s

vmstat 4 6

HP-UX

swapinfo

swapinfo -m   --->memory/swap information (in MB)

vmstat -S

vmstat -s

sar -w 5 5

show parameter sga_max_size

free -m

ps -ef  |wc -l

No of processors

cat /proc/cpuinfo| grep processor| wc -l

cat /proc/cpuinfo | grep processor
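On Linux the same count is also available from `nproc` or `getconf`; a quick sketch showing the equivalent forms, which should all agree:

```shell
# Three ways to count online processors on Linux.
grep -c ^processor /proc/cpuinfo
nproc
getconf _NPROCESSORS_ONLN
```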

Linux

free -m

Sun

prstat -t

prstat -s rss

/usr/sbin/prtconf | grep "Memory size"

df -k|grep swap

sar -w 5 5

prstat -t
prstat -u pdb2i25 -s size
top -o size
swap -s
free
swapon -s
vmstat 4 4
ps auxw | tail -10   --- HP-UX

lsattr -E -l sys0 -a realmem  --- ram on aix
lsps -s  -- swap space on aix

vmstat
swap -l
prtconf | grep Mem

vmstat -p 3
mpstat

ps -eo pid,pcpu,args | sort +1n      --- sorted by %cpu
ps -eo pid,vsz,args | sort +1n       --- sorted by kilobytes of virtual memory

/usr/ucb/ps aux | more               --- output sorted with the highest users (processes) of CPU and memory at the top

free    --->memory and swap information in kbytes
free -m --->memory and swap information in mbytes
free -g --->memory and swap information in gbytes

Solaris

$ /usr/sbin/prtconf | grep -i "Memory size"
$ swap -s
$ df -k
$ /usr/local/bin/top
$ vmstat 5 100
$ sar -u 2 100
$ iostat -D 2 100
$ mpstat 5 100

For example:

$ man vmstat

Here is some sample output from these commands:

$ prtconf | grep -i "Memory size"

Memory size: 4096 Megabytes

$ swap -s
total: 7443040k bytes allocated + 997240k reserved = 8440280k used, 2777096k available

$ df -k
Filesystem kbytes used avail capacity Mounted on
/dev/dsk/c0t0d0s0 4034392 2171569 1822480 55% /
/proc 0 0 0 0% /proc
fd 0 0 0 0% /dev/fd
mnttab 0 0 0 0% /etc/mnttab
/dev/dsk/c0t0d0s3 493688 231339 212981 53% /var
swap 2798624 24 2798600 1% /var/run
swap 6164848 3366248 2798600 55% /tmp
/dev/vx/dsk/dcdg01/vol01
25165824 23188748 1970032 93% /u01
/dev/vx/dsk/dcdg01/vol02
33554432 30988976 2565456 93% /u02
...

$ top

last pid: 29570; load averages: 1.00, 0.99, 0.95 10:19:19
514 processes: 503 sleeping, 4 zombie, 6 stopped, 1 on cpu
CPU states: 16.5% idle, 17.9% user, 9.8% kernel, 55.8% iowait, 0.0% swap
Memory: 4096M real, 46M free, 4632M swap in use, 3563M swap free

PID USERNAME THR PRI NICE SIZE RES STATE TIME CPU COMMAND
29543 usupport 1 35 0 2240K 1480K cpu2 0:00 0.64% top-3.5b8-sun4u
13638 usupport 11 48 0 346M 291M sleep 14:00 0.28% oracle
13432 usupport 1 58 0 387M 9352K sleep 3:56 0.17% oracle
29285 usupport 10 59 0 144M 5088K sleep 0:04 0.15% java
13422 usupport 11 58 0 391M 3968K sleep 1:10 0.07% oracle
6532 usupport 1 58 0 105M 4600K sleep 0:33 0.06% oracle
...

$ vmstat 5 100
procs memory page disk faults cpu
r b w swap free re mf pi po fr de sr f0 s1 s1 s1 in sy cs us sy id
0 1 72 5746176 222400 0 0 0 0 0 0 0 0 11 9 9 4294967196 0 0 -19 -6 -103
0 0 58 2750504 55120 346 1391 491 1171 3137 0 36770 0 37 39 5 1485 4150 2061 18 8 74
0 0 58 2765520 61208 170 272 827 523 1283 0 3904 0 36 40 2 1445 2132 1880 1 3 96
0 0 58 2751440 58232 450 1576 424 1027 3073 0 12989 0 22 26 3 1458 4372 2035 17 7 76
0 3 58 2752312 51272 770 1842 1248 1566 4556 0 19121 0 67 66 12 2390 4408 2533 13 11 75
...

$ iostat -c 2 100
cpu
us sy wt id
15 5 13 67
19 11 52 18
19 8 44 29
12 10 48 30
19 7 40 34
...

$ iostat -D 2 100
sd15 sd16 sd17 sd18
rps wps util rps wps util rps wps util rps wps util
7 4 9.0 6 3 8.6 5 3 8.1 0 0 0.0
4 22 16.5 8 41 37.9 0 0 0.7 0 0 0.0
19 34 37.0 20 24 37.0 12 2 10.8 0 0 0.0
20 20 29.4 24 37 51.3 3 2 5.3 0 0 0.0
28 20 40.8 24 20 42.3 1 0 1.7 0 0 0.0
...
$ mpstat 2 100
CPU minf mjf xcal intr ithr csw icsw migr smtx srw syscl usr sys wt idl
0 115 3 255 310 182 403 38 72 82 0 632 16 6 12 66
1 135 4 687 132 100 569 40 102 68 0 677 14 5 13 68
2 130 4 34 320 283 552 43 94 63 0 34 15 5 13 67
3 129 4 64 137 101 582 44 103 66 0 51 15 5 13 67
HP-UX 11.0:

top
Glance/GlancePlus
sam
/etc/swapinfo -t
/usr/sbin/swapinfo -t
ipcs -mop

Would it be safe to say that to view memory usage by user, execute the
following:

UNIX95= ps -e -o ruser,pid,vsz=Kbytes

...and to view shared memory usage, such as for Oracle processes, using the
following:

ipcs -bmop
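On Linux, total System V shared memory in use (for example Oracle SGA segments) can be summed from `ipcs -m`; a sketch assuming the default Linux column layout, where the segment size in bytes is column 5:

```shell
# Sum the bytes column of all shared memory segments (Linux ipcs layout).
ipcs -m | awk '$5 ~ /^[0-9]+$/ { sum += $5 } END { printf "%d\n", sum }'
```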

$ grep Physical /var/adm/syslog/syslog.log
$ df -k
$ sar -w 2 100
$ sar -u 2 100
$ /bin/top
$ vmstat -n 5 100
$ iostat 2 100
$ top

For example:

$ grep Physical /var/adm/syslog/syslog.log
Nov 13 17:43:28 rmtdchp5 vmunix: Physical: 16777216 Kbytes, lockable: 13405388 Kbytes, available: 15381944 Kbytes

$ sar -w 1 100

HP-UX rmtdchp5 B.11.00 A 9000/800 12/20/02

14:47:20 swpin/s bswin/s swpot/s bswot/s pswch/s
14:47:21 0.00 0.0 0.00 0.0 1724
14:47:22 0.00 0.0 0.00 0.0 1458
14:47:23 0.00 0.0 0.00 0.0 1999
14:47:24 0.00 0.0 0.00 0.0 1846
...

$ sar -u 2 100 # This command generates CPU % usage information.

HP-UX rmtdchp5 B.11.00 A 9000/800 12/20/02

14:48:02 %usr %sys %wio %idle
14:48:04 20 2 1 77
14:48:06 1 1 0 98
...
$ iostat 2 100

device bps sps msps

c1t2d0 36 7.4 1.0
c2t2d0 32 5.6 1.0
c1t0d0 0 0.0 1.0
c2t0d0 0 0.0 1.0
...

AIX:

$ /usr/sbin/lsattr -E -l sys0 -a realmem
$ /usr/sbin/lsps -s
$ vmstat 5 100
$ iostat 2 100
$ /usr/local/bin/top # May not be installed by default in the server

For example:

$ /usr/sbin/lsattr -E -l sys0 -a realmem

realmem 33554432 Amount of usable physical memory in Kbytes False

NOTE: realmem reports the usable physical memory only (in Kbytes); it does not include swap.
Use the top or monitor command to get a better breakdown of memory use.

$ /usr/sbin/lsps -s

Total Paging Space Percent Used
30528MB 1%

Linux [RedHat 7.1 and RedHat AS 2.1]:

$ dmesg | grep Memory
$ vmstat 5 100
$ /usr/bin/top

For example:

$ dmesg | grep Memory
Memory: 1027812k/1048568k available (1500k kernel code, 20372k reserved, 103k d)

$ /sbin/swapon -s

Tru64

$ vmstat -P | grep -i "Total Physical Memory ="
$ /sbin/swapon -s
$ vmstat 5 100


For example

$ vmstat -P | grep -i "Total Physical Memory ="
Total Physical Memory = 8192.00 M

$ /sbin/swapon -s

Swap partition /dev/disk/dsk1g (default swap):
Allocated space: 2072049 pages (15.81GB)
In-use space: 1 pages ( 0%)
Free space: 2072048 pages ( 99%)
Total swap allocation:
Allocated space: 2072049 pages (15.81GB)
Reserved space: 864624 pages ( 41%)
In-use space: 1 pages ( 0%)
Available space: 1207425 pages ( 58%)

Please take at least 10 snapshots of the "top" command to get an idea of the
most resource-consuming OS processes on the server. If the different snapshots
each contain a few different processes, that indicates the use of resources is
shifting quickly among many processes.

AIX:
/usr/sbin/lsattr -E -l sys0 -a realmem
/usr/sbin/lsps -s

HP-UX:
grep Physical /var/adm/syslog/syslog.log
/usr/sbin/swapinfo -t

Linux:
cat /proc/meminfo | grep MemTotal
/sbin/swapon -s
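The MemTotal line can be parsed straight out of /proc/meminfo (the value is in kB); a sketch reporting megabytes:

```shell
# Report total physical memory in MB from /proc/meminfo.
awk '/^MemTotal:/ { printf "%d MB\n", $2 / 1024 }' /proc/meminfo
```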

Solaris:
/usr/sbin/prtconf | grep "Memory size"
/usr/sbin/swap -s

Tru64:
vmstat -P| grep -i "Total Physical Memory ="
/sbin/swapon -s

LONG BIT

getconf LONG_BIT
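`getconf LONG_BIT` returns 32 or 64, which is handy for branching on the word size in scripts; a sketch:

```shell
# Branch on the userland word size (32- or 64-bit).
bits=$(getconf LONG_BIT)
if [ "$bits" -eq 64 ]; then
    echo "64-bit userland"
else
    echo "32-bit userland"
fi
```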

Huge Pages

grep -i huge /etc/sysctl.conf
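A common follow-up to the huge-pages check is sizing vm.nr_hugepages for the SGA: divide the SGA size by Hugepagesize from /proc/meminfo (typically 2048 kB on x86_64) and round up. A sketch with a hypothetical 8 GB SGA (both values are assumptions, not taken from this document):

```shell
# Estimate vm.nr_hugepages for a given SGA size, rounding up.
sga_mb=8192                                    # hypothetical SGA size in MB
hp_kb=2048                                     # typical Hugepagesize in kB
pages=$(( (sga_mb * 1024 + hp_kb - 1) / hp_kb ))
echo "vm.nr_hugepages = ${pages}"              # 8192 MB / 2 MB pages = 4096
```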