
Export and Import AWR data

Why do I need to Export?
You may want to save AWR data to compare with another database, to keep a baseline across a database upgrade, and for similar reasons.
 
How do I save AWR report?
You can simply run awrrpt.sql, located in the <Oracle_home>\rdbms\admin directory, and save the reports.
Another way is to export the AWR data itself, similar to a database export.
The script below lets you export AWR data to a dump file.
 
BEGIN
  DBMS_SWRF_INTERNAL.AWR_EXTRACT (
    DMPFILE => 'awr_data.dmp', -- Dump file name
    DMP_DIR => 'DB_DIR',       -- Database directory
    BID     => 4283,           -- Beginning snapshot
    EID     => 4315            -- Ending snapshot
  );
END;
/
 
Prerequisites: (1) you need EXECUTE privilege on the DBMS_SWRF_INTERNAL package, and (2) a database directory object must be set up. You can use an existing directory (query DBA_DIRECTORIES). In the case above, I have a directory called DB_DIR.
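
As a minimal sketch (the directory name DB_DIR matches the example above, but the file system path is made up), the directory can be created and verified like this:

-- Create a directory object pointing at an OS path (path is illustrative).
CREATE DIRECTORY db_dir AS '/u01/app/oracle/awr_dumps';

-- Or check which directory objects already exist.
SELECT directory_name, directory_path FROM dba_directories;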
 
BID and EID represent the beginning and ending snapshot IDs. You can obtain the available snapshots from the DBA_HIST_SNAPSHOT view.
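
For instance, the available snapshot IDs and their time ranges can be listed like this:

-- List AWR snapshots so you can pick BID and EID.
SELECT snap_id, begin_interval_time, end_interval_time
FROM   dba_hist_snapshot
ORDER  BY snap_id;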
  
Once the export completes, you can import the data back into the same or a different database using the statements below. This is a two-step process.
 
BEGIN
  DBMS_SWRF_INTERNAL.AWR_LOAD (
    SCHEMA  => 'DEV',          -- Staging schema
    DMPFILE => 'awr_data.dmp', -- Dump file name exported earlier
    DMP_DIR => 'DB_DIR'        -- Database directory
  );
END;
/
 
The statement above loads the AWR data into the DEV schema; later it needs to be moved to SYS.
The same prerequisites apply. Note that if the database is located on a different server, you need to move the dump file to wherever the database directory is configured on the destination database.
 
The second part of the import moves the staged data into SYS using the statement below.
 
BEGIN
  DBMS_SWRF_INTERNAL.MOVE_TO_AWR (SCHEMA => 'DEV');
END;
/
 
 

Did you know that when you kill a Data Pump job in Oracle 10g (using KILL_JOB), it leaves behind the dump file created so far? In order to re-execute the job, you either have to delete the dump file or set REUSE_DUMPFILES=Y. Sometimes you also need to change the JOB_NAME, because the master table is left behind, or that table needs to be dropped.
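
As an illustration only (the file, directory, table, and job names below are made up), a 10g re-run might use a parameter file like this: REUSE_DUMPFILES overwrites the partial dump file, and a fresh JOB_NAME avoids the leftover master table.

DUMPFILE=test.dmp
DIRECTORY=FILEOUTDIR
TABLES=KP_PART_TAB
REUSE_DUMPFILES=YES
JOB_NAME=TEST_DUMP_2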

In 11g this is improved: killing a Data Pump job cleans up the partially created dump files and the master table. This is very useful, since we no longer have to perform manual steps in order to re-execute the same Data Pump job.

Top Queries

This blog post covers a nice Oracle 12c developer feature for fetching top-N queries, such as the top 5 or the last 5 rows.

http://blog.iadvise.eu/2013/10/28/another-5-neat-12c-features-for-oracle-developers/

This feature is also very useful for web pagination.
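
For example, the 12c row-limiting clause makes top-N and pagination queries straightforward; the table and column names below are illustrative.

-- Top 5 highest-paid employees.
SELECT employee_id, salary
FROM   employees
ORDER  BY salary DESC
FETCH  FIRST 5 ROWS ONLY;

-- Page 3 of a result set, 10 rows per page (handy for web pagination).
SELECT employee_id, salary
FROM   employees
ORDER  BY employee_id
OFFSET 20 ROWS FETCH NEXT 10 ROWS ONLY;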

In the past, Oracle allowed only one index on a given set of columns. If you tried to create another index on the same columns, Oracle threw the error ORA-01408: such column list already indexed.

With 12c, Oracle allows multiple indexes on the same column(s); however, only one of them can be visible at any time. It also allows one function-based index and one regular index on the same column to be visible.
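
Here is a minimal sketch (table and index names are made up); the second index is created INVISIBLE and differs in type (bitmap instead of B-tree), which is the kind of variation 12c expects for a duplicate column list.

CREATE TABLE emp_demo (emp_id NUMBER, dept_id NUMBER);

-- Regular (visible) B-tree index.
CREATE INDEX emp_demo_ix1 ON emp_demo (dept_id);

-- 12c: a second index on the same column is allowed when it is INVISIBLE
-- and differs in some characteristic (here, bitmap instead of B-tree).
CREATE BITMAP INDEX emp_demo_ix2 ON emp_demo (dept_id) INVISIBLE;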

If you would like to return or display the PL/SQL program unit name when an error occurs, you can use code something like the below.

create or replace procedure test
is
begin
  -- ... procedure logic here ...
  raise no_data_found;  -- example error to trigger the handler
exception
  when others then
    dbms_output.put_line($$PLSQL_UNIT || ' caused error');
end;
/

The PL/SQL inquiry directive $$PLSQL_UNIT displays the PL/SQL unit name (in this case TEST, the name of the procedure) when the exception is raised.

Another directive, $$PLSQL_LINE, can be used to identify the current line number in the program unit.

The Oracle 12c database introduced new directives to display additional information, as below.

$$PLSQL_UNIT_OWNER - owner of the program unit

$$PLSQL_UNIT_TYPE - whether it is a procedure, function, etc.

This addition is very useful for developers to identify the owner and the type of the program unit.
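
To illustrate, here is a small sketch (the procedure name and message text are made up) that prints all four directives; the owner and type directives assume a 12c database.

CREATE OR REPLACE PROCEDURE directive_demo
IS
BEGIN
  dbms_output.put_line('Owner: ' || $$PLSQL_UNIT_OWNER);  -- 12c
  dbms_output.put_line('Type : ' || $$PLSQL_UNIT_TYPE);   -- 12c
  dbms_output.put_line('Unit : ' || $$PLSQL_UNIT);
  dbms_output.put_line('Line : ' || $$PLSQL_LINE);
END;
/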

Oracle 12c database

Oracle released the 12c database on June 25, 2013, and it is available for the Linux and Solaris platforms as of today.

Here I'm going to demonstrate excluding one or more table partitions while exporting a table using Data Pump.

As shown in the screen below, I have a table with 5 partitions, one for each year of my data (though the partitions hold no data in this case). I refer to them as partition 2008, partition 2009, etc. for short.

[Screenshot: table KP_PART_TAB with yearly partitions KP_PART_TAB_2008 through KP_PART_TAB_2012]
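
For reference, a table like this could be defined along the following lines (the column names and range boundaries are made up; only the table and partition names match my test).

CREATE TABLE kp_part_tab (
  id         NUMBER,
  entry_date DATE
)
PARTITION BY RANGE (entry_date) (
  PARTITION kp_part_tab_2008 VALUES LESS THAN (TO_DATE('01-JAN-2009','DD-MON-YYYY')),
  PARTITION kp_part_tab_2009 VALUES LESS THAN (TO_DATE('01-JAN-2010','DD-MON-YYYY')),
  PARTITION kp_part_tab_2010 VALUES LESS THAN (TO_DATE('01-JAN-2011','DD-MON-YYYY')),
  PARTITION kp_part_tab_2011 VALUES LESS THAN (TO_DATE('01-JAN-2012','DD-MON-YYYY')),
  PARTITION kp_part_tab_2012 VALUES LESS THAN (TO_DATE('01-JAN-2013','DD-MON-YYYY'))
);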

Script 1 shows the contents of the parameter file. Notice the EXCLUDE line: I'm excluding the two partitions 2011 and 2012. In the export log below it, you can see that only the other partitions were exported.

Now look at Script 2 and notice the difference in the EXCLUDE clause: one of the partitions I want to exclude, partition 2011, is listed in lower case. When I execute the export, notice that partition 2011 is still exported.

I believe Data Pump export matches the partition names case-sensitively, while DBA_TABLES/USER_TABLES (and other dictionary views) always store object names in upper case.

If you have to perform this kind of export, make sure the names are listed in upper case. I believe the same applies to the INCLUDE option.

Script 1

Contents of parameter file

DUMPFILE="test.dmp"
LOGFILE="test.log"
DIRECTORY=FILEOUTDIR
JOB_NAME='TEST_DUMP'
COMPRESSION=all
CONTENT=ALL
PARALLEL=4
reuse_dumpfiles=yes
TABLES=KP_PART_TAB
exclude=TABLE_DATA:"IN ('KP_PART_TAB_2012','KP_PART_TAB_2011')"

——————

c:\>expdp scott@testdb parfile=c:\temp\test.txt

Export: Release 11.2.0.1.0 - Production on Fri Nov 2 09:39:16 2012
Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
Password:

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SCOTT"."TEST_DUMP":  scott/********@testdb parfile=c:\temp\test.txt
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 0 KB
. . exported "SCOTT"."KP_PART_TAB":"KP_PART_TAB_2008"      0 KB       0 rows
. . exported "SCOTT"."KP_PART_TAB":"KP_PART_TAB_2009"      0 KB       0 rows
. . exported "SCOTT"."KP_PART_TAB":"KP_PART_TAB_2010"      0 KB       0 rows
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/USER_PREF_STATISTICS
Master table "SCOTT"."TEST_DUMP" successfully loaded/unloaded
******************************************************************************
Dump file set for SCOTT.TEST_DUMP is:
  /var/opt/gaim/test.dmp
Job "SCOTT"."TEST_DUMP" successfully completed at 09:39:26

Script 2

Now I changed the parameter file so that one of the partition names is in lower case.

DUMPFILE="test.dmp"
LOGFILE="test.log"
DIRECTORY=FILEOUTDIR
JOB_NAME='TEST_DUMP'
COMPRESSION=all
CONTENT=ALL
PARALLEL=4
reuse_dumpfiles=yes
TABLES=KP_PART_TAB
exclude=TABLE_DATA:"IN ('KP_PART_TAB_2012','kp_part_tab_2011')"

c:\>expdp scott@testdb parfile=c:\temp\test.txt

Export: Release 11.2.0.1.0 - Production on Fri Nov 2 09:55:11 2012
Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
Password:

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Starting "SCOTT"."TEST_DUMP":  scott/********@testdb parfile=c:\temp\test.txt
Estimate in progress using BLOCKS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 0 KB
. . exported "SCOTT"."KP_PART_TAB":"KP_PART_TAB_2008"      0 KB       0 rows
. . exported "SCOTT"."KP_PART_TAB":"KP_PART_TAB_2009"      0 KB       0 rows
. . exported "SCOTT"."KP_PART_TAB":"KP_PART_TAB_2010"      0 KB       0 rows
. . exported "SCOTT"."KP_PART_TAB":"KP_PART_TAB_2011"      0 KB       0 rows
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/USER_PREF_STATISTICS
Master table "SCOTT"."TEST_DUMP" successfully loaded/unloaded
******************************************************************************
Dump file set for SCOTT.TEST_DUMP is:
  /var/opt/gaim/test.dmp
Job "SCOTT"."TEST_DUMP" successfully completed at 09:55:22

Also tested with Oracle database version 11.2.0.3 and client version 11.2.0.3.

One of the questions a DBA faces is whether the Oracle database memory is set correctly or needs to be increased.

Oracle 11g has a cool solution.

Querying the view V$MEMORY_TARGET_ADVICE tells you whether increasing the memory size would provide any advantage and, if so, by how much you should increase it.

Here is the example:

[Screenshot: rows from V$MEMORY_TARGET_ADVICE for memory sizes 2048 MB and above, with their estimated DB time factors]
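
The screenshot comes from a query along these lines (formatting aside), using the columns of V$MEMORY_TARGET_ADVICE:

SELECT memory_size, memory_size_factor, estd_db_time, estd_db_time_factor
FROM   v$memory_target_advice
ORDER  BY memory_size_factor;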

Review the results in the screenshot. The database currently has a memory target of 2048 MB (the first line shows this); it matches the initialization parameter MEMORY_TARGET.

Line 2 shows that increasing the memory size to 2560 MB (25% more) would add no further benefit, which you can see from the ESTD_DB_TIME_FACTOR value of 1. However, line 3 shows that with a memory size of 3072 MB (50% more) the estimated time factor is 0.9602, meaning work that takes 1 second now would take 0.9602 seconds, which is an improvement.

Increasing memory by 75% gives no additional benefit beyond that, and there is only a very minor further benefit when the memory is doubled.

In this scenario, increasing memory by 50% is what benefits the database.

Do you want to track how long your SQL script ran? Of course, you can set up your SQL*Plus session in several ways, including enabling elapsed-time display, using DBMS_UTILITY.GET_TIME, or displaying the time at the beginning and at the end of the script.

Did you know there is timer functionality in SQL*Plus? I only realized it recently and had never come across it before.

Here is how it works.

Syntax at the SQL*Plus prompt:

timing start <label>
sql statement 1
sql statement 2
sql statement 3
exec proc1
@mysql.sql
...
timing stop
 

You can have any number of SQL statements (or calls to SQL scripts) between the first and last line. <label> is just any name that you want displayed. At the end of the script, SQL*Plus displays how long <label> took.

This is quite convenient, I think, for seeing how long the whole script took to execute, including any waiting in between.

Below is the example output. I created the attached files and ran timer_ex.sql, which calls the scripts timer_ex1.sql, timer_ex2.sql, and timer_ex3.sql. To pretend something was running a little longer, I used PAUSE in between.

I waited longer than 1 minute on the last pause, and you can see the output says mytimer elapsed close to 2 minutes, even though adding up the individual elapsed timings (enabled through TIMING ON in SQL*Plus) gives less than what is shown for mytimer.

Example output.

SQL> @timer_ex.sql
SYSDATE
---------
22-FEB-13
Elapsed: 00:00:00.16
Looped through this number of iterations:1000000
PL/SQL procedure successfully completed.
Elapsed: 00:00:00.20
Paused ...[enter] to proceed.

SYSDATE
---------
22-FEB-13
Elapsed: 00:00:00.17
paused ...[enter]

'AB
---
abc
Elapsed: 00:00:00.15
paused ...[enter]     <-- Here I waited longer than 1 min.

'XY
---
xyz
Elapsed: 00:00:00.12
timing for: mytimer
Elapsed: 00:01:53.09
SQL>

So this timer functionality in SQL*Plus is very useful if your project requires running several SQL scripts and you want to know how long the whole process took.
What do you think? Do you use this?

Scripts.

timer_ex1.sql
select sysdate from dual;
 
timer_ex2.sql
declare
rec_cnt number:=0;
begin
for i in 1..1000000 loop
rec_cnt := rec_cnt + 1;
end loop;
dbms_output.put_line('Looped through this number of iterations:'||rec_cnt);
end;
/
 
timer_ex3.sql
select sysdate from dual;
pause paused ...[enter]
select 'abc' from dual;
pause paused ...[enter]
select 'xyz' from dual;
 
 
timer_ex.sql
timing start mytimer
@timer_ex1.sql
@timer_ex2.sql
pause Paused ...[enter] to proceed.
@timer_ex3.sql
timing stop

As application usage increases, the data in the database grows quickly. In some applications there are driving tables whose data grows very fast as users add more data to the system. If you are in an environment where the database service provider charges by database size, you have to consider the rising expense as the database grows.

Oracle has had compression for a long time, but it was never a popular feature because it always required a trade-off between performance and space. Newer releases (11g introduced OLTP, or "advanced", compression) offer a technique that is very attractive and unique in the way it compresses data. Here is how it works:

  • Oracle eliminates duplicate values within a database block. Instead, it uses pointers to the repeated information (called symbols); the block header holds all the unique values.
  • Each block is self-contained, that is, the block contains all the information needed to reconstruct the uncompressed data.
  • Since it eliminates duplicates, it can compress close to its maximum potential in scenarios such as an application audit table, which has a lot of repeated information (historical change records).
  • This compression works for all data types except variants of LOBs.
  • The compression is not triggered by every write operation; instead, when a block reaches a threshold (managed internally), compression occurs on that block, so only a few transactions trigger this event.
  • Side effect: it is possible to cause fragmentation when there are many modifications to a table/partition. For example, if a row is deleted, new data is less likely to fit into the freed-up space. For such tables, re-compress the data.

So how do I compress a table?
A simple command like the one below compresses the table for OLTP operations:

SQL> alter table test_tab compress for OLTP;

To compress a table partition, use the command below (MODIFY or MOVE):

SQL > alter table test_tab modify[move] partition test_tab_2011 compress for oltp;

However, MODIFY does not compress the already existing data. Let's look at the following statements for partitions.

The statement below gets the blocks and size occupied by the given partition.

SQL> @temp
SQL> select blocks/1024 Blocks_KB, bytes/1024/1024 Bytes_KB
2 from user_segments
3 where segment_name = 'TEST_TAB' and partition_name = 'TEST_TAB_2011'
4 /
BLOCKS_KB BYTES_KB
---------- ----------
12 96
1 row selected.
Elapsed: 00:00:00.08

Now let's compress the table partition using the MODIFY option.

SQL> alter table TEST_TAB modify partition TEST_TAB_2011 compress for oltp
2 /
Table altered.
Elapsed: 00:00:00.03
SQL> @temp
SQL> select blocks/1024 Blocks_KB, bytes/1024/1024 Bytes_KB
2 from user_segments
3 where segment_name = 'TEST_TAB' and partition_name = 'TEST_TAB_2011'
4 /
BLOCKS_KB BYTES_KB
---------- ----------
12 96
1 row selected.

As you can see from the output above, after enabling compression on the partition, the block count and size remain the same as before. No data was compressed; we only enabled compression.
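
One way to confirm this is the data dictionary, which reports whether compression is enabled and of which type, but says nothing about whether the existing rows were actually rewritten:

-- COMPRESSION / COMPRESS_FOR show the setting, not the state of existing rows.
SELECT table_name, partition_name, compression, compress_for
FROM   user_tab_partitions
WHERE  table_name = 'TEST_TAB';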

Now let's use the MOVE option of ALTER TABLE.

SQL> alter table TEST_TAB move partition TEST_TAB_2011 compress for oltp;
Table altered.
Elapsed: 00:00:12.41
SQL> @temp
SQL> select blocks/1024 Blocks_KB, bytes/1024/1024 Bytes_KB
  2  from user_segments
  3  where segment_name = 'TEST_TAB' and partition_name = 'TEST_TAB_2011'
  4  /
BLOCKS_KB   BYTES_KB
---------- ----------
         6         48
1 row selected.
Elapsed: 00:00:00.04

Now the total size for the partition is down to 48 KB (earlier it was 96 KB), which means the existing data has been compressed.

For the same reason, it is also possible for a table/partition to contain both compressed and uncompressed data.

How do you compress existing data in a non-partitioned table?

Here are the options:

  • Export the table data, remove the data, enable compression using the ALTER TABLE statement (the first statement), then reload the data, or
  • Rename the existing table, create a new compressed table, load the new compressed table, then drop the renamed table.
  • You can use either of the options above, but better yet, use the MOVE option of the ALTER TABLE command. Here is the example.
SQL> create table kp_test2 as select * from dba_objects;
Table created.
Elapsed: 00:00:02.07
SQL> select blocks/1024 Blocks_KB, bytes/1024/1024 Bytes_KB
 2 from user_segments
 3 where segment_name = 'KP_TEST2';
BLOCKS_KB BYTES_KB
---------- ----------
 1 8
Elapsed: 00:00:00.40
SQL> alter table kp_test2 compress for oltp;
Table altered.
Elapsed: 00:00:00.03
SQL> select blocks/1024 Blocks_KB, bytes/1024/1024 Bytes_KB
 2 from user_segments
 3 where segment_name = 'KP_TEST2';
BLOCKS_KB BYTES_KB
---------- ----------
 1 8
Elapsed: 00:00:00.13

With the above command, the segment size remains the same as before compression was enabled.

SQL> alter table kp_test2 move compress for oltp;
Table altered.
Elapsed: 00:00:00.75
SQL> select blocks/1024 Blocks_KB, bytes/1024/1024 Bytes_KB
 2 from user_segments
 3 where segment_name = 'KP_TEST2';
BLOCKS_KB BYTES_KB
---------- ----------
 .375 3
Elapsed: 00:00:00.08

Wow, the size is reduced to 3 KB by using the MOVE option.