Amazon RDS for Oracle database administration
Because operating-system login to the underlying RDS host is not allowed, administering an Oracle database on AWS RDS differs slightly from administering Oracle on premises.
Performing an Oracle database schema import on AWS RDS
To perform a schema import, the dump file first needs to be uploaded to Amazon S3 storage and then transferred from S3 to RDS using the AWS-provided rdsadmin package. The S3 bucket has to be in the same AWS Region as the RDS instance.
So, in summary, the import process is as follows:
- Upload the export dump file to S3
- Use the rdsadmin package to copy the dump file from S3 to RDS
- Run import command
- Monitor import progress
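Before starting, you can optionally confirm that the DATA_PUMP_DIR directory object exists on the instance. A quick check against the standard DBA_DIRECTORIES view (nothing RDS-specific here):
SELECT directory_name, directory_path
FROM dba_directories
WHERE directory_name = 'DATA_PUMP_DIR';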
Transferring the file from S3 to RDS
To transfer a file from S3 to RDS, use the rdsadmin.rdsadmin_s3_tasks.download_from_s3 function. It returns a task ID that can then be used to check the operation's status.
SELECT rdsadmin.rdsadmin_s3_tasks.download_from_s3
( p_bucket_name => 'your S3 bucket name',
p_s3_prefix => 'hercules/hercules_schemas/SALES_EXP_SUN',
p_directory_name => 'DATA_PUMP_DIR' ) AS TASK_ID
FROM DUAL;
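As an aside, the same package also provides rdsadmin.rdsadmin_s3_tasks.upload_to_s3 for the reverse direction, for example to copy an export dump from RDS back to S3. A sketch with placeholder bucket and prefix values (adjust these to your own layout):
SELECT rdsadmin.rdsadmin_s3_tasks.upload_to_s3
( p_bucket_name => 'your S3 bucket name',
p_prefix => 'SALES_EXP_SUN',
p_s3_prefix => 'hercules/hercules_schemas/',
p_directory_name => 'DATA_PUMP_DIR' ) AS TASK_ID
FROM DUAL;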
Check file transfer status
The previous command returned task ID 1559355843647-94, which you need in order to check the file transfer status:
SELECT text
FROM table(rdsadmin.rds_file_util.read_text_file('BDUMP','dbtask-1559355843647-94.log'));
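Once the task log shows the download has finished, it is worth verifying that the dump file has actually landed in DATA_PUMP_DIR, using the same rdsadmin.rds_file_util.listdir function that is used below to monitor the import:
SELECT filename, filesize, mtime
FROM table(rdsadmin.rds_file_util.listdir(p_directory => 'DATA_PUMP_DIR'))
ORDER BY mtime;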
Run import
To submit the import job, run:
DECLARE
  fileHandle NUMBER;
  myJobName CONSTANT VARCHAR2(30) := 'import_SALES';
BEGIN
  fileHandle := DBMS_DATAPUMP.OPEN( operation => 'IMPORT', job_mode => 'SCHEMA', job_name => myJobName);
  -- Register the dump file; KU$_FILE_TYPE_DUMP_FILE (= 1) marks it as a dump file.
  -- Note: the value 3 is the log-file constant, not the dump-file constant.
  DBMS_DATAPUMP.ADD_FILE( handle => fileHandle, filename => 'sales.dmp', directory => 'DATA_PUMP_DIR', filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
  -- Register a log file so that import_SALES.log is written to DATA_PUMP_DIR.
  DBMS_DATAPUMP.ADD_FILE( handle => fileHandle, filename => 'import_SALES.log', directory => 'DATA_PUMP_DIR', filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);
  DBMS_DATAPUMP.METADATA_FILTER(fileHandle, 'SCHEMA_EXPR', 'IN (''SALES'')');
  DBMS_DATAPUMP.START_JOB(fileHandle);
END;
/
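Besides the log file, the state of the job can also be checked from the standard DBA_DATAPUMP_JOBS view (the job name matches the myJobName value used above):
SELECT owner_name, job_name, operation, job_mode, state
FROM dba_datapump_jobs
WHERE job_name = 'import_SALES';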
Monitor progress of import job
List files in DATA_PUMP_DIR to verify that import_SALES.log has been created:
SELECT * FROM table(rdsadmin.rds_file_util.listdir(p_directory => 'DATA_PUMP_DIR'));
Once the log file has been created, the import job has started. Run the following to list its contents:
SELECT text
FROM table(rdsadmin.rds_file_util.read_text_file('DATA_PUMP_DIR','import_SALES.log'));
TEXT
----------------------------------------------------------------------------------------------------
;;; Starting...
Master table "MASTERUSER"."import_SALES" successfully loaded/unloaded
Starting "MASTERUSER"."import_SALES":
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "SALES"."ORDERS" 1.037 MB 20074 rows
...
. . imported "SALES"."PARTY" 540 KB 984 rows
. . imported "SALES"."PARTY_STATUS_HISTORY" 10 KB 93 rows
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/PACKAGE/PACKAGE_SPEC
Processing object type SCHEMA_EXPORT/PACKAGE/COMPILE_PACKAGE/PACKAGE_SPEC/ALTER_PACKAGE_SPEC
Processing object type SCHEMA_EXPORT/VIEW/VIEW
Processing object type SCHEMA_EXPORT/PACKAGE/PACKAGE_BODY
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/MATERIALIZED_VIEW
Processing object type SCHEMA_EXPORT/JOB
Processing object type SCHEMA_EXPORT/REFRESH_GROUP
Job "MASTERUSER"."import_SALES" successfully completed at Sat Jun 1 03:16:49 2019 elapsed 0 00:00:35
45 rows selected.
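Clean up
Once the import has completed successfully, the dump file can be removed from DATA_PUMP_DIR to reclaim storage, using the standard UTL_FILE.FREMOVE procedure:
EXEC UTL_FILE.FREMOVE('DATA_PUMP_DIR','sales.dmp');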