You can migrate your Oracle Database to Autonomous Database using Database Migration Service (DMS). You can use the GUI or one of several other interfaces:
- REST API
- OCI CLI
- PL/SQL
- SDK for various programming languages (Java, Python, and others)
In this blog post, I use the OCI CLI. You can install it on your own computer or use Cloud Shell.
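If the OCI CLI is not installed yet, a minimal sketch of installing and configuring it looks roughly like this (the installer URL is the documented quick-start script; check the OCI CLI documentation for the exact steps on your platform). In Cloud Shell, the CLI is already installed and configured:
#Install the OCI CLI using the quick-start installer
bash -c "$(curl -L https://raw.githubusercontent.com/oracle/oci-cli/master/scripts/install/install.sh)"
#Create a configuration file with tenancy, user, and region details
oci setup config
#Verify the installation
oci --version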
The Scenario
This is my setup:
- Source database: Oracle Database 11.2.0.4 running on a Base Database System in OCI.
- Target database: Autonomous Database (transaction processing).
For simplicity, I will migrate all schemas in the database using Data Pump. You can cherry-pick individual schemas or exclude/include specific objects if needed.
I have already created a Vault. DMS needs one to store sensitive information.
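If you don't have a vault yet, a sketch of creating one together with a master encryption key via the OCI CLI could look like this. The display names are just examples, and the key must be created against the vault's management endpoint:
#Create a vault (display name is just an example)
oci kms management vault create \
--compartment-id <compartment OCID> \
--display-name DMS-VAULT \
--vault-type DEFAULT
#Look up the vault's management endpoint, then create an AES master encryption key in it
oci kms management key create \
--compartment-id <compartment OCID> \
--display-name DMS-KEY \
--key-shape '{"algorithm":"AES","length":32}' \
--endpoint <management endpoint of the vault>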
How to
I must execute all commands in the same shell session, because I store a lot of information in environment variables that I use later on:
#Specify a base name for the migration. All migration-related objects are prefixed with this name
export MIGRNAME=SALES
#Specify the OCID of the compartment where all the resources are running. This procedure assumes all resources are placed in the same compartment
export COMPARTMENTOCID="ocid1.compartment.oc1...."
#Vault details
export VAULTOCID="ocid1.vault.oc1...."
export VAULTKEYOCID="ocid1.key.oc1...."
Next, I specify information about the source database:
#Source database OCID
export SRCDBOCID="ocid1.database.oc1...."
#Network stuff
#Private IP address of the source DB System
export SRCHOSTIP=10.0.1.186
#Subnet OCID that the source DB System uses
export SRCSUBNETOCID="ocid1.subnet.oc1...."
#VCN OCID that the DB System uses
export SRCVCNOCID="ocid1.vcn.oc1...."
#Location of the private key file that can be used to communicate over SSH to the source host
export SRCHOSTKEYFILE=/Users/daniel/Documents/ssh/my-private-key
#Name and path of the database directory object. Remember to create the directory in the file system (see the sketch after this block)
export SRCDBDIRNAME=EXPDIR
export SRCDBDIRPATH=/u01/app/oracle/$SRCDBDIRNAME
#Connection details
#Details for non-CDB or PDB
export SRCPDBUSERNAME=SYSTEM
export SRCPDBPASSWORD=*****
export SRCPDBSVCNAME=SALESDB_fra1b4....oraclevcn.com
#If the source is a PDB, fill in the details for the CDB. For a non-CDB, leave them blank
export SRCCDBUSERNAME=
export SRCCDBPASSWORD=
export SRCCDBSVCNAME=
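The export directory must exist in the file system on the source database host before the migration runs. A sketch of creating it over SSH could look like this; the oracle:oinstall ownership is an assumption about a typical Base Database System and may differ in your environment:
#Create the export directory on the source host and hand it over to the database owner
ssh -i $SRCHOSTKEYFILE opc@$SRCHOSTIP "sudo mkdir -p $SRCDBDIRPATH && sudo chown oracle:oinstall $SRCDBDIRPATH"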
Finally, I specify information about the target database – the autonomous database:
#Target ADB OCID
export TGTADBOCID="ocid1.autonomousdatabase.oc1...."
#Username and password - typically the ADMIN user
export TGTDBUSERNAME=ADMIN
export TGTDBPASSWORD=*****
Now, let the fun begin. First, I create an object storage bucket, which DMS uses to store dump files, log files, CPAT output, and the like:
export BUCKETNAME=$MIGRNAME
export OSNAMESPACE=$(oci os bucket create \
--compartment-id $COMPARTMENTOCID \
--name $BUCKETNAME \
--query "data.namespace" \
--raw-output)
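To confirm that the bucket exists and that the namespace was captured, a quick check could be:
#Verify the bucket and the captured namespace
oci os bucket get --bucket-name $BUCKETNAME
echo $OSNAMESPACE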
Then, I create a connection to the source database (non-CDB or PDB). If the source database is a PDB, I also create a connection to the source CDB:
#Create connection to source PDB/non-CDB
export SRCSUDOLOCATION=/usr/bin/sudo
export SRCSSHUSER=opc
export SRCPDBCONNNAME=$MIGRNAME"-SRC-PDB-CONN"
#The 'ssh-details' parameter includes the contents of the private key file as a single-line string. Newlines from the file are converted to \n by the 'awk' command
export SRCPDBCONNOCID=$(oci database-migration connection create \
--compartment-id $COMPARTMENTOCID \
--database-type USER_MANAGED_OCI \
--admin-credentials '{"password":"'$SRCPDBPASSWORD'","username":"'$SRCPDBUSERNAME'"}' \
--vault-details '{"keyId":"'$VAULTKEYOCID'","vaultId":"'$VAULTOCID'","compartmentId":"'$COMPARTMENTOCID'"}' \
--database-id $SRCDBOCID \
--display-name $SRCPDBCONNNAME \
--connect-descriptor '{"connectString": "'$SRCHOSTIP':1521/'$SRCPDBSVCNAME'"}' \
--ssh-details '{"host":"'$SRCHOSTIP'","sudoLocation": "'$SRCSUDOLOCATION'","user":"'$SRCSSHUSER'","sshkey":"'"$(awk '{printf "%s\\n", $0}' $SRCHOSTKEYFILE)"'"}' \
--private-endpoint '{"subnetId":"'$SRCSUBNETOCID'","vcnId":"'$SRCVCNOCID'","compartmentId":"'$COMPARTMENTOCID'"}' \
--wait-for-state SUCCEEDED \
--max-wait-seconds 120 \
--query "data.resources[0].identifier" \
--raw-output)
#Create connection to source CDB, if needed
if [[ -n $SRCCDBUSERNAME ]];then
export SRCCDBCONNNAME=$MIGRNAME"-SRC-CDB-CONN"
export SRCCDBCONNOCID=$(oci database-migration connection create \
--compartment-id $COMPARTMENTOCID \
--database-type USER_MANAGED_OCI \
--admin-credentials '{"password":"'$SRCCDBPASSWORD'","username":"'$SRCCDBUSERNAME'"}' \
--vault-details '{"keyId":"'$VAULTKEYOCID'","vaultId":"'$VAULTOCID'","compartmentId":"'$COMPARTMENTOCID'"}' \
--database-id $SRCDBOCID \
--display-name $SRCCDBCONNNAME \
--connect-descriptor '{"connectString":"'$SRCHOSTIP':1521/'$SRCCDBSVCNAME'"}' \
--ssh-details '{"host":"'$SRCHOSTIP'","sudoLocation": "'$SRCSUDOLOCATION'","user":"'$SRCSSHUSER'","sshkey":"'"$(awk '{printf "%s\\n", $0}' $SRCHOSTKEYFILE)"'"}' \
--private-endpoint '{"subnetId":"'$SRCSUBNETOCID'","vcnId":"'$SRCVCNOCID'","compartmentId":"'$COMPARTMENTOCID'"}' \
--wait-for-state SUCCEEDED \
--max-wait-seconds 120 \
--query "data.resources[0].identifier" \
--raw-output)
fi
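Before moving on, it can be useful to verify that the connection ended up in an ACTIVE state. A quick check could look like this; the query path follows the kebab-case keys in the CLI's JSON output:
#Verify the source connection state
oci database-migration connection get \
--connection-id $SRCPDBCONNOCID \
--query "data.\"lifecycle-state\"" \
--raw-output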
Next, I create a connection to the target autonomous database:
export TGTCONNNAME=$MIGRNAME"-TGT-CONN"
export TGTCONNOCID=$(oci database-migration connection create \
--compartment-id $COMPARTMENTOCID \
--admin-credentials '{"password":"'$TGTDBPASSWORD'","username":"'$TGTDBUSERNAME'"}' \
--database-type AUTONOMOUS \
--vault-details '{"keyId":"'$VAULTKEYOCID'","vaultId":"'$VAULTOCID'","compartmentId":"'$COMPARTMENTOCID'"}' \
--database-id $TGTADBOCID \
--display-name $TGTCONNNAME \
--wait-for-state SUCCEEDED \
--max-wait-seconds 120 \
--query "data.resources[0].identifier" \
--raw-output)
Now, I will create a migration object which describes the migration. No changes are made to the database yet:
#Create the migration
export MIGROBJNAME=$MIGRNAME
if [[ -n $SRCCDBCONNOCID ]];then
export MIGRSRCCDBPARAM="--source-container-database-connection-id $SRCCDBCONNOCID"
else
export MIGRSRCCDBPARAM=""
fi
export MIGROBJOCID=$(oci database-migration migration create \
--compartment-id $COMPARTMENTOCID \
--vault-details '{"keyId":"'$VAULTKEYOCID'","vaultId":"'$VAULTOCID'","compartmentId":"'$COMPARTMENTOCID'"}' \
--source-database-connection-id $SRCPDBCONNOCID $MIGRSRCCDBPARAM \
--target-database-connection-id $TGTCONNOCID \
--type OFFLINE \
--display-name $MIGROBJNAME \
--data-transfer-medium-details '{"databaseLinkDetails": null,"objectStorageDetails": {"namespaceName": "'$OSNAMESPACE'","bucketName": "'$BUCKETNAME'"},"awsS3Details": null}' \
--datapump-settings '{"exportDirectoryObject": {"name": "'$SRCDBDIRNAME'","path": "'$SRCDBDIRPATH'"}}' \
--wait-for-state SUCCEEDED \
--max-wait-seconds 120 \
--query "data.resources[0].identifier" \
--raw-output
)
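If I want to double-check the settings on the newly created migration object, for example the data transfer medium and the Data Pump export directory, a quick look could be:
#Inspect the migration object
oci database-migration migration get \
--migration-id $MIGROBJOCID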
Now, I can perform an evaluation. This is a sanity check that validates the migration upfront before anything is changed. The command runs until the evaluation finishes.
#Evaluate
oci database-migration migration evaluate \
--migration-id $MIGROBJOCID \
--wait-for-state SUCCEEDED \
--wait-for-state FAILED \
--max-wait-seconds 3600
I can check the evaluation outcome, including the Cloud Premigration Advisor Tool (CPAT) report. You can find this information in the object storage bucket as well. You can run the evaluation as many times as needed:
#Get the last job and the details about it
export MIGRLASTJOBOCID=$(oci database-migration job list \
--migration-id $MIGROBJOCID \
--limit 1 \
--sort-by timeCreated \
--sort-order desc \
--query "data.items[0].id" \
--raw-output
)
oci database-migration job get-job-output-content \
--job-id $MIGRLASTJOBOCID \
--file -
#Get the CPAT report
oci database-migration job get-advisor-report \
--job-id $MIGRLASTJOBOCID
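The dump files, log files, and CPAT output mentioned earlier land in the object storage bucket, so another option is to list its contents:
#List the files DMS has placed in the bucket so far
oci os object list \
--bucket-name $BUCKETNAME \
--query "data[].name"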
Once I have cleared any issues preventing the migration, I can start the actual migration. The command returns control immediately after the migration is started. Optionally, I can use the parameters --wait-for-state and --max-wait-seconds to keep the command running until the migration completes:
#Start the real migration
export MIGRSTARTJOBID=$(oci database-migration migration start \
--migration-id $MIGROBJOCID \
--query "data.id" \
--raw-output
)
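If I prefer the command to block until the migration finishes, a variant using the wait parameters mentioned above could look like this (the wait time is just an example):
#Start the migration and wait for it to finish or fail
export MIGRSTARTJOBID=$(oci database-migration migration start \
--migration-id $MIGROBJOCID \
--wait-for-state SUCCEEDED \
--wait-for-state FAILED \
--max-wait-seconds 86400 \
--query "data.id" \
--raw-output
)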
I use the two commands below to monitor the migration. The first gives me the overall status. The second returns a log file with additional details.
#Get the current status
oci database-migration job get \
--job-id $MIGRSTARTJOBID
oci database-migration job get-job-output-content \
--job-id $MIGRSTARTJOBID \
--file -
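For a long-running migration, a simple polling loop around the status command can save some typing. A sketch could look like this; the lifecycle state names are assumptions based on typical DMS job states:
#Poll the job status every five minutes until it leaves the ACCEPTED/IN_PROGRESS states
while true; do
  STATUS=$(oci database-migration job get \
  --job-id $MIGRSTARTJOBID \
  --query "data.\"lifecycle-state\"" \
  --raw-output)
  echo "$(date): $STATUS"
  [[ $STATUS != "ACCEPTED" && $STATUS != "IN_PROGRESS" ]] && break
  sleep 300
done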
That’s it!
I have migrated my database to an autonomous database.
Appendix
Additional Resources
- Database Migration documentation
- Database Migration known issues
- OCI CLI syntax
- Database Migration Demo on YouTube
Using the GUI to find REST API calls
Although the OCI CLI commands are documented, it can be hard to figure out exactly which parameters to add and the exact syntax. My colleague, Alex Kotopoulis, gave me rock-star advice.
Switch to the GUI and configure the migration as you want. Turn on "Web developer tools" (might have a different name in your browser) and investigate the network traffic. You can see the REST API calls made by the GUI and easily translate those into OCI CLI parameters.
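As a hypothetical illustration of the mapping (not an exact capture): a createConnection REST call with a connectDescriptor element in its JSON body corresponds directly to the --connect-descriptor parameter used earlier:
#Fragment of the JSON body seen in the browser's network tab (illustrative)
#  "connectDescriptor": {"connectString": "10.0.1.186:1521/SALESDB_..."}
#The matching OCI CLI parameter
#  --connect-descriptor '{"connectString": "10.0.1.186:1521/SALESDB_..."}'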
Thanks Alex!