But what about Data Guard? A refreshable clone PDB supports only deferred recovery, which means I must restore the PDB on the standby after the upgrade.
Let’s see how it works.
Upgrade
For the upgrade, I follow the procedure for a non-Data Guard upgrade.
In my AutoUpgrade config file, I must either:
* Omit manage_standbys_clause, or
* Set manage_standbys_clause=standbys=none.
Both methods cause AutoUpgrade to plug in the PDB with deferred recovery, which is the only supported option when using a refreshable clone PDB.
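For reference, a minimal config file could look like the sketch below. The file layout, job prefix (upg1), and paths are placeholders for illustration; only manage_standbys_clause comes from the procedure above:

```ini
# Hypothetical AutoUpgrade config - names and paths are placeholders
global.autoupg_log_dir=/home/oracle/autoupgrade
upg1.source_home=/u01/app/oracle/product/19
upg1.target_home=/u01/app/oracle/product/26
upg1.sid=DB19
upg1.target_cdb=CDB26
# Plug in with deferred recovery (the only supported option here)
upg1.manage_standbys_clause=standbys=none
```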
Once the upgrade completes, I can proceed and restore the PDB to my standby database.
Standby Database
I execute all commands on the same host – the standby system.
On the standby, I verify that recovery status is disabled:
SQL> select open_mode, recovery_status
from v$pdbs where name='NEWPDB1';
OPEN_MODE    RECOVERY_STATUS
____________ __________________
MOUNTED      DISABLED
This means that I plugged in with deferred recovery.
The standby is not protecting this PDB.
Next, I connect to the standby using RMAN, and I restore the PDB:
connect target /
run{
allocate channel disk1 device type disk;
allocate channel disk2 device type disk;
set newname for pluggable database NEWPDB1 to new;
restore pluggable database NEWPDB1
from service <primary_service>
section size 64G;
}
You can add more channels depending on your hardware.
Replace NEWPDB1 with the name of your PDB.
<primary_service> is a connect string to the primary database.
Next, I connect to the standby database using Data Guard broker and turn off redo apply:
edit database <stdby_unique_name> set state='apply-off';
Back in RMAN, still connected to the standby, I switch to the newly restored data files:
switch pluggable database NEWPDB1 to copy;
Then, I connect to the standby and generate a list of commands that will online all the data files:
alter session set container=NEWPDB1;
select 'alter database datafile '||''''||name||''''||' online;' from v$datafile;
Save the commands for later.
There should be one row for each data file.
If my standby runs as Active Data Guard (open read-only), I must restart it in MOUNT mode:
alter session set container=CDB$ROOT;
shutdown immediate
startup mount
Now, I can re-enable recovery and online the data files:
alter session set container=NEWPDB1;
alter pluggable database enable recovery;
alter database datafile <file1> online;
alter database datafile <file2> online;
...
alter database datafile <filen> online;
Remember to connect to the PDB first, and to execute the alter database datafile ... online command for each data file, using the list I generated earlier.
I turn on redo apply:
edit database <stdby_unique_name> set state='apply-on';
At this point, the standby protects my PDB.
After a minute or two, I check the Data Guard config:
validate database <stdby_unique_name>;
Once my standby is in sync, I can do a switchover as the ultimate test:
switchover to <stdby_unique_name>;
Now, I connect to the new primary and ensure the PDB opens in read write mode and unrestricted:
select open_mode, restricted
from v$pdbs
where name='NEWPDB1';
OPEN_MODE    RESTRICTED
____________ ____________
READ WRITE   NO
I specify the source and target Oracle homes. These are the Oracle homes of the source non-CDB and target CDB, respectively.
manage_standbys_clause=standbys=none instructs AutoUpgrade to plug the PDB in with deferred recovery.
export_rman_backup_for_noncdb_to_pdb instructs AutoUpgrade to export RMAN metadata using DBMS_PDB.EXPORTRMANBACKUP. This makes it easier to use pre-plugin backups (see appendix).
On the primary host, I start AutoUpgrade to plug in, upgrade, and convert:
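The exact command is not shown in this chunk; a typical invocation, assuming the config file is named DB19.cfg, would be:

```shell
# Start AutoUpgrade in deploy mode - the config file name is an assumption
java -jar autoupgrade.jar -config DB19.cfg -mode deploy
```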
The summary report lists the following details from the POSTCHECKS stage:
The following PDB(s) were created with standbys=none option. Refer to the postcheck result /home/oracle/autoupgrade/upgrade26/DB19/100/postchecks/db19_copenhagen_postupgrade.log for more details on manual actions needed.
DB19
I find additional details in the referenced log file:
[action] Manual steps need to be performed after upgrade to copy the files to the standby database and enable recovery of the PDB from PRIMARY to STANDBY. Refer to MOS document Doc ID 1916648.1 for detailed steps.
[broken rule] The following PDB(s) [DB19] were created with standbys=none option.
[rule] On a Data Guard configuration, the CREATE PLUGGABLE DATABASE statement needs to be executed with clause STANDBYS=NONE to avoid impacting redo apply. That clause allows for deferral of file instantiation on the standby and the physical standby database to continue to protect existing pluggable databases. The clause allows the general structure of the PDB to be created on all physical standbys but all files belonging to the PDB are marked as OFFLINE/RECOVER at the standby.
I deal with this issue in the next chapter.
2. Restore PDB on Standby
On the standby database, stop redo apply in the standby CDB using dgmgrl:
edit database CDB26_AARHUS set state='apply-off';
Use RMAN to connect to the standby CDB:
rman target sys@CDB26_AARHUS
Restore the PDB and switch to the new data files:
run {
allocate channel d1 device type disk;
allocate channel d2 device type disk;
set newname for pluggable database DB19 to new;
restore pluggable database DB19 from service CDB26_COPENHAGEN;
switch pluggable database DB19 to copy;
}
Connect to the standby CDB and enable recovery of the PDB:
alter session set container=DB19;
alter pluggable database enable recovery;
Then, online all the data files in the PDB:
select file#, status from v$datafile;
alter database datafile <file#> online;
alter database datafile <file#> online;
...
alter database datafile <file#> online;
Restart redo apply in the standby CDB:
edit database CDB26_AARHUS set state='apply-on';
Connect to the standby CDB and verify the PDB’s recovery status (should be ENABLED). Ensure that the recovery process is running (should be APPLYING_LOG):
select recovery_status
from v$pdbs
where name='DB19';
select process, status, sequence#
from v$managed_standby
where process like 'MRP%';
Although optional, I strongly recommend performing a switchover as the ultimate test. Connect to dgmgrl using username and password:
dgmgrl sys as sysdba
Perform the switchover:
validate database "CDB26_AARHUS";
switchover to "CDB26_AARHUS";
Then, finally, I connect to the new primary CDB, CDB26_AARHUS. I ensure the PDB is open in READ WRITE mode and unrestricted. I check that the status of all data files is SYSTEM or ONLINE:
alter pluggable database DB19 open;
alter session set container=DB19;
select open_mode, restricted, recovery_status
from v$pdbs;
select name, status
from v$datafile;
I update any profiles or scripts that use the database.
I remove the non-CDB entry from /etc/oratab and Grid Infrastructure on the standby. AutoUpgrade takes care of the primary.
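On the standby, that cleanup could look like the sketch below. The resource name and oratab entry are assumptions; verify them with srvctl config database and by inspecting /etc/oratab first:

```shell
# Remove the source non-CDB from Grid Infrastructure on the standby
# (the db_unique_name is an assumption for this example)
srvctl remove database -db DB19_AARHUS -noprompt

# Delete the matching entry from /etc/oratab, e.g.:
sed -i '/^DB19:/d' /etc/oratab
```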
I clean up and remove the old source non-CDB. On both primary and standby, I remove:
* Database files, like PFile, SPFile, password file, control file, and redo logs.
* Database directory, like diagnostic_dest, adump, or db_recovery_file_dest.
On the standby, I remove the source non-CDB data files using asmcmd:
cd +DATA
rm -rf DB19_AARHUS/DATAFILE
Important: I don’t do that on the primary because I reused the data files during the plug-in.
That’s It!
I use AutoUpgrade for the entire upgrade; nice and simple. The only manual work is taking care of the standby database after the migration.
When I reuse the data files on the primary database, I no longer have the non-CDB for rollback. Be sure to plan accordingly.
When I create the PDB, I reuse the data files from the primary database. The data files are in the OMF location of the non-CDB database, e.g.:
+DATA/DB19_COPENHAGEN/DATAFILE
However, after plug-in, the proper OMF location for my PDB data files is:
+DATA/CDB26_COPENHAGEN/<PDB GUID>/DATAFILE
The CDB doesn’t care about this anomaly. However, if I want to conform to the OMF naming standard, I must move the data files. Find the data files:
select file#, name from v$datafile;
I use online data file move to relocate the files that are in the wrong location. I don’t specify a new data file name, so the database generates an OMF name:
alter database move datafile <file#>;
Online data file move creates a copy of the data files before switching to the new file and dropping the old one. So I need additional disk space, and the operation takes time while the database copies the file.
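To generate one move command per misplaced file, I can adapt the query above. The path filter is an assumption based on the example locations in this chapter; adjust it to your setup:

```sql
-- Generate ALTER DATABASE MOVE DATAFILE commands for files still in the
-- non-CDB's OMF location (path filter is an example only)
select 'alter database move datafile ' || file# || ';' as cmd
from v$datafile
where name like '+DATA/DB19_COPENHAGEN/%';
```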
Standby Database
There is nothing to do on the standby database. When I restore the data files, they are placed in the right OMF location.
Rollback Options
When you convert a non-CDB to a PDB, you can’t use Flashback Database as a rollback method. You need to rely on other methods, like:
* RMAN backups
* Storage snapshots
* Standby database (the source non-CDB standby database, which you leave behind)
* Refreshable clone PDB
What If I Have Multiple Standby Databases?
You must repeat the relevant steps for each standby database.
You can mix and match standbys with deferred and enabled recovery. Let’s say that you want to use enabled recovery on DR standbys and deferred recovery on the reporting standbys.
Role               Name                   Method
Local DR           CDB26_COPENHAGEN2      Enabled recovery
Local reporting    CDB26_COPENHAGENREP    Deferred recovery
Remote DR          CDB26_AARHUS           Enabled recovery
Remote reporting   CDB26_AARHUSREP        Deferred recovery
You would set the following in your AutoUpgrade config file:
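The exact contents are not shown in this chunk. Since the STANDBYS clause of CREATE PLUGGABLE DATABASE accepts a list of DB_UNIQUE_NAMEs, a sketch could look like this (the upg1 prefix is an assumption, and you should verify the clause syntax for your release):

```ini
# Enable recovery only on the DR standbys; all other standbys get
# deferred recovery. Sketch only - verify syntax for your release.
upg1.manage_standbys_clause=standbys=('CDB26_COPENHAGEN2','CDB26_AARHUS')
```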
You would need to merge the two procedures together. This is left as a reader’s exercise.
What If My Database Is A RAC Database?
There are no changes to the procedure if you have an Oracle RAC database. AutoUpgrade detects this and sets CLUSTER_DATABASE=FALSE at the appropriate time. It also removes the non-CDB from the Grid Infrastructure configuration.
Pre-plugin Backups
After converting a non-CDB to a PDB, you can restore the PDB using a combination of backups from before and after the plug-in operation. Backups from before the plug-in are called pre-plugin backups.
A restore using pre-plugin backups is more complicated; however, AutoUpgrade eases that by exporting the RMAN backup metadata (config file parameter export_rman_backup_for_noncdb_to_pdb).
I suggest that you:
* Start a backup immediately after the upgrade, so you don’t have to use pre-plugin backups.
* Practice restoring with pre-plugin backups.
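As an illustration of what such a restore involves, RMAN has a dedicated FROM PREPLUGIN clause for this purpose. A sketch, assuming the exported RMAN metadata is in place and you are connected to the target CDB:

```sql
-- Restore and recover the PDB from backups taken before the plug-in.
-- Sketch only; afterwards, recovery continues with post-plugin backups.
alter pluggable database DB19 close;
restore pluggable database DB19 from preplugin;
recover pluggable database DB19 from preplugin;
```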
What If My Database Is Encrypted?
AutoUpgrade fully supports upgrading an encrypted database. I can still use the above procedure with a few changes.
You’ll need to input the non-CDB database keystore password into the AutoUpgrade keystore. You can find the details in a previous blog post.
In the container database, AutoUpgrade always adds the database encryption keys to the unified keystore. After the conversion, you can switch to an isolated keystore.
Other Config File Parameters
The config file shown above is a basic one. Let me address some of the additional parameters you can use.
timezone_upg: AutoUpgrade upgrades the database time zone file after the actual upgrade. This requires an additional restart of the database and might take significant time if you have lots of TIMESTAMP WITH TIME ZONE data. If so, you can postpone the time zone file upgrade or perform it in a more time-efficient manner.
before_action / after_action: Extend AutoUpgrade with your own functionality by using scripts before or after the job.
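For example, the two parameters could be combined like this in the config file; the script paths are placeholders:

```ini
# Skip the time zone file upgrade during deploy; run it later in a
# separate maintenance window
upg1.timezone_upg=no
# Hypothetical scripts to run before and after the job
upg1.before_action=/home/oracle/scripts/pre_upgrade.sh
upg1.after_action=/home/oracle/scripts/post_upgrade.sh
```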