Under the hood, AutoUpgrade uses the Parallel Upgrade Utility, catctl.pl, to do the database upgrade. The Parallel Upgrade Utility has a long list of options that you can configure; the parallel options are the most notable. In your AutoUpgrade config file you can now specify a subset of options to catctl.pl using the parameter catctl_options.
Now, you should not expect a 10x improvement by adding a ton of CPUs to your upgrade. Mike Dietrich posted a really good article that explains what matters to an upgrade when it comes to performance. But if you want to squeeze the very last resources out of your system during the upgrade, or fine-tune the distribution of CPUs and the resource consumption, you can do it with AutoUpgrade.
How
First, you can find a list of catctl parameters in the documentation.
When you have determined the parameters that you want to use, specify them in your config file either globally (for all upgrades):
global.catctl_options=-N 8
or locally for a specific upgrade:
upg1.catctl_options=-N 8
The above examples will run the upgrade of PDBs using eight parallel processes.
Have a look in the documentation to know the minimum and maximum values for the settings – especially for the parallel settings, they are really good to know.
Non-CDB
For now, this does not work for non-CDB databases. It is in the plans for a coming release.
CDB and PDB
When you upgrade a CDB, the following happens:
First, CDB$ROOT is upgraded using, by default, four parallel processes.
Next, a number of PDBs are upgraded concurrently, starting with PDB$SEED. This is controlled by the parameter n.
Each individual PDB is upgraded using a number of SQL processes as well. This is controlled by the parameter N.
This means that the level of parallelism is n * N.
If you are conservative, you set this to the number of CPUs.
If you are bold, you could probably raise it further, because some of the phases in the upgrade run serially or don't use the full parallel capacity.
In the documentation there are some really good examples and explanations of using N and n together.
Put the following in your AutoUpgrade config file to run 6 PDB upgrades concurrently, each using 8 parallel processes:
upg1.catctl_options=-n 6 -N 8
Note that regardless of what you specify, AutoUpgrade will always upgrade CDB$ROOT with the maximum number of parallel processes. CDB$ROOT is special, and it must be upgraded before any of the other containers can start. Hence, it makes sense to get it completed as fast as possible.
If you are doing an unplug-plug upgrade of a single PDB it could be a good idea to add more parallel processes to that single upgrade. If you want to use 8 parallel processes:
upg1.catctl_options=-N 8
Conclusion
It can be useful to override the default parallel settings during upgrades. You should not expect a 10x performance improvement, but you might squeeze out the very last resources.
What is the best setting? It depends. You should go with the defaults, or test it using your own databases.
Wow! Mike and I gave a webinar yesterday: Cool Features – not only for DBAs. We showed a lot of cool features – and the audience recognised that by asking really cool questions. We had more than 100 questions to answer live, and below you can find the answer to those questions that we had to investigate further.
Ahh – one of my new favourites. A viewer asked whether there is any syntax check when you set an expression-based parameter. So let's try. The first command should work:
SQL> alter session set resumable_timeout='3000/2';
Session altered.
Now, let’s make a syntax error and see what happens:
SQL> alter session set resumable_timeout='3000//2';
ORA-32005: error while parsing size specification [3000//2]
SQL> alter system set resumable_timeout='3000//2';
ORA-32005: error while parsing size specification [3000//2]
SQL> alter system set resumable_timeout='3000//2' scope=both;
ORA-32005: error while parsing size specification [3000//2]
SQL> alter system set resumable_timeout='3000//2' scope=memory;
ORA-32005: error while parsing size specification [3000//2]
SQL> alter system set resumable_timeout='3000//2' scope=spfile;
ORA-32005: error while parsing size specification [3000//2]
Conclusion: Your expressions are checked when you issue the ALTER SESSION or ALTER SYSTEM command.
But take care when using environment variables in your expressions. If the environment variable is available when you issue the statement, the command will succeed. However, if the environment variable is not present when the database restarts, then you will have a problem. This could be the case if you forget to add the environment variable to your profile or the environment in Grid Infrastructure (srvctl setenv).
If this happens, the startup will error out.
And finally, the expressions you put into a pfile are not checked before the database starts using that pfile. If there are invalid expressions in your pfile, the database startup will error out as well.
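To illustrate, here is a hypothetical example that references an environment variable in an expression. The variable name MAX_RESUMABLE is my own invention, and you should verify the exact expression syntax for environment variables in the documentation:
$ export MAX_RESUMABLE=3600
SQL> alter system set resumable_timeout='$MAX_RESUMABLE' scope=spfile;
If MAX_RESUMABLE is missing from the environment at the next restart, you run into exactly the startup problem described above.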
Online Table Move
Indexes and LOBs
When you move a table online the indexes become unusable – unless you specify to update them as well. It is a little odd that the default behaviour doesn’t update the indexes when you move online. When you do online stuff it is to avoid disruptions – and disruptions you will get with unusable indexes:
SQL> alter table t1 move online tablespace data update indexes;
You can also move the index to a new tablespace if you want:
SQL> alter table t1 move online tablespace data update indexes(i1 tablespace data);
The LOB segments that are created to support the LOB columns remain in the same tablespace, even if you move the table. But if you want, you can move the LOB segments as well:
SQL> alter table t1 move online tablespace data lob(c1) store as (tablespace data);
A viewer was concerned whether he could move a table online and also update the indexes, because one of the indexes was really wide. Apparently, in a previous version this had been an issue for them. I tested this out with an index on 10 columns defined as VARCHAR2(128 BYTE), and it was not a problem.
If your index is even wider, test it yourself – or consider whether you really need such an index.
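If you want to repeat the test, a quick sketch could look like this. The table, index and tablespace names are made up to match the earlier examples:
SQL> create table t2 (c01 varchar2(128 byte), c02 varchar2(128 byte), c03 varchar2(128 byte), c04 varchar2(128 byte), c05 varchar2(128 byte), c06 varchar2(128 byte), c07 varchar2(128 byte), c08 varchar2(128 byte), c09 varchar2(128 byte), c10 varchar2(128 byte));
SQL> create index i2 on t2 (c01, c02, c03, c04, c05, c06, c07, c08, c09, c10);
SQL> alter table t2 move online tablespace data update indexes;
SQL> select index_name, status from user_indexes where table_name = 'T2';
The index should be listed as VALID after the move.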
It is supported to move an IOT online as well – but not partitioned IOTs.
The Rest
If the table has unused columns when you move it, it will still have unused columns. They are not affected by a move operation. Actually, Connor McDonald made a really good video where he talks about unused columns.
Also, an attendee wanted to know whether the online command generates more UNDO or TEMP. Let's see what happens with my 125 MB table (no indexes):
+------------------------+-----------+------------+
|Metric                  |Online move|Regular move|
+------------------------+-----------+------------+
|undo change vector size |      96504|       37988|
|session pga memory      |    5750416|     4243088|
+------------------------+-----------+------------+
As you can see, it does require more UNDO to move the table online. But the numbers might change on a busy system with many indexes and LOBs. TEMP appears to be relatively unaffected – it might change if I have unique indexes and must rebuild those.
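If you want to repeat the comparison on your own system, a simple sketch is to run this query right before and right after each move and compare the deltas:
SQL> select sn.name, ms.value
       from v$mystat ms, v$statname sn
      where sn.statistic# = ms.statistic#
        and sn.name in ('undo change vector size', 'session pga memory');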
Lesson learned – expect more resource usage – test before trying in production.
And finally – does online table move work on tables with OLS (Label Security) policies? I don’t know – so far I haven’t been able to get confirmation. My immediate answer is NO. Take for instance DBMS_REDEFINITION – it is not supported when the table has OLS policies, so I assume the same applies for online table move.
Online Convert To Partitioned Table
This feature only works on a non-partitioned (or regular) table. If your table is already partitioned, and you want to change to a different partitioning method, you must use another method (like DBMS_REDEFINITION).
Online Data File Move
When you move a data file online, the database creates an exact copy of the data file in the new location. It is bit-by-bit identical.
This also means that any free space in the data file is not reclaimed, nor is the High Water Mark affected.
Standby Database Operations
I demoed how the standby database in 19c can do automatic flashback, when the primary database flashes back. A viewer asked whether you can flash back to the same restore point multiple times without the standby database complaining.
My immediate answer was: yes, you can flash back to the same restore point multiple times. The documentation doesn’t mention such a restriction, so I was eager to try it out.
I made a Data Guard setup in OCI, and I was able to flash back to the same restore point three times – and the standby database followed each time.
Gradual Password Rollover
With gradual password rollover a user can temporarily have two passwords. A viewer asked whether you can identify the sessions that are still connecting with the old password. And you can with the unified audit trail, provided you are auditing logons. In that case, simply look at the AUTHENTICATION_TYPE column of the audit entries for LOGON actions:
SQL> select authentication_type, event_timestamp
from unified_audit_trail
where action_name='LOGON' and dbusername='APP_USER'
order by event_timestamp;
Look at the VERIFIER information. If a user is using the old password in a rollover period, the identifier is suffixed -OLD. Similarly, the suffix is -NEW if the new password is used.
This is a very useful addition to gradual password rollover.
Conclusion
Thanks to everyone that participated today. We had so much fun preparing the webinar for you. And really – thanks for all the cool questions. Doing stuff remotely is really hard, and you really feel disconnected from the audience. But good questions help a lot.
You can use AutoUpgrade to upgrade a database using only a single command line. No config file is needed!
Whenever we talk about AutoUpgrade, we also mention the config file. The file that contains information about what has to be upgraded. A very simple version of such a config file could look like this:
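For instance, something like this – the paths and the SID are placeholders:
global.autoupg_log_dir=/u01/app/oracle/cfgtoollogs/autoupgrade
upg1.sid=DB12
upg1.source_home=/u01/app/oracle/product/12.2.0.1
upg1.target_home=/u01/app/oracle/product/19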
This is the preferred and recommended way of using AutoUpgrade. But you can actually specify everything on the command line.
Upgrade in One Line
By using the command line option config_values you can now specify the config file entries on the command line. Instead of using the above config file you could execute:
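A sketch using the same placeholder values as above – check the documentation for the exact config_values syntax:
java -jar autoupgrade.jar -config_values "global.autoupg_log_dir=/u01/app/oracle/cfgtoollogs/autoupgrade,sid=DB12,source_home=/u01/app/oracle/product/12.2.0.1,target_home=/u01/app/oracle/product/19" -mode deploy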
Notice how I used config_values to specify the contents of the config file. The prefix that you normally use in the config file (in this case upg1) is not used here; only global entries keep their prefix, like global.autoupg_log_dir.
If you have several databases to upgrade, you separate them with an asterisk (*):
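Again a sketch with placeholder values:
java -jar autoupgrade.jar -config_values "sid=DB12,source_home=/u01/app/oracle/product/12.2.0.1,target_home=/u01/app/oracle/product/19*sid=DB13,source_home=/u01/app/oracle/product/12.2.0.1,target_home=/u01/app/oracle/product/19" -mode deploy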
Next, AutoUpgrade will create a config file using the information supplied via config_values or taken from the environment.
From here on AutoUpgrade behaves as usual.
Conclusion
If needed, you can provide all input to AutoUpgrade in one command line. This is useful if you are using AutoUpgrade in scripts or from Ansible or similar orchestration tools.
I would still recommend the use of a config file. It is easier to read and write the options in a nicely formatted text file. Further, you avoid the potential trouble of escaping characters on the command line. And, finally, you avoid having a very long and unreadable command line.
These arguments are, by the way, the same we use when we recommend using a parameter file (.par) for Data Pump.
AutoUpgrade now supports unplug-plug upgrades. You unplug a PDB from a lower release CDB and you plug it into a higher release CDB. After plug-in the PDB is upgraded and eventually it can be opened in normal, READ WRITE mode.
When it comes to upgrading in the multitenant world, I am a big fan of unplug-plug upgrades. The concept comes with a number of benefits:
It is much faster to upgrade an individual PDB using unplug-plug compared to a CDB with just one PDB in it. When you do an unplug-plug upgrade, the database just needs to upgrade the PDB. Compare that to a CDB, which first upgrades CDB$ROOT, and then PDB$SEED and any user PDBs.
You don’t have to arrange downtime for all the PDBs in the CDB. Downtime is just needed for the PDB that you will upgrade.
Combine it with refreshable PDBs and you can still have a really good fallback option. You can check out a previous blog post to see how you can use refreshable PDBs.
AutoUpgrade and Unplug-plug Upgrade
Starting from version 21, AutoUpgrade can now perform unplug-plug upgrades. A newer version of AutoUpgrade can upgrade to older database releases as well, so don’t worry if the AutoUpgrade version doesn’t match the Oracle Database release that you are upgrading to.
There are some requirements that must be met in order to perform an unplug-plug upgrade, so I suggest that you take a look in the documentation.
You have to create the target CDB yourself. It is by design that AutoUpgrade doesn't do this for you. First, creating a CDB requires a lot of information, and it can be done in many different ways (ASM? Which components? RAC?). You would need a very long config file to supply all that information. Also, it takes time to create a CDB, and if AutoUpgrade had to do that inside the maintenance window, the window would be prolonged considerably.
During unplug-plug upgrades AutoUpgrade also allows you to change the name of the PDBs and you can decide whether you want to reuse the unplugged data files or take a copy.
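A config file for such a scenario could look like the sketch below. The Oracle Home paths are placeholders, and the parameter name for renaming a PDB (target_pdb_name) is written from memory, so double-check it in the documentation:
global.autoupg_log_dir=/u01/app/oracle/cfgtoollogs/autoupgrade
upg1.sid=CDB1
upg1.source_home=/u01/app/oracle/product/18.0.0.0
upg1.target_cdb=CDB2
upg1.target_home=/u01/app/oracle/product/19.0.0.0
upg1.pdbs=hr,logistics
upg1.target_pdb_name.hr=people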
AutoUpgrade will unplug PDBs hr and logistics from CDB1 and plug them into CDB2. In addition, it will change the name of hr to people when it is plugged into CDB2. Finally, you must specify the Oracle Home of the two CDBs, so AutoUpgrade can set the environment correctly and connect to the databases.
If you use the lsj command to monitor the progress, it actually looks like you are only upgrading one of the PDBs:
upg> lsj
+----+-------+---------+---------+-------+--------------+--------+------------------+
|Job#|DB_NAME| STAGE|OPERATION| STATUS| START_TIME| UPDATED| MESSAGE|
+----+-------+---------+---------+-------+--------------+--------+------------------+
| 100| CDB1|DBUPGRADE|EXECUTING|RUNNING|20/12/22 15:25|15:29:03|13%Upgraded PEOPLE|
+----+-------+---------+---------+-------+--------------+--------+------------------+
Total jobs 1
But if you look into the details with status -job 100 you can see that both PDBs are upgraded in parallel:
upg> status -job 100
... (removed a lot of information)
Details:
[Upgrading] is [0%] completed for [cdb1-people]
+---------+-------------+
|CONTAINER| PERCENTAGE|
+---------+-------------+
| PEOPLE|UPGRADE [13%]|
|LOGISTICS|UPGRADE [13%]|
+---------+-------------+
When the upgrade completes, the PDBs are ready to be used. I suggest that you verify that the databases are open in READ WRITE mode and not in restricted mode. Finally, save the state, so the PDBs start automatically together with the CDB:
SQL> select name, open_mode, restricted from v$pdbs where name in ('PEOPLE', 'LOGISTICS');
SQL> --Verify open_mode=read write and restricted=no
SQL> alter pluggable database people save state;
SQL> alter pluggable database logistics save state;
Caution
With unplug-plug upgrades you can’t use Flashback Database as your fallback plan. It doesn’t work across the plug-in operation. You either have to:
Instruct AutoUpgrade to copy the unplugged data files before it plugs them into the higher release CDB. That way, you still have the old unplugged data files and can just re-create the PDB in the lower release CDB. But you will have extra downtime because you need to copy the data files.
Use Refreshable PDBs to build a copy of your PDB in the higher release, target CDB. When you want to do the upgrade, perform the last refresh and upgrade the refreshable PDB.
Both of the above options require additional disk space to hold a copy of the database.
Of course, you can also use your regular backups as a fallback.
What If
Your Target CDB Has a Standby Database?
For now, don't use AutoUpgrade to make unplug-plug upgrades if the target CDB has standby databases. A plug-in operation with a standby database is a tricky manoeuvre, at least when you want to re-use the data files. We are still trying to figure out how to implement it in AutoUpgrade.
Having said that, it is absolutely doable. You can read more about it in the following MOS notes:
Your PDB Uses TDE Tablespace Encryption?
For now, don't use AutoUpgrade to perform unplug-plug upgrades if any tablespace in the PDB is encrypted with TDE Tablespace Encryption. We are working on making AutoUpgrade capable of better interacting with the TDE keystore, so keep an eye out for coming versions.
If TDE Tablespace Encryption is enabled in the target CDB, you can still use AutoUpgrade. The PDB will be plugged in as an unencrypted PDB.
Conclusion
Doing unplug-plug upgrades is now supported by AutoUpgrade as of version 21. It includes useful features for renaming PDBs and using copies of unplugged data files.
There is a video on YouTube that shows the procedure. And while you are there, I suggest that you subscribe to our channel.
You can upgrade your database to a new release with AutoUpgrade and keep the Data Guard setup intact. The standby database(s) can be upgraded implicitly via the redo from the primary database, and there is no need to rebuild the standby database after upgrade.
The process: stop the standby database, upgrade the primary database with AutoUpgrade, restart the standby database in the new Oracle Home, and finally re-enable redo transport so the standby database catches up.
In the following I will be using this setup: a primary database (PROD1) on host bm1 and a standby database (PROD2) on host bm2, both managed by Grid Infrastructure (GI).
In advance, you should install the new Oracle Home on both the primary and standby hosts. The two Oracle Homes should have the same patches applied, and I recommend that you always apply the latest Release Update.
Before Upgrade
You must use AutoUpgrade version 21.1.1 or newer. A newer version of AutoUpgrade can upgrade to older database releases as well, so don’t worry if the AutoUpgrade version doesn’t match the Oracle Database release that you are upgrading to.
AutoUpgrade can handle a Data Guard environment that is configured manually or via Data Guard broker.
The procedure starts right before you start AutoUpgrade in DEPLOY mode (or alternatively in FIXUPS mode). Downtime has started, and users are no longer connected to the database.
Stop Data Guard
On the standby database, generate commands to copy the Data Guard broker config files. Don’t execute them yet:
PROD2 SQL> select 'cp ' || value || ' /tmp' as cmd from v$parameter where name like 'dg_broker_config_file%';
Shut down the standby database. Disabling the database is strictly speaking not necessary, but a better-safe-than-sorry approach:
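Since the database in my setup is managed by GI, a sketch of the commands could be:
[oracle@bm2]$ srvctl stop database -db PROD2 -stopoption immediate
[oracle@bm2]$ srvctl disable database -db PROD2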
Since redo transport has not been deferred yet in the primary database, it will complain about losing connection to the standby database. The alert log will contain an entry similar to this:
2020-12-03T06:30:12.751693+00:00
TT03 (PID:47477): Attempting LAD:2 network reconnect (3113)
TT03 (PID:47477): LAD:2 network reconnect abandoned
2020-12-03T06:30:12.752104+00:00
Errors in file /u01/app/oracle/diag/rdbms/prod1/PROD/trace/PROD_tt03_47477.trc:
ORA-03113: end-of-file on communication channel
TT03 (PID:47477): Error 3113 for LNO:3 to 'prod2'
It can be safely ignored, because it is after all a maintenance window and the database is about to be upgraded. Your monitoring system might detect this and start to complain.
Upgrade
Upgrade the database by starting AutoUpgrade in DEPLOY mode. AutoUpgrade will defer redo transport and stop Data Guard broker (if in use) automatically:
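A sketch of the command, assuming the config file is called PROD.cfg (see the appendix for its content):
[oracle@bm1]$ java -jar autoupgrade.jar -config PROD.cfg -mode deploy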
After the upgrade you should perform the necessary tests to validate the new database release. Only when you are convinced to go live on the new release should you continue.
Remember that the standby database was left behind before we started touching anything, so if all else fails, simply restart the standby database and connect your users to it.
After Upgrade
Restart Data Guard
Update the listener on the standby host. Be sure to update the Oracle Home information in the listener.ora entry. Note that your listener.ora might be stored in a non-default location, so use lsnrctl status to get the location. Finally, reload the listener:
[grid@bm2]$ $GRID_HOME/bin/lsnrctl status
[grid@bm2]$ vi $GRID_HOME/network/admin/listener.ora
[grid@bm2]$ $GRID_HOME/bin/lsnrctl reload
For the next commands, I will be using the same prompt, and I will need the following environment variables:
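Something like this – the Oracle Home paths are placeholders for your actual old and new homes, and the SID must match your standby instance:
[oracle@bm2]$ export ORACLE_SID=PROD
[oracle@bm2]$ export OLD_HOME=/u01/app/oracle/product/18.0.0.0
[oracle@bm2]$ export NEW_HOME=/u01/app/oracle/product/19.0.0.0
[oracle@bm2]$ export ORACLE_HOME=$NEW_HOME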
Next, if the standby database is using TNS_ADMIN in the default location ($ORACLE_HOME/network/admin), then be sure to copy the relevant TNS aliases into the new tnsnames.ora. There should be TNS aliases to the primary and standby database. Or, if there are no other databases in the same Oracle Home, you can simply copy the files:
[oracle@bm2]$ #Back up files
[oracle@bm2]$ cp $NEW_HOME/network/admin/sqlnet.ora $NEW_HOME/network/admin/sqlnet.ora.backup
[oracle@bm2]$ cp $NEW_HOME/network/admin/tnsnames.ora $NEW_HOME/network/admin/tnsnames.ora.backup
[oracle@bm2]$ #Copy from old to new home
[oracle@bm2]$ cp $OLD_HOME/network/admin/sqlnet.ora $NEW_HOME/network/admin
[oracle@bm2]$ cp $OLD_HOME/network/admin/tnsnames.ora $NEW_HOME/network/admin
Now, you can edit /etc/oratab and update the information about the Oracle Home to match the new Oracle Home. In my example, the database is managed by GI, so I should not configure auto-start in /etc/oratab. If you are not managing your databases with GI, you probably want to configure the standby database to start automatically (see appendix):
[oracle@bm2]$ #Backup file
[oracle@bm2]$ cp /etc/oratab /tmp/oratab
[oracle@bm2]$ #Use sed to remove the line that starts with ORACLE_SID
[oracle@bm2]$ sed '/^'"$ORACLE_SID"':/d' /tmp/oratab > /etc/oratab
[oracle@bm2]$ #Add new entry
[oracle@bm2]$ echo "$ORACLE_SID:$ORACLE_HOME:N" >> /etc/oratab
Copy SPFile and password file to the new Oracle Home:
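Assuming they are in the default location under $ORACLE_HOME/dbs (if they are stored in ASM, this step looks different):
[oracle@bm2]$ cp $OLD_HOME/dbs/spfile$ORACLE_SID.ora $NEW_HOME/dbs/
[oracle@bm2]$ cp $OLD_HOME/dbs/orapw$ORACLE_SID $NEW_HOME/dbs/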
Copy the broker config files into the new Oracle Home. If you store your broker config files outside of the Oracle Home, this might not be necessary for you:
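Assuming the default file names (dr1<DB_UNIQUE_NAME>.dat and dr2<DB_UNIQUE_NAME>.dat) and that you copied them to /tmp using the commands generated earlier:
[oracle@bm2]$ cp /tmp/dr1prod2.dat /tmp/dr2prod2.dat $NEW_HOME/dbs/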
Upgrade the database in GI, which updates the Oracle Home information, so GI will start the database in the correct Oracle Home. Next, re-enable and start the database:
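A sketch of the srvctl commands – option names might differ slightly between versions:
[oracle@bm2]$ srvctl upgrade database -db PROD2 -oraclehome $NEW_HOME
[oracle@bm2]$ srvctl enable database -db PROD2
[oracle@bm2]$ srvctl start database -db PROD2 -startoption mount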
Or, if you are not using GI, simply start the database in the new Oracle Home:
PROD2 SQL> startup mount
Re-enable Data Guard
To re-enable the Data Guard config, use DGMGRL, the Data Guard command-line interface:
[oracle@bm1]$ $ORACLE_HOME/bin/dgmgrl sys@PROD1
And re-enable redo transport:
DGMGRL SYS@PROD1> edit database prod1 set state=transport-on;
Now, redo is shipping to the standby database, and it will apply it. When the redo that was generated during the upgrade is applied on the standby database, it is implicitly upgraded. You can monitor the progress of the apply by looking at the Apply Lag information. The Apply Lag will decrease until the standby database eventually catches up and they are fully synchronized:
DGMGRL SYS@PROD1> show database prod2;
Test
Use the broker to ensure everything is fine:
DGMGRL SYS@PROD1> show configuration
DGMGRL SYS@PROD1> show database prod1
DGMGRL SYS@PROD1> show database prod2
You should have SUCCESS listed for both databases.
Let’s validate the setup and try to make a switchover. The database will not allow a switchover if there are any problems in the Data Guard setup. It is a good way of checking things are fine:
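A sketch, using the broker:
DGMGRL SYS@PROD1> validate database prod2
DGMGRL SYS@PROD1> switchover to prod2
When you are done testing, you can switch back to prod1 again.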
If you don't use Data Guard broker, you use regular SQL and SQL*Plus to verify the Data Guard environment.
Conclusion
It is actually not that complicated to upgrade your database, even if it is part of a Data Guard setup. And with version 21.1.1 of AutoUpgrade it has become easier. A little extra legwork is needed to take care of the standby database. But the good thing is that your Data Guard setup is maintained throughout the process.
I made a video on YouTube that shows the procedure. And while you are there, I suggest that you subscribe to our channel.
Appendix
Config File
For your reference, this is the config file that I used. It contains only the required information. All other parameters have a default value:
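A sketch of the config file – the paths are placeholders:
global.autoupg_log_dir=/u01/app/oracle/cfgtoollogs/autoupgrade
upg1.sid=PROD
upg1.source_home=/u01/app/oracle/product/18.0.0.0
upg1.target_home=/u01/app/oracle/product/19.0.0.0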
When you run AutoUpgrade in ANALYZE mode and check the preupgrade report, you will find this information message:
[checkname] SYNC_STANDBY_DB
[stage] PRECHECKS
[fixup_available] NO
[runfix] N/A
[severity] INFO
[action] Synchronize your standby databases before database upgrade.
[broken rule] The standby database is not currently synchronized with its associated primary database.
[rule] To keep data in the source primary database synchronized with its associated standby databases, all standby databases must be synchronized before database upgrade. See My Oracle Support Note 2064281.1 for details.
What does it say? Basically, it says that all redo generated on the primary database before the downtime window started should be sent to and applied on the standby database. This way, your standby database is ready to replace your primary database at any time, if something goes really wrong.
Strictly speaking it is not necessary to ensure that, but it is strongly recommended.
GI-managed Database in /etc/oratab
When the database is managed by GI, you don't need to have it configured in /etc/oratab. Personally, I like to have it anyway, because then you have a clear overview of which databases are on the server, and you can use /etc/oratab to set your environment, like when you are using the oraenv script.
I know that die-hard GI folks might roll their eyes when I say it, but I like it this way.
We can all agree 2020 was a bad year. Let’s hope 2021 will be much better.
Over the last months I have made a lot of videos, which is really hard work. Especially since I am not a native English speaker, I often need to re-take a video several times. I saved all my bad recordings and I decided to create a bloopers video. I hope it can help you leave 2020 with a smile.
In case you are taking some time off in the coming days and you get a little bored, I suggest you head over to our YouTube channel and catch up on some tech-stuff.
Our developers just published a new version of AutoUpgrade. Head over to My Oracle Support and download version 21.1.1.
The version says 21, but it can still upgrade your databases to lower releases. So, you should use this new version to upgrade to Oracle Database 19c as well.
What’s New
A total of 48 enhancements and bug fixes made it into the new release. It has been 35 working days since the last release (and that includes Thanksgiving week), which means that the team has put – on average – close to 1.5 changes into AutoUpgrade every single working day. I find that quite impressive. And it really supports our message that you should always use the latest version of AutoUpgrade.
In My Oracle Support note 2485457.1 you can find a complete change log. You can also visit the 21c New Features Guide for additional information.
Unplug/plug Upgrade
The first thing that I would like to highlight is unplug/plug upgrade. Instead of upgrading an entire CDB, you can now choose to unplug one or many PDBs, plug them into a higher release CDB, and upgrade the individual PDBs.
How do you specify to upgrade only one or a few PDBs? Use the config file parameters target_cdb and pdbs.
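In the config file, that could look like this sketch (names and paths are placeholders):
upg1.sid=CDB1
upg1.source_home=/u01/app/oracle/product/18.0.0.0
upg1.target_home=/u01/app/oracle/product/19.0.0.0
upg1.target_cdb=CDB2
upg1.pdbs=PDB1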
When doing unplug/plug upgrades, you must be aware of the COMPATIBLE setting in the higher release CDB. If COMPATIBLE is higher, it will silently and automatically be raised in the PDB when it plugs in. This means that you lose the capability of doing downgrades and Flashback Database.
You can read more about unplug/plug upgrades in the documentation.
AutoUpgrade and Oracle Data Guard
In addition, AutoUpgrade now detects that you are upgrading a primary database in a Data Guard setup. In that case, it will automatically defer redo log transport to the standby databases and disable the Data Guard broker. After the upgrade, the broker is automatically restarted, but you need to manually reconfigure the standby databases, and eventually re-enable redo log transport.
You might ask: why doesn't AutoUpgrade handle it all for me? This is to preserve your fallback capabilities. Even after a successful upgrade, you might still want to revert to the old version. Let's say your testing finds a critical issue. You might still have a restore point on the primary database, but what if – for some reason – flashing back fails? Then you still have your standby database, which was disabled right before the upgrade.
You can read more about AutoUpgrade and Data Guard in the documentation.
More
A few hints for the interested reader to explore. Have a look at the config file option catctl_options. You can use that to control the level of parallelism for a specific upgrade.
Also, if you don't want AutoUpgrade to handle your network files (e.g. sqlnet.ora, tnsnames.ora), you can disable that using manage_network_files.
You can read more about those options in the documentation.
Conclusion
Get the latest version of AutoUpgrade and benefit from the many new features. Keep an eye out for additional blog posts that will dig deeper into the new features.
Recently, I have been asked a few similar questions:
What happens if my SSH session with AutoUpgrade is lost? (see appendix)
What happens if AutoUpgrade crashes?
What happens if I exit the console by mistake?
First, don’t panic. Second, just restart AutoUpgrade using the same command line. During startup, AutoUpgrade will figure out that it should recover the lost session, and will restart the upgrades.
When AutoUpgrade dies or is terminated, the database upgrades that it started die with it. This could happen if you lose your SSH session. The database upgrade stops, but the database is still running, most likely in UPGRADE mode.
If you exit AutoUpgrade by mistake (typing exit in the job console), it will first stop the upgrade, and then shutdown the database.
In any case, when you afterwards restart AutoUpgrade, it will figure out that a previous AutoUpgrade session was running. It will recover information from the previous session, and if needed restart the database. After that, it will restart the upgrade. If the previous database upgrade was at phase 54, AutoUpgrade will restart from phase 54. This means that all previous work in the upgrade is preserved, and you can resume as if nothing had happened.
Don’t Recover Previous Session
Perhaps you don't want AutoUpgrade to recover the previous session. Let's imagine that AutoUpgrade crashed, and you decided to restore the database. Now you want to start all over. In that case, you need to clear the recovery data; otherwise, AutoUpgrade will get confused.
You can read more about the parameters in the documentation.
The Little Hammer (Preferred)
You can clear the recovery data for a specific job by adding clear_recovery_data on the command line and using the jobs parameter to specify exactly which jobs must have their recovery data cleared.
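A sketch of the command line – the config file name and the job number are placeholders:
java -jar autoupgrade.jar -config PROD.cfg -mode deploy -clear_recovery_data -jobs 100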
The Bigger Hammer
You can also clear the recovery data without specifying the jobs parameter. Be advised that this will then happen for all the upgrades that are specified in the config file. Remember, one of the big benefits of AutoUpgrade is that one config file can be used to upgrade 10s or 100s of databases.
The Sledgehammer
I would recommend the previous hammers and use this approach only as a last resort: delete all files that are used by AutoUpgrade.
First, delete the directory specified in global.autoupg_log_dir. Next, delete the directory specified in <prefix>.log_dir. Typically and by default, the second directory is a subdirectory to the first one, so in most cases you just have to delete the first directory. If you have multiple upgrades specified in the same config file you potentially need to delete multiple directories for prefix1.log_dir, prefix2.log_dir and so forth.
Be aware that you are clearing out all information that is used by AutoUpgrade. If you use the same global logging directory for multiple AutoUpgrade sessions (which I would not recommend), then you will be seriously messing things up. But if you are only upgrading this specific database on the server, then you can safely delete the directories to start all over.
Restoring a Test Database – Starting All Over
Very often a test database is upgraded multiple times. Even after a successful upgrade, you might want to retry the upgrade with different settings.
If you use AutoUpgrade, you must clear the recovery data as specified above. AutoUpgrade doesn't know that you have restored the database. For all it knows, the previous upgrade was successful.
Conclusion
Resuming an AutoUpgrade session is very simple. Just start AutoUpgrade with the same command line. It identifies the previous AutoUpgrade session, and resumes automatically. All the previous work is recovered, and the upgrade will resume from where it was stopped.
Appendix
Lost SSH Session
I heard from several people that they experienced the SSH session timing out because AutoUpgrade didn't produce any screen output while the upgrade took place. We have put it into our plans to make some sort of regular screen output, so this can be avoided.
Until that is implemented, I would suggest that you look at the keep-alive options in SSH:
$ man ssh
Personally, I always start SSH this way, and you can put it into your SSH config:
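For instance, using the keep-alive options (the values are just examples):
$ ssh -o ServerAliveInterval=30 -o ServerAliveCountMax=20 oracle@myhost
Or, in ~/.ssh/config:
Host *
    ServerAliveInterval 30
    ServerAliveCountMax 20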
These steps will guide you through a migration of a database using Full Transportable Export/Import (FTEX) and incremental backups. I covered the concept in a previous blog post, which you should read to understand the basics. Remember that Transportable Tablespaces and Full Transportable Export/Import require Enterprise Edition.
My demo environment looks like this:
I have a 12.1.0.2 database that I want to migrate to a PDB in a new CDB that runs 19c.
Check Prerequisites
Create a new PDB called SALES in the target CDB:
TARGET/CDB1 SQL> create pluggable database sales admin user admin identified by admin;
TARGET/CDB1 SQL> alter pluggable database sales open;
TARGET/CDB1 SQL> alter pluggable database sales save state;
Prepare the database to use TDE Tablespace Encryption:
TARGET/CDB1 SQL> alter session set container=sales;
TARGET/CDB1 SQL> administer key management set key force keystore identified by <keystore-pwd> with backup;
Verify SQL*Net connectivity from source host to target PDB:
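For instance, by connecting with SQL*Plus from the source host – the connect string is a placeholder for the service of the SALES PDB:
[oracle@source]$ sqlplus system@'//<target ip>:1521/<service_name of SALES PDB>'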
Verify database character set and national character set are the same:
SOURCE/SALES SQL> select property_name, property_value from database_properties where property_name in ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');
TARGET/SALES SQL> select property_name, property_value from database_properties where property_name in ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');
Ensure the source database is in ARCHIVELOG mode:
SOURCE/SALES SQL> select log_mode from v$database;
Enable block change tracking on the source database. It requires Enterprise Edition (on-prem), DBCS EE-EP (cloud) or Exadata. Although strictly speaking not required, it is strongly recommended:
SOURCE/SALES SQL> select status, filename from v$block_change_tracking;
SOURCE/SALES SQL> alter database enable block change tracking;
Ensure that you can connect from the source to the target host as oracle:
[oracle@source]$ ssh <target ip> date
Identify Tablespaces
Identify all the tablespaces that you will migrate. With FTEX you should transport all the tablespaces, except those that contain Oracle maintained data, like SYSTEM, SYSAUX, UNDO and so forth:
SOURCE/SALES SQL> select tablespace_name from dba_tablespaces;
Save the list of tablespaces for later. In my demo, I only have the tablespace SALES besides the Oracle maintained ones.
Next, on the target database, ensure that none of the existing tablespaces conflict with the ones you are transporting:
TARGET/SALES SQL> select tablespace_name from dba_tablespaces;
If there is a conflict of names, you have to drop or rename the tablespaces in the target database.
Download and Configure Perl Scripts
Create a folder to hold the perl scripts, download the scripts from MOS doc ID 2471245.1, and unzip:
[oracle@source]$ rm -rf /home/oracle/xtts
[oracle@source]$ mkdir /home/oracle/xtts
[oracle@source]$ cd /home/oracle/xtts
[oracle@source]$ # Download rman_xttconvert_VER4.3.zip from MOS doc ID 2471245.1
[oracle@source]$ unzip rman_xttconvert_VER4.3.zip
Create a working directory (aka. scratch location) which will hold the backups. Ensure that you have enough space at this location on both the source and target systems.
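I use the same path on both hosts (it matches the cleanup list at the end of this post). You also need to edit xtt.properties in the script folder; the property names below are how I recall them, so verify them against MOS doc ID 2471245.1:
[oracle@source]$ mkdir /u01/app/oracle/xtts_scratch
[oracle@target]$ mkdir /u01/app/oracle/xtts_scratch
[oracle@source]$ vi /home/oracle/xtts/xtt.properties
tablespaces=SALES
platformid=<platform_id of the source database from v$database>
src_scratch_location=/u01/app/oracle/xtts_scratch
dest_scratch_location=/u01/app/oracle/xtts_scratch
dest_datafile_location=<data file location in the target, e.g. +DATA>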
Now, you can start the first initial backup of the database. You take it while the source database is up and running, so it doesn't matter if the backup/restore cycle takes hours or days to complete:
[oracle@source]$ export TMPDIR=/home/oracle/xtts
[oracle@source]$ cd /home/oracle/xtts
[oracle@source]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl --backup
The perl script has been configured in such a way that it automatically transfers the backups to the target system. In addition to that, a small text file must be transferred as well:
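In version 4 of the scripts, that file is – as far as I recall – called res.txt and lives in the TMPDIR:
[oracle@source]$ scp /home/oracle/xtts/res.txt oracle@<target ip>:/home/oracle/xtts/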
Now, on the target system, you can restore the backup that was just taken. If needed, the data files are automatically converted to the proper endian format. If conversion is needed, you need space for a copy of all the data files:
[oracle@target]$ export TMPDIR=/home/oracle/xtts
[oracle@target]$ cd /home/oracle/xtts
[oracle@target]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl --restore
Incremental Backup and Restore
You can – and should – run the incremental backup and restore as many times as possible. The more frequently you run them, the faster they will run, because there will be fewer changes.
At least close to the start of the migration downtime window, you should run them often to minimize the time it will take to perform the final backup and restore:
[oracle@source]$ export TMPDIR=/home/oracle/xtts
[oracle@source]$ cd /home/oracle/xtts
[oracle@source]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl --backup
[oracle@target]$ export TMPDIR=/home/oracle/xtts
[oracle@target]$ cd /home/oracle/xtts
[oracle@target]$ $ORACLE_HOME/perl/bin/perl xttdriver.pl --restore
Import Metadata Using FTEX
Create a directory object that points to the xtts folder:
TARGET/SALES SQL> create directory LOGDIR as '/home/oracle/xtts';
Next, create a database link to the source database that can be used to import the metadata. If the source database is already a PDB, ensure that the database link points directly into the PDB:
TARGET/SALES SQL> create public database link SRCLNK connect to system identified by <password> using '//<source_ip>:1521/<service_name>';
Test that it works:
TARGET/SALES SQL> select * from dual@srclnk;
Next, create a par file (sales_imp.par) that you can use for the Data Pump import (see appendix below for explanation):
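Based on the parameter explanations in the appendix, the par file could look like this sketch. The data file path is a placeholder, and you repeat transport_datafiles for each data file:
network_link=SRCLNK
full=y
transportable=always
exclude=TABLE_STATISTICS,INDEX_STATISTICS
exclude=SYS_USER
exclude=TABLESPACE:"IN('TEMP')"
exclude=SCHEMA:"IN('SPATIAL_CSW_ADMIN_USR','SPATIAL_WFS_ADMIN_USR')"
directory=LOGDIR
logfile=sales_imp.log
transport_datafiles='+DATA/<path to the SALES data file>'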
Start Data Pump and perform the import. newsales is a TNS alias that points into the SALES PDB in the target CDB. If you have encrypted tablespaces, you should use the option encryption_pwd_prompt, which allows you to input the TDE password. It can be omitted if there are no encrypted tablespaces.
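A sketch of the import command:
[oracle@target]$ impdp system@newsales parfile=/home/oracle/xtts/sales_imp.par encryption_pwd_prompt=yes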
Once the import has completed, you should examine the Data Pump log file for any critical errors. Check the appendix (see below) for ignorable errors:
[oracle@target]$ vi /home/oracle/xtts/sales_imp.log
That’s it! Your data has been migrated. Now would be a good time to:
Test your application.
Start a backup.
Gather statistics – they were excluded from the export.
Drop the database link that points to the source database.
Cleanup the file system:
/home/oracle/xtts
/u01/app/oracle/xtts_scratch
Conclusion
Even huge, TB-sized databases can be migrated with very little downtime by using incremental backups. By using the perl scripts from My Oracle Support combined with Full Transportable Export/Import, it is a simple process. In addition, you can even migrate to a new endian format, to a higher release, and into a PDB in one operation. It requires Enterprise Edition, and you must have plenty of disk space – potentially twice the size of your database.
There is a video on our YouTube channel that you can watch. It demos the entire process. I suggest that you subscribe to our channel and get notified whenever there are new videos.
Thanks to my good colleague, Robert Pastijn, for supplying a runbook that was used as inspiration.
Appendix
If Source Database Is in OCI and Automatic Backup Is Enabled
If the source database is running in OCI and you have enabled automatic backup, you must make a few changes.
In xttprep.tmpl around line 319 change:
cp('backup for transport allow inconsistent ' ||
to
cp('set encryption off for all tablespaces;set compression algorithm "basic";backup for transport allow inconsistent ' ||
In xttdriver.pl around line 4268 change:
my $rman_str1 = "set nocfau;";
to
my $rman_str1 = "set nocfau;".
"set encryption off for all tablespaces ;".
"set compression algorithm 'basic' ;" ;
ORA-02085
If you get ORA-02085 when querying over the database link:
TARGET/SALES SQL> alter system set global_names=false;
Data Pump Parameters
Use network_link to specify the name of the database link that points back to the source database.
full=y and transportable=always instructs Data Pump to perform a full transportable export/import.
exclude=TABLE_STATISTICS,INDEX_STATISTICS exclude statistics from the import. It is better and faster to gather new, fresh statistics on the target database. If you insist on importing your statistics, you should use DBMS_STATS.
exclude=SYS_USER excludes the import of the SYS user. In a PDB that is not even allowed, and most likely you are not interested in importing the definition of the SYS user.
exclude=TABLESPACE:"IN('TEMP')" excludes the temporary tablespace from the import. Most likely there is already a temporary tablespace in the new, target PDB. It is faster to create a TEMP tablespace in advance – and name it the same as in the source database.
A change was made to Spatial in 19c and some Spatial admin users are removed. To avoid errors/noise in the log file you can safely exclude them from the import by specifying exclude=SCHEMA:"IN('SPATIAL_CSW_ADMIN_USR','SPATIAL_WFS_ADMIN_USR')".
transport_datafiles is used to specify the data files that make up the tablespaces you are transporting. Specify the parameter multiple times to specify more data files. You can use asmcmd to get the data file paths and names.
Data Pump Ignorable Errors
Multimedia is desupported in 19c, but the code is still there. You can safely disregard this error:
Processing object type DATABASE_EXPORT/NORMAL_OPTIONS/TABLE
ORA-39342: Internal error - failed to import internal objects tagged with ORDIM due to ORA-00955: name is already used by an existing object.
Processing object type DATABASE_EXPORT/SYSTEM_PROCOBJACT/POST_SYSTEM_ACTIONS/PROCACT_SYSTEM
ORA-39083: Object type PROCACT_SYSTEM failed to create with error:ORA-04042: procedure, function, package, or package body does not exist
Failing sql is:
BEGIN
SYS.DBMS_UTILITY.EXEC_DDL_STATEMENT('GRANT EXECUTE ON DBMS_DEFER_SYS TO "DBA"');COMMIT; END;
I was helping a customer the other day together with Mike. They were upgrading from 18c to 19c and had to convert the database to a PDB as well. At first glance, it seemed pretty straightforward, but things got complicated because:
They have standby databases and want the Data Guard setup to survive the plug-in operation.
They are using ASM.
They are using TDE Tablespace Encryption and have also encrypted their SYSTEM and SYSAUX tablespace.
The DBA that will carry out the upgrade and plug-in is not allowed to have the TDE Keystore password. They have separation of duties, so only the security admins have the keystore password.
Can you do that? Yes, you can! Let me tell you how.
Upgrade
First, upgrade the database. You can easily maintain the Data Guard setup during an upgrade. I wrote a blog post about it a little while ago.
In addition, to upgrade a database with encrypted tablespaces, you don't need the keystore password. You must configure the database to use an auto-login keystore, and that's it. If you are concerned about the use of an auto-login keystore, you can simply remove it again after the upgrade.
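If the database doesn't have an auto-login keystore yet, the security admins can create one from the existing password-protected keystore. A sketch, with the keystore location as a placeholder and the password matching the example further down:
SQL> administer key management create auto_login keystore from keystore '<keystore location>' identified by "S3c3tPassw0rd";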
External Store for a Keystore Password
The plug-in operation will require the keystore password. But the DBA doesn’t know it – so we need to find a solution for that.
The solution is to store the keystore password in an external store. I also wrote a blog post about that a while ago. When you have it configured you can exchange the commands that require a keystore password, like:
SQL> ADMINISTER KEY MANAGEMENT ... KEYSTORE IDENTIFIED BY "S3c3tPassw0rd";
With this:
SQL> ADMINISTER KEY MANAGEMENT ... KEYSTORE IDENTIFIED BY EXTERNAL STORE;
The database will get the keystore password from an external store, which is basically a file in the file system that is encrypted with a password that only the database knows.
The security admins would need to do this in the CDB that will receive the non-CDB database. They can do it in advance, so they can relax while the DBA carries out the operation in a maintenance window.
If the TDE keystore is already configured using the WALLET_ROOT parameter, you can use the feature right away. Otherwise, you need a database restart to configure it.
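With WALLET_ROOT in place, the security admins can add the keystore password to the external store with a command along these lines. The client identifier 'TDE_WALLET' and the tde_seps location are how I recall the documented approach, so verify both in the documentation:
SQL> administer key management add secret 'S3c3tPassw0rd' for client 'TDE_WALLET' to local auto_login keystore '<WALLET_ROOT>/tde_seps';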
Like with the auto-login keystore, if you are concerned about the security, you can simply disable it again after the operation.
Plug In
Now things get complicated. When you plug in your non-CDB database the manifest file contains information on where the data files are located – but only on the primary database. This is an extract of a manifest file (the one you create with DBMS_PDB.DESCRIBE):
After plug-in, the CDB can start to use the data files right away. It reads from the manifest file where the data files are located. But there is no information on where the files are located on the standby database.
To overcome this, you must create aliases in the ASM instance on the standby host. The aliases will point back to the original data files (used by the standby database). So, when the plug-in happens and redo starts to flow to the standby database, it will know which data files to recover. If you are storing data files in a regular file system, you could use soft links to serve the same purpose.
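In ASMCMD, such an alias is created with mkalias. This is purely illustrative – the file name and numbers are made up, and the alias path depends on what the plugged-in PDB expects – but note how the periods in the alias name are replaced with underscores (more on that below):
ASMCMD> mkalias +DATA/PROD2/DATAFILE/USERS.280.1058466047 +DATA/<path expected by the target CDB>/users_280_1058466047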
I won't repeat the procedure here, as the MOS notes describe it really well. But they don't consider the situation where your SYSTEM and/or SYSAUX tablespace is encrypted. In that case, you must import the TDE encryption keys into the target CDB before you create the pluggable database:
SQL> alter session set container=CDB$ROOT;
SQL> administer key management import keys ... keystore identified by external store ... ;
SQL> --Continue with step 17.2.2
SQL> create pluggable database .... ;
Dots and Underscores
When you follow the MOS notes you might wonder why the dots in the ASM aliases are replaced with underscores. At first glance, I had no idea, but it worked. I later learned the following:
The format for an ASM filename is [filetype|tablespacename].[ASM file number].[file incarnation], but basically it is three pieces of name separated by periods. We can’t create any filename or alias that mimics that format. So the scripts change those periods to underscores (‘_’). That is allowed.
ORA-15032 and ORA-15046
Most likely you get this error because there are already existing aliases on the ASM file. Only one alias is allowed per file.
You can verify that by using the ls command in ASMCMD. If it is an alias, the Name column will look similar to this: alias1 => +DATA/......
Ensure the database is not using the alias. If it does, rename the file in the database.
Remove the alias from ASM. It is strongly recommended to use rmalias. Although it is also possible with rm, I consider it much safer to use rmalias.
Conclusion
You can upgrade and convert your database to a PDB without compromising your standby database. In addition to that, you can configure your database in such a way that you don't even need to type in the TDE keystore password.