Grid Infrastructure 19c Out-Of-Place Patching Fails on AIX

I’m a strong advocate for out-of-place patching, and I can see that many of my blog readers are interested in that topic as well. Thank you for that!

But a reader notified me about a specific issue that occurs during out-of-place patching of Oracle Grid Infrastructure 19c. It happens with OPatchAuto as well as with SwitchGridHome.

Normally, I recommend creating a new Oracle home using the base release (so 19.3.0) and then applying the latest Release Update on top:

# Unzipping base release, 19.3.0
unzip -oq /u01/software/LINUX.X64_193000_grid_home.zip
# Install and patch Oracle home
./gridSetup.sh -ignorePrereq -waitforcompletion -silent \
   -applyRU ...

However, that fails on AIX:

Preparing the home to patch...
Applying the patch /u01/software/36916690/36916690/36917416...
OPatch command failed while applying the patch. For details look at the logs 
from /u01/app/19.25.0/grid/cfgtoollogs/opatchauto/.

The log file has a little more detail:

DeleteAction : Destination File ''/u01/app/19.25.0/grid/perl/bin/perl'' is not writeable.
Copy Action: Destination File ''/u01/app/19.25.0/grid/perl/bin/perl'' is not writeable.

The Solution

There is already a MOS note that describes a potential workaround:

Out of place (OOP) patching of 19c Release Update (RU) fails on AIX (Doc ID 2948468.1)

But the reader leaving the comment asked for a few more words.

My Words

First, you should continue to use out-of-place patching despite the above issue.

Second, instead of using the base release (19.3.0) as the basis for any new Oracle home, you must create a new base release, one that doesn't contain the error that leads to the above issue.

  1. On a non-prod system, create a brand-new Grid Infrastructure installation using the base release (19.3.0).
  2. Use in-place patching to patch it to the latest Release Update (currently 19.25.0). You need to add a few parameters to the opatchauto command:
    <path_to_temp_home>/OPatch/opatchauto \
       apply <path-to-patch> \
       -binary \
       -oh <path_to_temp_home> \
       -target_type cluster
    
  3. Create a gold image of this 19.25.0 Oracle home.
    export NEW_GRID_HOME=/u01/app/19.25.0/grid
    $NEW_GRID_HOME/gridSetup.sh -createGoldImage \
       -destinationLocation $GOLDIMAGEDIR \
       -name gi_gold_image.zip \
       -silent
    
  4. You now have a new base release. It is almost as pristine as the 19.3.0 base release. It just contains the additional Release Update (19.3.0 + 19.25.0).
  5. When you need to patch another system, use out-of-place patching with SwitchGridHome. But instead of using the base release 19.3.0, use your new gold image that is already patched to 19.25.0.
    #Don't do this
    #unzip -oq /u01/software/LINUX.X64_193000_grid_home.zip
    #Do this
    unzip -oq /u01/software/gi_gold_image.zip
    
  6. When you install using gridSetup.sh, you don't have to apply the Release Update because the gold image already contains it. You can still apply any one-offs you need.
    ./gridSetup.sh -ignorePrereq -waitforcompletion -silent \
       -applyOneOffs <path_to_one_offs> \
       ...
    
There are no other changes to the procedure.

The issue is fixed in bug 34962446. However, the fix doesn't seem to be available in 19c, so you have to repeat the above process for every Release Update.

If you still run into issues patching the Perl component, take a look at the MOS note mentioned above.

Final Words

  • Is it a viable workaround? Yes, I believe so. There's a little more work to it, but on the other hand, you've now started to use gold images, which is a huge advantage.

  • If you continue patching in-place or out-of-place using OPatchAuto, be sure to clean up the Oracle home from time to time (see the sketch after this list).

  • The issue occurs starting with Release Update 19.18 because that’s where Oracle started to add patches to Perl in the Oracle home.

  • Thanks to Axel Dellin for helping me with some details.
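
If you're unsure how to clean up, recent OPatch versions include utilities for removing files that belong to inactive patches from the Oracle home. A minimal sketch, assuming your OPatch version ships with the deleteinactivepatches utility:

# List the inactive patches that OPatch considers removable
$ORACLE_HOME/OPatch/opatch util listorderedinactivepatches
# Remove files belonging to inactive patches from .patch_storage
$ORACLE_HOME/OPatch/opatch util deleteinactivepatches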

You should not let this little bump in the road prevent you from using out-of-place patching.

Happy Patching

Reflections On 2024

2024 is coming close to an end, and what a year!

What Happened In 2024?

  • 7 releases of AutoUpgrade, which included many new features. The one I'm most proud of is AutoUpgrade Patching. Being able to patch a database with just one command is a great help.

  • My team started the Oracle DBAs run the world appreciation campaign. The biggest and most important systems have an Oracle Database underneath and a DBA next to it. A lot happens in tech these days, and I think DBAs need more appreciation. If we meet at a conference, say hello and get a sticker with the slogan. Put the sticker on your boss's door, so they are constantly reminded.

    Oracle DBAs run the world

  • My most popular blog post was How to Patch Oracle Grid Infrastructure 19c Using In-Place OPatchAuto. But did you know there’s a much better way to patch Oracle Grid Infrastructure?

  • The most popular video on our YouTube channel was Virtual Classroom Seminar #19: Move to Oracle 23ai – Everything about Multitenant – PART 1. You spent more than 1,000 hours watching it. If you need more tech content, check out the other webinars. All tech, no marketing!

  • What’s your wish for 2025? Leave a comment to make it happen.

My Wishes For Next Year

  • I wish for even more customer feedback. If you have a good idea, please don’t hesitate to reach out. This enhancement and this one come from customer feedback. Your Feedback Matters!

  • I wish you'd try AutoUpgrade Patching when you patch your database in January next year (you do that, right?). At least use it to download patches. It's so much easier than going through My Oracle Support.

  • I wish the world would be inspired by the Oracle community. Despite all our differences, we meet each other with an open mind and a smile. We share knowledge and work together towards the greater good. A big shout-out to the many volunteers that make our community thrive!

Get Ready By Stopping

When this blog post is out, I’ll already be on Christmas holiday (I started early this year). 😎🎄

I love working in tech, but you need time off to recharge your batteries. Especially as a DBA, you get frequent adrenaline kicks when making changes in production. 🪫🔌🔋

Wind down, spend time with family and friends, work on your hobby, listen to music, read a book, and learn something new. 🤗🎶📖

I’m sure 2025 will be just as awesome!

Fast Refresh Materialized Views and Migrations Using GoldenGate

Following the blog post about fast materialized views during transportable tablespace migrations, here’s a similar one for migrations using Oracle GoldenGate.

You can migrate using Oracle GoldenGate and avoid a complete refresh during downtime. Recent versions of Oracle GoldenGate can replicate materialized views and materialized view logs, and in some cases, a simple GoldenGate configuration will work fine.

However, if you are faced with a complex migration and the extract or replicat processes become a bottleneck, the below approach offers a fairly simple way to reduce the load on extract and replicat without having to perform a complete refresh during downtime.

The Basics

Like in the previous blog post, I will use an example based on a master table named SALES. If you need to catch up on materialized views, you can also check the previous blog post.

Overview of materialized view refresh process

Remote Master Table, Local Materialized View

In this example, you are migrating a database from source to target. This database holds the materialized view, and a remote database acts as the master database, where the master table and materialized view log reside.

Migrating a database which holds a materialized view that uses a remote master database

  1. In the source database, register the extract process and exclude the materialized view:
    TABLEEXCLUDE <schema>.SALES_MV
    
  2. If you configure DDL replication, I recommend excluding the materialized view and handling such changes in a different way:
    DDL EXCLUDE objtype 'snapshot'
    
  3. Perform the initial load on the target database.
    • This creates the materialized view and the database link to the remote database.
    • Start the replicat process.
  4. In the target database, perform a complete refresh of the materialized view:
    exec dbms_mview.refresh ('sales_mv','c');
    
    • This is before downtime, so who cares how long it takes to perform the complete refresh.
    • Although a fast refresh might be enough, it is better to be safe and avoid any problems with missing data.
    • You can configure a periodic fast refresh of the materialized view (see the sketch after this list).
  5. In the remote database, the materialized views from both the source and target databases are now registered:
    select owner, mview_site
    from dba_registered_mviews
    where name='SALES_MV';
    
  6. Downtime starts.
  7. Complete the migration tasks needed to move over to the target database. This is out of scope of this blog post.
  8. Shut down the source database.
  9. In the remote database, purge materialized view log entries that are related to the materialized view in the source database:
    exec dbms_mview.purge_mview_from_log('<mview-owner>', 'SALES_MV', '<source-db>');
    
  10. Unregister the materialized view in the source database:
    exec dbms_mview.unregister_mview('<mview-owner>', 'SALES_MV', '<source-db>');
    
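As mentioned in task 4, you can schedule the periodic fast refresh while you wait for the downtime window. A minimal sketch using DBMS_SCHEDULER; the job name and the hourly interval are examples only:

begin
   dbms_scheduler.create_job(
      job_name        => 'refresh_sales_mv',   -- hypothetical job name
      job_type        => 'PLSQL_BLOCK',
      job_action      => q'[begin dbms_mview.refresh('sales_mv', 'f'); end;]',
      start_date      => systimestamp,
      repeat_interval => 'FREQ=HOURLY',
      enabled         => true);
end;
/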

If you want to preserve the source database as a fallback, so you can revert the application after moving to the target database, postpone tasks 8-10.

Local Master Table, Remote Materialized View

In this example, you are migrating a database from source to target. This database holds the master table and materialized view log, while a remote database contains the materialized view.

Migrating a database which acts as master database holding a master table and materialized view log

  1. In the source database, register the extract process and include the master table, but exclude the materialized view log:
    TABLE <schema>.SALES
    TABLEEXCLUDE <schema>.MLOG$_SALES
    
  2. If you configure DDL replication, Oracle GoldenGate should automatically exclude the materialized view log. However, you can explicitly exclude it to be safe:
    DDL EXCLUDE objtype 'snapshot log'
    
  3. Perform the initial load on the target database.
    • This creates the master table and the materialized view log.
  4. In the target database, no remote database is using the master table yet, but the replicat process keeps it up-to-date. However, the materialized view log might contain orphan rows from the source database.
    • Drop and recreate the materialized view log.
  5. In the remote database, create a new database link to the target database and a new materialized view based on the master table in the target database.
    create database link ... using '<target-tns-alias>';
    create materialized view sales_mv2 ... ;
    
    • SALES_MV2 should look exactly like SALES_MV except that it fetches from the target database instead of the source database.
  6. Perform an initial complete refresh of SALES_MV2:
    exec dbms_mview.refresh ('sales_mv2','c');
    
    • The materialized view is not used by queries yet, so who cares how long it takes to perform the complete refresh.
    • You can configure a periodic refresh of the materialized view.
  7. Create a synonym that initially points to SALES_MV – the materialized view based on the source database. You will change it later on.
    create synonym sales_syn for sales_mv;
    
  8. Change your queries to reference SALES_SYN instead of SALES_MV directly.
    • You do this in a controlled manner ahead of the downtime window.
    • You can use auditing to detect usages of the materialized view (SALES_MV) and change all of them to use the synonym (SALES_SYN) instead (see the sketch after this list).
  9. Downtime starts.
  10. Complete the migration tasks needed to move over to the target database. This is out of scope of this blog post.
  11. In the remote database, change the synonym to point to the materialized view that accesses the target database.
    create or replace synonym sales_syn for sales_mv2;
    
    • No application changes are needed because you made the applications use the synonym instead.
    • When you change the synonym to point to the new materialized view, this change is completely transparent to the application.
  12. Drop the materialized view that accesses the source database.
    drop materialized view sales_mv;
    
  13. Shut down the source database.
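
For task 8, traditional auditing can reveal which users still query the materialized view directly. A minimal sketch, assuming traditional auditing is in use; with unified auditing, you would create an audit policy instead:

-- Record every direct read of the materialized view
audit select on <schema>.sales_mv by access;

-- Later, review who still queries SALES_MV directly
select username, timestamp
from dba_audit_trail
where obj_name = 'SALES_MV';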

If you want to preserve the source database as a fallback, so you can revert the application after moving to the target database, postpone tasks 12-13.

If you can’t change the application to use the synonym (with a different name), then there’s another approach:

  • Keep accessing the SALES_MV until the downtime window. Don’t create the synonym yet.
  • Drop the original materialized view: drop materialized view sales_mv.
  • Create the synonym: create synonym sales_mv for sales_mv2.

Local Master Table, Local Materialized View

In this example, you are migrating a database from source to target. This database holds the master table, the materialized view log, and the materialized view. There is no remote database involved.

Migrating a database which acts as master database holding a master table and materialized view log

  1. In the source database, register the extract process and include the master table, but exclude the materialized view and materialized view log:
    TABLE <schema>.SALES
    TABLEEXCLUDE <schema>.SALES_MV
    TABLEEXCLUDE <schema>.MLOG$_SALES
    
  2. If you configure DDL replication, I recommend excluding the materialized view and materialized view log and handling such changes in a different way:
    DDL EXCLUDE objtype 'snapshot', EXCLUDE objtype 'snapshot log'
    
  3. Perform the initial load on the target database.
    • This creates the master table, the materialized view, and the materialized view log.
  4. In the target database, the replicat process replicates changes to the master table. No replication takes place on the materialized view or the materialized view log.
  5. Perform an initial complete refresh of SALES_MV:
    exec dbms_mview.refresh ('sales_mv','c');
    
    • The refresh uses the master table in the target database. The replicat process is keeping the master table up-to-date.
    • The materialized view is not used by queries yet, so who cares how long it takes to perform the complete refresh.
    • You can configure a periodic refresh of the materialized view.
  6. Downtime starts.
  7. Complete the migration tasks needed to move over to the target database. This is out of scope of this blog post.
  8. Shut down the source database.
  9. Keep an eye on the materialized view log (MLOG$_SALES) and ensure it doesn’t grow beyond reason.
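    • A simple way to monitor the log is to track its row count or segment size (a sketch; the thresholds are up to you):
    select count(*) from <schema>.mlog$_sales;

    select bytes/1024/1024 as mb
    from dba_segments
    where owner = '<SCHEMA>'
    and segment_name = 'MLOG$_SALES';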

If you want to preserve the source database as a fallback, so you can revert the application after moving to the target database, postpone task 8.

Further Reading

Fast Refresh Materialized Views and Migrations Using Transportable Tablespaces

In some databases, fast refresh materialized views are important for good performance. During a migration using transportable tablespaces, how do you ensure the materialized views are kept up-to-date and avoid a costly complete refresh or stale data?

The Basics

The database populates a materialized view with data from a master table. To enable fast refresh on a materialized view, you must first create a materialized view log on the master table. The materialized view log captures changes on the master table. Then, the database applies the changes to the materialized view, thus avoiding a complete refresh.

When a user changes data in the master table (SALES), those changes are recorded in the materialized view log (MLOG$_SALES) and then used to refresh the materialized view (SALES_MV).

The master table and the materialized view log must reside in the same database (known as the master database). The materialized view is often in a different database, and then it uses a database link to fetch the changes.

You only need materialized view logs if the materialized view is a fast refresh type.

A fast refresh materialized view needs to perform a complete refresh the first time, before it can move on with fast refreshes (thanks to my good friend, Klaus, for leaving a comment).
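
To make the moving parts concrete, here is a minimal sketch of the SALES example. The WITH PRIMARY KEY clause and the database link name master_db are illustrative:

-- In the master database: the log captures changes to the master table
create materialized view log on sales with primary key;

-- In the database holding the materialized view: fast refresh over a database link
create materialized view sales_mv
   refresh fast on demand
   as select * from sales@master_db;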

Remote Master Table, Local Materialized View

In this example, you are migrating a database from source to target. This database holds the materialized view, and a remote database acts as the master database, where the master table and materialized view log reside.

Migrating a database which holds a materialized view that uses a remote master database

The materialized view and the underlying segment are stored in a tablespace in the source database. That segment is used to recreate the materialized view in the target database without any refresh needed.

  1. You must perform the following in the migration downtime window.
  2. In the source database, stop any periodic refresh of the materialized view.
  3. Optionally, perform a fast refresh of the materialized view:
    exec dbms_mview.refresh ('sales_mv','f');
    
  4. Start the migration using transportable tablespaces.
    • Set the tablespaces read-only.
    • Perform a final incremental backup.
    • Data Pump sets the tablespaces read-write in the target database.
    • Shut down the source database. Otherwise, you might risk it interfering with the target database’s refresh.
  5. In the target database, you can now perform a fast refresh.
    • Data Pump has already recreated the database link to the remote master database.
    • Although the materialized view is now in a different database, it can fetch the recent changes from the master database starting with the last refresh in the source database.
  6. In the master database, both materialized views from the source and target database are now registered:
    select * from dba_registered_mviews where name='SALES_MV';
    
  7. Purge materialized view log entries that are related to the materialized view in the source database:
    exec dbms_mview.purge_mview_from_log('<mview-owner>', 'SALES_MV', '<source-db>');
    
  8. Unregister the materialized view in the source database:
    exec dbms_mview.unregister_mview('<mview-owner>', 'SALES_MV', '<source-db>');
    
  9. In the target database, re-enable periodic refresh of the materialized view.

Local Master Table, Remote Materialized View

In this example, you are migrating a database from source to target. This database holds the master table and materialized view log, while a remote database contains the materialized view.

Migrating a database which acts as master database holding a master table and materialized view log

The master table and materialized view log are stored in a tablespace in the source database. The migration moves the data to the target database. The materialized view remains in the remote database and accesses the master table over a database link.

  1. You must perform the following in the migration downtime window.
  2. In the remote database, stop any periodic refresh of the materialized view.
  3. Optionally, perform a fast refresh of the materialized view:
    exec dbms_mview.refresh ('sales_mv','f');
    
  4. Start the migration using transportable tablespaces.
    • Set the tablespaces read-only.
    • Perform a final incremental backup.
    • Data Pump sets the tablespaces read-write in the target database.
    • Shut down the source database. Otherwise, you might risk it interfering with the target database’s refresh.
  5. In the remote database, ensure that the database link now points to the new target database.
    • If the database link uses a TNS alias, you can update it.
    • Or recreate the database link with a new connect descriptor (see the sketch after this list).
  6. Perform a fast refresh.
    exec dbms_mview.refresh ('sales_mv','f');
    
    • If you hit ORA-04062: timestamp of package "SYS.DBMS_SNAPSHOT_UTL" has been changed, run the refresh again.
  7. Re-enable periodic refresh of the materialized view.
  8. In the target database, ensure that the materialized view log is now empty.
    select * from mlog$_sales;
    
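For task 5, recreating the database link in the remote database could look like this; the link name and credentials are placeholders:

drop database link sales_link;
create database link sales_link
   connect to <mview-owner> identified by <password>
   using '<target-tns-alias>';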

Local Master Table, Local Materialized View

In this example, you are migrating a database from source to target. This database holds the master table, the materialized view log, and the materialized view. There is no remote database involved.

Migrating a database which acts as master database holding a master table and materialized view log

The master table and materialized view log are stored in a tablespace in the source database. The migration moves the data to the target database. The materialized view is in the same database, so there is no remote database or database link involved.

  1. You must perform the following in the migration downtime window.
  2. In the source database, stop any periodic refresh of the materialized view.
  3. Perform a fast refresh of the materialized view:
    exec dbms_mview.refresh ('sales_mv','f');
    
  4. Ensure the materialized view log is empty, i.e., all rows have been refreshed into the materialized view.
    select count(*) from mlog$_sales;
    
    • The query must return 0. If it returns more, perform an additional fast refresh.
    • If a remote materialized view also uses the materialized view log, it is acceptable to move on, as long as you are sure the local materialized view is completely up-to-date.
  5. Start the migration using transportable tablespaces.
    • Set the tablespaces read-only.
    • Perform a final incremental backup.
    • Data Pump sets the tablespaces read-write in the target database.
    • Shut down the source database. Otherwise, you might risk it interfering with the target database’s refresh.
  6. In the target database, perform a fast refresh.
    exec dbms_mview.refresh ('sales_mv','f');
    
  7. Re-enable periodic refresh of the materialized view.
  8. Ensure that the materialized view log is now empty.
    select * from mlog$_sales;
    

A word of caution here. The materialized view must be completely up-to-date in the source database before the migration. After the migration, the same materialized view won’t be able to refresh the pre-migration rows. That is why you are checking for rows in mlog$_sales.

Any new changes made in the target database will sync fine.

Further Reading

AutoUpgrade New Features: Upgrade RMAN Catalog Schema

With the latest version, 24.8, AutoUpgrade can upgrade the RMAN catalog schema after patching and upgrading. This is useful to those who take RMAN backups and store their RMAN metadata in a recovery catalog database.

If you don’t upgrade the catalog schema after patching and upgrading, you’ll see this message in the RMAN output:

PL/SQL package RCO.DBMS_RCVCAT version 19.24.00.00 in RCVCAT database is not current
PL/SQL package RCO.DBMS_RCVMAN version 19.24.00.00 in RCVCAT database is not current

Details

  • After patching or upgrading, AutoUpgrade upgrades the RMAN catalog schema in the postupgrade stage.
  • AutoUpgrade connects with RMAN to the recovery catalog and issues the upgrade catalog command.
  • AutoUpgrade does not execute dbmsrmansys.sql. Normally, this is only needed for the upgrade of the first catalog schema of a given release (like for the first database on Oracle Database 23ai), and even then, it might not be needed.

How To

  • Specify the connect string to the catalog database in the AutoUpgrade config file:

    <prefix>.rman_catalog_connect_string=catalogdb
    
    • catalogdb is a TNS alias to the catalog database.
  • Start AutoUpgrade in load-password mode to store the username and password for the recovery catalog:

    java -jar autoupgrade.jar -config ... -load_password
    
  • Switch to the password group RMAN:

    group rman
    
  • Add the username and password for a specific database:

    add <ORACLE_SID> -user <catalog_schema_name>
    
    • AutoUpgrade prompts for the password.
  • Save the changes and exit the load password console.

    save
    exit
    
  • Start AutoUpgrade in deploy mode:

    java -jar autoupgrade.jar -config ... -mode deploy
    
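To verify the catalog schema version afterwards, you can query the RCVER table in the catalog schema. A quick sketch, assuming the catalog owner is RCO:

select version from rco.rcver;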

Happy Upgrading, Happy Patching

  • You can enhance the solution using an after_action script that starts a level 1 backup after the job. The after_action script runs after the postupgrade stage, where AutoUpgrade upgrades the catalog schema (see the config sketch after this list).

  • Version 24.8 of AutoUpgrade does not support this feature when you use the -patch command line option. This is coming in a later version.
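
For the level 1 backup idea above, a config file sketch; the script path is hypothetical:

upg1.after_action=/home/oracle/scripts/level1_backup.sh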

Appendix

Further Reading

Invalid credentials

  • When you enter the catalog credentials into the AutoUpgrade keystore, AutoUpgrade validates the credentials. Any errors result in AutoUpgrade returning the following message:

    Invalid credentials, please try again.
    
  • To debug, run the following command:

    $ORACLE_HOME/bin/rman TARGET / rcvcat <catalog_user>@<rman_catalog_connect_string>
    
  • Check the log files in:

    <global_log_dir>/log/cfgtoollogs/upgrade/auto
    

AutoUpgrade New Features: Drop Database Link When Using Refreshable Clones

With the latest version, 24.8, AutoUpgrade can drop the database link after using refreshable clones.

Details

  • Certain jobs in AutoUpgrade require a database link to the source database.
  • Whenever you specify a database link using source_dblink, you can optionally instruct AutoUpgrade to drop it.
  • The default value is no, meaning AutoUpgrade leaves the database link in place.

How To

  • Get the latest version of AutoUpgrade:

    wget https://download.oracle.com/otn-pub/otn_software/autoupgrade.jar
    
  • Instruct AutoUpgrade to drop the database link after completing the job:

    upg1.drop_dblink=yes
    
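Putting it together, a config file sketch; the PDB name, link name, and refresh interval are examples, and you should check the AutoUpgrade documentation for the exact source_dblink syntax:

upg1.source_dblink.SALES=SALESLNK 600
upg1.drop_dblink=yes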

Happy Upgrading

I’m a huge fan of using refreshable clones for upgrades and non-CDB to PDB migrations.

Granted, this is not the most ground-breaking enhancement we’ve introduced. But it’s yet another thing that makes your life a little easier.

What do you think could make AutoUpgrade even easier to use? Leave a comment and let us know.

Control The Final Refresh When Using Refreshable Clone PDBs in AutoUpgrade

When you migrate or upgrade with refreshable clone PDBs, you sometimes want to decide when the final refresh happens. Perhaps you must finish certain activities in the source database before moving on.

Here’s how to do that in AutoUpgrade.

The Final Refresh Dilemma

In AutoUpgrade, the final refresh happens at the time specified by the config file parameter start_time.

Overview of the phases when using refreshable clone PDBs

You specify start_time in the config file, but once you start the job, you cannot change it. Remember that you normally start AutoUpgrade a long time before start_time to allow the creation of the new PDB.

In some situations, you want more control. You might want to finish some work on the source database before AutoUpgrade starts the final refresh. Perhaps you need to kick users off or coordinate activities with other teams.

In that case, a fixed start time is not very flexible.

The Solution

Update: Check out the new proceed command.

Imagine my downtime window starts on Saturday, 30 November, at 02:00.

At that time, I need to ask the application team to shut down the applications in the database, run certain pre-migration tasks, and finally kill sessions if needed. So, I don't want AutoUpgrade to start at 02:00 – I want to decide at which point after 02:00 AutoUpgrade should start.

Here's my approach:

  • I create a config file and set the start_time parameter to the start of my downtime window.
    upg1.start_time=30/11/2024 02:00:00
    
  • I start AutoUpgrade in deploy mode before my downtime starts:
    java -jar autoupgrade.jar -config ... -mode deploy
    
    • I must start well before the downtime window so AutoUpgrade has enough time to copy the database.
    • Imagine my tests show it takes around four hours to copy the database. I decide to start on Friday, 29 November, at 16:00, so the copy should end around 20:00 – well ahead of my downtime window.
  • AutoUpgrade now starts the CLONEPDB phase:
    +----+-------+--------+---------+-------+----------+-------+---------------------------+
    |Job#|DB_NAME|   STAGE|OPERATION| STATUS|START_TIME|UPDATED|                    MESSAGE|
    +----+-------+--------+---------+-------+----------+-------+---------------------------+
    | 100|   TEAL|CLONEPDB|EXECUTING|RUNNING|  02:00:00| 4s ago|Creating pluggable database|
    +----+-------+--------+---------+-------+----------+-------+---------------------------+
    
    • Note the START_TIME value. It is the time when the final refresh happens.
  • I wait for AutoUpgrade to create the PDB and enter the REFRESHPDB phase:
    +----+-------+----------+---------+-------+----------+-------+----------------------+
    |Job#|DB_NAME|     STAGE|OPERATION| STATUS|START_TIME|UPDATED|               MESSAGE|
    +----+-------+----------+---------+-------+----------+-------+----------------------+
    | 100|   TEAL|REFRESHPDB|EXECUTING|RUNNING|  02:00:00| 2s ago|PDB TEAL was refreshed|
    +----+-------+----------+---------+-------+----------+-------+----------------------+
    
  • Then I stop the job:
    upg> stop -job 100
    
    • If I exit AutoUpgrade after stopping the job, don't worry. As soon as I restart AutoUpgrade, it picks up where it left off and continues the job.
  • When I stop the job, there is no periodic refresh. I should refresh the PDB in the target CDB manually at regular intervals:
    SQL> alter pluggable database teal refresh;
    
    • If I don’t perform any periodic refresh, the redo will accumulate, and the final refresh will take longer. Keep the final refresh shorter by refreshing more often.
  • After the start of my downtime window (the start_time parameter), when I’m done on the source database and want to proceed with the final refresh, I resume the job in AutoUpgrade.
    upg> resume -job 100
    
  • AutoUpgrade now realizes it is past the defined start_time and immediately moves on with the final refresh and the rest of the job.

Wrapping Up

Ideally, AutoUpgrade should offer better control over the process. We have a task on our backlog to come up with a better solution.

Update: Use the proceed command in AutoUpgrade to control the start time

However, refreshable clone PDBs are still a fantastic method for non-CDB to PDB migrations and for upgrades of individual PDBs.

There are a few quirks to be aware of, and if you are using Data Guard, bear in mind that you can only plug in with deferred recovery. Other than that – it's just to say…

Happy Migrating!

Further Reading

AutoUpgrade New Features: Update Enterprise Manager Configuration

With the latest version, 24.8, AutoUpgrade can interact with Oracle Enterprise Manager:

  • Blackout the database
  • Modify target configuration

Details

  • AutoUpgrade interacts with Enterprise Manager through the command line interface, EM CLI.
  • In the DRAIN phase, AutoUpgrade creates a blackout of the target. It will also remove the blackout again after the job.
  • The blackout is named blackout_<SID>.
  • AutoUpgrade updates the target configuration in Enterprise Manager to reflect the new Oracle home if the job succeeds.
  • If you restore the job (returning to the pre-upgrade state with Flashback Database), AutoUpgrade restores the original target configuration.

How To

  • Install and configure EM CLI on the database host.

  • Persist the credentials on the database host using the EM CLI autologin feature (see the sketch after this list).

  • Get the latest version of AutoUpgrade:

    wget https://download.oracle.com/otn-pub/otn_software/autoupgrade.jar
    
  • Specify the path to EM CLI in the AutoUpgrade config file:

    upg1.emcli_path=/u01/app/oracle/oem
    
  • Specify the target name from Enterprise Manager:

    upg1.em_target_name=ORCL_myhost.domain.int
    
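A one-time EM CLI setup with persisted credentials might look like this; the URL and username are examples:

emcli setup -url="https://emcc.example.com:7803/em" -username=sysman -autologin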

Happy Upgrading

You can use the functionality when you patch and upgrade your Oracle Database.

If you use Oracle Enterprise Manager, AutoUpgrade can handle even more tasks in your runbook.

What are the next tasks that AutoUpgrade should do for you? Leave a comment, and we’ll consider it.

Easier Patching of Oracle Database on Windows

We just released a new version, 24.8, of AutoUpgrade that supports the new patching features on Windows.

We also made it easier to download the latest version of AutoUpgrade:

wget https://download.oracle.com/otn-pub/otn_software/autoupgrade.jar

How Does It Work?

Here’s a simple demo of the new functionality.

Also, you can get much more information about the new patching features in general in our webinar One-Button Patching – makes life easier for every Oracle DBA.

A Few Notes

Credential File

Currently, if you use a credential file to store the username/password of the Windows service, AutoUpgrade deletes it afterwards. We consider that the most secure approach.

But that’s not suitable for automation, so in the next version of AutoUpgrade, a config file parameter will allow you to re-use the credential file repeatedly.

Missing Registry Entries

If you use the credential file – depending on your configuration – you might see that entries are missing in the Windows registry for the database instance.

You can add those manually after patching by executing:

%NEW_ORACLE_HOME%\bin\oradim.exe -edit -sid %ORACLE_SID% -startmode auto

Oradim detects that the registry keys are missing and adds them.

This will also be automated in a future version of AutoUpgrade. We’re also working on adding support for virtual users.

Happy Patching

AutoUpgrade patching makes it easier to patch your Oracle Database.

Give it a try on Windows, and let us know if something is missing.

Patching Oracle Database should be as easy as patching a smartphone.

How to Patch Oracle Restart 19c and Oracle Data Guard Using Out-Of-Place Switch Home

I strongly recommend that you always patch out-of-place. Here’s an example of how to do it on Oracle Restart and Oracle Data Guard.

This procedure is very similar to patching without Oracle Data Guard, so I’ll often refer to a previous blog post.

My demo system

  • Data Guard setup with two single-instance databases in an Oracle Restart configuration
  • GI and database home are currently on 19.24

I want to:

  • patch to 19.25
  • patch both the GI and database home in one operation

Preparation

  • I download the same patches as mentioned in the Preparations section.

  • I place the software in /u01/software, which is an NFS share accessible to both servers.

How to Patch Oracle Restart 19c and Oracle Database

1. Prepare a New GI Home

I can do this in advance. It doesn’t affect my current environment and doesn’t cause any downtime.

2. Prepare a New Database Home

I can do this in advance. It doesn’t affect my current environment and doesn’t cause any downtime.

3. Prepare Database

I can do this in advance. It doesn’t affect my current environment and doesn’t cause any downtime.

4. Standby Database: Switch to the New GI and Database Homes

Now, I can complete the patching process on the standby database by switching to the new Oracle homes.

  • I perform step 4, Switch to the New GI and Database Homes, on the standby host.

  • There is no downtime on the primary database, but the standby database will be down briefly, which might matter to you if you use Active Data Guard.

  • I make sure the standby database is brought back to the desired mode (mount or open). I check that redo apply is running (see the sketch below).
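
A quick way to check redo apply, as a sketch; you could query the Data Guard broker with DGMGRL instead:

select process, status
from v$managed_standby
where process like 'MRP%';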

5. Primary Database: Switch to the New GI and Database Homes

Next, I can complete the patching process on the primary database by switching to the new Oracle homes.

6. Complete Patching

Now, both databases in my Data Guard configuration run out of the new Oracle homes.

Only proceed with this step once all databases run out of the new Oracle homes.

That’s it! I have now patched my Oracle Restart and Data Guard deployment.

Happy Patching!

Appendix

Check the appendix in the related blog post for additional information and troubleshooting.

Further Reading

Other Blog Posts in This Series