Why Are You Not Using Bigfile Tablespaces?

Recently, I worked on two cases that could be solved using bigfile tablespaces.

Bigfile tablespaces have been around for ages but are typically not the default. If you use the CREATE TABLESPACE command and don’t specify a tablespace type, you will probably get a smallfile tablespace.
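
If you want a bigfile tablespace, you can ask for one explicitly, or change the database default. A minimal sketch (the tablespace name, file path, and sizes are placeholders):

SQL> create bigfile tablespace mydata datafile '/u01/oradata/MYDB/mydata01.dbf' size 100G autoextend on maxsize 32T;
SQL> alter database set default bigfile tablespace;

After the second command, CREATE TABLESPACE without an explicit type creates bigfile tablespaces.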

In a smallfile tablespace in an 8k block size database, each data file can only be 32 GB. For a huge tablespace – double-digit TB – you end up with many files. In a bigfile tablespace, there is only one data file, but it can grow up to 32 TB. Further, Oracle Autonomous Database comes with bigfile tablespaces.
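
The arithmetic behind those limits: a smallfile data file holds at most about 4 million blocks (4M x 8 KB = 32 GB), while a bigfile data file holds about 4 billion blocks (4G x 8 KB = 32 TB). To see which type your existing tablespaces are, query the BIGFILE column:

SQL> select tablespace_name, bigfile from dba_tablespaces;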

Frustrated by the problems with smallfile tablespaces, I wanted to understand the reluctance to use bigfile tablespaces.

I asked around: Why are you not using bigfile tablespaces?

The Answers

They Take a Long Time to Back Up

True, unless you use multisection backups. With multisection backups, multiple channels can work on the same data file, and you should not see performance bottlenecks on bigfile tablespaces.

Multisection backups are simple to perform. Just add the SECTION SIZE clause to the RMAN BACKUP command.

RMAN> backup database ... section size 30G ...
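
As a slightly fuller sketch, you could configure several channels and let them share the work in 30 GB sections (the parallelism and section size are examples; tune both to your system):

RMAN> configure device type disk parallelism 8;
RMAN> backup database section size 30G;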

Recovering a Bigfile Data File Is a Pain

If you accidentally delete a bigfile data file, restoring it is very time-consuming – the file could be huge, potentially 32 TB. In contrast, a deleted smallfile data file is much smaller – at most 32 GB.

How often do you accidentally delete a data file? OK – some of us might know a horror story from back in the day – but it is very rare.

Should it happen, you can switch to your standby database while restoring the data file.
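
With Data Guard broker, a switchover is a single command (the database name is a placeholder):

DGMGRL> switchover to mydb_stby;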

Plus, if you use Oracle Automatic Storage Management (ASM), you can’t accidentally delete a data file. As long as a data file is in use by a database, asmcmd refuses to delete it.
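
As an illustration, an attempt to remove an open data file in asmcmd fails with something like the following (the path and exact error stack are made up for the example and vary by version):

ASMCMD> rm +DATA/MYDB/DATAFILE/mydata.262.1151234567
ORA-15028: ASM file '+DATA/MYDB/DATAFILE/mydata.262.1151234567' not dropped; currently being accessed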

Incompatibility With File System

I know of at least one operating system whose older versions had a maximum file size of 16 TB. This is no longer a problem on modern operating systems. Nor is it a problem if you use ASM.

Someone even claimed the same limit applied to certain tape systems, although I couldn’t verify it.

It’s Always Been Like This

In the old days, I understand why you would want to use smallfile tablespaces. But not anymore – the downsides are gone.

But, it’s hard to migrate to bigfile tablespaces!

No, it is not.

In recent versions of Oracle Database, you can move tables into a new tablespace without downtime. The same applies to indexes, which you can rebuild into a new tablespace. These are all online operations. No downtime, no interruptions!
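
A minimal sketch of such a migration, assuming Oracle Database 12.2 or later and placeholder object names:

SQL> alter table sales move online tablespace bigdata;
SQL> alter index sales_idx rebuild online tablespace bigdata;

For partitioned tables, you would move the partitions instead, e.g. with ALTER TABLE ... MOVE PARTITION ... ONLINE.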

LOB segments might be a little trickier. DBMS_REDEFINITION could be a solution; see the sketch below.
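
A minimal DBMS_REDEFINITION sketch (the schema, table, and interim table names are placeholders; ORDERS_BF would be created up front in the bigfile tablespace with the same columns):

declare
  l_errors pls_integer;
begin
  -- verify the table can be redefined using its primary key
  dbms_redefinition.can_redef_table('APP', 'ORDERS');
  -- start copying rows into the interim table
  dbms_redefinition.start_redef_table('APP', 'ORDERS', 'ORDERS_BF');
  -- clone indexes, triggers, constraints, and grants
  dbms_redefinition.copy_table_dependents('APP', 'ORDERS', 'ORDERS_BF',
                                          num_errors => l_errors);
  -- swap the tables; ORDERS now lives in the bigfile tablespace
  dbms_redefinition.finish_redef_table('APP', 'ORDERS', 'ORDERS_BF');
end;
/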

Some of these options require additional licenses. Always check the license guide before you start to use a new feature.

Still, I do understand that for existing databases, you don’t want to migrate. But for new tablespaces and new databases, I’d always go with bigfile tablespaces.

Let Me Know

I don’t see a reason for using smallfile tablespaces anymore.

But I’m curious. Why are you not using bigfile tablespaces? Please leave a comment with your reason or concern.

By the way – an Oracle Database can hold only a limited number of data files; the hard limit is 65,533. With 32 GB smallfile data files, that caps a database at roughly 2 PB, so when you start to go into petabytes, you’ll need bigfile tablespaces.
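
You can check how many data files you already have, and the current soft limit, like this:

SQL> select count(*) from v$datafile;
SQL> select value from v$parameter where name = 'db_files';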

Updates to Blog Post

The More Data Files, the Longer Switchover Time

Ludovico Caldara, the Data Guard PM, pointed out on LinkedIn that the more data files you have, the longer a Data Guard switchover takes.

Contention When File Expands

dbroots left a comment referencing this MOS note: Alter Tablespace Resize on Bigfile Tablespace Causes Sessions to Hang with ‘Enq: TX – contention’ and ‘Buffer busy waits’ (Doc ID 2089689.1).

For very busy systems, you might see short waits while the data file expands in a bigfile tablespace. If it becomes too big a problem, expand the data file manually in off-peak periods, as sketched below.
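
With a bigfile tablespace, you can pre-extend the single data file directly at the tablespace level during a quiet period (the tablespace name and size are examples):

SQL> alter tablespace bigdata resize 40T;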

26 thoughts on “Why Are You Not Using Bigfile Tablespaces?”

  1. You cannot use multisection backups in a Data Guard environment where backups are scheduled on the standby in order to relieve the primary of any performance impact during a backup.

  2. Hi Daniel,

    So far, at least until 19c, PDB cloning does not support a parallel copy of a single data file, as only one process is used per data file. If the PDB has just one large bigfile tablespace, it will take terribly long compared with smallfile tablespaces. Are you aware of this limitation, and will Oracle provide a solution to this problem in the future?

    Regards,
    David

  3. First: I totally support that point of view. And I’d really like to migrate certain databases to a bigfile tablespace. But currently the migration path might be a bit complex. For instance, we have some databases with more than 11,000 segments in the tablespace. Migrating that to a new tablespace with a new name can be a lot of work depending on the segment type. So there’s still no easy way to do that for all segments in one command while retaining the old tablespace name (which certain products require).

  4. Avid smallfile TS user here – reasons:

    – Many smaller databases: consistency in handling.

    – No bigfile tablespaces on UNIX like AIX or HP-UX => consistency across all databases in a multiplatform OS zoo. And yes: our biggest Oracle databases still run under Unix flavours like AIX and HP-UX. But replatforming is coming up …

    Christian

  5. I do have such a >30 TB table on 256 partitions with one smallfile tablespace for each partition, but on a 32k block size to have fewer issues with those small files.

    On the points that made me take that decision:

    1st and most important: all my experience was with smallfile tablespaces, so I feared going with a feature I had never used before and running into unexpected stuff in production.
    2nd: I guessed more files allow better parallelism, as e.g. a file resize probably locks the whole file for a split second, and this distributes better across more files. Not sure if this would really be an issue … but just guessing. For backups this is true, but I wasn’t aware of that section size feature.
    3rd: migrating afterwards, now with more knowledge, is too much pain, testing, and risk, while my 128 GB files are no real pain at all. I am pretty happy with the way this is built now.
    4th: DBMS_REDEF… can do that online, I know. But it is too much of a magic black box to let it work on >30 TB of critical data for days, ignoring that I would need an additional >30 TB to even attempt that.

    So no reasons not to use bigfiles, but reasons to keep a running system untouched ;-).

    Regards,
    Robert

    PS: Nice blog … I enjoy reading it and learn a lot on the way.

  6. Hi,

    We prefer smallfile tablespaces when we run Oracle single instance on Windows and Linux.

    It may be a challenge to extend the disk on Windows and the file system partition on Linux when we have a bigfile tablespace.

    In many of our projects running Oracle RAC on ASM, we have bigfile tablespaces, and they work fine without any issues.

    Regards,
    Thiru

  7. Hi Daniel, very good article. We use bigfile by default, but in some cases we still see some wait events when extending the tablespace. Is this still related to the doc below?
    Alter Tablespace Resize on Bigfile Tablespace Causes Sessions to Hang with ‘Enq: TX – contention’ and ‘Buffer busy waits’ (Doc ID 2089689.1)

  8. Hi Daniel,

    Maybe it’s not a perfect example. But when you use small files, you can probably reduce/divide I/O operations by splitting them across different disk groups. This is the only thing that came to my mind.

  9. Hi,
    That might be a good argument. However, nowadays storage systems have become more advanced, with different tiers of storage, auto-tiering, and flash caches. Further, with ASM you shouldn’t need this kind of file management. But I guess in some cases it could make sense.
    Regards,
    Daniel

  10. Hi,
    I didn’t know that. Thanks for sharing. If the issue becomes too much of a problem (which I’d expect to happen only on very busy systems), then you could expand the data file in off-peak hours.
    But it’s good to know.
    Regards,
    Daniel

  11. Is that still a problem? It’s been many years since I worked as a DBA on Windows. I do recall having some bigfile tablespaces. But OK – file management in Windows can be a little tricky.

  12. I totally understand your point. Is it really worth the effort to migrate existing tablespaces? Perhaps not – leave existing tablespaces as they are. But for new tablespaces and new databases, I’d go bigfile all the time.

  13. Hi David,
    I didn’t know that. Thanks for sharing.
    My gut feeling is that the problem is solved in 19c. In Autonomous Database, only bigfile tablespaces are used. And in ADB, the system is moving PDBs around all the time. If such a restriction still existed, it would really be a pain in ADB.

    Nevertheless, I’ve made a note of it. If I find time (and a large system) to test, then I’ll investigate and raise it internally, if needed.
    If you find yourself creating an SR about it, please forward the number to me.

    Regards,
    Daniel

  14. Hi Daniel,

    Today I created SR 3-34697912071: Slow PDB cloning because of one large bigfile tablespace.

    Many thanks for your contribution. It would be great if something gets improved.

    Regards,
    David

  15. I _think_ that your last comment (2023-10-25 23:20) was in regard to the PDB cloning.

    We have that exact issue, too. It’s actually the reason why we moved a tablespace from bigfile to smallfile. There are two possible ways to clone a PDB: the “classic” way using SQL*Plus, and the “modern” way using RMAN.

    The classic way will do multiple data files in parallel, but each data file will be handled completely sequentially by one process. That will take a LONG time for tablespaces with multiple TB, even if it’s all on the same Exadata.

    The modern way allows you to use SECTION SIZE for parallelized cloning of data files. BUT: we then had the issue that the data files of the cloned PDB were not in the GUID subdirectory but in the data file target of the CDB$ROOT. Not good. Support couldn’t solve that topic in time, so we discarded that option again.

    Regards,
    Jan

  16. We have cases of random corruption in data files on standby databases. The resolution from support is sometimes to re-copy the data file from the primary. Doing that with a 25 TB data file is much worse than with only 32 GB…

    The issue discussed in 2089689.1 is also very concerning. We have some databases that don’t have low activity windows. Any window will impact the users.

  17. Hi Eric,

    Thanks for sharing. I haven’t seen real-life situations myself where this contention has been a problem, so I lack insight into it. But I agree that it does sound a little concerning. On the other hand, Oracle uses only BIGFILE tablespaces in Autonomous Database, and they do run some very busy systems.

    With regard to the corruptions – are they “newer” issues on Oracle Database 19c or later? But anyway, having to copy a small file is obviously much easier than a large file.

    Regards,
    Daniel

  18. Hi Daniel,

    The SR 3-34697912071 is closed in the meantime. So far we had cloned PDBs with PARALLEL=8. By increasing this value to 24 or 32, and raising parallel_max_servers accordingly, we were able to see that Oracle can clone a large tablespace in parallel, too. It appears that there is a certain magic to when and how it works fast. In the end, it was easier for us to optimize the CREATE PLUGGABLE DATABASE command than to use the RMAN duplicate PDB command.

    Thanks and regards
    David

  19. Hi David,

    I’m glad you found a solution to your problem. I agree that CREATE PLUGGABLE DATABASE is much more flexible than using RMAN (although RMAN is super cool).

    Thanks for sharing the solution. This is new to me, but really nice info.

    Regards,
    Daniel

  20. David, if I understood you correctly, your SR was about slow cloning with bigfile tablespaces because the data file copy is not parallelized? Was there some code change so that it now works for you?

  21. Hi Jan,

    I don’t think any code changes were made. You need to specify enough channels for the cloning to happen in parallel. Best is to let the database handle it by itself (omit the PARALLEL clause, and the database finds a suitable parallel degree). If you set PARALLEL low, there is a chance only one worker will process the bigfile tablespace.
    If you want more control, you should use RMAN instead.
    Regards,
    Daniel
