There are a couple of myths around the hot backup (online backup) process:

Myth #1: The hot backup generates "a lot" of redo information.
Myth #2: The archivelog mode "dramatically slows down" the database.
Myth #3: When a hot backup is in progress, the target datafile is frozen.

There are two ways to generate a hot backup: the first is a user managed backup and the second uses Recovery Manager (RMAN). The database is required to be in archivelog mode to perform an online backup. Both ways work in a similar fashion; as I will explain later, RMAN is more efficient than the user managed backup.

When the backup command is issued, a checkpoint is performed against the target tablespace and then the datafile header is frozen, so no more updates are allowed on it (only the datafile header); this is how the database knows the last time the tablespace had a consistent image of its data. The datafiles with a backup in progress still allow read/write operations just like a regular datafile; I/O activity is not frozen. Each time a row is modified, not only the row but the complete block is recorded to the redo log file. This happens only the first time the block is modified after the backup begins; subsequent transactions on the block record just the change, as normal.

During the user managed backup process the "fractured block" event may occur. Let's remember that the Oracle block is the minimum I/O unit, and that an Oracle block is made up of several OS blocks: assuming a block size of 8K and an OS block of 512 bytes, this gives 16 OS blocks. If, during the copy of a block, there is a write operation on that same block, the backup may contain a before image of some of its OS blocks and an after image of the others; the block in the backup media will be fractured, i.e. corrupt.
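The redo behavior described above can be sketched in a few lines. This is an illustrative model, not Oracle internals: the class and sizes are assumptions chosen only to show why the extra redo cost is paid once per block, on its first modification during the backup.

```python
# Sketch (illustrative assumption, not Oracle internals): during a hot
# backup, the first change to a block logs the whole 8K block image to
# redo; later changes to the same block log only the change itself.

BLOCK_SIZE = 8 * 1024  # assumed 8K Oracle block

class RedoLog:
    def __init__(self):
        self.bytes_written = 0
        self.blocks_logged = set()   # blocks whose full image is already in redo
        self.hot_backup = False

    def log_change(self, block_id, change_bytes):
        if self.hot_backup and block_id not in self.blocks_logged:
            self.bytes_written += BLOCK_SIZE     # full image of the block, once
            self.blocks_logged.add(block_id)
        self.bytes_written += change_bytes       # the change itself, as normal

redo = RedoLog()
redo.hot_backup = True
redo.log_change(block_id=42, change_bytes=100)   # first touch: 8192 + 100
redo.log_change(block_id=42, change_bytes=100)   # same block: only 100
redo.log_change(block_id=42, change_bytes=100)   # still only 100
print(redo.bytes_written)  # 8492 -> extra redo only on the first touch
```

This is why Myth #1 is mostly exaggerated: the extra redo is proportional to the number of distinct blocks touched during the backup window, not to the total number of transactions.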
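The fractured block scenario can also be simulated. The sketch below is a toy model under the stated assumptions (8K block, 512-byte OS blocks, a copy that proceeds one OS block at a time): a write that lands mid-copy leaves the copied block half before-image, half after-image.

```python
# Sketch (toy model, not Oracle internals): an 8K Oracle block is copied
# OS-block by OS-block (16 x 512 bytes). A concurrent write that lands
# mid-copy leaves the copy "fractured": part before-image, part after.

ORACLE_BLOCK = 8 * 1024            # assumed 8K Oracle block
OS_BLOCK = 512                     # assumed 512-byte OS block
PIECES = ORACLE_BLOCK // OS_BLOCK  # 16 OS blocks per Oracle block

before = [b"old" + bytes([i]) for i in range(PIECES)]  # image before the write
after  = [b"new" + bytes([i]) for i in range(PIECES)]  # image after the write

def backup_block(write_at_piece):
    """Copy the block piece by piece; a concurrent write replaces the
    whole block once the copy has reached piece `write_at_piece`."""
    current = list(before)
    copy = []
    for i in range(PIECES):
        if i == write_at_piece:    # the write hits while the copy is running
            current = list(after)
        copy.append(current[i])
    return copy

fractured = backup_block(write_at_piece=8)
print(fractured[:8] == before[:8])   # True  -> first half is the before image
print(fractured[8:] == after[8:])    # True  -> second half is the after image
print(fractured in (before, after))  # False -> neither image: a fractured block
```

The full block image written to redo on the block's first modification is precisely what allows recovery to replace such a fractured copy with a consistent one.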