datablogs: Backup

Tuesday, September 16, 2025

Oracle Standard Edition Log Shipping in Windows

In today’s always-on world, database downtime can cost businesses a fortune. Setting up a reliable Disaster Recovery (DR) solution ensures your organization can quickly recover from unexpected failures.

This guide walks you through a practical, step-by-step process of implementing Oracle 11g Standard Edition DR, from enabling archive logs to recovering the database at the DR site.


Why This Matters

  • Business Continuity: Minimize downtime during outages.

  • Data Protection: Ensure no transaction is lost.

  • Compliance: Meet RTO/RPO requirements.


Prerequisites

Before starting, ensure:

  • Oracle 11g Standard Edition is installed on both DC (Primary) and DR (Standby) servers.

  • Network connectivity & firewall rules allow archive log and backup file transfer between DC and DR.

  • Adequate storage is provisioned for backups and archive logs.


High-Level DR Workflow

[DC: Primary Database]
         |
         |--- Archive Logs + RMAN Backups --->  [DR: Standby Database]
                                                       |
                                                       --> Restore + Recover --> Open DB

Step-by-Step Implementation

Step 1: Install Oracle 11g

Install Oracle 11g Standard Edition on both DC and DR servers.


Step 2: Enable Archive Logging (DC)

Archive logs capture every committed change — critical for recovery.

SQL> SELECT LOG_MODE FROM V$DATABASE;
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;
SQL> ALTER DATABASE ARCHIVELOG;
SQL> ALTER DATABASE OPEN;

Step 3: Force Logging & Supplemental Logs

Ensure redo logs are always generated and enough data is captured for recovery.

SQL> ALTER DATABASE FORCE LOGGING;
SQL> ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;

Step 4: Configure Recovery Parameters

Increase recovery area size if required:

SQL> SHOW PARAMETER db_recovery;
SQL> ALTER SYSTEM SET db_recovery_file_dest_size=10G;

Step 5: Configure RMAN

Enable control file autobackup and retention policy:

RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON;
RMAN> CONFIGURE RETENTION POLICY TO REDUNDANCY 2;

Step 6: Take Full Backup (DC)

RMAN>BACKUP DATABASE;

This backs up all datafiles and performs an autobackup of the control file and SPFILE.
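Before moving on, it is worth confirming what RMAN recorded for the backup; LIST BACKUP SUMMARY is a standard RMAN command for this:

RMAN> LIST BACKUP SUMMARY;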


Step 7: Prepare DR Instance

  • Shut down the instance

  • Remove the old datafiles (oradata)

  • Start the instance in NOMOUNT mode

STARTUP NOMOUNT;

Step 8: Move Backup Files to DR

Transfer the flash_recovery_area folder to the DR server.
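On Windows, robocopy is a convenient way to do the transfer. A minimal sketch, assuming the flash recovery area path used earlier in this post; the \\DR-SERVER share name is a placeholder for your DR host:

:: Copy the flash recovery area to the DR server (\\DR-SERVER is a placeholder)
robocopy C:\app\Administrator\flash_recovery_area \\DR-SERVER\C$\app\Administrator\flash_recovery_area /E /Z /R:3

Here /E copies all subdirectories, /Z uses restartable mode, and /R:3 limits retries on locked files.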


Step 9: Restore Control File & Database

RMAN> RESTORE CONTROLFILE FROM AUTOBACKUP;
RMAN> ALTER DATABASE MOUNT;
RMAN> RESTORE DATABASE;

Step 10: Recover Database

Update the RMAN catalog with the shipped files before proceeding to recover the database:

RMAN> CATALOG START WITH 'C:\app\Administrator\flash_recovery_area\orcl\ARCHIVELOG';
searching for all files that match the pattern C:\app\Administrator\flash_recovery_area\orcl\ARCHIVELOG
no files found to be unknown to the database

Use RMAN or SQL*Plus:
SQL>RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL;

Enter AUTO to apply logs automatically.

SQL> RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL;
ORA-00279: change 1046243 generated at 09/16/2025 08:25:59 needed for thread 1
ORA-00289: suggestion : C:\APP\ADMINISTRATOR\FLASH_RECOVERY_AREA\ORCL\ARCHIVELOG\2025_09_16\O1_MF_1_20_NDL89QG0_.ARC
ORA-00280: change 1046243 for thread 1 is in sequence #20
Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
AUTO
ORA-00279: change 1048238 generated at 09/16/2025 08:35:03 needed for thread 1
ORA-00289: suggestion : C:\APP\ADMINISTRATOR\FLASH_RECOVERY_AREA\ORCL\ARCHIVELOG\2025_09_16\O1_MF_1_21_NDL89XF1_.ARC
ORA-00280: change 1048238 for thread 1 is in sequence #21
ORA-00278: log file 'C:\APP\ADMINISTRATOR\FLASH_RECOVERY_AREA\ORCL\ARCHIVELOG\2025_09_16\O1_MF_1_20_NDL89QG0_.ARC' no longer needed for this recovery

Finally open with resetlogs:

SQL>ALTER DATABASE OPEN RESETLOGS;

Step 11: Validate

Run validation queries:

SQL> SELECT thread#, low_sequence#, high_sequence# FROM v$archive_gap;
SQL> SELECT file#, checkpoint_change# FROM v$datafile_header ORDER BY file#;
SQL> SELECT checkpoint_change# FROM v$database;

Ensure no archive log gaps remain.


RMAN Handy Scripts

Here are some ready-to-use RMAN scripts for ongoing maintenance:

Catalog & Recover

RUN {
  ALLOCATE CHANNEL c1 DEVICE TYPE DISK;
  CATALOG START WITH 'C:\app\Administrator\flash_recovery_area\orcl\ARCHIVELOG';
  RESTORE ARCHIVELOG ALL;
  RECOVER DATABASE;
  DELETE NOPROMPT ARCHIVELOG ALL COMPLETED BEFORE 'SYSDATE-1';
  RELEASE CHANNEL c1;
}

Point-in-Time Recovery

RUN {
  ALLOCATE CHANNEL c1 DEVICE TYPE DISK;
  SET UNTIL SEQUENCE 18 THREAD 1;
  RESTORE DATABASE;
  RECOVER DATABASE;
  RELEASE CHANNEL c1;
}

Key Takeaways

  • Automate archive log shipping from DC to DR to ensure near real-time recovery capability (a scheduling sketch follows this list).

  • Regularly run RMAN crosschecks to keep backup metadata in sync.

  • Validate DR site periodically by performing trial recovery exercises.
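For the archive log shipping mentioned in the first takeaway, a minimal Windows batch sketch is shown below. It assumes the archive log path used earlier in this post; the \\DR-SERVER share and the 15-minute schedule are placeholders to adjust to your RPO:

@echo off
:: ship_archivelogs.bat - copy new archive logs from DC to DR (sketch; adjust paths)
set SRC=C:\app\Administrator\flash_recovery_area\orcl\ARCHIVELOG
set DST=\\DR-SERVER\C$\app\Administrator\flash_recovery_area\orcl\ARCHIVELOG
:: /E copies subdirectories, /XO skips files already current on DR, /R and /W limit retries
robocopy %SRC% %DST% /E /XO /R:3 /W:5 >> C:\scripts\ship_archivelogs.log

Registering the script with Task Scheduler, for example schtasks /Create /TN ShipArchiveLogs /TR C:\scripts\ship_archivelogs.bat /SC MINUTE /MO 15, would run it every 15 minutes.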

With this approach, you can be confident that your Oracle 11g environment is disaster-ready and downtime can be minimized.

Friday, February 24, 2023

How to Automate MongoDB Database Backups in Linux

We have a setup of one primary with multiple secondaries.

Even with a highly available setup and backups configured, native backups are still worth taking and keeping somewhere in the cloud.

Using the script below, we can easily schedule backups in Linux environments (a sample cron entry follows the script).

https://github.com/selvackp/MongoNativeBackup-/blob/main/mongo_dump.sh

export PATH=/bin:/usr/bin:/usr/local/bin

# Declare today's date
TODAY=`date +"%d%b%Y"`

# Declare variables required for the mongodump command
DB_BACKUP_PATH='/mnt/mongobackup'
MONGO_HOST='localhost'
MONGO_PORT='27017'
MONGO_USER='xxxxxxxxxxx'
MONGO_PASSWD='xxxxxxxxxxxxx'
DATABASE_NAMES='ALL'

# Remove old backup files
find ${DB_BACKUP_PATH} -name "*.zip" -type f -mtime +3 -delete
find ${DB_BACKUP_PATH} -type d -mtime +3 -exec rm -rf {} \;

# Create a directory for today's backup
mkdir -p ${DB_BACKUP_PATH}/${TODAY}
cd ${DB_BACKUP_PATH}/${TODAY}/

if [ ${DATABASE_NAMES} = "ALL" ]; then
    echo "You have chosen to back up all databases"
    mongodump --uri="mongodb://${MONGO_USER}:${MONGO_PASSWD}@${MONGO_HOST}:${MONGO_PORT}"
else
    echo "Running backup for selected databases"
    for DB_NAME in ${DATABASE_NAMES}
    do
        mongodump --uri="mongodb://${MONGO_USER}:${MONGO_PASSWD}@${MONGO_HOST}:${MONGO_PORT}/${DB_NAME}"
    done
fi

# Compress the backup
cd ${DB_BACKUP_PATH}/${TODAY}
zip -r ${DB_BACKUP_PATH}_${TODAY}.zip ${DB_BACKUP_PATH}/${TODAY}
cd ${DB_BACKUP_PATH}/${TODAY}

# Copy the compressed file into an Azure container using a Shared Access Token
azcopy cp ${DB_BACKUP_PATH}_${TODAY}.zip "https://xxxxxxxxxxx.blob.core.windows.net/xxxxxxxxxxxx?sp=w&st=xxxxxTxxxxxxxZ&se=xxxxxxZ&spr=https&sv=2021-06-08&sr=c&sig=csdfcdsxxxxxxxxxxxxxxx" --recursive=true

# Send mail with the backup log
if [ $? -ne 0 ]
then
    echo "Mongo native backup failed in $(hostname) $(date). Please contact administrator." | mail -r mail@datablogs.com -s "Mongo Native backup Failed $(hostname)" dbsupport@datablogs.com < /mongodata/cronscripts/mongo_backup_log.log
else
    echo "Mongo native backup completed in $(hostname)." | mail -r mail@datablogs.com -s "Mongo Native backup completed in $(hostname)" dbsupport@datablogs.com < /mongodata/cronscripts/mongo_backup_log.log
fi
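To schedule it, a crontab entry along these lines works; the daily 01:00 run time is an assumption, and the paths match the script location and log file used above:

# Run the MongoDB native backup every day at 01:00
0 1 * * * /mongodata/cronscripts/mongo_dump.sh >> /mongodata/cronscripts/mongo_backup_log.log 2>&1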

Friday, January 21, 2022

Export/Import Data on Amazon Oracle RDS using Data Pump utility

It is easy to refresh schemas in Oracle RDS as a one-time task for testing or development purposes.

But if it needs to run on a daily basis, here is the automated code to achieve the solution (a minimal sketch of the core export call follows the link):

GitHub Link for Source Code : https://github.com/selvackp/Oracle_RDS_Import_and_Export
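For reference, the core of such an automation on RDS is the DBMS_DATAPUMP PL/SQL API, since there is no OS access to run expdp directly on the instance. A minimal sketch of a schema export, with placeholder schema and file names:

DECLARE
  h NUMBER;
BEGIN
  -- Open a schema-mode export job
  h := DBMS_DATAPUMP.OPEN(operation => 'EXPORT', job_mode => 'SCHEMA', job_name => 'EXP_MYSCHEMA');
  -- Write the dump and log files to the RDS DATA_PUMP_DIR directory
  DBMS_DATAPUMP.ADD_FILE(handle => h, filename => 'myschema.dmp', directory => 'DATA_PUMP_DIR', filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
  DBMS_DATAPUMP.ADD_FILE(handle => h, filename => 'myschema.log', directory => 'DATA_PUMP_DIR', filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);
  -- Limit the job to a single schema
  DBMS_DATAPUMP.METADATA_FILTER(handle => h, name => 'SCHEMA_EXPR', value => 'IN (''MYSCHEMA'')');
  DBMS_DATAPUMP.START_JOB(h);
END;
/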

Wednesday, October 14, 2020

RDS Snapshot Share with Automated or Manual Backups ?

Objective :

Launch a clone of the production server in another AWS account using RDS snapshots.

What can we do?

There are multiple ways to extract RDS data in AWS: a manual snapshot, an automated snapshot, an export of data to S3, or the AWS Backup service.

What do we do most of the time?

To create a clone of a production server, the easy way is to create a snapshot and launch a new instance from it.

But our requirement is to copy the snapshot into a different AWS account and launch a copy of production there.

Most of the time we try to copy or share RDS automated snapshots into the other account, but automated snapshots cannot be shared with a different account.

Automated snapshots

We can export data to S3 as well, but that requires some effort too.

So the possibility is to take a manual snapshot and share it into the different account, as in the steps below. The screenshot below shows a manual snapshot of MySQL RDS in the source AWS account.

Source AWS account

Snapshot sharing into target AWS account

share snapshot to target account

Shared RDS snapshot on target AWS account

target AWS account

Then we can proceed to launch a clone of the production instance in the other account.
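For anyone who prefers to script it, the same flow works with the AWS CLI; the instance names, snapshot names, region, and account IDs below are placeholders:

# Take a manual snapshot of the production instance
aws rds create-db-snapshot --db-instance-identifier prod-mysql --db-snapshot-identifier prod-mysql-manual

# Share the manual snapshot with the target AWS account
aws rds modify-db-snapshot-attribute --db-snapshot-identifier prod-mysql-manual --attribute-name restore --values-to-add 111122223333

# In the target account: launch a new instance from the shared snapshot
aws rds restore-db-instance-from-db-snapshot --db-instance-identifier prod-mysql-clone --db-snapshot-identifier arn:aws:rds:us-east-1:999988887777:snapshot:prod-mysql-manual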

Monday, December 23, 2019

Docker with Percona Backup for MongoDB



The Percona Backup tool for MongoDB sounds interesting!!! I just wanted to try and explore the tool with Docker today!!! Docker is a first for me, but within a few days it has become a favorite for all kinds of HA scenario work.

Let's move on to today's practice and the issues encountered.

Note : Percona Backup for MongoDB supports Percona Server for MongoDB or MongoDB Community Server version 3.6 or higher with MongoDB replication enabled

Step 1 : Launched an Ubuntu 16.04 machine on AWS, then updated to the latest packages and installed Docker

sudo apt-get update 

sudo apt install docker.io

sudo systemctl start docker

sudo systemctl enable docker  

Once Docker is installed with the latest package, verify the version using docker --version

Step 2 : Installed two Mongo docker containers with the replica set enabled


docker run --detach --name datablogs-mongo-primary --volume /var/lib/mongo:/data/db --volume /etc/mongodb.conf:/etc/mongo.conf --publish 44444:27017 mongo --replSet datablogs-repl-set

docker run --detach --name datablogs-mongo-secondary --volume /var/lib/mongo-slave:/data/db --volume /etc/mongodb-slave.conf:/etc/mongo.conf --publish 55555:27017 mongo --replSet datablogs-repl-set

We need to access the MongoDB instances from the outside world, so I have published the MongoDB ports as different ones:
                                                             --publish 44444:27017

                                                             --publish 55555:27017

To access MongoDB, we need to check the IP address of both containers using the commands below:

docker inspect datablogs-mongo-primary | grep IPAddress

docker inspect datablogs-mongo-secondary | grep IPAddress


-- for the usage of these options, refer to docker help

Step 3 : Configure mongo replica 

We need to log in via the Docker command line, then configure and start replication between the Mongo servers. Log in to the primary Mongo container and execute the commands below in the mongo shell:

docker exec -it datablogs-mongo-primary /bin/bash

config = {"_id" : "datablogs-repl-set","members" : [{"_id" : 0,"host" : "172.17.0.2:27017"},{"_id" : 1,"host" : "172.17.0.3:27017"}]};

rs.initiate(config);

Once replication is initiated, the prompt in the primary's mongo shell changes.


Step 4 : Install Percona Backup and Configure 

Before proceeding with this activity, we need to update and upgrade the packages using apt-get in each Mongo container.

Installed Percona Backup for MongoDB; we need to follow the Percona site for the proper installation steps.



Once the pbm tool is installed, log in to each Mongo container, set the storage path, and start the pbm agent. I have used a local storage path for the Mongo backup.

storage.yaml :

type: filesystem
filesystem:
  path: /tmp

pbm store set --config=storage.yaml --mongodb-uri="mongodb://127.0.0.1:55555/"

pbm-agent --mongodb-uri mongodb://172.17.0.2:27017 &


Step 5 : Backup and restore the collections using pbm 

Once the setup is complete, run the backup on the secondary Mongo server:

pbm backup --mongodb-uri mongodb://127.0.0.1:27017


Dropped the datablogs DB and restored the backup using pbm.
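For reference, the restore step looks like the sketch below, where the backup name comes from pbm list; exact flags can vary by PBM version, and the timestamp shown is a placeholder:

pbm list --mongodb-uri mongodb://127.0.0.1:27017
pbm restore 2019-12-23T12:00:00Z --mongodb-uri mongodb://127.0.0.1:27017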


Finally, verified the DB and collections on the primary server.


I'm really happy to have tested Percona Backup for MongoDB with Docker today!!! Keep learning!!!