It is easy to recover MySQL data, much like on Amazon RDS. This is a short and easy method if you are not able to follow the manual approach:
https://github.com/selvackp/mysql-binlog-restore-using-python
For any additional help, please connect with us.
AWS Glue is a powerful data engineering platform when designed, tuned, and governed correctly. But treating it as a simple ETL utility often leads to cost, performance, and reliability issues.
As data practitioners, we need a solid understanding of AWS Glue.
Myth 1: AWS Glue is only for simple ETL
Reality:
AWS Glue supports complex transformations including joins, aggregations, schema evolution, incremental processing, and large-scale distributed processing using Apache Spark. It is suitable for enterprise-grade data engineering workloads.
Myth 2: AWS Glue is serverless, so performance tuning is not required
Reality:
While infrastructure management is serverless, Glue jobs still require tuning, for example choosing the right worker type and number of workers, partitioning data sensibly, handling skew, and enabling job bookmarks for incremental loads.
Myth 3: AWS Glue works only with Amazon S3
Reality:
AWS Glue integrates with multiple data sources, including Amazon S3, JDBC databases such as Amazon RDS and Amazon Redshift, Amazon DynamoDB, and on-premises databases reachable over JDBC.
Myth 4: AWS Glue is very expensive
Reality:
Glue becomes expensive mainly due to design issues such as over-provisioned workers, unpartitioned data, small-file problems, and jobs that run longer than they need to.
With optimized design, Glue is often more cost-effective than always-on Spark clusters.
Myth 5: Glue Crawlers automatically handle schema management
Reality:
Crawlers may infer schemas incorrectly, create unexpected partitions, or miss schema changes.
Production systems typically require controlled schema management and governance.
Myth 6: AWS Glue replaces data warehouses
Reality:
AWS Glue is a data integration and transformation service. It complements data warehouses by preparing and transforming data before loading into analytics platforms.
Myth 7: Glue jobs are difficult to debug
Reality:
Glue supports debugging through CloudWatch logs and metrics, job run monitoring, and the Spark UI.
Most challenges arise from limited Spark expertise rather than Glue itself.
Myth 8: AWS Glue supports only batch processing
Reality:
AWS Glue also supports streaming ETL jobs (for example, reading from Amazon Kinesis or Apache Kafka) and event-driven execution.
It is not limited to scheduled batch workloads.
Myth 9: AWS Glue is a set-and-forget service
Reality:
Production Glue pipelines require monitoring, alerting, cost tracking, and ongoing schema and performance reviews.
Glue jobs should be treated as production-grade software.
Myth 10: AWS Glue is only for data engineers
Reality:
With Glue Studio, SQL-based transformations, and visual workflows, Glue can be effectively used by analytics teams, architects, and platform teams.
If you are facing issues, please connect with us for instant help!
-- Check the current tempdb file locations
USE tempdb;
EXEC sp_helpfile;

-- Move the tempdb data and log files to the new path
USE master;
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'D:\SQLData\TempDB\tempdb.mdf');
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'D:\SQLData\TempDB\templog.ldf');

-- Verify the file configuration
USE tempdb;
EXEC sp_helpfile;
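The new paths only take effect after the SQL Server service is restarted. A quick way to confirm the configured locations (a minimal check, assuming sysadmin access):

-- Configured tempdb file locations (applied after a service restart)
SELECT name, physical_name, state_desc
FROM sys.master_files
WHERE database_id = DB_ID('tempdb');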
Problem Overview
-- Table definition
CREATE TABLE Customer (
    CustomerID INT,
    CustomerCode VARCHAR(20)
);

-- Query from application
SELECT * FROM Customer WHERE CustomerCode = N'CUST001'; -- Notice the 'N' prefix (nvarchar)

Even though an index exists on CustomerCode, SQL Server converts the column during execution:

CONVERT_IMPLICIT(nvarchar(4000), [CustomerCode], 0)
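Two common ways to remove the implicit conversion are to pass the predicate in the column's own type or, if Unicode data is genuinely needed, to widen the column instead (a minimal sketch; adapt it to your schema):

-- Option 1: match the literal/parameter type to the varchar column so the index can be seeked
SELECT * FROM Customer WHERE CustomerCode = 'CUST001';

-- Option 2: if Unicode values are required, align the column type with the application instead
ALTER TABLE Customer ALTER COLUMN CustomerCode NVARCHAR(20);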
Dear DBA friends, it is time to upgrade to MySQL 8.4. We are getting notifications from all the cloud partners, as well as requests from customers, to move to the latest version.
Here are the detailed steps for upgrading on Ubuntu 22.04. Before upgrading, make sure you have reviewed all the incompatible changes between 8.0 and 8.4:
https://dev.mysql.com/doc/relnotes/mysql/8.4/en/news-8-4-6.html
Best Practices for Updating MySQL from 8.0 to 8.4 | SQLFlash
Let's move on to the steps.
Environment : Ubuntu 22.04 (Jammy), MySQL 8.0.43 Community Server, upgrading to MySQL 8.4.6 LTS
Step 1 :
Take a full logical backup of the database before starting the upgrade:
mysqldump -u root -p --routines --triggers --events --max_allowed_packet=1024M am80db > am80db_before_8.4_Upgrade.sql
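It is also worth checking for accounts that still authenticate with mysql_native_password before the upgrade, since that plugin is disabled by default in MySQL 8.4 (a quick check, assuming access to the mysql schema):

-- Accounts still using the mysql_native_password plugin
SELECT user, host, plugin
FROM mysql.user
WHERE plugin = 'mysql_native_password';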
Step 2 :
Download and install the MySQL APT repository package:

wget https://dev.mysql.com/get/mysql-apt-config_0.8.33-1_all.deb
sudo dpkg -i mysql-apt-config_0.8.33-1_all.deb
root@ip-11-1-23-23:~# wget https://dev.mysql.com/get/mysql-apt-config_0.8.33-1_all.deb
sudo dpkg -i mysql-apt-config_0.8.33-1_all.deb
--2025-10-17 13:45:36-- https://dev.mysql.com/get/mysql-apt-config_0.8.33-1_all.deb
Resolving dev.mysql.com (dev.mysql.com)... 104.120.82.77, 2600:140f:1e00:486::2e31, 2600:140f:1e00:4b2::2e31
Connecting to dev.mysql.com (dev.mysql.com)|104.120.82.77|:443... connected.
HTTP request sent, awaiting response... 302 Moved Temporarily
Location: https://repo.mysql.com//mysql-apt-config_0.8.33-1_all.deb [following]
--2025-10-17 13:45:37-- https://repo.mysql.com//mysql-apt-config_0.8.33-1_all.deb
Resolving repo.mysql.com (repo.mysql.com)... 23.10.47.157, 2600:140f:1e00:a86::1d68, 2600:140f:1e00:a8f::1d68
Connecting to repo.mysql.com (repo.mysql.com)|23.10.47.157|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 18072 (18K) [application/x-debian-package]
Saving to: ‘mysql-apt-config_0.8.33-1_all.deb.1’
mysql-apt-config_0.8.33-1_all.deb.1 100%[============================================================================>] 17.65K --.-KB/s in 0.001s
2025-10-17 13:45:37 (25.2 MB/s) - ‘mysql-apt-config_0.8.33-1_all.deb.1’ saved [18072/18072]
(Reading database ... 99983 files and directories currently installed.)
Preparing to unpack mysql-apt-config_0.8.33-1_all.deb ...
Unpacking mysql-apt-config (0.8.33-1) over (0.8.33-1) ...
Setting up mysql-apt-config (0.8.33-1) ...
root@ip-11-1-23-23:~# sudo dpkg-reconfigure mysql-apt-config
File '/usr/share/keyrings/mysql-apt-config.gpg' exists. Overwrite? (y/N) y
root@ip-11-1-23-23:~# sudo apt update
Hit:1 http://ap-south-1.ec2.archive.ubuntu.com/ubuntu jammy InRelease
Hit:2 http://ap-south-1.ec2.archive.ubuntu.com/ubuntu jammy-updates InRelease
Hit:3 http://ap-south-1.ec2.archive.ubuntu.com/ubuntu jammy-backports InRelease
Hit:4 http://repo.mysql.com/apt/ubuntu jammy InRelease
Hit:5 http://security.ubuntu.com/ubuntu jammy-security InRelease
Get:6 http://repo.mysql.com/apt/ubuntu jammy/mysql-8.4-lts Sources [965 B]
Get:7 http://repo.mysql.com/apt/ubuntu jammy/mysql-8.4-lts amd64 Packages [14.5 kB]
Fetched 15.5 kB in 4s (3798 B/s)
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
10 packages can be upgraded. Run 'apt list --upgradable' to see them.
While configuring, it will ask you to choose mysql-8.4-lts; select it and click OK. Make sure you have selected 8.4, not 8.0.
Then update the package list on Ubuntu:
apt update
root@ip-11-1-23-23:~# sudo apt update
Hit:1 http://repo.mysql.com/apt/ubuntu jammy InRelease
Hit:2 http://ap-south-1.ec2.archive.ubuntu.com/ubuntu jammy InRelease
Hit:3 http://ap-south-1.ec2.archive.ubuntu.com/ubuntu jammy-updates InRelease
Hit:4 http://ap-south-1.ec2.archive.ubuntu.com/ubuntu jammy-backports InRelease
Hit:5 http://security.ubuntu.com/ubuntu jammy-security InRelease
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
10 packages can be upgraded. Run 'apt list --upgradable' to see them.
Step 3 :
Let's start installing the package. Make sure to select the configuration options carefully:
sudo apt install mysql-server -y
Step 4 :
Restart MySQL and check the service status:

sudo systemctl restart mysql
sudo systemctl status mysql
root@ip-11-1-23-23:~# sudo systemctl restart mysql
root@ip-11-1-23-23:~# sudo systemctl status mysql
● mysql.service - MySQL Community Server
Loaded: loaded (/lib/systemd/system/mysql.service; enabled; vendor preset: enabled)
Active: active (running) since Fri 2025-10-17 13:48:46 UTC; 7s ago
Docs: man:mysqld(8)
http://dev.mysql.com/doc/refman/en/using-systemd.html
Process: 95024 ExecStartPre=/usr/share/mysql-8.4/mysql-systemd-start pre (code=exited, status=0/SUCCESS)
Main PID: 95063 (mysqld)
Status: "Server is operational"
Tasks: 36 (limit: 9387)
Memory: 444.4M
CPU: 1.905s
CGroup: /system.slice/mysql.service
└─95063 /usr/sbin/mysqld
Oct 17 13:48:42 ip-10-1-94-82 systemd[1]: Starting MySQL Community Server...
Oct 17 13:48:46 ip-10-1-94-82 systemd[1]: Started MySQL Community Server.
Step 5 :
Don't try mysql_upgrade; since MySQL 8.0.16 the server performs the upgrade automatically, and the utility no longer exists in 8.4. Next, run mysql_secure_installation and set a proper password for the root user:
mysql> ALTER USER 'root'@'localhost' IDENTIFIED WITH caching_sha2_password BY 'ewewewewewe';
Query OK, 0 rows affected (0.01 sec)
mysql> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)
Step 6 :
Let's verify in the error log whether the MySQL upgrade was successful. The highlighted log entries confirm the upgrade completed:
sudo tail -n 50 /var/log/mysql/error.log
2025-10-17T13:47:41.670888Z 0 [System] [MY-015015] [Server] MySQL Server - start.
2025-10-17T13:47:42.054060Z 0 [System] [MY-010116] [Server] /usr/sbin/mysqld (mysqld 8.4.6) starting as process 94072
2025-10-17T13:47:42.129906Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started.
2025-10-17T13:47:49.394972Z 1 [System] [MY-013577] [InnoDB] InnoDB initialization has ended.
2025-10-17T13:47:49.429761Z 1 [System] [MY-011090] [Server] Data dictionary upgrading from version '80023' to '80300'.
2025-10-17T13:47:53.024004Z 1 [System] [MY-013413] [Server] Data dictionary upgrade from version '80023' to '80300' completed.
2025-10-17T13:47:57.289846Z 4 [System] [MY-013381] [Server] Server upgrade from '80043' to '80406' started.
2025-10-17T13:48:07.600863Z 4 [System] [MY-013381] [Server] Server upgrade from '80043' to '80406' completed.
2025-10-17T13:48:08.000645Z 0 [Warning] [MY-010068] [Server] CA certificate ca.pem is self signed.
2025-10-17T13:48:08.000744Z 0 [System] [MY-013602] [Server] Channel mysql_main configured to support TLS. Encrypted connections are now supported for this channel.
2025-10-17T13:48:08.081364Z 0 [System] [MY-011323] [Server] X Plugin ready for connections. Bind-address: '::' port: 33060, socket: /var/run/mysqld/mysqlx.sock
2025-10-17T13:48:08.082336Z 0 [System] [MY-010931] [Server] /usr/sbin/mysqld: ready for connections. Version: '8.4.6' socket: '/var/run/mysqld/mysqld.sock' port: 3306 MySQL Community Server - GPL.
2025-10-17T13:48:39.677032Z 0 [System] [MY-013172] [Server] Received SHUTDOWN from user <via user signal>. Shutting down mysqld (Version: 8.4.6).
2025-10-17T13:48:41.686113Z 0 [Warning] [MY-010909] [Server] /usr/sbin/mysqld: Forcing close of thread 10 user: 'Ruban'.
2025-10-17T13:48:41.689689Z 0 [Warning] [MY-010909] [Server] /usr/sbin/mysqld: Forcing close of thread 11 user: 'Ruban'.
2025-10-17T13:48:42.651459Z 0 [System] [MY-010910] [Server] /usr/sbin/mysqld: Shutdown complete (mysqld 8.4.6) MySQL Community Server - GPL.
2025-10-17T13:48:42.651514Z 0 [System] [MY-015016] [Server] MySQL Server - end.
2025-10-17T13:48:43.120978Z 0 [System] [MY-015015] [Server] MySQL Server - start.
2025-10-17T13:48:43.492756Z 0 [System] [MY-010116] [Server] /usr/sbin/mysqld (mysqld 8.4.6) starting as process 95063
2025-10-17T13:48:43.529553Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started.
2025-10-17T13:48:45.123068Z 1 [System] [MY-013577] [InnoDB] InnoDB initialization has ended.
2025-10-17T13:48:46.095152Z 0 [Warning] [MY-010068] [Server] CA certificate ca.pem is self signed.
2025-10-17T13:48:46.095299Z 0 [System] [MY-013602] [Server] Channel mysql_main configured to support TLS. Encrypted connections are now supported for this channel.
2025-10-17T13:48:46.181691Z 0 [System] [MY-011323] [Server] X Plugin ready for connections. Bind-address: '::' port: 33060, socket: /var/run/mysqld/mysqlx.sock
2025-10-17T13:48:46.182507Z 0 [System] [MY-010931] [Server] /usr/sbin/mysqld: ready for connections. Version: '8.4.6' socket: '/var/run/mysqld/mysqld.sock' port: 3306 MySQL Community Server - GPL.
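As a final sanity check, the new version can also be confirmed from the client (a minimal verification, assuming the root credentials set earlier):

-- Confirm the server is now running 8.4.6
SELECT VERSION();
SHOW VARIABLES LIKE 'version_comment';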
In today’s always-on world, database downtime can cost businesses a fortune. Setting up a reliable Disaster Recovery (DR) solution ensures your organization can quickly recover from unexpected failures.
This guide walks you through a practical, step-by-step process of implementing Oracle 11g Standard Edition DR, from enabling archive logs to recovering the database at the DR site.
Business Continuity: Minimize downtime during outages.
Data Protection: Ensure no transaction is lost.
Compliance: Meet RTO/RPO requirements.
Before starting, ensure:
Oracle 11g Standard Edition is installed on both DC (Primary) and DR (Standby) servers.
Network connectivity & firewall rules allow archive log and backup file transfer between DC and DR.
Adequate storage is provisioned for backups and archive logs.
[DC: Primary Database]
|
|--- Archive Logs + RMAN Backups ---> [DR: Standby Database]
|
--> Restore + Recover --> Open DB
Install Oracle 11g Standard Edition on both DC and DR servers.
Archive logs capture every committed change — critical for recovery.
SQL>SELECT LOG_MODE FROM V$DATABASE;
SQL>SHUTDOWN IMMEDIATE;
SQL>STARTUP MOUNT;
SQL>ALTER DATABASE ARCHIVELOG;
SQL>ALTER DATABASE OPEN;
Ensure redo logs are always generated and enough data is captured for recovery.
SQL>ALTER DATABASE FORCE LOGGING;
SQL>ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
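Both settings can be confirmed from V$DATABASE before moving on (a quick check):

SQL>SELECT force_logging, supplemental_log_data_min FROM v$database;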
Increase recovery area size if required:
SQL>SHOW PARAMETER db_recovery;
SQL>ALTER SYSTEM SET db_recovery_file_dest_size=10g;
Enable control file autobackup and retention policy:
RMAN>CONFIGURE CONTROLFILE AUTOBACKUP ON;
RMAN>CONFIGURE RETENTION POLICY TO REDUNDANCY 2;
RMAN>BACKUP DATABASE;
This will back up all datafiles and perform an autobackup of control file & SPFILE.
Shutdown instance
Remove old datafiles (oradata)
Start in NOMOUNT mode
STARTUP NOMOUNT;
Transfer flash_recovery_area folder to the DR server.
RMAN>RESTORE CONTROLFILE FROM AUTOBACKUP;
RMAN>ALTER DATABASE MOUNT;
RMAN>RESTORE DATABASE;
Update the RMAN catalog before proceeding to recover the database.
SQL>RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL;
Enter AUTO to apply logs automatically.
Finally open with resetlogs:
SQL>ALTER DATABASE OPEN RESETLOGS;
Run validation queries:
SQL>SELECT thread#, low_sequence#, high_sequence# FROM v$archive_gap;
SQL>SELECT file#, checkpoint_change# FROM v$datafile_header ORDER BY file#;
SQL>SELECT checkpoint_change# FROM v$database;
Ensure no archive log gaps remain.
Here are some ready-to-use RMAN scripts for ongoing maintenance:
RUN {
  ALLOCATE CHANNEL c1 DEVICE TYPE DISK;
  CATALOG START WITH 'C:\app\Administrator\flash_recovery_area\orcl\ARCHIVELOG';
  RESTORE ARCHIVELOG ALL;
  RECOVER DATABASE;
  DELETE NOPROMPT ARCHIVELOG ALL COMPLETED BEFORE 'SYSDATE-1';
  RELEASE CHANNEL c1;
}
RUN {
  ALLOCATE CHANNEL c1 DEVICE TYPE DISK;
  SET UNTIL SEQUENCE 18 THREAD 1;
  RESTORE DATABASE;
  RECOVER DATABASE;
  RELEASE CHANNEL c1;
}
Automate archive log shipping from DC to DR to ensure near real-time recovery capability.
Regularly run RMAN crosschecks to keep backup metadata in sync.
Validate DR site periodically by performing trial recovery exercises.
With this approach, you can be confident that your Oracle 11g environment is disaster-ready and downtime can be minimized.
I had a conversation with a friend who had been facing this issue for a long time and was unable to solve it. I thought, why not replicate it in our test environment and fix it quickly? After a long time, I had some nice troubleshooting hours.
Let's deep dive into the troubleshooting. Before starting, I requested a few sample outputs.
Troubleshooting Step 1 :
First, I asked him to start MySQL with grant tables skipped and to repair the system tables:
root@ip-100-23-45-122:sudo mysqld_safe --skip-grant-tables --skip-networking --skip-plugin-load &
root@ip-100-23-45-122:/var/lib/mysql/mysql# mysqlcheck -u root -p --repair --databases mysql
Enter password:
mysql.columns_priv                                 OK
mysql.db                                           OK
mysql.engine_cost
Error    : Table 'mysql.engine_cost' doesn't exist
status   : Operation failed
mysql.event                                        OK
mysql.func                                         OK
mysql.general_log                                  OK
mysql.gtid_executed
Error    : Table 'mysql.gtid_executed' doesn't exist
status   : Operation failed
mysql.help_category
Error    : Table 'mysql.help_category' doesn't exist
status   : Operation failed
mysql.help_keyword
Error    : Table 'mysql.help_keyword' doesn't exist
status   : Operation failed
mysql.help_relation
Error    : Table 'mysql.help_relation' doesn't exist
status   : Operation failed
mysql.help_topic
Error    : Table 'mysql.help_topic' doesn't exist
status   : Operation failed
mysql.innodb_index_stats
Error    : Table 'mysql.innodb_index_stats' doesn't exist
status   : Operation failed
mysql.innodb_table_stats
Error    : Table 'mysql.innodb_table_stats' doesn't exist
status   : Operation failed
mysql.ndb_binlog_index                             OK
mysql.plugin
MySQL ERROR 1146 (42S02): Table 'datablogs.tbl_followup' doesn't exist
So the issue was not with system table corruption; it was something different.
Troubleshooting Step 2 :
I went through many links and references and tried the following as well:
mysql> create database datablogs;
Query OK, 1 row affected (0.01 sec)

mysql> use datablogs;
Database changed

mysql> CREATE TABLE tbl_followup (
  id int(11) NOT NULL AUTO_INCREMENT,
  table_header text,
  action varchar(100) DEFAULT NULL,
  action_button_text varchar(100) DEFAULT NULL,
  parent_template text,
  created_at timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
  modified_at timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  PRIMARY KEY (id)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
Query OK, 0 rows affected (0.03 sec)

mysql> ALTER TABLE datablogs.tbl_followup DISCARD TABLESPACE;
Query OK, 0 rows affected (0.01 sec)
Copied only the tbl_followup.ibd file from the corrupted server to the new server, then imported the tablespace:
mysql> ALTER TABLE datablogs.tbl_followup IMPORT TABLESPACE;
Query OK, 0 rows affected, 1 warning (0.04 sec)

mysql> select * from datablogs.tbl_followup;
Empty set (0.01 sec)
Overall, it reads as very simple, but it was some of the hardest troubleshooting I have done!
Anyway, instead of trying multiple solutions, follow my steps to resolve the issue quickly!
As the business grows, our customer and transaction data grow along with it, and performance needs to be considered as well.
For such large tables, indexes alone will not give good performance at peak times. As an alternative, every major relational database offers partitioning to split a table's data into multiple pieces.
Likewise, we are going to apply range partitioning to a sample table in a PostgreSQL database. PostgreSQL provides three partitioning methods: range, list, and hash.
There are a few important considerations in PostgreSQL partitioning, which the steps below walk through.
Based on that approach, we have transformed a regular table into a partitioned one for your reference.
Anyone can use this example to perform partitioning in AWS RDS for PostgreSQL easily.
Click GitHub Link for Code : AWS-PostgreSQL-RDS-Table-Partition
Step 1 : Create the base datablogspaycheck table and insert some sample records
DROP TABLE IF EXISTS datablogspaycheck CASCADE;
DROP SEQUENCE IF EXISTS public.paycheck_id_seq;
CREATE SEQUENCE public.paycheck_id_seq
START WITH 1
INCREMENT BY 1
NO MINVALUE
NO MAXVALUE
CACHE 1;
create table datablogspaycheck
(
payment_id int NOT NULL DEFAULT nextval('public.paycheck_id_seq'::regclass),
created timestamptz NOT NULL,
updated timestamptz NOT NULL DEFAULT now(),
amount float,
status varchar DEFAULT 'new'
);
CREATE INDEX idx_paycheck ON datablogspaycheck (created);
INSERT INTO datablogspaycheck (created) VALUES (
generate_series(timestamp '2023-01-01'
, now()
, interval '5 minutes') );
Step 2 : Rename base table with new name
ALTER TABLE datablogspaycheck RENAME TO datablogspaycheck_basetable;
Step 3 : Create Partitioned table
create table datablogspaycheck
(
payment_id int NOT NULL DEFAULT nextval('public.paycheck_id_seq'::regclass),
created timestamptz NOT NULL,
updated timestamptz NOT NULL DEFAULT now(),
amount float,
status varchar DEFAULT 'new'
)PARTITION BY RANGE (created);
Step 4 : Create a separate partition for each date range
CREATE TABLE datablogspaycheck_202303 PARTITION OF datablogspaycheck
FOR VALUES FROM ('2023-01-01') TO ('2023-03-01');
CREATE TABLE datablogspaycheck_20230304 PARTITION OF datablogspaycheck
FOR VALUES FROM ('2023-03-01') TO ('2023-04-01');
CREATE TABLE datablogspaycheck_202304 PARTITION OF datablogspaycheck
FOR VALUES FROM ('2023-04-01') TO ('2023-05-01');
CREATE TABLE datablogspaycheck_202311 PARTITION OF datablogspaycheck
FOR VALUES FROM ('2023-05-01') TO ('2023-11-01');
CREATE TABLE datablogspaycheck_2024 PARTITION OF datablogspaycheck
FOR VALUES FROM ('2023-11-01') TO ('2024-01-01');
Step 5 : Migrate all the records
INSERT INTO datablogspaycheck (payment_id, created, updated, amount, status)
SELECT payment_id, created, updated, amount, status
FROM datablogspaycheck_basetable;
Step 6 : Validate each partition
select * from datablogspaycheck_202303 order by 2 desc
select * from datablogspaycheck_20230304 order by 2 desc
select * from datablogspaycheck_202311 order by 2 desc
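If future rows could fall outside the defined ranges, a DEFAULT partition can be attached as a catch-all (a minimal sketch, assuming PostgreSQL 11 or later):

-- Catch-all partition for rows outside every defined range
CREATE TABLE datablogspaycheck_default PARTITION OF datablogspaycheck DEFAULT;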
It's done. The regular table's data has been migrated into the partitioned table easily.
Thanks for Reading !!!
Wow!!! In migration projects we have a lot more to convert when it comes to procedures, functions, and other database objects.
But AWS provides good tooling to migrate with easy steps. Ha ha... don't overthink it, you still need to do roughly 40% of the code migration work yourself.
In this part, Babelfish Compass gives various options to support migrating code from SQL Server to PostgreSQL instances with the Babelfish feature enabled.
1. Download the Compass tool from the link below:
https://github.com/babelfish-for-postgresql/babelfish_compass/releases/tag/v.2023-03-a
You need to download the .zip file to work with Babelfish Compass.
2. Unzip and place the files in a separate folder.
Run BabelfishCompass.bat against the exported SQL Server script; the database script name is highlighted below:
C:\Users\Admin\Downloads\BabelfishCompass_v.2023-03-a\BabelfishCompass>BabelfishCompass.bat reportfinal datablogsdbprod.sql
5. Finally, the reports are generated in the Documents directory.
6. We can review the reports in any format; for me, it is easiest in an HTML browser.
Just double-click the HTML document, and we will get the supported and unsupported feature details in depth, as shown below.
We can directly go and debug the code. Babelfish Compass also has plenty of methods to help rewrite the code; we will check that in the next blog.
Oracle Audit Log :
Oracle Audit Log refers to the feature in Oracle Database that records and stores information about various database activities and events. It provides a mechanism to track and monitor user activities, system events, and changes made to the database.
The Oracle Audit Log is an essential tool for security, compliance, and troubleshooting purposes.
Types of Auditing in Amazon RDS for Oracle :
We are going to see how to enable standard auditing in Oracle RDS.
How to enable Audit Log in Oracle RDS?
Make sure you have attached a custom parameter group to the Oracle RDS instance and set the following parameter:
audit_trail = DB, EXTENDED
AUDIT DELETE ANY TABLE;
AUDIT DELETE TABLE BY USER_01 BY ACCESS;
AUDIT DELETE TABLE BY USER_02 BY ACCESS;
AUDIT ALTER, GRANT, INSERT, UPDATE, DELETE ON DEFAULT;
AUDIT READ ON DIRECTORY datapump_dir;
That is all; we have enabled the required auditing to capture activity for security purposes.
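To confirm which audit options are actually in effect, the audit option views can be queried (a quick check):

SELECT * FROM DBA_STMT_AUDIT_OPTS;
SELECT * FROM DBA_OBJ_AUDIT_OPTS;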
How do we monitor audit logs?
We can run the below query to get the captured audit logs in Oracle RDS:
SELECT * FROM DBA_AUDIT_TRAIL order by 1 desc
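For day-to-day monitoring, it is usually more practical to filter the trail, for example to review recent DELETE activity (an illustrative query; adjust the filters to your needs):

SELECT username, obj_name, action_name, timestamp
FROM dba_audit_trail
WHERE action_name = 'DELETE'
  AND timestamp > SYSDATE - 1
ORDER BY timestamp DESC;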
This covers the basic scenario and explains the process. We can still move audit data to a separate tablespace, and many further options are available in Oracle; let's see those in other blogs.
Happy Auditing !!!
We have a setup of one primary with multiple secondaries.
Even with a highly available setup and backups configured, native backups are still special: worth taking and keeping somewhere in the cloud.
Using the below script, we can easily schedule backups in Linux environments:
https://github.com/selvackp/MongoNativeBackup-/blob/main/mongo_dump.sh
export PATH=/bin:/usr/bin:/usr/local/bin

#Decalre Today Date
TODAY=`date +"%d%b%Y"`

#Declare Variables Required to pass for mongo dump command
DB_BACKUP_PATH='/mnt/mongobackup'
MONGO_HOST='localhost'
MONGO_PORT='27017'
MONGO_USER='xxxxxxxxxxx'
MONGO_PASSWD='xxxxxxxxxxxxx'
DATABASE_NAMES='ALL'

#Remove Old Backup Files
find ${DB_BACKUP_PATH} -name "*.zip" -type f -mtime +3 -delete
find ${DB_BACKUP_PATH} -type d -mtime +3 -exec rm -rf {} \;

#Create Directory for Backup
mkdir -p ${DB_BACKUP_PATH}/${TODAY}
cd ${DB_BACKUP_PATH}/${TODAY}/

if [ ${DATABASE_NAMES} = "ALL" ]; then
    echo "You have choose to backup all database"
    mongodump --uri="mongodb://${MONGO_USER}:${MONGO_PASSWD}@${MONGO_HOST}:${MONGO_PORT}"
else
    echo "Running backup for selected databases"
    for DB_NAME in ${DATABASE_NAMES}
    do
        mongodump --uri="mongodb://${MONGO_USER}:${MONGO_PASSWD}@${MONGO_HOST}:${MONGO_PORT}/${DB_NAME}"
    done
fi

#Compress The Backup
cd ${DB_BACKUP_PATH}/${TODAY}
zip -r ${DB_BACKUP_PATH}_${TODAY}.zip ${DB_BACKUP_PATH}/${TODAY}
cd ${DB_BACKUP_PATH}/${TODAY}

#Copy the Compressed file into Azure Container using Shared Access Token
azcopy cp ${DB_BACKUP_PATH}_${TODAY}.zip "https://xxxxxxxxxxx.blob.core.windows.net/xxxxxxxxxxxx?sp=w&st=xxxxxTxxxxxxxZ&se=xxxxxxZ&spr=https&sv=2021-06-08&sr=c&sig=csdfcdsxxxxxxxxxxxxxxx" --recursive=true

#Send Mail with Backup Logs
if [ $? -ne 0 ]
then
    echo "Mongo Native backup Failed in $(hostname) $(date). Please contact administrator." | mail -r mail@datablogs.com -s "Mongo Native backup Failed $(hostname)" dbsupport@datablogs.com < /mongodata/cronscripts/mongo_backup_log.log
else
    echo "Mongo Native backup completed in $(hostname)." | mail -r mail@datablogs.com -s "Mongo Native backup completed in $(hostname)" dbsupport@datablogs.com < /mongodata/cronscripts/mongo_backup_log.log
fi
It is easy to recover a MongoDB backup using Percona Backup for MongoDB.
It took quite a long time to settle on our tuning approach for Azure Data Factory with Azure Synapse, because we needed to run the system for at least two months to validate that the approach was smooth.
Yes, it is all running well, with the expected performance on the ETL loads and processes.
Here are the major things we need to take care of in an Azure Synapse dedicated SQL pool.
Dedicated SQL Pool Scaling :
We decided on a set of metrics to drive when the Azure Synapse pool should scale.
Before starting our critical processing, we can automate the upscale with ADF pipelines themselves; many blogs are available on configuring that.
The best method is to configure authentication with a service principal. The scale operation itself can also be issued directly in T-SQL, as sketched below.
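A minimal sketch of scaling via T-SQL, run against the master database of the logical server (the pool name is the one used later in this post):

-- Scale the dedicated SQL pool to DW400c
ALTER DATABASE [sql-datablogs-dw] MODIFY (SERVICE_OBJECTIVE = 'DW400c');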
Dedicated SQL Pool Workload Management :
We also decided on a set of metrics to prepare workload management.
Based on the query classifications, we have to split the queries across workload groups.
Step 1 :
We need to create a login and user for workload management in the dedicated SQL pool:
-- Run in the master database
CREATE LOGIN [Analyticsprocess] WITH PASSWORD = 'xxxxxxxxx';

-- Run in the dedicated SQL pool database
CREATE USER [Analyticsprocess] FOR LOGIN [Analyticsprocess];
GRANT CONTROL ON DATABASE::[sql-datablogs-dw] TO [Analyticsprocess];
Step 2 :
Assume you have upscaled the instance to DW400c; below is the resource allocation and concurrency available for a DW400c instance.
In Workload management --> New workload group --> click ELT.
Since the analytics process user runs highly intensive queries, we have to allocate at least the minimum resources to the workload group.
Click Classifiers --> Add classifier --> name it ELT --> specify the login as the member, and make sure to set the label; the label is important.
Once you click Add, we will get the below concurrency range based on DW400c.
By default, there are system-level workload groups to handle the queries, but they are not effective for this case; we have to force our own workload group. The same setup can also be scripted in T-SQL, as sketched below.
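An illustrative T-SQL equivalent of the portal steps (the group name, classifier name, and resource percentages here are examples, not recommendations):

CREATE WORKLOAD GROUP ELT
WITH ( MIN_PERCENTAGE_RESOURCE = 25
     , CAP_PERCENTAGE_RESOURCE = 50
     , REQUEST_MIN_RESOURCE_GRANT_PERCENT = 5 );

CREATE WORKLOAD CLASSIFIER ELTClassifier
WITH ( WORKLOAD_GROUP = 'ELT'
     , MEMBERNAME = 'Analyticsprocess'
     , WLM_LABEL = 'highintensiveprocess' );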
Step 3 :
This is very important for utilizing the workload group properly. We need to specify the label on the heavy processing queries so that the workload group is actually used:
CREATE TABLE rpt_datablogs_finalreport.smgreport
WITH (HEAP, DISTRIBUTION = REPLICATE)
AS
SELECT *
FROM rpt_datablogs_finalreport.vw_smgreport
OPTION (LABEL = 'highintensiveprocess');
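To confirm that labeled statements are landing in the intended workload group, the requests DMV can be filtered on the label (an illustrative check, assuming a recent dedicated SQL pool that exposes group_name in this DMV):

SELECT request_id, [label], group_name, status, submit_time
FROM sys.dm_pdw_exec_requests
WHERE [label] = 'highintensiveprocess'
ORDER BY submit_time DESC;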