It's easy to recover MySQL data just like Amazon RDS does; this is a short and easy method if you find the manual approach hard to follow.
https://github.com/selvackp/mysql-binlog-restore-using-python
For any additional help, please comment below.
-- Check the current tempdb file locations
USE tempdb;
EXEC sp_helpfile;

-- Point tempdb at the new location (the change takes effect only after the SQL Server service is restarted)
USE master;
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'D:\SQLData\TempDB\tempdb.mdf');
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'D:\SQLData\TempDB\templog.ldf');

-- Confirm the new file definitions
USE tempdb;
EXEC sp_helpfile;
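After restarting the SQL Server service, it is worth confirming that tempdb was actually recreated in the new location; the old files are not removed automatically and can be deleted manually afterwards. A quick check:

-- Confirm the new physical paths after the service restart
SELECT name, physical_name, state_desc
FROM tempdb.sys.database_files;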
Problem Overview
-- Table definition
CREATE TABLE Customer (
    CustomerID INT,
    CustomerCode VARCHAR(20)
);

-- Query from the application
SELECT * FROM Customer WHERE CustomerCode = N'CUST001'; -- Notice the 'N' prefix (nvarchar)

Even though an index exists on CustomerCode, SQL Server converts the column during execution:

CONVERT_IMPLICIT(nvarchar(4000), [CustomerCode], 0)
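Because the column side of the predicate gets converted, the index on CustomerCode can no longer be seeked. Two common fixes, sketched below: pass a varchar literal (or parameter) so the types already match, or widen the column to NVARCHAR if the application genuinely needs Unicode:

-- Fix 1: match the literal/parameter type to the column (no N prefix)
SELECT * FROM Customer WHERE CustomerCode = 'CUST001';

-- Fix 2: align the column with the application's nvarchar parameters
ALTER TABLE Customer ALTER COLUMN CustomerCode NVARCHAR(20);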
Dear all my DBA friends, it's time to upgrade to MySQL 8.4. Yes, we are getting notifications from all the cloud partners, as well as customer-side requests to move to the latest version.
Here are the detailed steps for an upgrade on Ubuntu 22.04. Before upgrading, make sure you have reviewed all the incompatible changes between 8.0 and 8.4:
https://dev.mysql.com/doc/relnotes/mysql/8.4/en/news-8-4-6.html
Best Practices for Updating MySQL from 8.0 to 8.4 | SQLFlash
Let's move on to the steps.
Environment :
Step 1 : Take a full backup before the upgrade

mysqldump -u root -p --routines --triggers --events --max_allowed_packet=1024M am80db > am80db_before_8.4_Upgrade.sql
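One extra pre-check worth doing (an addition to the original steps): MySQL 8.4 disables the mysql_native_password authentication plugin by default, so it helps to list any accounts still using it and move them to caching_sha2_password before the upgrade:

-- Accounts still on mysql_native_password need attention before 8.4
SELECT user, host, plugin
FROM mysql.user
WHERE plugin = 'mysql_native_password';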
Step 2 : Configure the MySQL APT repository

wget https://dev.mysql.com/get/mysql-apt-config_0.8.33-1_all.deb
sudo dpkg -i mysql-apt-config_0.8.33-1_all.deb
root@ip-11-1-23-23:~# wget https://dev.mysql.com/get/mysql-apt-config_0.8.33-1_all.deb
sudo dpkg -i mysql-apt-config_0.8.33-1_all.deb
--2025-10-17 13:45:36-- https://dev.mysql.com/get/mysql-apt-config_0.8.33-1_all.deb
Resolving dev.mysql.com (dev.mysql.com)... 104.120.82.77, 2600:140f:1e00:486::2e31, 2600:140f:1e00:4b2::2e31
Connecting to dev.mysql.com (dev.mysql.com)|104.120.82.77|:443... connected.
HTTP request sent, awaiting response... 302 Moved Temporarily
Location: https://repo.mysql.com//mysql-apt-config_0.8.33-1_all.deb [following]
--2025-10-17 13:45:37-- https://repo.mysql.com//mysql-apt-config_0.8.33-1_all.deb
Resolving repo.mysql.com (repo.mysql.com)... 23.10.47.157, 2600:140f:1e00:a86::1d68, 2600:140f:1e00:a8f::1d68
Connecting to repo.mysql.com (repo.mysql.com)|23.10.47.157|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 18072 (18K) [application/x-debian-package]
Saving to: ‘mysql-apt-config_0.8.33-1_all.deb.1’
mysql-apt-config_0.8.33-1_all.deb.1 100%[============================================================================>] 17.65K --.-KB/s in 0.001s
2025-10-17 13:45:37 (25.2 MB/s) - ‘mysql-apt-config_0.8.33-1_all.deb.1’ saved [18072/18072]
(Reading database ... 99983 files and directories currently installed.)
Preparing to unpack mysql-apt-config_0.8.33-1_all.deb ...
Unpacking mysql-apt-config (0.8.33-1) over (0.8.33-1) ...
Setting up mysql-apt-config (0.8.33-1) ...
root@ip-11-1-23-23:~# sudo dpkg-reconfigure mysql-apt-config
File '/usr/share/keyrings/mysql-apt-config.gpg' exists. Overwrite? (y/N) y
root@ip-11-1-23-23:~# sudo apt update
Hit:1 http://ap-south-1.ec2.archive.ubuntu.com/ubuntu jammy InRelease
Hit:2 http://ap-south-1.ec2.archive.ubuntu.com/ubuntu jammy-updates InRelease
Hit:3 http://ap-south-1.ec2.archive.ubuntu.com/ubuntu jammy-backports InRelease
Hit:4 http://repo.mysql.com/apt/ubuntu jammy InRelease
Hit:5 http://security.ubuntu.com/ubuntu jammy-security InRelease
Get:6 http://repo.mysql.com/apt/ubuntu jammy/mysql-8.4-lts Sources [965 B]
Get:7 http://repo.mysql.com/apt/ubuntu jammy/mysql-8.4-lts amd64 Packages [14.5 kB]
Fetched 15.5 kB in 4s (3798 B/s)
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
10 packages can be upgraded. Run 'apt list --upgradable' to see them.
While configuring, it will ask you to choose the MySQL version; select mysql-8.4-lts and click OK. Make sure you have selected 8.4, not 8.0.
Then update the package list in Ubuntu:
apt update
root@ip-11-1-23-23:~# sudo apt update
Hit:1 http://repo.mysql.com/apt/ubuntu jammy InRelease
Hit:2 http://ap-south-1.ec2.archive.ubuntu.com/ubuntu jammy InRelease
Hit:3 http://ap-south-1.ec2.archive.ubuntu.com/ubuntu jammy-updates InRelease
Hit:4 http://ap-south-1.ec2.archive.ubuntu.com/ubuntu jammy-backports InRelease
Hit:5 http://security.ubuntu.com/ubuntu jammy-security InRelease
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
10 packages can be upgraded. Run 'apt list --upgradable' to see them.
Step 3 :
Let's start installing the package; make sure to select the configuration options properly.
sudo apt install mysql-server -y
Step 4 : Restart MySQL and check the service status

sudo systemctl restart mysql
sudo systemctl status mysql
root@ip-11-1-23-23:~# sudo systemctl restart mysql
root@ip-11-1-23-23:~# sudo systemctl status mysql
● mysql.service - MySQL Community Server
Loaded: loaded (/lib/systemd/system/mysql.service; enabled; vendor preset: enabled)
Active: active (running) since Fri 2025-10-17 13:48:46 UTC; 7s ago
Docs: man:mysqld(8)
http://dev.mysql.com/doc/refman/en/using-systemd.html
Process: 95024 ExecStartPre=/usr/share/mysql-8.4/mysql-systemd-start pre (code=exited, status=0/SUCCESS)
Main PID: 95063 (mysqld)
Status: "Server is operational"
Tasks: 36 (limit: 9387)
Memory: 444.4M
CPU: 1.905s
CGroup: /system.slice/mysql.service
└─95063 /usr/sbin/mysqld
Oct 17 13:48:42 ip-10-1-94-82 systemd[1]: Starting MySQL Community Server...
Oct 17 13:48:46 ip-10-1-94-82 systemd[1]: Started MySQL Community Server.
Step 5 :
Don't try mysql_upgrade; it no longer exists (it was deprecated as of MySQL 8.0.16, and the server now upgrades the data dictionary and system tables itself). Next, let's complete mysql_secure_installation and set a proper password for the root user:
mysql> ALTER USER 'root'@'localhost' IDENTIFIED WITH caching_sha2_password BY 'ewewewewewe';
Query OK, 0 rows affected (0.01 sec)
mysql> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)
Step 6 :
Let's verify in the log file whether the MySQL upgrade succeeded. The highlighted log lines confirm the upgrade completed successfully:
sudo tail -n 50 /var/log/mysql/error.log
2025-10-17T13:47:41.670888Z 0 [System] [MY-015015] [Server] MySQL Server - start.
2025-10-17T13:47:42.054060Z 0 [System] [MY-010116] [Server] /usr/sbin/mysqld (mysqld 8.4.6) starting as process 94072
2025-10-17T13:47:42.129906Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started.
2025-10-17T13:47:49.394972Z 1 [System] [MY-013577] [InnoDB] InnoDB initialization has ended.
2025-10-17T13:47:49.429761Z 1 [System] [MY-011090] [Server] Data dictionary upgrading from version '80023' to '80300'.
2025-10-17T13:47:53.024004Z 1 [System] [MY-013413] [Server] Data dictionary upgrade from version '80023' to '80300' completed.
2025-10-17T13:47:57.289846Z 4 [System] [MY-013381] [Server] Server upgrade from '80043' to '80406' started.
2025-10-17T13:48:07.600863Z 4 [System] [MY-013381] [Server] Server upgrade from '80043' to '80406' completed.
2025-10-17T13:48:08.000645Z 0 [Warning] [MY-010068] [Server] CA certificate ca.pem is self signed.
2025-10-17T13:48:08.000744Z 0 [System] [MY-013602] [Server] Channel mysql_main configured to support TLS. Encrypted connections are now supported for this channel.
2025-10-17T13:48:08.081364Z 0 [System] [MY-011323] [Server] X Plugin ready for connections. Bind-address: '::' port: 33060, socket: /var/run/mysqld/mysqlx.sock
2025-10-17T13:48:08.082336Z 0 [System] [MY-010931] [Server] /usr/sbin/mysqld: ready for connections. Version: '8.4.6' socket: '/var/run/mysqld/mysqld.sock' port: 3306 MySQL Community Server - GPL.
2025-10-17T13:48:39.677032Z 0 [System] [MY-013172] [Server] Received SHUTDOWN from user <via user signal>. Shutting down mysqld (Version: 8.4.6).
2025-10-17T13:48:41.686113Z 0 [Warning] [MY-010909] [Server] /usr/sbin/mysqld: Forcing close of thread 10 user: 'Ruban'.
2025-10-17T13:48:41.689689Z 0 [Warning] [MY-010909] [Server] /usr/sbin/mysqld: Forcing close of thread 11 user: 'Ruban'.
2025-10-17T13:48:42.651459Z 0 [System] [MY-010910] [Server] /usr/sbin/mysqld: Shutdown complete (mysqld 8.4.6) MySQL Community Server - GPL.
2025-10-17T13:48:42.651514Z 0 [System] [MY-015016] [Server] MySQL Server - end.
2025-10-17T13:48:43.120978Z 0 [System] [MY-015015] [Server] MySQL Server - start.
2025-10-17T13:48:43.492756Z 0 [System] [MY-010116] [Server] /usr/sbin/mysqld (mysqld 8.4.6) starting as process 95063
2025-10-17T13:48:43.529553Z 1 [System] [MY-013576] [InnoDB] InnoDB initialization has started.
2025-10-17T13:48:45.123068Z 1 [System] [MY-013577] [InnoDB] InnoDB initialization has ended.
2025-10-17T13:48:46.095152Z 0 [Warning] [MY-010068] [Server] CA certificate ca.pem is self signed.
2025-10-17T13:48:46.095299Z 0 [System] [MY-013602] [Server] Channel mysql_main configured to support TLS. Encrypted connections are now supported for this channel.
2025-10-17T13:48:46.181691Z 0 [System] [MY-011323] [Server] X Plugin ready for connections. Bind-address: '::' port: 33060, socket: /var/run/mysqld/mysqlx.sock
2025-10-17T13:48:46.182507Z 0 [System] [MY-010931] [Server] /usr/sbin/mysqld: ready for connections. Version: '8.4.6' socket: '/var/run/mysqld/mysqld.sock' port: 3306 MySQL Community Server - GPL.
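As a final client-side sanity check, in addition to the log review above, a quick query confirms the server version:

SELECT VERSION();  -- should return 8.4.6, matching the error log above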
I had a conversation with a friend; he had been facing this issue for a long time and was unable to solve it. I thought: why can't we replicate it in our test environment and fix it quickly? After a long time, I finally had some nice troubleshooting hours.
Let's deep dive into the troubleshooting. Before getting into it, I requested a few sample outputs of the problem.
First, I asked him to log in with the grant tables skipped and to repair the system tables:
root@ip-100-23-45-122:sudo mysqld_safe --skip-grant-tables --skip-networking --skip-plugin-load &
root@ip-100-23-45-122:/var/lib/mysql/mysql# mysqlcheck -u root -p --repair --databases mysql
Enter password:
mysql.columns_priv                                 OK
mysql.db                                           OK
mysql.engine_cost
Error    : Table 'mysql.engine_cost' doesn't exist
status   : Operation failed
mysql.event                                        OK
mysql.func                                         OK
mysql.general_log                                  OK
mysql.gtid_executed
Error    : Table 'mysql.gtid_executed' doesn't exist
status   : Operation failed
mysql.help_category
Error    : Table 'mysql.help_category' doesn't exist
status   : Operation failed
mysql.help_keyword
Error    : Table 'mysql.help_keyword' doesn't exist
status   : Operation failed
mysql.help_relation
Error    : Table 'mysql.help_relation' doesn't exist
status   : Operation failed
mysql.help_topic
Error    : Table 'mysql.help_topic' doesn't exist
status   : Operation failed
mysql.innodb_index_stats
Error    : Table 'mysql.innodb_index_stats' doesn't exist
status   : Operation failed
mysql.innodb_table_stats
Error    : Table 'mysql.innodb_table_stats' doesn't exist
status   : Operation failed
mysql.ndb_binlog_index                             OK
mysql.plugin
MySQL ERROR 1146 (42S02): Table 'datablogs.tbl_followup' doesn't exist
So the issue was not corruption of the system tables; it was something different.
Troubleshooting Step 2 :
I went through many links and references and tried the below things as well:
mysql> create database datablogs;
Query OK, 1 row affected (0.01 sec)

mysql> use datablogs;
Database changed

mysql> CREATE TABLE tbl_followup (
         id int(11) NOT NULL AUTO_INCREMENT,
         table_header text,
         action varchar(100) DEFAULT NULL,
         action_button_text varchar(100) DEFAULT NULL,
         parent_template text,
         created_at timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP,
         modified_at timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
         PRIMARY KEY (id)
       ) ENGINE=InnoDB DEFAULT CHARSET=latin1;
Query OK, 0 rows affected (0.03 sec)

mysql> ALTER TABLE datablogs.tbl_followup DISCARD TABLESPACE;
Query OK, 0 rows affected (0.01 sec)
I copied only the tbl_followup.ibd file from the corrupted server to the new server, then imported the tablespace:
mysql> ALTER TABLE datablogs.tbl_followup IMPORT TABLESPACE;
Query OK, 0 rows affected, 1 warning (0.04 sec)

mysql> select * from datablogs.tbl_followup;
Empty set (0.01 sec)
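For what it's worth, the single warning raised by IMPORT TABLESPACE here is usually InnoDB noting that no .cfg metadata file was found next to the copied .ibd, so it imports without schema verification; that is expected when only the .ibd is copied, and can be confirmed with:

SHOW WARNINGS;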
So overall, reading about it sounds very simple, but this was some of the hardest troubleshooting I have ever done!!!
Anyway, instead of trying multiple solutions, follow my steps to resolve the issue quickly!!!
As we grow bigger in business, our customer and transaction data seamlessly grow as well, and in the meantime performance needs to be considered too.
For such big tables, indexes alone will not help us achieve good performance at peak times. Alternatively, we have partitioning, which splits a table's data into multiple pieces and is available in all relational database environments.
Likewise, we are going to apply range partitioning to a sample table in a PostgreSQL database. In PostgreSQL, three partitioning methods are available: range, list, and hash.
Below are the important concerns in PostgreSQL partitioning:
Based on the above considerations, we have transformed a regular table into a partitioned one for your reference.
Anyone can use this example to perform partitioning in AWS PostgreSQL RDS easily.
Click GitHub Link for Code : AWS-PostgreSQL-RDS-Table-Partition
Step 1 : Create base datablogspaycheck table and insert some sample records
DROP TABLE IF EXISTS datablogspaycheck CASCADE;
DROP SEQUENCE IF EXISTS public.paycheck_id_seq;
CREATE SEQUENCE public.paycheck_id_seq
START WITH 1
INCREMENT BY 1
NO MINVALUE
NO MAXVALUE
CACHE 1;
create table datablogspaycheck
(
payment_id int NOT NULL DEFAULT nextval('public.paycheck_id_seq'::regclass),
created timestamptz NOT NULL,
updated timestamptz NOT NULL DEFAULT now(),
amount float,
status varchar DEFAULT 'new'
);
CREATE INDEX idx_paycheck ON datablogspaycheck (created);
INSERT INTO datablogspaycheck (created)
VALUES (generate_series(timestamp '2023-01-01', now(), interval '5 minutes'));
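Before defining the partition boundaries, it can help to confirm the span of data they must cover (a quick check):

-- Confirm the date span the partition ranges must cover
SELECT min(created) AS oldest, max(created) AS newest, count(*) AS total_rows
FROM datablogspaycheck;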
Step 2 : Rename base table with new name
ALTER TABLE datablogspaycheck RENAME TO datablogspaycheck_basetable;
Step 3 : Create Partitioned table
create table datablogspaycheck
(
payment_id int NOT NULL DEFAULT nextval('public.paycheck_id_seq'::regclass),
created timestamptz NOT NULL,
updated timestamptz NOT NULL DEFAULT now(),
amount float,
status varchar DEFAULT 'new'
)PARTITION BY RANGE (created);
Step 4 : Create a separate partition for each date range
CREATE TABLE datablogspaycheck_202303 PARTITION OF datablogspaycheck
FOR VALUES FROM ('2023-01-01') TO ('2023-03-01');
CREATE TABLE datablogspaycheck_20230304 PARTITION OF datablogspaycheck
FOR VALUES FROM ('2023-03-01') TO ('2023-04-01');
CREATE TABLE datablogspaycheck_202304 PARTITION OF datablogspaycheck
FOR VALUES FROM ('2023-04-01') TO ('2023-05-01');
CREATE TABLE datablogspaycheck_202311 PARTITION OF datablogspaycheck
FOR VALUES FROM ('2023-05-01') TO ('2023-11-01');
CREATE TABLE datablogspaycheck_2024 PARTITION OF datablogspaycheck
FOR VALUES FROM ('2023-11-01') TO ('2024-01-01');
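Optionally (PostgreSQL 11 and later), a DEFAULT partition can be added to catch rows that fall outside every defined range, so inserts don't fail before a new partition is created; a minimal sketch:

-- Catch-all partition for rows outside all defined ranges
CREATE TABLE datablogspaycheck_default PARTITION OF datablogspaycheck DEFAULT;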
Step 5 : Migrate all the records
insert into datablogspaycheck (payment_id, created, updated, amount, status)
select payment_id, created, updated, amount, status
from datablogspaycheck_basetable;
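A quick sanity check that nothing was lost in the copy: the row counts of both tables should match.

-- Row counts of the base table and the partitioned table should be equal
SELECT (SELECT count(*) FROM datablogspaycheck_basetable) AS base_rows,
       (SELECT count(*) FROM datablogspaycheck)           AS partitioned_rows;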
Step 6 : Validate each partition
select * from datablogspaycheck_202303 order by 2 desc;
select * from datablogspaycheck_20230304 order by 2 desc;
select * from datablogspaycheck_202311 order by 2 desc;
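Partition pruning can also be confirmed with EXPLAIN: a range predicate on created should touch only the matching partition.

EXPLAIN SELECT *
FROM datablogspaycheck
WHERE created >= '2023-04-10' AND created < '2023-04-11';
-- The plan should scan only datablogspaycheck_202304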
It's done; we easily migrated the normal table's data into the partitioned table.
Thanks for Reading !!!
Oracle Audit Log :
The Oracle Audit Log refers to the feature in Oracle Database that records and stores information about various database activities and events. It provides a mechanism to track and monitor user activities, system events, and changes made to the database.
The Oracle Audit Log provides an essential tool for security, compliance, and troubleshooting purposes.
Types of Auditing in Amazon RDS for Oracle :
We are going to see how to enable standard auditing in Oracle RDS.
How to enable Audit Log in Oracle RDS?
Make sure you are using a custom parameter group for your Oracle RDS instance, and set the following parameter:

Audit_Trail = DB, EXTENDED
AUDIT DELETE ANY TABLE;
AUDIT DELETE TABLE BY USER_01 BY ACCESS;
AUDIT DELETE TABLE BY USER_02 BY ACCESS;
AUDIT ALTER, GRANT, INSERT, UPDATE, DELETE ON DEFAULT;
AUDIT READ ON DIRECTORY datapump_dir;
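To confirm which statement-level audit options are now active, the audit option views can be queried (a quick check; object-level options appear in DBA_OBJ_AUDIT_OPTS):

-- Active statement audit options, per user and system-wide
SELECT user_name, audit_option, success, failure
FROM dba_stmt_audit_opts
ORDER BY audit_option;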
It's all done; we have enabled the required logs to be captured for security purposes.
How do we monitor the Audit Logs?
We can just run the below command to get the captured audit logs in Oracle RDS:
SELECT * FROM DBA_AUDIT_TRAIL ORDER BY 1 DESC;
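For day-to-day monitoring it is usually more practical to filter; for example, a sketch reviewing the last 24 hours of DELETE activity:

-- DELETE statements captured in the last 24 hours
SELECT username, obj_name, action_name, timestamp, returncode
FROM dba_audit_trail
WHERE action_name LIKE 'DELETE%'
  AND timestamp > SYSDATE - 1
ORDER BY timestamp DESC;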
This covers just the normal scenario and explains the process. We can still use a separate audit tablespace, and many further options are available in Oracle. Let's see those in other blogs.
Happy Auditing !!!
We have a setup of one primary with multiple secondaries.
Even if we have configured a highly available setup and backups, native backups are special: we take them and keep them somewhere in the cloud.
Using the below script, we can easily schedule backups in Linux environments:
https://github.com/selvackp/MongoNativeBackup-/blob/main/mongo_dump.sh
#!/bin/bash
export PATH=/bin:/usr/bin:/usr/local/bin

# Declare today's date
TODAY=`date +"%d%b%Y"`

# Declare the variables required for the mongodump command
DB_BACKUP_PATH='/mnt/mongobackup'
MONGO_HOST='localhost'
MONGO_PORT='27017'
MONGO_USER='xxxxxxxxxxx'
MONGO_PASSWD='xxxxxxxxxxxxx'
DATABASE_NAMES='ALL'

# Remove old backup files
find ${DB_BACKUP_PATH} -name "*.zip" -type f -mtime +3 -delete
find ${DB_BACKUP_PATH} -type d -mtime +3 -exec rm -rf {} \;

# Create the directory for this backup
mkdir -p ${DB_BACKUP_PATH}/${TODAY}
cd ${DB_BACKUP_PATH}/${TODAY}/

if [ ${DATABASE_NAMES} = "ALL" ]; then
    echo "You have chosen to back up all databases"
    mongodump --uri="mongodb://${MONGO_USER}:${MONGO_PASSWD}@${MONGO_HOST}:${MONGO_PORT}"
else
    echo "Running backup for selected databases"
    for DB_NAME in ${DATABASE_NAMES}
    do
        mongodump --uri="mongodb://${MONGO_USER}:${MONGO_PASSWD}@${MONGO_HOST}:${MONGO_PORT}/${DB_NAME}"
    done
fi

# Compress the backup
cd ${DB_BACKUP_PATH}/${TODAY}
zip -r ${DB_BACKUP_PATH}_${TODAY}.zip ${DB_BACKUP_PATH}/${TODAY}
cd ${DB_BACKUP_PATH}/${TODAY}

# Copy the compressed file into an Azure container using a Shared Access Token
azcopy cp ${DB_BACKUP_PATH}_${TODAY}.zip "https://xxxxxxxxxxx.blob.core.windows.net/xxxxxxxxxxxx?sp=w&st=xxxxxTxxxxxxxZ&se=xxxxxxZ&spr=https&sv=2021-06-08&sr=c&sig=csdfcdsxxxxxxxxxxxxxxx" --recursive=true

# Send mail with the backup logs
if [ $? -ne 0 ]
then
    echo "Mongo Native backup Failed in $(hostname) $(date). Please contact administrator." | mail -r mail@datablogs.com -s "Mongo Native backup Failed $(hostname)" dbsupport@datablogs.com < /mongodata/cronscripts/mongo_backup_log.log
else
    echo "Mongo Native backup completed in $(hostname)." | mail -r mail@datablogs.com -s "Mongo Native backup completed in $(hostname)" dbsupport@datablogs.com < /mongodata/cronscripts/mongo_backup_log.log
fi
It's also easy to recover a MongoDB backup using Percona Backup for MongoDB.
It took quite a long time to settle our tuning approach for Azure Data Factory with Azure Synapse, because we needed to run the system for at least two months to validate that the approach was smooth.
Yes, it is all running well, with the expected performance on the ETL loads and processes.
Here are the major things we need to take care of in an Azure Synapse dedicated SQL pool.
Dedicated SQL Pool Scaling :
We have decided on the below metrics to optimize Azure Synapse.
Before starting our critical process, we can automate the upscale process with ADF pipelines themselves; many blogs are available on configuring that.
The best method is to configure the authentication with a service principal.
Dedicated SQL Pool Workload Management :
We have decided on the below metrics to prepare the workload management.
Based on the classifications, we have to split the workload groups for the above queries.
Step 1 :
We need to create a login and a user for workload management in the dedicated SQL pool:
-- Run in the master database
CREATE LOGIN [Analyticsprocess] WITH PASSWORD = 'xxxxxxxxx';

-- Run in the sql-datablogs-dw database
CREATE USER [Analyticsprocess] FOR LOGIN [Analyticsprocess];
GRANT CONTROL ON DATABASE::[sql-datablogs-dw] TO [Analyticsprocess];
Step 2 :
Consider that you have upscaled the instance to DW400c; below is the resource allocation for the DW400c instance based on the concurrency requirements.
In Workload management --> New workload group --> click ELT.
Since the analytics process user is used for highly intensive queries, we have to reserve at least the minimum resources for the workload group.
Click Classifiers --> Add classifier --> name it ELT --> specify the login as the Member; the Label is important to mention.
Once you click Add, we will get the below concurrency ranges based on DW400c.
By default, there is a system-level workload group to handle the queries, but it is not effective; we have to force our own workload group.
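For reference, the same workload group and classifier can also be created in T-SQL instead of the portal; a minimal sketch, with the percentage values purely illustrative:

-- Reserve resources for the ELT workload group (values are illustrative)
CREATE WORKLOAD GROUP ELT WITH (
    MIN_PERCENTAGE_RESOURCE = 25,
    CAP_PERCENTAGE_RESOURCE = 50,
    REQUEST_MIN_RESOURCE_GRANT_PERCENT = 5
);

-- Route the analytics login's labeled queries into the group
CREATE WORKLOAD CLASSIFIER ELTClassifier WITH (
    WORKLOAD_GROUP = 'ELT',
    MEMBERNAME = 'Analyticsprocess',
    WLM_LABEL = 'highintensiveprocess'
);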
Step 3 :
This is very important for utilizing the workload group properly: we need to specify the Label on the heavy processing queries so that the workload group is actually used.
CREATE TABLE rpt_datablogs_finalreport.smgreport
WITH (HEAP, DISTRIBUTION = REPLICATE)
AS
SELECT * FROM rpt_datablogs_finalreport.vw_smgreport
OPTION (LABEL = 'highintensiveprocess');
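Whether a labeled query actually landed in the intended workload group can be verified from the requests DMV (a quick check):

-- Confirm the label was classified into the ELT workload group
SELECT request_id, [label], group_name, classifier_name, status
FROM sys.dm_pdw_exec_requests
WHERE [label] = 'highintensiveprocess';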
An update on a previous post:
As mentioned in the blog below, in past years we couldn't restore a database backup into the same SQL Server RDS instance.
https://medium.com/@yogitha.o/backup-and-restore-in-same-sql-server-rds-3e667622f7ff
But now you can restore without any issues on recent releases of SQL Server RDS.
It's easy to achieve:
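The restore still goes through the RDS native backup/restore procedures; a minimal sketch (the bucket ARN and database names here are placeholders):

-- Restore a .bak from S3 into the same RDS instance under a new name
exec msdb.dbo.rds_restore_database
     @restore_db_name = 'test_restored',
     @s3_arn_to_restore_from = 'arn:aws:s3:::your-bucket/test.bak';

-- Track the progress of the restore task
exec msdb.dbo.rds_task_status @db_name = 'test_restored';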
A three-hour long-running pipeline was reduced to 35 minutes once we applied the right kind of top-to-bottom optimization, just like our body!!!
Yes, sometimes if you do proper core and cross workouts, you can run 10km in 55 minutes!!!
It happened to me, but not now 😆😆😆
Oh oh, we are drifting away from the data ground!!! Let's move into Azure Synapse tuning.
In the starting phase, every piece of code and pipeline process really troubled us; whatever tuning we did, it kept running in the same time.
We waited a long time for each process to complete, as it was running for 3 hours!!!
But Reecha's blog helped with some basic checks to start with and gave some hope of fine-tuning it.
Remember, the below piece of monitoring code always helps us dig deeper.
--Check Long Running Query in Azure Synapse
SELECT *
FROM sys.dm_pdw_exec_requests
WHERE status not in ('Completed','Failed','Cancelled')
AND session_id <> session_id()
ORDER BY submit_time DESC;

--Find the distributed query plan steps for the long running query
SELECT *
FROM sys.dm_pdw_request_steps
WHERE request_id = 'xxxxxx'
ORDER BY step_index;

*** Replace your request_id from the first query's result

Sample Running Query :
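Going one level deeper, the per-distribution activity of a suspicious plan step can also be inspected (the request id and step index below are placeholders):

-- Per-distribution activity for one plan step of a long running request
SELECT *
FROM sys.dm_pdw_sql_requests
WHERE request_id = 'xxxxxx' AND step_index = 2;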
Ooouch!!! I have deleted critical data in the ledger table!!!
When we run a DELETE or DROP script on any database without double-checking, it's a really big problem for the end users!!! Also, you won't sleep 😁😁😁
If you have a good DBA, you have saved millions of dollars' worth of your data, and also your valuable customers.
Oh great 😀😀😀 How can we achieve that?
In SQL Server, multiple database backup methods are available to tackle that.
If a SQL Server database is in the full recovery model, we can take transaction log backups every hour/minute/second to local or remote drives.
So, yes, I am taking log backups 😕😕😕 How do we restore to a particular time?
Using the transaction logs, we can bring the data back to whatever point you want, depending on your log backup strategy.
To achieve that, here is an automated point-in-time restore script for your valuable environment:
Scripts are available in https://github.com/selvackp/SQLServerPointInTimeRestore.git
So finally, you have saved your millions of dollars' worth of data in 10 minutes!!!
USE master
GO

-- Restore source/target names, file layout, and the point in time to stop at
DECLARE @DatabaseOLDName sysname = 'test',
        @DatabaseNewName sysname = 'test1',
        @PrimaryDataFileName sysname = 'test',
        @SecDataFileName sysname = NULL,
        @DatabaseLogFileName sysname = 'test_log',
        @PrimaryDataFileCreatePath sysname = 'D:\MISC\Bkp\testdata.mdf',
        @SecDataFileCreatePath sysname = NULL,
        @SecDataFileCreatePath1 sysname = NULL,
        @DatabaseLogFileCreatePath sysname = 'D:\MISC\Bkp\test_log.ldf',
        @PITRDateTime datetime = '2022-08-11T20:44:11';

DECLARE @command nvarchar(MAX),
        @OldPhysicalPathName nvarchar(MAX),
        @FullBackupDateTime datetime,
        @DiffBackupDateTime datetime,
        @LogBackupDateTime datetime,
        @message nvarchar(MAX);

-- Build the restore command for the full backup, moving files to the new paths
SET @command = N'RESTORE DATABASE @DatabaseNewName FROM DISK = @OldPhysicalPathName WITH FILE = 1, NORECOVERY, NOUNLOAD, REPLACE, STATS = 5, STOPAT = @PITRDateTime, MOVE N'''
             + @PrimaryDataFileName + N''' TO N''' + @PrimaryDataFileCreatePath + N''','
             + COALESCE(' MOVE N''' + @SecDataFileName + ''' TO N''' + @SecDataFileCreatePath + ''',', '')
             + N' MOVE N''' + @DatabaseLogFileName + N''' TO N''' + @DatabaseLogFileCreatePath + N''';';

-- Find the most recent full backup taken before the point in time
SELECT TOP (1)
       @OldPhysicalPathName = bmf.physical_device_name,
       @FullBackupDateTime  = bs.backup_start_date
FROM msdb.dbo.backupset AS bs
INNER JOIN msdb.dbo.backupmediafamily AS bmf
        ON bmf.media_set_id = bs.media_set_id
WHERE bs.database_name = @DatabaseOLDName
  AND bs.type = 'D'
  AND bs.backup_start_date < @PITRDateTime
ORDER BY bs.backup_start_date DESC;

SET @message = N'Starting restore of full backup file ' + @OldPhysicalPathName
             + N', taken ' + CONVERT(nvarchar(30), @FullBackupDateTime, 120);
RAISERROR(@message, 0, 1) WITH NOWAIT;

EXEC sys.sp_executesql @command,
     N'@DatabaseNewName sysname, @OldPhysicalPathName nvarchar(260), @PITRDateTime datetime',
     @DatabaseNewName, @OldPhysicalPathName, @PITRDateTime;

-- Restore the latest differential backup (if any) between the full backup and the point in time
SET @command = N'RESTORE DATABASE @DatabaseNewName FROM DISK = @OldPhysicalPathName WITH FILE = 1, NORECOVERY, NOUNLOAD, REPLACE, STATS = 5, STOPAT = @PITRDateTime;';

SELECT TOP (1)
       @OldPhysicalPathName = bmf.physical_device_name,
       @DiffBackupDateTime  = bs.backup_start_date
FROM msdb.dbo.backupset AS bs
INNER JOIN msdb.dbo.backupmediafamily AS bmf
        ON bmf.media_set_id = bs.media_set_id
WHERE bs.database_name = @DatabaseOLDName
  AND bs.type = 'I'
  AND bs.backup_start_date >= @FullBackupDateTime
  AND bs.backup_start_date < @PITRDateTime
ORDER BY bs.backup_start_date DESC;

IF @@ROWCOUNT > 0
BEGIN;
    SET @message = N'Starting restore of differential backup file ' + @OldPhysicalPathName
                 + N', taken ' + CONVERT(nvarchar(30), @DiffBackupDateTime, 120);
    RAISERROR(@message, 0, 1) WITH NOWAIT;
    EXEC sys.sp_executesql @command,
         N'@DatabaseNewName sysname, @OldPhysicalPathName nvarchar(260), @PITRDateTime datetime',
         @DatabaseNewName, @OldPhysicalPathName, @PITRDateTime;
END;

-- Roll forward the log backups up to the requested point in time
SET @command = N'RESTORE LOG @DatabaseNewName FROM DISK = @OldPhysicalPathName WITH FILE = 1, NORECOVERY, NOUNLOAD, REPLACE, STATS = 5, STOPAT = @PITRDateTime;';

DECLARE c CURSOR LOCAL FAST_FORWARD READ_ONLY TYPE_WARNING FOR
    SELECT bmf.physical_device_name, bs.backup_start_date
    FROM msdb.dbo.backupset AS bs
    INNER JOIN msdb.dbo.backupmediafamily AS bmf
            ON bmf.media_set_id = bs.media_set_id
    WHERE bs.database_name = @DatabaseOLDName
      AND bs.type = 'L'
      AND bs.backup_start_date >= COALESCE(@DiffBackupDateTime, @FullBackupDateTime)
    ORDER BY bs.backup_start_date ASC;

OPEN c;
FETCH NEXT FROM c INTO @OldPhysicalPathName, @LogBackupDateTime;
WHILE @@FETCH_STATUS = 0
BEGIN;
    SET @message = N'Starting restore of log backup file ' + @OldPhysicalPathName
                 + N', taken ' + CONVERT(nvarchar(30), @LogBackupDateTime, 120);
    RAISERROR(@message, 0, 1) WITH NOWAIT;
    EXEC sys.sp_executesql @command,
         N'@DatabaseNewName sysname, @OldPhysicalPathName nvarchar(260), @PITRDateTime datetime',
         @DatabaseNewName, @OldPhysicalPathName, @PITRDateTime;
    -- Stop once we have restored past the requested point in time
    IF @LogBackupDateTime >= @PITRDateTime BREAK;
    FETCH NEXT FROM c INTO @OldPhysicalPathName, @LogBackupDateTime;
END;
CLOSE c;
DEALLOCATE c;

-- Bring the restored database online
SET @command = N'RESTORE DATABASE @DatabaseNewName WITH RECOVERY;';
RAISERROR('Starting recovery', 0, 1) WITH NOWAIT;
EXEC sys.sp_executesql @command,
     N'@DatabaseNewName sysname, @OldPhysicalPathName nvarchar(260), @PITRDateTime datetime',
     @DatabaseNewName, @OldPhysicalPathName, @PITRDateTime;
GO
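Once the script finishes, a quick check that the restored copy is online before handing it back to the users (the database name matches @DatabaseNewName above):

-- Confirm the point-in-time copy is online
SELECT name, state_desc FROM sys.databases WHERE name = 'test1';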