datablogs: Zero Downtime

Tuesday, December 6, 2022

Migrate and Sync RDS PostgreSQL to Amazon Redshift Serverless using AWS Glue

Customers always prefer low-cost solutions to run their business. To support their business and requirements, we also need to provide efficient solutions.

In some cases cloud vendors provide good solutions for analytics workloads, but the cost is very high; most of the time we would rather not recommend them, yet sometimes we have to.

Amazon Redshift is one such solution in AWS: it costs a lot, but it is also blazingly fast.

We are talking about Amazon Redshift here because AWS recently launched Amazon Redshift Serverless in a few regions.

Whenever something new arrives, we need to evaluate it and deliver the best approach to customers before they discover the features themselves. So,

What is Amazon Redshift Serverless?

Amazon Redshift Serverless automatically provisions data warehouse capacity and intelligently scales the underlying resources. Amazon Redshift Serverless adjusts capacity in seconds to deliver consistently high performance and simplified operations for even the most demanding and volatile workloads.

With Amazon Redshift Serverless, you can benefit from the following features:

  • Access and analyze data without the need to set up, tune, and manage Amazon Redshift provisioned clusters
  • Use the superior Amazon Redshift SQL capabilities, industry-leading performance, and data-lake integration to seamlessly query across a data warehouse, a data lake, and operational data sources
  • Deliver consistently high performance and simplified operations for the most demanding and volatile workloads with intelligent and automatic scaling
  • Use workgroups and namespaces to organize compute resources and data with granular cost controls
  • Pay only when the data warehouse is in use

So, overall, no human intervention is needed with Redshift Serverless.

Everything is fine, but how do we migrate and sync Amazon RDS / EC2 PostgreSQL / Aurora PostgreSQL into Redshift Serverless?

What are the options available to migrate and sync?

  • DMS - Redshift Serverless is still not available as a DMS target for migrating the data
  • Export/Import - Yes, we can perform it, but how do we handle a zero-downtime migration? Syncing real-time data is not possible
  • AWS Glue - A good option; we can migrate and sync real-time data from RDS to Redshift Serverless

Let's start migrating and syncing sample data into Amazon Redshift Serverless,

Environment setup walkthrough:

  1. RDS PostgreSQL
  2. AWS Glue
  3. Amazon Redshift Serverless 
  4. VPC S3 Endpoint
  5. IAM Role

RDS PostgreSQL:


Amazon Redshift Serverless:


VPC S3 Endpoint:
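The console steps are simple; as a hedged CLI sketch (the VPC and route-table IDs below are placeholders), the gateway endpoint can be created like this. Glue needs it because data is staged in S3 while loading Redshift:

aws ec2 create-vpc-endpoint \
    --vpc-id vpc-0123456789abcdef0 \
    --vpc-endpoint-type Gateway \
    --service-name com.amazonaws.us-east-1.s3 \
    --route-table-ids rtb-0123456789abcdef0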


IAM Role:
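For reference, the role Glue assumes only needs a trust relationship with the Glue service, plus the AWSGlueServiceRole managed policy and access to the S3 staging bucket; a minimal trust policy sketch:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "glue.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}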


Once the environment is ready, we can start adding connections and jobs in AWS Glue.

How to add connections in AWS Glue:

In the AWS Glue console --> click Connections --> Create connection
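For the connection properties, the JDBC URLs follow the usual formats (the host names below are placeholders; the Redshift Serverless endpoint is shown on the workgroup details page):

jdbc:postgresql://<rds-endpoint>:5432/<database>
jdbc:redshift://<workgroup>.<account-id>.<region>.redshift-serverless.amazonaws.com:5439/dev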





Create the source and target databases,



For testing, a sample schema and data were inserted into RDS PostgreSQL before creating the crawler.

Let's create separate crawlers for source and target to update the Data Catalog.

The data source below is mapped to RDS PostgreSQL; an assumed CLI equivalent is sketched right after.
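As a sketch (the connection, role, and catalog database names are placeholders for whatever you created above):

aws glue create-crawler \
    --name rds-postgres-source-crawler \
    --role AWSGlueServiceRole-datablogs \
    --database-name sourcedb \
    --targets '{"JdbcTargets":[{"ConnectionName":"rds-postgres-connection","Path":"<database>/public/%"}]}'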


Also, a role updated with the required policies needs to be attached,


Choose the appropriate database for each crawler; below is the source,


Below is the target,


Once that's completed, and since we have already deployed the sample schema scripts on both sides to transfer the data, let's run the crawlers and check.


So both source and target tables are updated.


Let's create the Glue job in the AWS console,
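The visual editor generates a PySpark script behind the scenes; below is a minimal sketch of what it boils down to, assuming the catalog database and table names created by the crawlers above (all names here are illustrative):

import sys
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glueContext = GlueContext(SparkContext())
job = Job(glueContext)
job.init(args["JOB_NAME"], args)

# Read the source table registered by the RDS PostgreSQL crawler
source = glueContext.create_dynamic_frame.from_catalog(
    database="sourcedb",                      # assumed catalog database
    table_name="datablogs_public_employee")   # assumed crawler table name

# Write into the Redshift Serverless table registered by the target crawler;
# Glue stages rows in S3 first, which is why the VPC S3 endpoint is needed
glueContext.write_dynamic_frame.from_catalog(
    frame=source,
    database="targetdb",                      # assumed catalog database
    table_name="dev_public_employee",         # assumed crawler table name
    redshift_tmp_dir="s3://datablogs-glue-temp/redshift/")  # assumed bucket

job.commit()

Enabling job bookmarks on the job keeps each scheduled run incremental instead of reloading the whole table every time.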




After all that, let's schedule the job every 5 minutes to sync the data.

Whatever data is inserted into RDS PostgreSQL will be synced into Redshift Serverless every 5 minutes.
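The schedule can be attached from the job's Schedules tab, or with a scheduled trigger from the CLI; a hedged sketch (trigger and job names are placeholders):

aws glue create-trigger \
    --name sync-rds-to-redshift-5min \
    --type SCHEDULED \
    --schedule "cron(0/5 * * * ? *)" \
    --actions JobName=rds-to-redshift-sync \
    --start-on-creation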


So we can migrate quickly and save on cost.

Any troubles or issues, please contact me immediately!!!

Wednesday, March 30, 2022

MongoDB 4.4.13: Here to SRV you with easier replica set connections

We love MongoDB for its extraordinary features from a business perspective.

Let's get to our blog discussion. Usually only PaaS environments give us features like DNS endpoints, so a database client can easily connect to the primary or a secondary past any single point of failure.

MongoDB Atlas provides all of these features, but small-scale customers still run MongoDB on virtual machines or EC2 instances. To handle failures of the primary we can use the DNS Seed List Connection Format in MongoDB. We will discuss in detail how to configure this in the AWS cloud.

What is a seed list?

A seed list is a list of hosts and ports published as DNS entries. Using DNS we can bring all the available MongoDB servers under one hood: when a client connects to the common DNS name, it also learns which replica set members are available in the seed list. A single SRV record identifies all the nodes associated with the cluster, like below:
root@ip-172-31-86-8:~# mongo "mongodb+srv://superuser:zU2iU9pF7mO7rZ4z@db.datamongo.com/?authSource=admin&readPreference=primary&ssl=false"
Percona Server for MongoDB shell version v4.4.13-13
connecting to: mongodb://db1.datamongo.com:27717,db3.datamongo.com:27717,db2.datamongo.com:27717/?authSource=admin&compressors=disabled&gssapiServiceName=mongodb&readPreference=primary&replicaSet=db-replication&ssl=false


Environment Setup:

For testing purposes, we launched 3 private-subnet servers plus 1 public-subnet server to use as a bastion, created one private hosted zone for DNS, installed Percona Server for MongoDB 4.4.13, and configured replication on it.

AWS EC2 Servers ,


Route 53 Hosted Zone ,


Creating A Records:

We launched private-subnet instances, so we need to create A records for the private IPs. If a public IPv4 DNS name is available, we can create CNAME records instead.

A record created for the db1 server,

Inside the datamongo.com hosted zone, just click Create Record



In the same way we need to create A records for the other two nodes; if you prefer the CLI over the console, a sketch follows.
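A hedged example for one node (the hosted zone ID is a placeholder; the IP is db1's private address):

aws route53 change-resource-record-sets \
    --hosted-zone-id Z0123456789EXAMPLE \
    --change-batch '{
        "Changes": [{
            "Action": "CREATE",
            "ResourceRecordSet": {
                "Name": "db1.datamongo.com",
                "Type": "A",
                "TTL": 300,
                "ResourceRecords": [{"Value": "172.31.85.180"}]
            }
        }]
    }'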


Verify the A records,
root@ip-172-31-95-215:~# dig db1.datamongo.com

; <<>> DiG 9.11.3-1ubuntu1.17-Ubuntu <<>> db1.datamongo.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 13639
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 65494
;; QUESTION SECTION:
;db1.datamongo.com.             IN      A

;; ANSWER SECTION:
db1.datamongo.com.      10      IN      A       172.31.85.180

;; Query time: 2 msec
;; SERVER: 127.0.0.53#53(127.0.0.53)
;; WHEN: Tue Mar 29 11:58:09 UTC 2022
;; MSG SIZE  rcvd: 62

root@ip-172-31-95-215:~# dig db2.datamongo.com

; <<>> DiG 9.11.3-1ubuntu1.17-Ubuntu <<>> db2.datamongo.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 9496
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 65494
;; QUESTION SECTION:
;db2.datamongo.com.             IN      A

;; ANSWER SECTION:
db2.datamongo.com.      300     IN      A       172.31.83.127

;; Query time: 3 msec
;; SERVER: 127.0.0.53#53(127.0.0.53)
;; WHEN: Tue Mar 29 12:06:28 UTC 2022
;; MSG SIZE  rcvd: 62

root@ip-172-31-95-215:~# dig db3.datamongo.com

; <<>> DiG 9.11.3-1ubuntu1.17-Ubuntu <<>> db3.datamongo.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 46401
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 65494
;; QUESTION SECTION:
;db3.datamongo.com.             IN      A

;; ANSWER SECTION:
db3.datamongo.com.      300     IN      A       172.31.86.8

;; Query time: 2 msec
;; SERVER: 127.0.0.53#53(127.0.0.53)
;; WHEN: Tue Mar 29 12:06:33 UTC 2022
;; MSG SIZE  rcvd: 62

root@ip-172-31-95-215:~#


Creating SRV and TXT Records:

Just like Atlas, once we have the A records for the MongoDB nodes, we can create the SRV record.

Again inside the datamongo.com hosted zone, just click Create Record.


Once it's created, click Create Record again and create the TXT record; the record values are summarized below.
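For reference, the records end up looking like this (the values match the nslookup output further below):

Record name: _mongodb._tcp.db.datamongo.com    Type: SRV
    0 0 27717 db1.datamongo.com
    0 0 27717 db2.datamongo.com
    0 0 27717 db3.datamongo.com

Record name: db.datamongo.com                  Type: TXT
    "authSource=admin&replicaSet=db-replication"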


Once all the records are created in the hosted zone, you can see the details on the same page.


Reading SRV and TXT Records:

We can use nslookup to verify the configured DNS seeding,
root@ip-172-31-95-215:~# nslookup
> set type=SRV
> _mongodb._tcp.db.datamongo.com
Server:         127.0.0.53
Address:        127.0.0.53#53

Non-authoritative answer:
_mongodb._tcp.db.datamongo.com  service = 0 0 27717 db2.datamongo.com.
_mongodb._tcp.db.datamongo.com  service = 0 0 27717 db3.datamongo.com.
_mongodb._tcp.db.datamongo.com  service = 0 0 27717 db1.datamongo.com.

Authoritative answers can be found from:
> set type=TXT
> db.datamongo.com
Server:         127.0.0.53
Address:        127.0.0.53#53

Non-authoritative answer:
db.datamongo.com        text = "authSource=admin&replicaSet=db-replication"

Authoritative answers can be found from:


Verify Connectivity:

That's all done; we can now verify connectivity with the DNS seed list connection format.

By default the driver connects with ssl=true, but we have configured MongoDB without SSL. If you need SSL, please refer to our earlier blog post and then configure DNS seeding with the help of this one.
root@ip-172-31-86-8:~# mongo "mongodb+srv://superuser:zU2iU9pF7mO7rZ4z@db.datamongo.com/?authSource=admin&readPreference=primary&ssl=false"
Percona Server for MongoDB shell version v4.4.13-13
connecting to: mongodb://db1.datamongo.com:27717,db3.datamongo.com:27717,db2.datamongo.com:27717/?authSource=admin&compressors=disabled&gssapiServiceName=mongodb&readPreference=primary&replicaSet=db-replication&ssl=false
Implicit session: session { "id" : UUID("ee74effc-92c7-4189-9e97-017afb4b4ad4") }
Percona Server for MongoDB server version: v4.4.13-13
---
The server generated these startup warnings when booting:
        2022-03-29T11:32:47.133+00:00: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/core/prodnotes-filesystem
---
db-replication:PRIMARY> rs.status().members.find(r=>r.state===1).name;
172.31.83.127:27717
db-replication:PRIMARY> rs.status().members.find(r=>r.state===1).stateStr;
PRIMARY
db-replication:PRIMARY> rs.status().members.find(r=>r.state===2).name;
172.31.85.180:27717
db-replication:PRIMARY> rs.status().members.find(r=>r.state===2).stateStr;
SECONDARY

Currently 172.31.83.127 is the primary server and 172.31.85.180 is a secondary; to test the connection we stopped the primary server (172.31.83.127) in the AWS console.



After stopping the primary server (172.31.83.127), MongoDB failed over to 172.31.85.180. We verified it without disconnecting the mongo shell.

root@ip-172-31-86-8:~# mongo "mongodb+srv://superuser:zU2iU9pF7mO7rZ4z@db.datamongo.com/?authSource=admin&readPreference=primary&ssl=false"
Percona Server for MongoDB shell version v4.4.13-13
connecting to: mongodb://db1.datamongo.com:27717,db3.datamongo.com:27717,db2.datamongo.com:27717/?authSource=admin&compressors=disabled&gssapiServiceName=mongodb&readPreference=primary&replicaSet=db-replication&ssl=false
Implicit session: session { "id" : UUID("ee74effc-92c7-4189-9e97-017afb4b4ad4") }
Percona Server for MongoDB server version: v4.4.13-13
---
The server generated these startup warnings when booting:
2022-03-29T11:32:47.133+00:00: Using the XFS filesystem is strongly recommended with the WiredTiger storage engine. See http://dochub.mongodb.org/core/prodnotes-filesystem
---
db-replication:PRIMARY> rs.status().members.find(r=>r.state===1).name;
172.31.83.127:27717
db-replication:PRIMARY> rs.status().members.find(r=>r.state===1).stateStr;
PRIMARY
db-replication:PRIMARY> rs.status().members.find(r=>r.state===2).name;
172.31.85.180:27717
db-replication:PRIMARY> rs.status().members.find(r=>r.state===2).stateStr;
SECONDARY
db-replication:PRIMARY> rs.status().members.find(r=>r.state===1).name;
172.31.85.180:27717
db-replication:PRIMARY> rs.status().members.find(r=>r.state===1).stateStr;
PRIMARY

It works as expected, so we have no worries if anything happens to the MongoDB primary node in cloud IaaS as well!!!

Please contact us with any queries or concerns; we are always happy to help!!!

Monday, January 31, 2022

Migrate Azure MySQL Database to Amazon RDS for MySQL Using MySQL Replication

It was a nightmare: we struggled a lot to reduce the cutover time for the Azure MySQL to AWS RDS for MySQL migration.


Tested methods,

  1. Backup and Restore - It took 6 hours to complete
  2. AWS DMS - Text data was truncated and timestamp column data was 6 hours behind the source

Nothing helped, due to bugs, unavailable features, etc. But good old replication worked for us.

Two types of replication are available in Azure MySQL,

1. GTID-based replication

    We have already discussed this across multiple topics on our blog for GCP, and we will explain how to configure GTID-based replication from Azure MySQL to AWS RDS in the next blog:

MySQL GTID Replication and lower_case_table_names

Multi Source Replication from GCP Cloud MySQL to Percona XtraDB Cluster

2. Binlog position-based replication

    In this blog we are performing binlog position-based replication to cut over the application.

Environment Setup 

The POC environment below was set up for testing:
  • Azure MySQL 5.6.4 General Purpose instance launched with a minimal configuration
  • AWS RDS for MySQL 5.7 launched with a customized parameter group
  • Network connectivity between the Azure PaaS and AWS PaaS services

Once the environment is up and running, update the below server parameters in both environments,

Azure MySQL: 

  • replicate_wild_ignore_table=mysql.%,information_schema.%,performance_schema.%,sys.%
  • binlog_row_image = FULL
  • binlog_expire_logs_seconds = 1440

AWS RDS MySQL:  

  • time_zone = Same as Azure MySQL
  • binlog_format = ROW
  • collation and character_set parameters = Same as Azure MySQL

Now both servers are ready for configuring the replication.

Pre-steps to configure the replication


1. Let's verify and take note of the binlog position in Azure MySQL

mysql> show master status;
+------------------+----------+--------------+------------------+-------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+--------------+------------------+-------------------+
| mysql-bin.00865  | 52884741 |              |                  |                   |
+------------------+----------+--------------+------------------+-------------------+
1 row in set (0.00 sec)

2. Create a replication user with the necessary privileges

CREATE USER 'repluser'@'%' IDENTIFIED BY 'datablogs123';

GRANT REPLICATION SLAVE ON *.* TO 'repluser'@'%';

GRANT REPLICATION CLIENT ON *.* TO 'repluser'@'%';

GRANT SELECT ON *.* TO 'repluser'@'%';

FLUSH PRIVILEGES;

3. Using mysqldump, start the schema backup followed by the data backup

mysqldump -h datablogs.mysql.database.azure.com -u repluser@datablogs -p datablogsprod --no-data --routines --triggers --events > datablogsprodmeta.sql

mysqldump -h datablogs.mysql.database.azure.com -u repluser@datablogs -p datablogsprod --routines --triggers --events --skip-add-drop-table  --max_allowed_packet=1024M --single-transaction  > datablogsproddata.sql

4. Once the backup is completed, restore the data into RDS for MySQL,

mysql -h datablogs.c5exr7bs8bax.ap-southeast-1.rds.amazonaws.com -u repluser -p datablogsprod  < datablogsprodmeta.sql

mysql -h datablogs.c5exr7bs8bax.ap-southeast-1.rds.amazonaws.com -u repluser -p datablogsprod  --max_allowed_packet=1024M  < datablogsproddata.sql

Configuring the replication 


In Amazon RDS for MySQL, set the external master using the binlog file name and position from step 1.

CALL mysql.rds_stop_replication;

CALL mysql.rds_reset_external_master;

CALL mysql.rds_set_external_master ('datablogs.mysql.database.azure.com', 3306, 'repluser@datablogs', 'datablogs123', 'mysql-bin.00865', 52884741, 0);

CALL mysql.rds_start_replication;

Once executed, replication is started; check the slave status in RDS,

mysql> show slave status \G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: datablogs.mysql.database.azure.com
                  Master_User: repluser@datablogs
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: mysql-bin.00865
          Read_Master_Log_Pos: 52884741
               Relay_Log_File: relaylog.000004
                Relay_Log_Pos: 273
        Relay_Master_Log_File: mysql-bin.00865
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
              Replicate_Do_DB: datablogsprod
          Replicate_Ignore_DB:
           Replicate_Do_Table:
       Replicate_Ignore_Table: mysql.plugin,mysql.rds_monitor,mysql.rds_sysinfo,innodb_memcache.cache_policies,mysql.rds_history,innodb_memcache.config_options,mysql.rds_configuration,mysql.rds_replication_status
      Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
                   Last_Errno: 0
                   Last_Error:
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 52884741
              Relay_Log_Space: 19742
              Until_Condition: None
               Until_Log_File:
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File:
           Master_SSL_CA_Path:
              Master_SSL_Cert:
            Master_SSL_Cipher:
               Master_SSL_Key:
        Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error:
               Last_SQL_Errno: 0
               Last_SQL_Error:
  Replicate_Ignore_Server_Ids:
             Master_Server_Id: 1053643462
                  Master_UUID: 787858b4-a021-11ea-b894-97f71069e028
             Master_Info_File: mysql.slave_master_info
                    SQL_Delay: 0
          SQL_Remaining_Delay: NULL
      Slave_SQL_Running_State: Slave has read all relay log; waiting for more updates
           Master_Retry_Count: 86400
                  Master_Bind:
      Last_IO_Error_Timestamp:
     Last_SQL_Error_Timestamp:
               Master_SSL_Crl:
           Master_SSL_Crlpath:
           Retrieved_Gtid_Set:
            Executed_Gtid_Set:
                Auto_Position: 0
         Replicate_Rewrite_DB:
                 Channel_Name:
           Master_TLS_Version:
1 row in set (0.00 sec)


Post-validation and troubleshooting steps for Azure MySQL replication


Validate the data and table counts between Azure MySQL and RDS for MySQL.
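A quick way to eyeball per-table counts on both sides (information_schema row counts are only estimates for InnoDB, so run SELECT COUNT(*) on critical tables for exact numbers):

SELECT table_name, table_rows
FROM   information_schema.tables
WHERE  table_schema = 'datablogsprod'
ORDER  BY table_name;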

If we get any errors related to the system schema in Azure, use the following method to fix them; we got an error like the one below,

Table 'mysql.__firewallrules__' doesn't exist

We can skip the error with the below commands in RDS for MySQL,

CALL mysql.rds_stop_replication;
CALL mysql.rds_skip_repl_error;
CALL mysql.rds_start_replication;

But RDS for MySQL will only skip errors row by row, which takes a long time; instead, follow the method below and walk the binlog positions until you find where the DML and DDL queries are captured in the binlog,

SHOW BINLOG EVENTS IN 'mysql-bin.00865' FROM 52884567 LIMIT 10 \G

Note: based on the error, modify the binlog file and position in the above command.

Once the right binlog position is found, repeat the replication configuration step with it.

Now everything is set for the Azure-to-AWS data sync, and we are ready for application cutover with zero downtime!!!