datablogs: Migration

Sunday, June 25, 2023

Deep dive into Babelfish Compass

Wow!!! In migration projects there is a lot to convert when it comes to procedures, functions, and other database objects.

But AWS provides good tooling to make the migration easier. Ha Ha... don't overthink it, though: you still need to do around 40% of the code migration work yourself.

In this post, Babelfish Compass gives us various options to support migrating code from SQL Server to PostgreSQL on Babelfish-enabled PaaS servers.

Below are the easy steps for the script conversion.

Prerequisites 
  • Install a 64-bit Java Runtime Environment (JRE) version 8 or higher
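
You can quickly confirm that a suitable JRE is on the PATH from a command prompt before running the tool (the reported version will vary with your install):

C:\Users\Admin>java -version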

1. Download the Compass tool from the link below

https://github.com/babelfish-for-postgresql/babelfish_compass/releases/tag/v.2023-03-a

Download the .zip file to work with the Babelfish compatibility assessment.


2. Unzip and place the files in a separate folder


3. Have your generated SQL Server database scripts file ready and copy it into the same folder

The database name is highlighted below.


4. Next, we can start running the report with Babelfish Compass

C:\Users\Admin\Downloads\BabelfishCompass_v.2023-03-a\BabelfishCompass>BabelfishCompass.bat reportfinal datablogsdbprod.sql
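
If your database scripts are split across multiple files, Babelfish Compass should accept more than one input script in a single run and combine them into one assessment report; the second file name below is only an illustrative placeholder:

C:\Users\Admin\Downloads\BabelfishCompass_v.2023-03-a\BabelfishCompass>BabelfishCompass.bat reportfinal datablogsdbprod.sql datablogsdbprod_part2.sql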


5. Finally, the reports are generated in the Documents directory


6. We can review the reports in any format; for me, it is easiest to open the HTML report in a browser

Just double-click the HTML document, and as shown below we get in-depth details of the supported and unsupported features.

We can directly go and debug the code. Babelfish Compass also has plenty of methods to rewrite the code, which we will check in the next blog.


Happy Coding !!!



Tuesday, December 6, 2022

How do I troubleshoot the AWS Glue error "VPC S3 endpoint validation failed for SubnetId"?

If you are a newbie to AWS Glue, it is really difficult to run crawlers without hitting these failures. Below are the basic things you need to make sure are in place before running a crawler:

  1. AWS IAM Role and Privileges 
  2. S3 Endpoint 

1. AWS IAM Role and Policies

We have to attach at least the below policies to the IAM role.
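
If you prefer the CLI over the console, attaching the Glue service-role managed policy (plus S3 read access for the crawler's data path) looks roughly like the below; the role name is just an example to adapt to your environment:

aws iam attach-role-policy --role-name AWSGlueServiceRole-datablogs --policy-arn arn:aws:iam::aws:policy/service-role/AWSGlueServiceRole
aws iam attach-role-policy --role-name AWSGlueServiceRole-datablogs --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess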

2. S3 Endpoint

Most of the time you will get the below error:

VPC S3 endpoint validation failed for SubnetId: subnet-0e0b8e2ad1b85d036. VPC: vpc-01bdc81e45566a823. Reason: Could not find S3 endpoint or NAT gateway for SubnetId: subnet-0e0b8e2ad1b85d036 in Vpc vpc-01bdc81e45566a823 (Service: AWSGlueJobExecutor; Status Code: 400; Error Code: InvalidInputException; Request ID: 0495f22f-a7b5-4f74-8691-7de8a6a47b42; Proxy: null)



To fix this error, you need to understand the issue first. It is saying:

Create an endpoint for S3, not for Glue. In some worst cases people create a NAT gateway and lose a lot of money for this simple thing. So you have to create an S3 gateway endpoint as shown below,
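
For reference, the same gateway endpoint can also be created from the AWS CLI; the VPC ID is taken from the error above, while the region and route table ID below are placeholders you would replace with your own:

aws ec2 create-vpc-endpoint --vpc-endpoint-type Gateway --vpc-id vpc-01bdc81e45566a823 --service-name com.amazonaws.ap-southeast-1.s3 --route-table-ids rtb-0123456789abcdef0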


Once it is created, your job will run like a flight :)




Migrate and Sync RDS PostgreSQL to Amazon Redshift Serverless using AWS Glue

Customers always prefer low-cost solutions to run their business. To help their business and requirements, we also need to provide efficient solutions.

In some cases cloud vendors provide good solutions for analytics workloads but the cost is very high; most of the time we don't want to recommend them, but we have to.

One such solution in AWS costs a lot but works blazingly fast.

We are talking about Amazon Redshift. Recently AWS launched Amazon Redshift Serverless in a few regions.

Whenever something new comes out, we need to find and deliver the best approach to customers before they catch on to the features themselves. So,

What is Amazon Redshift Serverless ? 

Amazon Redshift Serverless automatically provisions data warehouse capacity and intelligently scales the underlying resources. Amazon Redshift Serverless adjusts capacity in seconds to deliver consistently high performance and simplified operations for even the most demanding and volatile workloads.

With Amazon Redshift Serverless, you can benefit from the following features:

  • Access and analyze data without the need to set up, tune, and manage Amazon Redshift provisioned clusters
  • Use the superior Amazon Redshift SQL capabilities, industry-leading performance, and data-lake integration to seamlessly query across a data warehouse, a data lake, and operational data sources
  • Deliver consistently high performance and simplified operations for the most demanding and volatile workloads with intelligent and automatic scaling
  • Use workgroups and namespaces to organize compute resources and data with granular cost controls
  • Pay only when the data warehouse is in use

So, overall there is no need for human intervention with Redshift Serverless.

Everything is fine, but how do we migrate and sync Amazon RDS / EC2 PostgreSQL / Aurora PostgreSQL to utilize Redshift Serverless?

What are the options available to migrate and Sync ?

  • DMS - a Redshift Serverless target is not yet available to migrate the data
  • Export/Import - yes, we can do it, but how do we handle a zero-downtime migration? Syncing real-time data is not possible
  • AWS Glue - a good option; we can migrate and sync real-time data from RDS to Redshift Serverless
Let's start a sample data migration and sync into Amazon Redshift Serverless.

Environment setup walkthrough:

  1. RDS PostgreSQL
  2. AWS Glue
  3. Amazon Redshift Serverless 
  4. VPC S3 Endpoint
  5. IAM Role

RDS PostgreSQL : 


Amazon Redshift Serverless :


VPC S3 Endpoint :


IAM Role : 


Once the environment is ready, we can start adding connections and jobs in AWS Glue.

How to add connections in AWS Glue:

In the AWS Glue console --> click Connections --> Create connection
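
The same connection can be created from the AWS CLI if needed. This is only a sketch; the connection name, JDBC endpoint, credentials, subnet, security group, and availability zone below are all placeholders to adapt to your environment:

aws glue create-connection --connection-input '{"Name":"rds-postgres-conn","ConnectionType":"JDBC","ConnectionProperties":{"JDBC_CONNECTION_URL":"jdbc:postgresql://<rds-endpoint>:5432/datablogs","USERNAME":"<user>","PASSWORD":"<password>"},"PhysicalConnectionRequirements":{"SubnetId":"subnet-xxxxxxxx","SecurityGroupIdList":["sg-xxxxxxxx"],"AvailabilityZone":"ap-southeast-1a"}}'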





Create the source and target databases.



For testing, a sample schema and data were inserted into RDS PostgreSQL before creating the crawler.

Let's create separate crawlers for the source and target to update the Data Catalog.

The below data source is mapped to RDS PostgreSQL.


Also, the role updated with the required policies needs to be attached.


Choose the appropriate database for the crawler; below is the one for the source.


Below is the one for the target.


Once that is done (we have already deployed the sample schema scripts on both sides to transfer the data), let's run the crawlers and check.
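
The crawlers can also be kicked off from the CLI; the crawler names below are just examples matching the source and target described above:

aws glue start-crawler --name rds-postgres-source-crawler
aws glue start-crawler --name redshift-serverless-target-crawler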


So both source and target tables are updated 


Let's create the Glue job in the AWS console.




After all that, let's schedule the job every 5 minutes to sync the data.

Whatever data is inserted into RDS PostgreSQL will be synced into Redshift Serverless every 5 minutes.
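
One way to set up the 5-minute schedule is a scheduled Glue trigger. This is a sketch with placeholder trigger and job names:

aws glue create-trigger --name rds-to-redshift-5min --type SCHEDULED --schedule "cron(0/5 * * * ? *)" --actions JobName=rds-to-redshift-sync --start-on-creation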


So we can migrate quickly and save cost.

Any troubles or issues, please contact me immediately!!!


Friday, March 18, 2022

Oracle REST Data Services (ORDS) : Upgrade ORDS from 5.1.4 to 19.4

We upgraded Oracle APEX in the previous blog; next we have to plan for the below things:

  • APEX static images update for the upgraded version
  • ORDS upgrade to a stable version


Prerequisites 

  • Download the upgraded APEX version in Oracle RDS (APEX 21.1.2)
  • Download the latest stable version of ORDS, 19.4
  • Make sure Oracle Client 19c is already configured and Java 1.8 is installed (a quick check is shown below)
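
Both can be confirmed from a command prompt before starting (the reported versions will differ based on your install):

C:\Users\datablogsadmin>java -version
C:\Users\datablogsadmin>sqlplus -V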

Apex Static Image Upgrade : 

Before upgrading the APEX images, we will receive the below error while logging in to the APEX workspace page.


Once the APEX bundle is downloaded:
  • Rename the existing apex folder from D:\apex to D:\apex_5.1.4
  • Unzip apex_21.1_en.zip into the apex folder
  • Go to the apex utilities folder
  • Stop the running ORDS service
Connect to Oracle 19c in SQL*Plus

C:\Windows\system>D:

D:\>cd apex

D:\apex>cd utilities

D:\apex\utilities>

D:\apex\utilities>sqlplus dataadmin@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dataadmin.cwt1dgu2zxag.ap-southeast-1.rds.amazonaws.com)(PORT=1521))(CONNECT_DATA=(SID=ORCL)))

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

Enter Password:

SQL> @reset_image_prefix.sql

Enter the Application Express image prefix [/i/] https://static.oracle.com/cdn/apex/21.1.2/ 

...Changing Application Express image prefix

NEW_IMAGE_PREFIX
------------------------------------------
https://static.oracle.com/cdn/apex/21.1.2/

PL/SQL procedure successfully completed.

...Recreate APEX global

PL/SQL procedure successfully completed.

Commit complete.

Image Prefix update complete

Disconnected from Oracle Database 19c Standard Edition 2 Release 19.0.0.0.0 - Production
Version 19.13.0.0.0

D:\apex\utilities>

Once this is completed, we will be able to browse the APEX pages without any issues.

ORDS Upgrade Paths 
Step 1 : 
Rename the existing ords folder to ords_3.0.13. As mentioned in the prerequisites, unzip the downloaded ords-19.4.6.142.1859 into the ords folder.

Step 2 :
Copy the D:/apex/images folder into D:/ords/

Step 3:
Uninstall the existing ORDS version from Oracle RDS:

D:\ords>java -jar ords.war uninstall 

Specify the database connection type to use. 

Enter number for [1] Basic  [2] TNS  [3] Custom URL [1]: 

Enter the name of the database server [localhost]:dataadmin.cwt1dgu2zxag.ap-southeast-1.rds.amazonaws.com

Enter the database listen port [1521]: 

Enter 1 to specify the database service name, or 2 to specify the database SID [1]: 

Enter the database service name:ORCL 

Requires to login with administrator privileges to verify Oracle REST Data Services schema. 

Enter the administrator username:dataadmin 

Enter the database password for dataadmin: 

Confirm password: 

Connecting to database user: dataadmin url: jdbc:oracle:thin:@//dataadmin.cwt1dgu2zxag.ap-southeast-1.rds.amazonaws.com:1521/ORCL

Retrieving information. 

Uninstalling Oracle REST Data Services 

... Log file written to C:\Users\datablogsadmin\ords_uninstall_core_2022-03-16_133300_00152.log 

Completed uninstall for Oracle REST Data Services. Elapsed time: 00:00:07.840 
Step 4 :
Set Config Directory 
java -jar ords.war configdir D:\config

Step 5 :
Update the passwords and unlock the below accounts in Oracle RDS:
  • APEX_PUBLIC_USER
  • APEX_LISTENER
  • APEX_REST_PUBLIC_USER
Connect to Oracle 19c in SQL*Plus

D:\apex\utilities>sqlplus dataadmin@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dataadmin.cwt1dgu2zxag.ap-southeast-1.rds.amazonaws.com)(PORT=1521))(CONNECT_DATA=(SID=ORCL)))

Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0

Enter Password:

SQL>alter user APEX_PUBLIC_USER account unlock identified by Admin123;
SQL>exec rdsadmin.rdsadmin_run_apex_rest_config('Admin123','Admin123');
SQL>ALTER USER APEX_REST_PUBLIC_USER account unlock IDENTIFIED BY Admin123;
SQL>exec rdsadmin.rdsadmin_util.grant_sys_object('DBA_OBJECTS', 'DATAADMIN', 'SELECT', true);
SQL>exec rdsadmin.rdsadmin_util.grant_sys_object('DBA_ROLE_PRIVS', 'DATAADMIN', 'SELECT', true);
SQL>exec rdsadmin.rdsadmin_util.grant_sys_object('DBA_TAB_COLUMNS', 'DATAADMIN', 'SELECT', true);
SQL>exec rdsadmin.rdsadmin_util.grant_sys_object('USER_CONS_COLUMNS', 'DATAADMIN', 'SELECT', true);
SQL>exec rdsadmin.rdsadmin_util.grant_sys_object('USER_CONSTRAINTS', 'DATAADMIN', 'SELECT', true);
SQL>exec rdsadmin.rdsadmin_util.grant_sys_object('USER_OBJECTS', 'DATAADMIN', 'SELECT', true);
SQL>exec rdsadmin.rdsadmin_util.grant_sys_object('USER_PROCEDURES', 'DATAADMIN', 'SELECT', true);
SQL>exec rdsadmin.rdsadmin_util.grant_sys_object('USER_TAB_COLUMNS', 'DATAADMIN', 'SELECT', true);
SQL>exec rdsadmin.rdsadmin_util.grant_sys_object('USER_TABLES', 'DATAADMIN', 'SELECT', true);
SQL>exec rdsadmin.rdsadmin_util.grant_sys_object('USER_VIEWS', 'DATAADMIN', 'SELECT', true); 
SQL>exec rdsadmin.rdsadmin_util.grant_sys_object('WPIUTL', 'DATAADMIN', 'EXECUTE', true);
SQL>exec rdsadmin.rdsadmin_util.grant_sys_object('DBMS_SESSION', 'DATAADMIN', 'EXECUTE', true);
SQL>EXEC rdsadmin.rdsadmin_util.grant_apex_admin_role;
SQL>grant APEX_ADMINISTRATOR_ROLE to DATAADMIN;

Once these are executed, proceed with the ORDS installation.

Step 6 :
Install ORDS with the required database pool name configuration; by default it installs under the name apex.

D:\ords>java -jar ords.war setup --database dataadmin
Specify the database connection type to use.
Enter number for [1] Basic  [2] TNS  [3] Custom URL [1]:
Enter the name of the database server [localhost]:dataadmin.cwt1dgu2zxag.ap-southeast-1.rds.amazonaws.com
Enter the database listen port [1521]:
Enter 1 to specify the database service name, or 2 to specify the database SID [1]:
Enter the database service name:ORCL
Enter 1 if you want to verify/install Oracle REST Data Services schema or 2 to skip this step [1]:1
Enter the database password for ORDS_PUBLIC_USER:
Confirm password:
Requires to login with administrator privileges to verify Oracle REST Data Services schema.

Enter the administrator username:dataadmin
Enter the database password for dataadmin:
Confirm password:
Connecting to database user: dataadmin url: jdbc:oracle:thin:@//dataadmin.cwt1dgu2zxag.ap-southeast-1.rds.amazonaws.com:1521/ORCL

Retrieving information.
Enter the default tablespace for ORDS_METADATA [SYSAUX]:
Enter the temporary tablespace for ORDS_METADATA [TEMP]:
Enter the default tablespace for ORDS_PUBLIC_USER [USERS]:
Enter the temporary tablespace for ORDS_PUBLIC_USER [TEMP]:
Enter 1 if you want to use PL/SQL Gateway or 2 to skip this step.
If using Oracle Application Express or migrating from mod_plsql then you must enter 1 [1]:
Enter the PL/SQL Gateway database user name [APEX_PUBLIC_USER]:
Enter the database password for APEX_PUBLIC_USER:
Confirm password:
Enter 1 to specify passwords for Application Express RESTful Services database users (APEX_LISTENER, APEX_REST_PUBLIC_USER) or 2 to skip this step [1]:
Enter the database password for APEX_LISTENER:
Confirm password:
Enter the database password for APEX_REST_PUBLIC_USER:
Confirm password:
Enter a number to select a feature to enable [1] SQL Developer Web [2] REST Enabled SQL [3] None [1]:1
2022-03-16T05:35:15.021Z INFO   reloaded pools: [|dataadmin||, |dataadmin|al|, |dataadmin|rt|, |dataadmin|pu|]
Installing Oracle REST Data Services version 19.4.6.r1421859
... Log file written to C:\Users\datablogsadmin\ords_install_core_2022-03-16_133515_00223.log
... Verified database prerequisites
... Created Oracle REST Data Services proxy user
... Created Oracle REST Data Services schema
... Granted privileges to Oracle REST Data Services
... Created Oracle REST Data Services database objects
... Log file written to C:\Users\datablogsadmin\ords_install_datamodel_2022-03-16_133527_00932.log
... Log file written to C:\Users\datablogsadmin\ords_install_apex_2022-03-16_133530_00191.log
Completed installation for Oracle REST Data Services version 19.4.6.r1421859. Elapsed time: 00:00:17.247

Step 7 :
Once ORDS is installed, if you get any errors for the ORDS users, run the below commands in SQL*Plus.

ALTER USER ORDS_PUBLIC_USER account unlock IDENTIFIED BY Admin123; 

ALTER USER ORDS_METADATA account unlock IDENTIFIED BY Admin123; 

Step 8 : 
Start the ORDS Service 

java -jar ords.war



Step 9 :
If you need to map your ORDS link to multiple schemas, URL mapping is a must to avoid confusion when accessing multiple workspaces.

D:\ords>java -jar ords.war map-url --workspace-id ss --type base-path /datablogs01 dataadmin 


It is all completed; if you have any issues or doubts, please contact us.


Wednesday, February 2, 2022

Redis to Elasticache Migration in AWS

Compared with self-managed Redis, AWS ElastiCache gives multiple options to cache data in the cloud. It is enhanced with two kinds of access control options in the PaaS service.

Below are a few mission-critical features we loved in ElastiCache:

  • Detects data failures and recovers the nodes automatically
  • Multi-AZ with automatic failover to replica nodes
  • Redis cluster mode enabled supports data partitioning of up to 500 shards across different zones
  • Managed Redis backups, automatic failure detection, and log file delivery to Kinesis Data Firehose and CloudWatch
  • Easy online migration from any Redis source
  • Encryption at rest and encryption in transit are supported
  • Data volumes up to petabytes with microsecond response times
Let's start migrating data from Redis to ElastiCache.

Preparing our source and target Redis nodes for migration 


  1. Launched two EC2 machines and installed Redis version 5.x
  2. Configured replication between the master and the replica
  3. Modified the below values in the replica server

        To avoid downtime, we are doing an online migration from the replication server, so we modified the below values in the replica server. Adjust these values based on your source:

  • protected-mode no
  • bind datablogs.xcnphd.ng.0001.use1.cache.amazonaws.com 6379
  • requirepass should be disabled

       Note : The source master or replica being transferred should not have AUTH configured and must have protected mode disabled (see the redis-cli checks below)
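
A quick way to apply and verify these settings on the source replica is redis-cli; the host below is a placeholder for your EC2 replica's private IP (the bind directive itself has to be changed in redis.conf, followed by a service restart):

redis-cli -h 10.0.1.25 -p 6379 CONFIG SET protected-mode no
redis-cli -h 10.0.1.25 -p 6379 CONFIG GET requirepass
redis-cli -h 10.0.1.25 -p 6379 INFO replication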

      4. Launch the ElastiCache instance with the below steps

Cluster Mode Disabled ( Redis Engine 5.0.5 or Higher ) 


Multi-AZ Should be Enabled 


Encryption at rest and encryption in transit should be disabled


If we have not followed all the above steps, we will get the below popup when we start the migration.
 

Starting Migration 


In the ElastiCache console, click Actions --> Migrate data from endpoint



Once clicked, we can type the source Redis endpoint (private IP) and port (customized port) in the below console.




Then click Start Migration and follow the logs under Events in the same console.




If we get any error, it will be printed in the events log.



Once the migration has started, restart the master of the ElastiCache setup one time; in our case we needed to restart the replica server because it is our source.



Once restarted, verify INFO replication on the ElastiCache nodes.

Once the migration has started successfully, we can't perform any other operations on the ElastiCache nodes.


Verifying the Data Migration Progress 


We inserted a sample record in the EC2 master node and verified the same in the replica as well as in ElastiCache.
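
For reference, the check was along these lines with redis-cli; the key name is arbitrary and the first two hosts are placeholders for the EC2 private IPs:

redis-cli -h <ec2-master-private-ip> -p 6379 SET datablogs:test "migration-check"
redis-cli -h <ec2-replica-private-ip> -p 6379 GET datablogs:test
redis-cli -h datablogs.xcnphd.ng.0001.use1.cache.amazonaws.com -p 6379 GET datablogs:test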


Data looks good. Once all the data has been transferred from IaaS to PaaS, complete the migration in the ElastiCache console and point the application endpoints to the ElastiCache node.


Thanks for Reading !!!

Monday, January 31, 2022

Migrate Azure MySQL Database to Amazon RDS for MySQL Using MySQL Replication

It was a nightmare story: we struggled a lot to reduce the cutover time for the Azure MySQL to AWS MySQL migration.


Tested Methods, 

  1. Backup and restore - it took 6 hours to complete
  2. AWS DMS - text data was truncated and timestamp column data was 6 hours behind the source

Nothing helped, due to bugs, unavailable features, etc., but native replication worked for us.

Two types of replication are available in Azure MySQL:

1. GTID-based replication

    We have already discussed this in multiple posts on our blog for GCP; we will also explain how to configure GTID-based replication from Azure MySQL to AWS RDS in the next blog:

MySQL GTID Replication and lower_case_table_names

Multi Source Replication from GCP Cloud MySQL to Percona XtraDB Cluster

2. Binlog position-based replication

    In this blog we are performing binlog position-based replication to cut over the application.

Environment Setup 

The below POC environment was set up for testing:
  • Azure MySQL 5.6.4 General Purpose instance launched with minimal configuration
  • AWS RDS for MySQL 5.7 launched with customized parameter group 
  • Network connectivity between Azure PaaS and AWS PaaS Service

Once the environment is up and running, modify the below server parameters on both sides.

The below values need to be updated in both environments:

Azure MySQL: 

  • replicate_wild_ignore_table=mysql.%,information_schema.%,performance_schema.%,sys.%
  • binlog_row_image = FULL
  • binlog_expire_logs_seconds = 1440

AWS RDS MySQL:  

  • time_zone = Same as Azure MySQL
  • binlog_format = ROW
  • collation and character_set parameters = Same as Azure MySQL
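
A quick way to confirm the values actually match on both sides is to query them directly; the hostnames below are the ones used elsewhere in this post, the user is a placeholder, and the variable name changes per parameter:

mysql -h datablogs.mysql.database.azure.com -u <admin-user>@datablogs -p -e "SHOW VARIABLES LIKE 'binlog_row_image';"
mysql -h datablogs.c5exr7bs8bax.ap-southeast-1.rds.amazonaws.com -u <admin-user> -p -e "SHOW VARIABLES LIKE 'binlog_format';"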

Now both servers are ready for configuring the replication.

Pre-Steps to configure the replication


1. Let's verify and take note of the binlog position in Azure MySQL

 mysql> show master status;

+------------------+----------+--------------+------------------+-------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+--------------+------------------+-------------------+
| mysql-bin.00865  | 52884741 |              |                  |                   |
+------------------+----------+--------------+------------------+-------------------+

1 row in set (0.00 sec)

2. Create a replication user with the necessary privileges

CREATE USER 'repluser'@'%' IDENTIFIED BY 'datablogs123';

GRANT REPLICATION SLAVE ON *.* TO 'repluser'@'%';

GRANT REPLICATION CLIENT ON *.* TO 'repluser'@'%';

GRANT SELECT ON *.* TO 'repluser'@'%';

FLUSH PRIVILEGES;

3. Using mysqldump, start the schema backup followed by the data backup

mysqldump -h datablogs.mysql.database.azure.com -u repluser@datablogs -p datablogsprod --no-data --routines --triggers --events > datablogsprodmeta.sql

mysqldump -h datablogs.mysql.database.azure.com -u repluser@datablogs -p datablogsprod --routines --triggers --events --skip-add-drop-table  --max_allowed_packet=1024M --single-transaction  > datablogsproddata.sql

4. Once the backup is completed, restore the data into MySQL RDS

mysql -h datablogs.c5exr7bs8bax.ap-southeast-1.rds.amazonaws.com -u repluser -p datablogsprod  < datablogsprodmeta.sql

mysql -h datablogs.c5exr7bs8bax.ap-southeast-1.rds.amazonaws.com -u repluser -p datablogsprod  --max_allowed_packet=1024M  < datablogsproddata.sql

Configuring the replication 


In Amazon RDS for MySQL, set the external master using the binlog file name and binlog position noted in Step 1.

CALL mysql.rds_stop_replication;

CALL mysql.rds_reset_external_master;

CALL mysql.rds_set_external_master ('datablogs.mysql.database.azure.com', 3306, 'repluser@datablogs', 'datablogs123', 'mysql-bin.00865', 52884741, 0);

CALL mysql.rds_start_replication;

Once these are executed, replication starts; check the slave status in RDS:

mysql> show slave status \G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: datablogs.mysql.database.azure.com
                  Master_User: repluser@datablogs
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: mysql-bin.00865
          Read_Master_Log_Pos: 52884741
               Relay_Log_File: relaylog.000004
                Relay_Log_Pos: 273
        Relay_Master_Log_File: mysql-bin.00865
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
              Replicate_Do_DB: datablogsprod
          Replicate_Ignore_DB:
           Replicate_Do_Table:
       Replicate_Ignore_Table: mysql.plugin,mysql.rds_monitor,mysql.rds_sysinfo,innodb_memcache.cache_policies,mysql.rds_history,innodb_memcache.config_options,mysql.rds_configuration,mysql.rds_replication_status
      Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
                   Last_Errno: 0
                   Last_Error:
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 52884741
              Relay_Log_Space: 19742
              Until_Condition: None
               Until_Log_File:
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File:
           Master_SSL_CA_Path:
              Master_SSL_Cert:
            Master_SSL_Cipher:
               Master_SSL_Key:
        Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error:
               Last_SQL_Errno: 0
               Last_SQL_Error:
  Replicate_Ignore_Server_Ids:
             Master_Server_Id: 1053643462
                  Master_UUID: 787858b4-a021-11ea-b894-97f71069e028
             Master_Info_File: mysql.slave_master_info
                    SQL_Delay: 0
          SQL_Remaining_Delay: NULL
      Slave_SQL_Running_State: Slave has read all relay log; waiting for more updates
           Master_Retry_Count: 86400
                  Master_Bind:
      Last_IO_Error_Timestamp:
     Last_SQL_Error_Timestamp:
               Master_SSL_Crl:
           Master_SSL_Crlpath:
           Retrieved_Gtid_Set:
            Executed_Gtid_Set:
                Auto_Position: 0
         Replicate_Rewrite_DB:
                 Channel_Name:
           Master_TLS_Version:
1 row in set (0.00 sec)


Post Validation and troubleshooting steps of Azure MySQL Replication


Validate the data and table counts between Azure MySQL and MySQL RDS.
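
A simple way to compare table-level counts on both sides is information_schema (note that table_rows is only an estimate for InnoDB, so run an exact COUNT(*) on critical tables); the endpoint and user below are placeholders, and the same query is run against both Azure and RDS:

mysql -h <azure-or-rds-endpoint> -u <user> -p -e "SELECT table_name, table_rows FROM information_schema.tables WHERE table_schema='datablogsprod' ORDER BY table_name;"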

If we get any errors related to the Azure system schema, use the following method to fix them; we got an error like the one below:

Can't find 'mysql.__firewallrules__' doesnt exist 

We can skip the error with the below commands in MySQL RDS:

CALL mysql.rds_stop_replication;
CALL mysql.rds_skip_repl_error;
CALL mysql.rds_start_replication;

But in MySQL RDS errors are skipped only row by row, which can take a long time. Instead, follow the below method and check the binlog positions until the DML and DDL queries captured in the binlog are reached:

SHOW BINLOG EVENTS IN 'mysql-bin.00865' FROM 52884567 LIMIT 10 \G

Note: Based on the error, modify the binlog file and position in the above command.

Once the binlog position is found, repeat the replication configuration step.

Now all is set for the Azure to AWS data sync, and it is ready for application cutover with zero downtime!!!