Showing posts with label Amazon RDS. Show all posts
Tuesday, November 14, 2023
Selva | November 14, 2023 | Amazon RDS, AWS, Database Administration, Development, Performance Tuning, PostgreSQL
As the business grows, our customer and transaction data seamlessly grow with it, and in the meantime performance needs to be considered as well.
For such big tables, indexes alone will not give us good performance at peak times. The alternative, available in every relational database environment, is partitioning, which splits a table's data into multiple pieces.
Likewise, we are going to range-partition a sample table in a PostgreSQL database. PostgreSQL provides three partitioning methods: range, list, and hash.
Below are the important constraints in PostgreSQL partitioning:
- It is possible to attach a regular table to a partitioned table as a partition (see the sketch just below)
- It is not possible to transform a regular table directly into a partitioned one
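As a quick illustration of the first point, an existing regular table whose rows all fall within one range can simply be attached to a partitioned parent. This is only a minimal sketch; the table name datablogspaycheck_2022 and its range are hypothetical, and the partitioned parent is the one created in Step 3 below.
-- attach a pre-existing regular table as a partition (hypothetical example)
ALTER TABLE datablogspaycheck
    ATTACH PARTITION datablogspaycheck_2022
    FOR VALUES FROM ('2022-01-01') TO ('2023-01-01');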
Because of the second constraint, the steps below recreate the table as a partitioned one and migrate the existing data into it.
Anyone can use this example to perform partitioning on AWS PostgreSQL RDS easily.
Click GitHub Link for Code : AWS-PostgreSQL-RDS-Table-Partition
Step 1 : Create base datablogspaycheck table and insert some sample records
DROP TABLE IF EXISTS datablogspaycheck CASCADE;
DROP SEQUENCE IF EXISTS public.paycheck_id_seq;
CREATE SEQUENCE public.paycheck_id_seq
START WITH 1
INCREMENT BY 1
NO MINVALUE
NO MAXVALUE
CACHE 1;
create table datablogspaycheck
(
payment_id int NOT NULL DEFAULT nextval('public.paycheck_id_seq'::regclass),
created timestamptz NOT NULL,
updated timestamptz NOT NULL DEFAULT now(),
amount float,
status varchar DEFAULT 'new'
);
CREATE INDEX idx_paycheck ON datablogspaycheck (created);
INSERT INTO datablogspaycheck (created) VALUES (
generate_series(timestamp '2023-01-01'
, now()
, interval '5 minutes') );
Step 2 : Rename the base table so it can serve as the data source
ALTER TABLE datablogspaycheck RENAME TO datablogspaycheck_basetable;
Step 3 : Create Partitioned table
create table datablogspaycheck
(
payment_id int NOT NULL DEFAULT nextval('public.paycheck_id_seq'::regclass),
created timestamptz NOT NULL,
updated timestamptz NOT NULL DEFAULT now(),
amount float,
status varchar DEFAULT 'new'
)PARTITION BY RANGE (created);
Step 4 : Create a separate partition for each date range
CREATE TABLE datablogspaycheck_202303 PARTITION OF datablogspaycheck
FOR VALUES FROM ('2023-01-01') TO ('2023-03-01');
CREATE TABLE datablogspaycheck_20230304 PARTITION OF datablogspaycheck
FOR VALUES FROM ('2023-03-01') TO ('2023-04-01');
CREATE TABLE datablogspaycheck_202304 PARTITION OF datablogspaycheck
FOR VALUES FROM ('2023-04-01') TO ('2023-05-01');
CREATE TABLE datablogspaycheck_202311 PARTITION OF datablogspaycheck
FOR VALUES FROM ('2023-05-01') TO ('2023-11-01');
CREATE TABLE datablogspaycheck_2024 PARTITION OF datablogspaycheck
FOR VALUES FROM ('2023-11-01') TO ('2024-01-01');
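Optionally, and not part of the original steps or the linked script, a DEFAULT partition can be added so that rows falling outside the defined ranges are not rejected:
CREATE TABLE datablogspaycheck_default PARTITION OF datablogspaycheck DEFAULT;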
Step 5 : Migrate all the records
insert into datablogspaycheck (payment_id,created,updated,amount,status) select payment_id,created,updated,amount,status from datablogspaycheck_basetable;
Step 6 : Validate each partition
select * from datablogspaycheck_202303 order by 2 desc
select * from datablogspaycheck_20230304 order by 2 desc
select * from datablogspaycheck_202311 order by 2 desc
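To confirm that queries now touch only the relevant partition, the execution plan can be checked with EXPLAIN. This is a quick verification sketch, not part of the original steps; with a filter on the partition key, only one partition should appear in the plan.
EXPLAIN SELECT * FROM datablogspaycheck
WHERE created >= '2023-04-01' AND created < '2023-05-01';
-- the plan should reference only datablogspaycheck_202304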
That's it. The regular table's data has been migrated into the partitioned table easily.
Thanks for Reading !!!
Sunday, June 25, 2023
Wow !!! On migration projects there is a lot of stuff to convert when it comes to procedures, functions, and other database objects.
But AWS provides good tooling to migrate in easy steps. Ha Ha ... don't overthink it, though; you still need to do around 40% of the code migration work yourself.
Here, Babelfish Compass gives various options to support migrating code from SQL Server to PostgreSQL on Babelfish-enabled PaaS servers.
Below are the easy steps for the script conversion.
Prerequisites
- Install a 64-bit Java Runtime Environment (JRE) version 8 or higher
1. Download the Compass tool from the link below
https://github.com/babelfish-for-postgresql/babelfish_compass/releases/tag/v.2023-03-a
Download the .zip file to work with Babelfish Compass.
2. Unzip and place the files in a separate folder
The database name is highlighted below.
4. Next, we can start running the report with Babelfish Compass
C:\Users\Admin\Downloads\BabelfishCompass_v.2023-03-a\BabelfishCompass>BabelfishCompass.bat reportfinal datablogsdbprod.sql
5. Finally, reports are generated in the Documents directory
6. We can review the reports in any format; for me, it is easiest in an HTML browser
Just double-click the HTML document, and as shown below we get in-depth details of supported and unsupported features.
We can directly go and debug the code. Babelfish Compass also has plenty of methods to rewrite the code; we will check those in the next blog.
Happy Coding !!!
Wednesday, June 21, 2023
Oracle Audit Log :
Oracle Audit Log refers to the feature in Oracle Database that records and stores information about various database activities and events. It provides a mechanism to track and monitor user activities, system events, and changes made to the database.
- User Logins: Recording user login attempts and authentication information.
- Database Activities: Logging SQL statements executed by users, including select, insert, update, and delete operations.
- Privilege Usage: Monitoring the usage of privileges, such as granting or revoking permissions.
- Schema Changes: Tracking modifications to database objects, such as creating or altering tables, views, or indexes.
- System Events: Recording system-level events, such as startup and shutdown of the database.
- Security Violations: Detecting unauthorized access attempts or suspicious activities.
- Administrative Operations: Logging administrative tasks performed by database administrators, such as user management or database configuration changes.
The Oracle Audit Log provides an essential tool for security, compliance, and troubleshooting purposes.
Types of Auditing in Amazon RDS for Oracle :
- Standard Auditing
- Unified Auditing
- Fine-grained Auditing
We are going to see how to enable standard auditing in Oracle RDS.
How to enable Audit Log in Oracle RDS?
Make sure you have a custom parameter group enabled for Oracle RDS.
- Modify the below value for the audit_trail parameter
audit_trail = DB, EXTENDED
- Next, just run the below AUDIT statements to capture the required activity on the server
AUDIT DELETE ANY TABLE;
AUDIT DELETE TABLE BY USER_01 BY ACCESS;
AUDIT DELETE TABLE BY USER_02 BY ACCESS;
AUDIT ALTER, GRANT, INSERT, UPDATE, DELETE ON DEFAULT;
AUDIT READ ON DIRECTORY datapump_dir;
It's all done; we have enabled the required audit capture for security purposes.
How do we monitor the audit logs?
We can just run the below query to get the captured audit logs in Oracle RDS,
SELECT * FROM DBA_AUDIT_TRAIL order by 1 desc
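For day-to-day monitoring the trail can also be filtered, for example to the last 24 hours. This is a hedged illustration using standard DBA_AUDIT_TRAIL columns, not part of the original steps.
SELECT username, timestamp, action_name, obj_name, returncode
FROM dba_audit_trail
WHERE timestamp > SYSDATE - 1   -- last 24 hours
ORDER BY timestamp DESC;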
This just explains the process for a basic scenario. We can still separate the audit tablespace, and many further options are available in Oracle; let's see those in other blogs.
Happy Auditing !!!
Monday, January 30, 2023
Update to a previous post,
As mentioned in the blog below, in past years we could not restore a database backup into the same SQL Server RDS instance.
https://medium.com/@yogitha.o/backup-and-restore-in-same-sql-server-rds-3e667622f7ff
But now you are able to restore without any issues on recent releases of SQL Server RDS.
Thursday, December 15, 2022
Selva | December 15, 2022 | Amazon RDS, AWS, Database Administration, Oracle 19C, Restore
It is easy to achieve with a simple method,
Tuesday, December 6, 2022
Selva | December 06, 2022 | Amazon RDS, Amazon Redshift Serverless, AWS, Migration, Zero Downtime
Customers always prefer low-cost solutions to run their business. To help their business and its requirements, we also need to provide efficient solutions.
In some cases cloud vendors provide good solutions for analytics workloads but the cost is very high; most of the time we don't want to recommend them, but we have to.
One such solution in AWS costs a lot but works incredibly fast.
We are talking about Amazon Redshift. Recently, AWS launched Amazon Redshift Serverless in a few regions.
Whenever something new comes out, we need to find and deliver the best approach before customers catch on to the features themselves. So,
What is Amazon Redshift Serverless ?
Amazon Redshift Serverless automatically provisions data warehouse capacity and intelligently scales the underlying resources. Amazon Redshift Serverless adjusts capacity in seconds to deliver consistently high performance and simplified operations for even the most demanding and volatile workloads.
With Amazon Redshift Serverless, you can benefit from the following features:
- Access and analyze data without the need to set up, tune, and manage Amazon Redshift provisioned clusters
- Use the superior Amazon Redshift SQL capabilities, industry-leading performance, and data-lake integration to seamlessly query across a data warehouse, a data lake, and operational data sources
- Deliver consistently high performance and simplified operations for the most demanding and volatile workloads with intelligent and automatic scaling
- Use workgroups and namespaces to organize compute resources and data with granular cost controls
- Pay only when the data warehouse is in use
So, overall, no human intervention is needed with Redshift Serverless.
Everything is fine, but how do we migrate and sync Amazon RDS / EC2 PostgreSQL / Aurora PostgreSQL data to utilize Redshift Serverless?
What are the options available to migrate and sync?
- DMS - a Redshift Serverless target is still not available to migrate the data
- Export/Import - yes, we can perform it, but how do we handle a zero-downtime migration? Syncing real-time data is not possible
- AWS Glue - a good option; we can migrate and sync real-time data from RDS to Redshift Serverless
Let's start a sample data migration and sync into Amazon Redshift Serverless,
Environment setup walkthrough,
- RDS PostgreSQL
- AWS Glue
- Amazon Redshift Serverless
- VPC S3 Endpoint
- IAM Role
RDS PostgreSQL :
Amazon Redshift Serverless :
VPC S3 Endpoint :
IAM Role :
Once the environment is ready, we can start adding connections and jobs in AWS Glue.
How to add connections in AWS Glue:
In AWS Glue Console --> Click Connections --> Create Connections
Create the source and target databases,
For testing, a sample schema and data were inserted into RDS PostgreSQL before creating the crawler.
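For reference, this is a minimal sketch of the kind of sample schema and data used for testing; the table name and columns are hypothetical, not the exact ones shown in the screenshots.
CREATE TABLE sales_orders (
    order_id     int PRIMARY KEY,
    customer_id  int,
    order_amount numeric(10,2),
    created_at   timestamptz DEFAULT now()
);
INSERT INTO sales_orders (order_id, customer_id, order_amount)
VALUES (1, 101, 250.00), (2, 102, 99.50), (3, 103, 1200.75);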
Let's create separate crawlers for the source and target to update the Data Catalog,
The data source below is mapped to RDS PostgreSQL
Choose the appropriate database for the crawler; below is the one for the source,
Below is the one for the target,
Once that is completed (the sample schema scripts were already deployed on both sides to transfer the data), let's run the crawlers and check.
So both the source and target tables are updated in the catalog.
Let's create the Glue job in the AWS console,
After all that, let's schedule the job every 5 minutes to sync the data.
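As a quick sanity check after a few runs, the same query can be run on the source and the target and the results compared. This uses the hypothetical sales_orders table from the sketch above; run it once on RDS PostgreSQL and once on Redshift Serverless.
SELECT count(*) AS row_count, max(created_at) AS latest_row FROM sales_orders;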
So we can migrate quickly and save on cost.
Any troubles or issues, please contact me immediately !!!
Friday, March 18, 2022
Selva | March 18, 2022 | Amazon RDS, Migration, Oracle 19C, Oracle Apex, ORDS, Upgrade
We upgraded Oracle APEX in the previous blog, and next we have to plan for the below things:
- Apex static images update for upgraded version
- ORDS Upgrade to stable version
Prerequisites
- Download the upgraded APEX version for Oracle RDS ( APEX 21.1.2 )
- Download the latest stable version of ORDS, 19.4
- Make sure Oracle Client 19c is already configured and Java 1.8 is installed
Apex Static Image Upgrade :
Before upgrading the APEX images, we receive the below error while logging in to the APEX workspace page.
- Rename the existing apex folder from D:\apex to D:\apex_5.1.4
- Unzip the apex_21.1_en.zip folder into apex folder
- Go to apex utilities folder
- Stop the ORDS running service
Connect to Oracle 19c in SQL*Plus
C:\Windows\system>D:
D:\>cd apex
D:\apex>cd utilities
D:\apex\utilities>
D:\apex\utilities>sqlplus dataadmin@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dataadmin.cwt1dgu2zxag.ap-southeast-1.rds.amazonaws.com)(PORT=1521))(CONNECT_DATA=(SID=ORCL)))
Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0
Enter Password:
SQL> @reset_image_prefix.sql
Enter the Application Express image prefix [/i/] https://static.oracle.com/cdn/apex/21.1.2/
...Changing Application Express image prefix
NEW_IMAGE_PREFIX
------------------------------------------
https://static.oracle.com/cdn/apex/21.1.2/
PL/SQL procedure successfully completed.
...Recreate APEX global
PL/SQL procedure successfully completed.
Commit complete.
Image Prefix update complete
Disconnected from Oracle Database 19c Standard Edition 2 Release 19.0.0.0.0 - Production
Version 19.13.0.0.0
D:\apex\utilities>
Once that's completed, we will be able to browse the APEX pages without any issues.
ORDS Upgrade Paths
Step 1 :
Rename the existing ords folder to ords_3.0.13. As mentioned in the prerequisites, unzip the downloaded ords-19.4.6.142.1859 into the ords folder.
Step 2 :
Copy the D:/apex/images folder into D:/ords/
Step 3:
Uninstall the existing ORDS version from Oracle RDS ,
D:\ords>java -jar ords.war uninstall
Specify the database connection type to use. Enter number for [1] Basic [2] TNS [3] Custom URL [1]:
Enter the name of the database server [localhost]:dataadmin.cwt1dgu2zxag.ap-southeast-1.rds.amazonaws.com
Enter the database listen port [1521]:
Enter 1 to specify the database service name, or 2 to specify the database SID [1]:
Enter the database service name:ORCL
Requires to login with administrator privileges to verify Oracle REST Data Services schema.
Enter the administrator username:dataadmin
Enter the database password for dataadmin:
Confirm password:
Connecting to database user: dataadmin url: jdbc:oracle:thin:@//dataadmin.cwt1dgu2zxag.ap-southeast-1.rds.amazonaws.com:1521/ORCL
Retrieving information.
Uninstalling Oracle REST Data Services ...
Log file written to C:\Users\datablogsadmin\ords_uninstall_core_2022-03-16_133300_00152.log
Completed uninstall for Oracle REST Data Services. Elapsed time: 00:00:07.840
Step 4 :
Set Config Directory
java -jar ords.war configdir D:\config
Step 5 :
Update passwords and unlock below accounts in Oracle RDS
- APEX_PUBLIC_USER
- APEX_LISTENER
- APEX_REST_PUBLIC_USER
Connect to Oracle 19c in SQL*Plus
D:\apex\utilities>sqlplus dataadmin@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=dataadmin.cwt1dgu2zxag.ap-southeast-1.rds.amazonaws.com)(PORT=1521))(CONNECT_DATA=(SID=ORCL)))
Connected to:
Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.3.0.0.0
Enter Password:
SQL>alter user APEX_PUBLIC_USER account unlock identified by Admin123;
SQL>exec rdsadmin.rdsadmin_run_apex_rest_config('Admin123','Admin123');
SQL>ALTER USER APEX_REST_PUBLIC_USER account unlock IDENTIFIED BY Admin123;
SQL>exec rdsadmin.rdsadmin_util.grant_sys_object('DBA_OBJECTS', 'DATAADMIN', 'SELECT', true);
SQL>exec rdsadmin.rdsadmin_util.grant_sys_object('DBA_ROLE_PRIVS', 'DATAADMIN', 'SELECT', true);
SQL>exec rdsadmin.rdsadmin_util.grant_sys_object('DBA_TAB_COLUMNS', 'DATAADMIN', 'SELECT', true);
SQL>exec rdsadmin.rdsadmin_util.grant_sys_object('USER_CONS_COLUMNS', 'DATAADMIN', 'SELECT', true);
SQL>exec rdsadmin.rdsadmin_util.grant_sys_object('USER_CONSTRAINTS', 'DATAADMIN', 'SELECT', true);
SQL>exec rdsadmin.rdsadmin_util.grant_sys_object('USER_OBJECTS', 'DATAADMIN', 'SELECT', true);
SQL>exec rdsadmin.rdsadmin_util.grant_sys_object('USER_PROCEDURES', 'DATAADMIN', 'SELECT', true);
SQL>exec rdsadmin.rdsadmin_util.grant_sys_object('USER_TAB_COLUMNS', 'DATAADMIN', 'SELECT', true);
SQL>exec rdsadmin.rdsadmin_util.grant_sys_object('USER_TABLES', 'DATAADMIN', 'SELECT', true);
SQL>exec rdsadmin.rdsadmin_util.grant_sys_object('USER_VIEWS', 'DATAADMIN', 'SELECT', true);
SQL>exec rdsadmin.rdsadmin_util.grant_sys_object('WPIUTL', 'DATAADMIN', 'EXECUTE', true);
SQL>exec rdsadmin.rdsadmin_util.grant_sys_object('DBMS_SESSION', 'DATAADMIN', 'EXECUTE', true);
SQL>EXEC rdsadmin.rdsadmin_util.grant_apex_admin_role;
SQL>grant APEX_ADMINISTRATOR_ROLE to DATAADMIN;
Once executed, proceed with the ORDS installation.
Step 6 :
Install ORDS with the required database naming configuration; by default it installs as apex.
D:\ords>java -jar ords.war setup --database dataadmin
Specify the database connection type to use. Enter number for [1] Basic [2] TNS [3] Custom URL [1]:
Enter the name of the database server [localhost]:dataadmin.cwt1dgu2zxag.ap-southeast-1.rds.amazonaws.com
Enter the database listen port [1521]:
Enter 1 to specify the database service name, or 2 to specify the database SID [1]:
Enter the database service name:ORCL
Enter 1 if you want to verify/install Oracle REST Data Services schema or 2 to skip this step [1]:1
Enter the database password for ORDS_PUBLIC_USER:
Confirm password:
Requires to login with administrator privileges to verify Oracle REST Data Services schema.
Enter the administrator username:dataadmin
Enter the database password for dataadmin:
Confirm password:
Connecting to database user: dataadmin url: jdbc:oracle:thin:@//dataadmin.cwt1dgu2zxag.ap-southeast-1.rds.amazonaws.com:1521/ORCL
Retrieving information.
Enter the default tablespace for ORDS_METADATA [SYSAUX]:
Enter the temporary tablespace for ORDS_METADATA [TEMP]:
Enter the default tablespace for ORDS_PUBLIC_USER [USERS]:
Enter the temporary tablespace for ORDS_PUBLIC_USER [TEMP]:
Enter 1 if you want to use PL/SQL Gateway or 2 to skip this step. If using Oracle Application Express or migrating from mod_plsql then you must enter 1 [1]:
Enter the PL/SQL Gateway database user name [APEX_PUBLIC_USER]:
Enter the database password for APEX_PUBLIC_USER:
Confirm password:
Enter 1 to specify passwords for Application Express RESTful Services database users (APEX_LISTENER, APEX_REST_PUBLIC_USER) or 2 to skip this step [1]:
Enter the database password for APEX_LISTENER:
Confirm password:
Enter the database password for APEX_REST_PUBLIC_USER:
Confirm password:
Enter a number to select a feature to enable [1] SQL Developer Web [2] REST Enabled SQL [3] None [1]:1
2022-03-16T05:35:15.021Z INFO reloaded pools: [|dataadmin||, |dataadmin|al|, |dataadmin|rt|, |dataadmin|pu|]
Installing Oracle REST Data Services version 19.4.6.r1421859 ...
Log file written to C:\Users\datablogsadmin\ords_install_core_2022-03-16_133515_00223.log
... Verified database prerequisites
... Created Oracle REST Data Services proxy user
... Created Oracle REST Data Services schema
... Granted privileges to Oracle REST Data Services
... Created Oracle REST Data Services database objects
... Log file written to C:\Users\datablogsadmin\ords_install_datamodel_2022-03-16_133527_00932.log
... Log file written to C:\Users\datablogsadmin\ords_install_apex_2022-03-16_133530_00191.log
Completed installation for Oracle REST Data Services version 19.4.6.r1421859. Elapsed time: 00:00:17.247
Step 7 :
Once ORDS is installed, if you get any errors on the ORDS users, run the below commands in SQL*Plus.
ALTER USER ORDS_PUBLIC_USER account unlock IDENTIFIED BY Admin123;
ALTER USER ORDS_METADATA account unlock IDENTIFIED BY Admin123;
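To confirm the relevant accounts are unlocked before starting ORDS, their status can be checked as below; this is an assumed verification query against DBA_USERS, not part of the original steps.
SELECT username, account_status
FROM dba_users
WHERE username IN ('ORDS_PUBLIC_USER','ORDS_METADATA','APEX_PUBLIC_USER','APEX_LISTENER','APEX_REST_PUBLIC_USER');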
Step 8 :
Start the ORDS Service
java -jar ords.war
Step 9 :
If you need to map your ORDS link to multiple schemas, URL mapping is a must, to avoid confusion when accessing multiple workspaces.
D:\ords>java -jar ords.war map-url --workspace-id ss --type base-path /datablogs01 dataadmin
Authors : 👷 Krishna and 👷 Selvackp
Monday, January 31, 2022
Selva | January 31, 2022 | Amazon RDS, Azure MySQL, Migration, Troubleshooting, Zero Downtime
It is a nightmare story: we struggled a lot to reduce the cutover time of an Azure MySQL to AWS RDS MySQL migration.
- Backup and restore - it took 6 hours to complete
- AWS DMS - text data was truncated and timestamp column data was 6 hours behind the source
Nothing helped, due to bugs, unavailable features, etc. In the end, plain replication is what worked for us.
Two types of replication are available in Azure MySQL:
1.GTID based Replication
We have already discussed this in multiple posts in our blog for GCP Cloud; we will also explain how to configure GTID-based replication from Azure MySQL to AWS RDS in the next blog.
MySQL GTID Replication and lower_case_table_names
Multi Source Replication from GCP Cloud MySQL to Percona XtraDB Cluster
2.Binlog position-based replication
In this blog we perform binlog position-based replication to cut over the application.
Environment Setup
The below POC environment setup was done for testing:
- Azure MySQL 5.6.4 General Purpose instance launched with minimal configuration
- AWS RDS for MySQL 5.7 launched with customized parameter group
- Network connectivity between Azure PaaS and AWS PaaS Service
Once the environment is up and running, modify the below server parameters; the following values need to be updated in both environments,
Azure MySQL:
- replicate_wild_ignore_table=mysql.%,information_schema.%,performance_schema.%,sys.%
- binlog_row_image = FULL
- binlog_expire_logs_seconds = 1440
AWS RDS MySQL:
- time_zone = Same as Azure MySQL
- binlog_format = ROW
- collation and character_set parameters = Same as Azure MySQL
Now both servers are ready to configure replication.
Pre-Steps to configure the replication
1. Let's verify and take note of the binlog position in Azure MySQL
mysql> show master status;
+------------------+----------+--------------------+------------------+-------------------+
| File | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+----------------------+-----------------+--------------+------------------+-------------------+
| mysql-bin.00865 | 52884741 | | | |
+------------------+----------+--------------+------------------+-------------------+
1 row in set (0.00 sec)
2. Create a replication user with the necessary privileges
CREATE USER 'repluser'@'%' IDENTIFIED BY 'datablogs123';
GRANT REPLICATION SLAVE ON *.* TO 'repluser'@'%';
GRANT REPLICATION CLIENT ON *.* TO 'repluser'@'%';
GRANT SELECT ON *.* TO 'repluser'@'%';
FLUSH PRIVILEGES;
3. Using mysqldump, start the schema backup followed by the data backup
mysqldump -h datablogs.mysql.database.azure.com -u repluser@datablogs -p datablogsprod --no-data --routines --triggers --events > datablogsprodmeta.sql
mysqldump -h datablogs.mysql.database.azure.com -u repluser@datablogs -p datablogsprod --routines --triggers --events --skip-add-drop-table --max_allowed_packet=1024M --single-transaction > datablogsproddata.sql
4. Once the backup is completed, restore the data into MySQL RDS,
mysql -h datablogs.c5exr7bs8bax.ap-southeast-1.rds.amazonaws.com -u repluser -p datablogsprod < datablogsprodmeta.sql
mysql -h datablogs.c5exr7bs8bax.ap-southeast-1.rds.amazonaws.com -u repluser -p datablogsprod --max_allowed_packet=1024M < datablogsproddata.sql
Configuring the replication
In Amazon RDS MySQL, set the external master using the binlog file name and position noted in Step 1.
CALL mysql.rds_stop_replication;
CALL mysql.rds_reset_external_master;
CALL mysql.rds_set_external_master ('datablogs.mysql.database.azure.com', 3306, 'repluser@datablogs', 'repluser', 'mysql-bin.00865', 52884741, 0);
CALL mysql.rds_start_replication;
Once executed, start the replication and check the slave status in RDS,
mysql> show slave status \G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: datablogs.mysql.database.azure.com
Master_User: repluser@datablogs
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.00865
Read_Master_Log_Pos: 52884741
Relay_Log_File: relaylog.000004
Relay_Log_Pos: 273
Relay_Master_Log_File: mysql-bin.00865
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Replicate_Do_DB: datablogsprod
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table: mysql.plugin,mysql.rds_monitor,mysql.rds_sysinfo,innodb_memcache.cache_policies,mysql.rds_history,innodb_memcache.config_options,mysql.rds_configuration,mysql.rds_replication_status
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 52884741
Relay_Log_Space: 19742
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 0
Last_SQL_Error:
Replicate_Ignore_Server_Ids:
Master_Server_Id: 1053643462
Master_UUID: 787858b4-a021-11ea-b894-97f71069e028
Master_Info_File: mysql.slave_master_info
SQL_Delay: 0
SQL_Remaining_Delay: NULL
Slave_SQL_Running_State: Slave has read all relay log; waiting for more updates
Master_Retry_Count: 86400
Master_Bind:
Last_IO_Error_Timestamp:
Last_SQL_Error_Timestamp:
Master_SSL_Crl:
Master_SSL_Crlpath:
Retrieved_Gtid_Set:
Executed_Gtid_Set:
Auto_Position: 0
Replicate_Rewrite_DB:
Channel_Name:
Master_TLS_Version:
1 row in set (0.00 sec)
Post Validation and troubleshooting steps of Azure MySQL Replication
Validate the data and table counts between Azure MySQL and MySQL RDS.
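A simple way to compare is to run the same query on both sides; this is a hedged sketch, with the schema name datablogsprod taken from the steps above. Note that table_rows is only an estimate for InnoDB, so use SELECT COUNT(*) per table for exact counts.
SELECT table_name, table_rows
FROM information_schema.tables
WHERE table_schema = 'datablogsprod'
ORDER BY table_name;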
If we get any errors related to the Azure system schema, use the following method to fix them; we got an error like the one below,
Can't find 'mysql.__firewallrules__' doesnt exist
We can skip the error with the below commands in MySQL RDS,
CALL mysql.rds_stop_replication;
CALL mysql.rds_skip_repl_error;
CALL mysql.rds_start_replication;
But in MySQL RDS errors are skipped only row by row, which takes a long time. Instead, follow the method below and check the binlog positions until the DML and DDL queries captured in the binlog are located.
SHOW BINLOG EVENTS IN 'mysql-bin.00865' FROM 52884567 LIMIT 10 \G
Note : Based on the error modify the binlog file and position in above command
Now all is set for the Azure to AWS data sync, and it is ready for application cutover with zero downtime !!!
Friday, January 21, 2022
It is easy to refresh schemas in Oracle RDS; we can do it one time for testing or development purposes.
But if it needs to run on a daily basis, here is automated code to achieve the solution.
GitHub Link for Source Code : https://github.com/selvackp/Oracle_RDS_Import_and_Export
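For a quick idea of what the automation does, below is a minimal sketch of the export half using DBMS_DATAPUMP; the schema name DATAADMIN, the job name, and the file names are assumptions, and the complete export/import refresh logic lives in the linked repository.
DECLARE
  v_handle NUMBER;
BEGIN
  -- open a schema-mode export job (names here are placeholders)
  v_handle := DBMS_DATAPUMP.OPEN(operation => 'EXPORT', job_mode => 'SCHEMA', job_name => 'DAILY_SCHEMA_EXPORT');
  DBMS_DATAPUMP.ADD_FILE(handle => v_handle, filename => 'schema_export.dmp', directory => 'DATA_PUMP_DIR', filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
  DBMS_DATAPUMP.ADD_FILE(handle => v_handle, filename => 'schema_export.log', directory => 'DATA_PUMP_DIR', filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);
  DBMS_DATAPUMP.METADATA_FILTER(handle => v_handle, name => 'SCHEMA_EXPR', value => 'IN (''DATAADMIN'')');
  DBMS_DATAPUMP.START_JOB(v_handle);
END;
/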
Wednesday, October 14, 2020
Objective :
Launch production server clone in to another AWS account using RDS snapshots
What can we do?
There are multiple ways to extract RDS data in AWS: a manual snapshot, an automated snapshot, exporting data to S3, or the AWS Backup service.
What do we do most of the time?
To create a clone of a production server, the easy way is to create a snapshot and launch a new instance from it.
But our requirement is to copy the snapshot into a different AWS account and launch the copy of production there.
Usually we would try to copy or share RDS automated snapshots into the other account, but in this case automated snapshots cannot be shared with a different account.
We can export data to S3 as well, but that requires some effort too.
So the possibility is to take a manual snapshot and share it into the different account, as in the steps below. The screenshot below shows a manual snapshot of MySQL RDS in the source AWS account.
Snapshot sharing into target AWS account
Shared RDS snapshot on target AWS account
Then we can proceed to launch the clone of the production instance in the different account.
Monday, August 3, 2020
What is RDS Encryption ?
We can encrypt Amazon RDS DB instances and snapshots by enabling the encryption option for any Amazon RDS instance. It means data is encrypted at rest, including the underlying storage of DB instances, read replicas, snapshots, and automated backups.
Amazon RDS encrypted DB instances use the industry standard AES-256 encryption algorithm to encrypt your data on the server that hosts your Amazon RDS DB instances. You don't need to modify your database client applications to use encryption.
For encrypted and unencrypted DB instances, data that is in transit between the source and the read replicas is encrypted, even when replicating across AWS Regions
Possibilities to encrypt an RDS instance:
Using the snapshot copy option, we enable encryption on a copy of the RDS snapshot and then restore the snapshot as a new instance:
- Take snapshot of current production instance
- Copy snapshot with enabled encryption
- Restore the snapshot with different instance identifier
- Rename the current production instance identifier to proddb-old
Once the production instance is renamed, rename the instance identifier of the encrypted RDS to the current production name.
Steps to Encrypt RDS instance :
Step 1 : Take a snapshot of unencrypted RDS instance with any specified name
Step 2 : Make a copy of snapshot with encryption enabled option
Here we can use the default master key, or if any customer managed keys are available, use those.
Note : Encrypted read replicas must be encrypted with the same key as the source DB instance when both are in the same AWS Region
Step 3 : We are ready with encryption enabled snapshot copy
Step 4 : Restore rdssnapshotwithencryption
We launch a new instance from rdssnapshotwithencryption with a new instance identifier.
Step 5 : Except for the naming convention, the other network parameters should be the same as the unencrypted RDS instance.
DB Instance identifier : encrypted-instance
Virtual private cloud (VPC) : Same like unencrypted instance
VPC security group : Same like unencrypted instance
DB instance class : Same like unencrypted instance
Step 6 : Once the instance has been launched, verify the encryption configuration
Step 7 : Instance identifier change
Change the unencrypted instance identifier to an _old name, and change the encrypted instance name to the production name.
Step 8 : Verify application connectivity with the encrypted instance
If anything goes wrong, we just need to rename the instance identifiers back, which takes at most 2 minutes, without major downtime.
References :
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.Encryption.html
Sunday, July 12, 2020
While configuring the Database Migration Service (DMS) between Redshift and any kind of heterogeneous source, we come across these issues often.
The error looks like an IAM policy is missing or not attached, so we go straight into the IAM console, grant AmazonS3FullAccess and AmazonRedshiftFullAccess, and then restart the failed DMS tasks.
But no luck; we are still getting the same issue.
When we look into the Redshift IAM role, the customer had added custom roles with permissions, so it is missing trusted entities for both redshift.amazonaws.com and dms.amazonaws.com.
Move into the Trust relationships tab of the custom role and edit the trust relationship as below,
{ "Version": "2012-10-17", "Statement": [ { "Sid": "", "Effect": "Allow", "Principal": { "Service": [ "redshift.amazonaws.com", "dms.amazonaws.com" ] }, "Action": "sts:AssumeRole" } ] }
Once we update the trust policy, we are able to load data from MySQL to Redshift.
Remember, to troubleshoot DMS tasks in depth, always enable detailed debug logging in CloudWatch Logs.