datablogs

Sunday, January 16, 2022

Upgrade Oracle Apex 5.1.4 to Oracle Apex 21.1.3 in AWS RDS

It's a quick story, and the steps to upgrade Oracle Apex in AWS RDS are easy too. But the dependent ORDS (Oracle REST Data Services) in-place upgrade is a slightly more difficult task; we will solve that in the next blog.

Environment Setup:

1. Oracle RDS launched with engine version 12.1.0

C:\Users\datablogs>aws rds describe-db-instances
{
    "DBInstances": [
        {
            "DBInstanceIdentifier": "datablogs-db",
            "DBInstanceClass": "db.t3.medium",
            "Engine": "oracle-se2",
            "DBInstanceStatus": "available",
            "MasterUsername": "dataadmin",
            "DBName": "ORCL",
            "Endpoint": {
                "Address": "datablogs-db.cv33pqc8qgq1.us-east-1.rds.amazonaws.com",
                "Port": 1521,
                "HostedZoneId": "Z2R2ITUGPM61AM"
            },
            .......
            .......
            "EngineVersion": "12.1.0.2.v25"

2. Create an option group and add the APEX option with version 5.1.4. Attach the option group to the Oracle 12c RDS instance.

C:\Users\datablogs>aws rds describe-option-groups --option-group-name oraapex
{
    "OptionGroupsList": [
        {
            "OptionGroupName": "oraapex",
            "OptionGroupDescription": "oraapex",
            "EngineName": "oracle-se2",
            "MajorEngineVersion": "12.1",
            "Options": [
                {
                    "OptionName": "APEX",
                    "OptionDescription": "Oracle Application Express Runtime Environment",
                    "Persistent": false,
                    "Permanent": false,
                    "OptionVersion": "5.1.4.v1",
                    "OptionSettings": [],
                    "DBSecurityGroupMemberships": [],
                    "VpcSecurityGroupMemberships": []
                }
                .......
}
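If you prefer the AWS CLI over the console, a minimal sketch of creating and attaching this option group would look like the following (the group and instance names match the example above; adjust them to your setup):

# Create an option group for the oracle-se2 12.1 engine
aws rds create-option-group \
    --option-group-name oraapex \
    --option-group-description "oraapex" \
    --engine-name oracle-se2 \
    --major-engine-version 12.1

# Add the APEX option at version 5.1.4.v1
aws rds add-option-to-option-group \
    --option-group-name oraapex \
    --options "OptionName=APEX,OptionVersion=5.1.4.v1" \
    --apply-immediately

# Attach the option group to the instance
aws rds modify-db-instance \
    --db-instance-identifier datablogs-db \
    --option-group-name oraapex \
    --apply-immediately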

By default, Oracle RDS creates the Apex schema named after the version (e.g., APEX_050100 for Oracle Apex 5.1.4, APEX_210100 for Oracle Apex 21.1.3).

3. Once the Oracle RDS is launched, we can check the Apex version using the command below:

SELECT comp_name, status, version FROM dba_registry ORDER BY comp_name;
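To run this check against the RDS endpoint shown above, a sqlplus session along these lines should work (a minimal sketch, assuming an Oracle client is installed locally; sqlplus prompts for the master password):

sqlplus dataadmin@//datablogs-db.cv33pqc8qgq1.us-east-1.rds.amazonaws.com:1521/ORCL

SQL> SELECT comp_name, status, version FROM dba_registry ORDER BY comp_name;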

Note: We can keep the Oracle Apex objects in the default schema (APEX_050100), or we can create a separate schema and maintain it. Either way, a major version upgrade does not affect the existing schema tables in Oracle Apex.

4. Assume we have workspaces and an application connected to the current version.

The current version has 442 tables available; we are going to upgrade to Oracle 19c.

5. Repeat Step 2: create an option group with APEX option version 21.1.3, attach the option group to the Oracle 19c RDS, and begin the upgrade in the console.

5.1. Modify the Oracle instance in the AWS RDS console.

5.2. Modify the additional options with the new option group.
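The same console steps can be expressed roughly as a single CLI call (a sketch, using the target engine version and option group name shown below; a major version upgrade needs the extra flag):

aws rds modify-db-instance \
    --db-instance-identifier datablogs-db \
    --engine-version 19.0.0.0.ru-2021-10.rur-2021-10.r1 \
    --option-group-name ora19apex \
    --allow-major-version-upgrade \
    --apply-immediately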

6. The Oracle RDS instance is upgraded to 19c:

C:\Users\datablogs>aws rds describe-db-instances
{
    "DBInstances": [
        {
            "DBInstanceIdentifier": "datablogs-db",
            "DBInstanceClass": "db.t3.medium",
            "Engine": "oracle-se2",
            "DBInstanceStatus": "available",
            "MasterUsername": "dataadmin",
            "DBName": "ORCL",
            "Endpoint": {
                "Address": "datablogs-db.cv33pqc8qgq1.us-east-1.rds.amazonaws.com",
                "Port": 1521,
                "HostedZoneId": "Z2R2ITUGPM61AM"
            },
            .......
            .......
            "EngineVersion": "19.0.0.0.ru-2021-10.rur-2021-10.r1"
C:\Users\datablogs>aws rds describe-option-groups --option-group-name ora19apex
{
    "OptionGroupsList": [
        {
            "OptionGroupName": "ora19apex",
            "OptionGroupDescription": "ora19apex",
            "EngineName": "oracle-se2",
            "MajorEngineVersion": "19",
            "Options": [
                {
                    "OptionName": "APEX",
                    "OptionDescription": "Oracle Application Express Runtime Environment",
                    "Persistent": false,
                    "Permanent": false,
                    "OptionVersion": "21.1.v1",
                    "OptionSettings": [],
                    "DBSecurityGroupMemberships": [],
                    "VpcSecurityGroupMemberships": []
                }
                .......
}

7. Repeat Step 3 to confirm we have upgraded to Oracle Apex 21.1.3.

As mentioned in the note in Step 3, a new default schema is created for the latest version, and the existing schema tables are not impacted.

Upgrading Oracle Apex in AWS RDS is an easy task. In the next blog we will cover the in-place upgrade of ORDS in detail.

Thanks for Reading !!!

Wednesday, December 29, 2021

Capture Basic MySQL Remote Server Details

A basic requirement: gather the details below from a remote MySQL host using Python.

  1. OS version
  2. Physical host or VMware
  3. MySQL port
  4. Total RAM / total CPUs and cores
  5. Number of databases and their sizes
  6. Disk space

https://github.com/selvackp/MySQLRemoteServerDetails.git

# Required libraries for program
import paramiko
from mysql.connector import connect, Error

# Set up the SSH client and auto-accept unknown host keys
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())

# Remote server details to enter
print('Enter the RemoteHost Login Details')
print('----------------------------------')
print('\t')
re_host_IP = input("RemoteHost IP: ")  # Any reachable IP address or hostname of the remote server
re_host_user = input("RemoteHost Username: ")
re_host_password = input("RemoteHost Password: ")  # For key-based logins (e.g., AWS machines), just press Enter here
re_host_key_file = input(
    "RemoteHost keyfile name: ")  # Press Enter if no key file is used; for the AWS machine we tested, a .pem key file is required to connect
print('\t')

try:
    # Remote Host Server Connectivity
    client.connect(re_host_IP, username=re_host_user, password=re_host_password or None,
                   key_filename=re_host_key_file or None)
    stdin, stdout, stderr = client.exec_command('cat /proc/version')
    print('Remote Server OS and Version')
    print('----------------------------')
    print(stdout.readlines()[0])
    print('\t')
    print('Remote Server Total CPU and Cores')
    print('---------------------------------')
    stdin, stdout, stderr = client.exec_command('lscpu')
    print('Total CPU/Core count for Remote server: ', stdout.readlines()[3])  # line 4 of lscpu output is typically "CPU(s): N"
    print('\t')
    print('Connected Server Total RAM')
    print('--------------------------')
    stdin, stdout, stderr = client.exec_command('vmstat -s')
    print('Total RAM for Remote server: ', stdout.readlines()[0])  # first line of vmstat -s is total memory in KB
    print('\t')
    print('Connected Server Total Disk Spaces')
    print('----------------------------------')
    stdin, stdout, stderr = client.exec_command('df -h --total')
    print('Total Disk for Remote server: ', stdout.read().decode())
    print('\t')

    # MySQL Basic Details code below
    with connect(
            host=re_host_IP,
            user=input("Enter Remote MySQL Username: "),
            password=input("Enter Remote MySQL Password: "),
    ) as connection:
        print('\t')
        get_database_host = "select @@hostname"
        with connection.cursor() as cursor:
            cursor.execute(get_database_host)
            database_host = cursor.fetchone()
            print("Successfully connected Remote MySQL Server :", database_host[0])
            print('\t')
        print(database_host[0], 'Database Name and Size in MB')
        print('----------------------------------------------')
        get_database_size = "SELECT table_schema AS 'Database', ROUND(SUM(data_length + index_length) / 1024 / 1024, 2) AS 'Size (MB)' FROM information_schema.TABLES GROUP BY table_schema;"
        with connection.cursor() as cursor:
            cursor.execute(get_database_size)
            database_size = cursor.fetchall()
            for row in database_size:
                dbname, dbsize = row
                print('{} - {}'.format(dbname, dbsize))
        select_port = 'SELECT @@port'
        with connection.cursor() as cursor:
            cursor.execute(select_port)
            database_port = cursor.fetchone()
            print('\t')
            print('Database Port :')
            print('---------------')
            print(database_host[0], 'Server MySQL Port is :', database_port[0])
    client.close()

except (Error, paramiko.SSHException) as e:
    print(e)
    exit()
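To try the script, install the two dependencies and run it (the filename mysql_remote_details.py is just an example name for the script above):

pip install paramiko mysql-connector-python
python mysql_remote_details.py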

Wednesday, October 14, 2020

RDS Snapshot Share with Automated or Manual Backups ?

Objective :

Launch a clone of the production server into another AWS account using RDS snapshots.

What can we do?

There are multiple ways to extract RDS data in AWS: a manual snapshot, an automated snapshot, exporting data to S3, or the AWS Backup service.

What do we do most of the time?

To create a clone of the production server, the easy way is to create a snapshot and launch a new instance from it.

But our requirement is to copy the snapshot into a different AWS account and launch a copy of production there.

Mostly we try to copy or share the RDS automated snapshots into the different account. But automated snapshots can't be shared with a different account.

Automated snapshots

We can export the data to S3 as well, but that requires some effort too.

So the option is to take a manual snapshot and share it into the different account, as in the steps below. The screenshot below shows a manual snapshot of the MySQL RDS instance in the source AWS account.

Source AWS account

Snapshot sharing into target AWS account

share snapshot to target account

Shared RDS snapshot on target AWS account

target AWS account

Then we can proceed to launch the clone of the production instance in the different account.
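The same flow can be scripted with the AWS CLI. A minimal sketch, assuming a hypothetical automated snapshot name and a placeholder 12-digit target account ID:

# Automated snapshots (IDs start with "rds:") can't be shared directly,
# so copy one to a manual snapshot first
aws rds copy-db-snapshot \
    --source-db-snapshot-identifier rds:proddb-2020-10-14-00-00 \
    --target-db-snapshot-identifier proddb-manual-copy

# Share the manual snapshot with the target AWS account
aws rds modify-db-snapshot-attribute \
    --db-snapshot-identifier proddb-manual-copy \
    --attribute-name restore \
    --values-to-add 123456789012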

Monday, August 3, 2020

AWS RDS Encryption

What is RDS Encryption ?

We can encrypt Amazon RDS DB instances and snapshots by enabling the encryption option on any Amazon RDS instance. This means data is encrypted at rest, including the underlying storage of DB instances, read replicas, snapshots, and automated backups.

Amazon RDS encrypted DB instances use the industry standard AES-256 encryption algorithm to encrypt your data on the server that hosts your Amazon RDS DB instances. You don't need to modify your database client applications to use encryption.

For encrypted and unencrypted DB instances, data that is in transit between the source and the read replicas is encrypted, even when replicating across AWS Regions

Possibilities to encrypt an RDS instance:

Using the snapshot copy option, we enable encryption on a copy of the RDS snapshot, then restore the snapshot as a new instance:

  1. Take a snapshot of the current production instance
  2. Copy the snapshot with encryption enabled
  3. Restore the snapshot with a different instance identifier
  4. Rename the current production instance identifier to proddb-old

Once the production instance is renamed, rename the encrypted RDS instance identifier to the current production name.
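A minimal CLI sketch of the same flow, assuming the production instance is named proddb and using the default RDS KMS key (substitute a customer managed key if you have one):

# 1. Snapshot the unencrypted production instance
aws rds create-db-snapshot \
    --db-instance-identifier proddb \
    --db-snapshot-identifier proddb-unencrypted-snap

# 2. Copy the snapshot with encryption enabled
aws rds copy-db-snapshot \
    --source-db-snapshot-identifier proddb-unencrypted-snap \
    --target-db-snapshot-identifier rdssnapshotwithencryption \
    --kms-key-id alias/aws/rds

# 3. Restore the encrypted copy as a new instance
aws rds restore-db-instance-from-db-snapshot \
    --db-instance-identifier encrypted-instance \
    --db-snapshot-identifier rdssnapshotwithencryption

# 4. Swap the instance identifiers
aws rds modify-db-instance --db-instance-identifier proddb \
    --new-db-instance-identifier proddb-old --apply-immediately
aws rds modify-db-instance --db-instance-identifier encrypted-instance \
    --new-db-instance-identifier proddb --apply-immediately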

Steps to Encrypt RDS instance : 

Step 1 : Take a snapshot of the unencrypted RDS instance with any specified name

Take a snapshot

Step 2 : Make a copy of the snapshot with the encryption option enabled

copy snapshot

Here we can use the default master key, or if any customer managed keys are available, use those.

Note : Encrypted read replicas must be encrypted with the same key as the source DB instance when both are in the same AWS Region

encryption enable

Step 3 : We are ready with the encryption-enabled snapshot copy

snapshots

Step 4 : Restore rdssnapshotwithencryption

restore snapshot

We launch a new instance from the rdssnapshotwithencryption snapshot.

Step 5 : Except for the naming convention, the network and other parameters should be the same as the unencrypted RDS instance

DB instance identifier: encrypted-instance

Virtual private cloud (VPC): same as the unencrypted instance

new instance launch

VPC security group: same as the unencrypted instance

DB instance class: same as the unencrypted instance

Step 6 : Once the instance has been launched, verify the encryption configuration

configuration validation

Step 7 : Change the instance identifiers

Rename the unencrypted instance identifier with an _old suffix, then rename the encrypted instance to the production name

identifier change

Step 8 : Verify the application connectivity with the encrypted instance

If anything goes wrong, we just need to rename the instance identifiers back, which takes at most 2 minutes, without major downtime

References :

https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.Encryption.html

Sunday, July 19, 2020

Cassandra Multi-Node Cluster setup in Ubuntu 16.04

Sharing simple steps to configure a Cassandra cluster in a few minutes.

Prerequisite :

Install Java (JRE) from the Oracle site and verify the Java version. Our installation step below will take care of the Java installation as well:

java -version

Installing Cassandra repository :

What are repository authentication keys ?

"Authentication keys" are usually obtained from the maintainer of the software repository. The maintainer will often place a copy of the authentication key on a public key server such as www.keyserver.net. The key can then be retrieved using the command.

Add the Cassandra repository to /etc/apt/sources.list.d/cassandra.sources.list using the command below.

Depending on the Cassandra version required, update the 36x value in the command:

echo "deb http://www.apache.org/dist/cassandra/debian 36x main" | sudo tee -a /etc/apt/sources.list.d/cassandra.sources.list

If we don't know the public key needed to authenticate the downloaded packages, just run apt-get update and pick up the missing authentication key for Cassandra from the error output:

root@datablogs:~# sudo apt-get update
Hit:1 http://us-east-1.ec2.archive.ubuntu.com/ubuntu xenial InRelease
Hit:2 http://us-east-1.ec2.archive.ubuntu.com/ubuntu xenial-updates InRelease
Hit:3 http://us-east-1.ec2.archive.ubuntu.com/ubuntu xenial-backports InRelease
Get:5 http://security.ubuntu.com/ubuntu xenial-security InRelease [109 kB]
Hit:6 http://apt.postgresql.org/pub/repos/apt xenial-pgdg InRelease
Get:4 https://downloads.apache.org/cassandra/debian 36x InRelease [267 B]
Ign:4 https://downloads.apache.org/cassandra/debian 36x InRelease
Get:7 https://downloads.apache.org/cassandra/debian 36x/main amd64 Packages [678 B]
Fetched 113 kB in 1s (79.6 kB/s)
Reading package lists... Done
W: GPG error: https://downloads.apache.org/cassandra/debian 36x InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 749D6EEC0353B12C
W: The repository 'http://www.apache.org/dist/cassandra/debian 36x InRelease' is not signed.
N: Data from such a repository can't be authenticated and is therefore potentially dangerous to use.
N: See apt-secure(8) manpage for repository creation and user configuration details.

Here, 749D6EEC0353B12C is the public key. Add the public key to install Cassandra:

root@datablogs:~# sudo apt-key adv --keyserver pool.sks-keyservers.net --recv-key 749D6EEC0353B12C
Executing: /tmp/tmp.HUY7Ncexdr/gpg.1.sh --keyserver
pool.sks-keyservers.net
--recv-key
749D6EEC0353B12C
gpg: requesting key 0353B12C from hkp server pool.sks-keyservers.net
gpg: key 0353B12C: public key "T Jake Luciani <jake@apache.org>" imported
gpg: Total number processed: 1
gpg:               imported: 1  (RSA: 1)

Installing Apache Cassandra :

Update the repository again and install Cassandra; the Java packages are included in the installation:

root@datablogs:~# sudo apt-get update
Hit:1 http://us-east-1.ec2.archive.ubuntu.com/ubuntu xenial InRelease
Hit:2 http://us-east-1.ec2.archive.ubuntu.com/ubuntu xenial-updates InRelease
Hit:3 http://us-east-1.ec2.archive.ubuntu.com/ubuntu xenial-backports InRelease
Get:5 http://security.ubuntu.com/ubuntu xenial-security InRelease [109 kB]
Hit:6 http://apt.postgresql.org/pub/repos/apt xenial-pgdg InRelease
Get:4 https://downloads.apache.org/cassandra/debian 36x InRelease [267 B]
Fetched 112 kB in 1s (95.7 kB/s)
Reading package lists... Done
root@datablogs:~# sudo apt-get install cassandra
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
  ca-certificates-java fontconfig-config fonts-dejavu-core java-common libavahi-client3 libavahi-common-data libavahi-common3 libcups2 libfontconfig1 libjpeg-turbo8
  libjpeg8 liblcms2-2 libnspr4 libnss3 libnss3-nssdb libopts25 libpcsclite1 libpython-stdlib libpython2.7-minimal libpython2.7-stdlib libxi6 libxrender1 libxtst6 ntp
  openjdk-8-jre-headless python python-minimal python2.7 python2.7-minimal x11-common
Suggested packages:
  cassandra-tools default-jre cups-common liblcms2-utils pcscd ntp-doc libnss-mdns fonts-dejavu-extra fonts-ipafont-gothic fonts-ipafont-mincho fonts-wqy-microhei
  fonts-wqy-zenhei fonts-indic python-doc python-tk python2.7-doc binutils binfmt-support
The following NEW packages will be installed:
  ca-certificates-java cassandra fontconfig-config fonts-dejavu-core java-common libavahi-client3 libavahi-common-data libavahi-common3 libcups2 libfontconfig1
  libjpeg-turbo8 libjpeg8 liblcms2-2 libnspr4 libnss3 libnss3-nssdb libopts25 libpcsclite1 libpython-stdlib libpython2.7-minimal libpython2.7-stdlib libxi6
  libxrender1 libxtst6 ntp openjdk-8-jre-headless python python-minimal python2.7 python2.7-minimal x11-common
0 upgraded, 31 newly installed, 0 to remove and 9 not upgraded.
Need to get 63.1 MB of archives.
After this operation, 167 MB of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 http://us-east-1.ec2.archive.ubuntu.com/ubuntu xenial-updates/main amd64 libjpeg-turbo8 amd64 1.4.2-0ubuntu3.4 [111 kB]
Get:2 http://us-east-1.ec2.archive.ubuntu.com/ubuntu xenial-updates/main amd64 x11-common all 1:7.7+13ubuntu3.1 [22.9 kB]
Get:3 http://us-east-1.ec2.archive.ubuntu.com/ubuntu xenial/main amd64 libxtst6 amd64 2:1.2.2-1 [14.1 kB]
.....
.....
.....

Configuring Apache Cassandra :

We need to configure the required variables in the cassandra.yaml file:

cluster_name: Name of the cluster

cluster_name: 'Test Cluster' (must be the same on all Cassandra nodes)

- seeds: This is a comma-delimited list of the IP addresses of the nodes in the cluster

- seeds: "172.31.x.20,172.31.x.21,172.31.x.22,172.31.x.23,172.31.x.24,172.31.x.25"

listen_address: This is the IP address that other nodes in the cluster will use to connect to this one. It defaults to localhost and needs to be changed to the IP address of the node

listen_address: 10.225.x.20 ( Cassandra Node 1 Address )

rpc_address: This is the IP address for remote procedure calls. It defaults to localhost. If the server's host name is properly configured, leave this as is. Otherwise, change it to the server's IP address or the loopback address (127.0.0.1)

rpc_address: 10.225.x.20 ( Cassandra Node 1 Address )

Once the variables are updated, restart Cassandra on each node.
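A minimal per-node sketch of those edits, assuming the default config path /etc/cassandra/cassandra.yaml and the placeholder addresses used above (set NODE_IP to each node's own IP):

NODE_IP=172.31.x.20   # this node's address
sudo sed -i "s/^cluster_name:.*/cluster_name: 'Test Cluster'/" /etc/cassandra/cassandra.yaml
sudo sed -i "s/- seeds:.*/- seeds: \"172.31.x.20,172.31.x.21,172.31.x.22\"/" /etc/cassandra/cassandra.yaml
sudo sed -i "s/^listen_address:.*/listen_address: $NODE_IP/" /etc/cassandra/cassandra.yaml
sudo sed -i "s/^rpc_address:.*/rpc_address: $NODE_IP/" /etc/cassandra/cassandra.yaml
sudo service cassandra restart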

Cluster status :

Verify the Cassandra running status using the command below:

nodetool status 
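Every node should appear in the output with status "UN" (Up/Normal). As an extra sanity check, confirm that the whole ring agrees on one schema version:

nodetool describecluster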

Sunday, July 12, 2020

[SQLState XX000] ERROR: User arn:aws:redshift:ap-south-1:316720341464:dbuser:redshift-cluster-1/analytics is not authorized to assume IAM Role arn:aws:iam::323232323:role/aws-access-for-redshift

While configuring the Database Migration Service (DMS) between Redshift and any kind of heterogeneous source, we come across this issue often.

The error looks like a missing IAM policy on the attached role, so we went straight to the IAM console, granted AmazonS3FullAccess and AmazonRedshiftFullAccess, and restarted the failed DMS tasks.

But no luck; we still got the same issue.

When we looked into the Redshift IAM role, the customer had added custom roles with permissions, but the role was missing trusted entities for both redshift.amazonaws.com and dms.amazonaws.com.

Move into the Trust relationships tab of the custom role and edit the trust relationship as below:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "Service": [
          "redshift.amazonaws.com",
          "dms.amazonaws.com"
        ]
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
after adding trusted entities
Attached policies for DMS tasks

Once we updated the trust policy, we were able to load data from MySQL to Redshift.
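For reference, the same trust-policy update can be applied from the CLI. A minimal sketch, assuming the trust document above is saved locally as trust.json (the role name comes from the error message):

aws iam update-assume-role-policy \
    --role-name aws-access-for-redshift \
    --policy-document file://trust.json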

Remember: to troubleshoot DMS tasks in depth, always enable detailed debug logging in the CloudWatch logs.