datablogs: Performance Tuning

Tuesday, November 14, 2023

PostgreSQL Table Partitioning on AWS RDS

As the business grows bigger, our customer and transaction data grows along with it. In the meantime, performance needs to be considered as well.

On such big tables, indexes alone will not help us achieve good performance at peak times. Alternatively, we have partitioning to split a table's data into multiple pieces, available in all the major relational database environments.

Likewise, we are going to do range partitioning on a sample table in a PostgreSQL database. PostgreSQL offers three partitioning methods: range, list, and hash.


Below are the important constraints to keep in mind with PostgreSQL partitioning:

  • It is possible to attach a regular table to a partitioned one (see the sketch just below)
  • It is not possible to transform an existing regular table into a partitioned one
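For the first point, a minimal illustrative sketch; datablogspaycheck_2022 is a hypothetical pre-existing regular table with the same columns as the partitioned table we create later:

-- Attach an existing regular table as one partition of the partitioned table
ALTER TABLE datablogspaycheck
    ATTACH PARTITION datablogspaycheck_2022
    FOR VALUES FROM ('2022-01-01') TO ('2023-01-01');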

Since an in-place transformation is not possible, below we recreate the regular table as a partitioned one and migrate its data, for your reference.

Anyone can use this example to perform partitioning on AWS RDS for PostgreSQL easily.

Click GitHub Link for Code : AWS-PostgreSQL-RDS-Table-Partition

Step 1 : Create the base datablogspaycheck table and insert some sample records

DROP TABLE IF EXISTS datablogspaycheck CASCADE;

DROP SEQUENCE IF EXISTS public.paycheck_id_seq;

CREATE SEQUENCE public.paycheck_id_seq
    START WITH 1
    INCREMENT BY 1
    NO MINVALUE
    NO MAXVALUE
    CACHE 1;

CREATE TABLE datablogspaycheck
(
    payment_id int NOT NULL DEFAULT nextval('public.paycheck_id_seq'::regclass),
    created timestamptz NOT NULL,
    updated timestamptz NOT NULL DEFAULT now(),
    amount float,
    status varchar DEFAULT 'new'
);

CREATE INDEX idx_paycheck ON datablogspaycheck (created);

INSERT INTO datablogspaycheck (created)
VALUES (generate_series(timestamp '2023-01-01', now(), interval '5 minutes'));

Step 2 : Rename the base table to a new name

ALTER TABLE datablogspaycheck RENAME TO datablogspaycheck_basetable;

Step 3 : Create the partitioned table

CREATE TABLE datablogspaycheck
(
    payment_id int NOT NULL DEFAULT nextval('public.paycheck_id_seq'::regclass),
    created timestamptz NOT NULL,
    updated timestamptz NOT NULL DEFAULT now(),
    amount float,
    status varchar DEFAULT 'new'
) PARTITION BY RANGE (created);

Step 4 : Create a separate partition for each date range

CREATE TABLE datablogspaycheck_202303 PARTITION OF datablogspaycheck
    FOR VALUES FROM ('2023-01-01') TO ('2023-03-01');

CREATE TABLE datablogspaycheck_20230304 PARTITION OF datablogspaycheck
    FOR VALUES FROM ('2023-03-01') TO ('2023-04-01');

CREATE TABLE datablogspaycheck_202304 PARTITION OF datablogspaycheck
    FOR VALUES FROM ('2023-04-01') TO ('2023-05-01');

CREATE TABLE datablogspaycheck_202311 PARTITION OF datablogspaycheck
    FOR VALUES FROM ('2023-05-01') TO ('2023-11-01');

CREATE TABLE datablogspaycheck_2024 PARTITION OF datablogspaycheck
    FOR VALUES FROM ('2023-11-01') TO ('2024-01-01');
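Note that a row whose created value falls outside all of these ranges will fail to insert. If that is a concern, a DEFAULT partition (available since PostgreSQL 11) can act as a catch-all; the partition name here is our own choice:

-- Optional catch-all partition for rows outside the defined ranges
CREATE TABLE datablogspaycheck_default PARTITION OF datablogspaycheck DEFAULT;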

Step 5 : Migrate all the records

INSERT INTO datablogspaycheck (payment_id, created, updated, amount, status)
SELECT payment_id, created, updated, amount, status
FROM datablogspaycheck_basetable;

Step 6 : Validate each partition 

SELECT * FROM datablogspaycheck_202303 ORDER BY 2 DESC;

SELECT * FROM datablogspaycheck_20230304 ORDER BY 2 DESC;

SELECT * FROM datablogspaycheck_202311 ORDER BY 2 DESC;
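We can also confirm that queries scan only the relevant partition. A minimal pruning check (the date range is just an example):

-- The plan should show a scan on datablogspaycheck_202304 only
EXPLAIN SELECT count(*)
FROM datablogspaycheck
WHERE created >= '2023-04-05' AND created < '2023-04-06';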

It's done; we have easily migrated the regular table's data into the partitioned table.

Thanks for Reading !!!



Friday, February 17, 2023

How to tune long running Queries on Azure Synapse DW (Dedicated Pool) - Part 2

It took quite a long time to derive our tuning approach for Azure Data Factory with Azure Synapse, because we needed to run the system for at least two months to validate that our approach was smooth.

Yes, it is all running well, with the expected performance on the ETL loads and processes.

Here are the major things we need to take care of in an Azure Synapse dedicated SQL pool:

  1. Dedicated SQL Pool Scaling 
  2. Dedicated SQL Pool Workload Management

Dedicated SQL Pool Scaling : 

We decided on the below metrics to optimize Azure Synapse:

  1. Data load and processing timelines
  2. Critical process running timelines
  3. Minimal processing timelines
  4. Reporting and visualization timelines

Based on the above requirements, we can easily scale the dedicated SQL pool up or down according to the load.

Suppose we need to load and process the data within 2 hours; we can scale the dedicated SQL pool up to whatever level we need.

The key factors when increasing the DW level are concurrency, instance capacity, and the resources allocated to each query.


Before starting our critical process, we can automate the scale-up with ADF pipelines themselves; plenty of blogs are available on configuring that.

The best method is to configure the authentication with a service principal.
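If you prefer T-SQL over the portal or ADF, the scale-up can also be issued directly. A minimal sketch, run against the master database (the pool name is from this post; the target service level is just an example):

-- Scale the dedicated SQL pool to DW400c
ALTER DATABASE [sql-datablogs-dw] MODIFY (SERVICE_OBJECTIVE = 'DW400c');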


Dedicated SQL Pool Workload Management : 

We decided on the below query classes to prepare the workload management:

  1. Short Queries
  2. Critical Queries 
  3. High Intensive Queries 
  4. Reporting Queries

Based on these classifications, we have to create a separate workload group for each class of queries.

Step 1 :

We need to create a login and user for workload management in the dedicated SQL pool:

-- Run in the master database
CREATE LOGIN [Analyticsprocess] WITH PASSWORD = 'xxxxxxxxx';

-- Run in the dedicated SQL pool database
CREATE USER [Analyticsprocess] FOR LOGIN [Analyticsprocess];

GRANT CONTROL ON DATABASE::[sql-datablogs-dw] TO [Analyticsprocess];

Step 2 :

Suppose you have scaled the instance up to DW400c; below is the resource allocation for the DW400c instance's concurrency requirements.

Go to Workload management --> New workload group --> and create the group named ELT.

Since the analytics process user runs the high-intensity queries, we have to reserve at least the minimum resources for its workload group.

Click Classifiers --> Add classifier --> name it ELT --> set Member to the analytics login; the Label value is important to mention.


Once we click Add, we get the below concurrency range based on DW400c.




By default, a system-level workload group handles the queries, but it is not effective; we have to force queries into our own workload group.
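The same setup can also be scripted instead of clicking through the portal. A minimal T-SQL sketch; the resource percentages are illustrative and should match your DW400c sizing:

-- Workload group reserving resources for high-intensity ELT work
CREATE WORKLOAD GROUP ELT
WITH ( MIN_PERCENTAGE_RESOURCE = 26
     , CAP_PERCENTAGE_RESOURCE = 100
     , REQUEST_MIN_RESOURCE_GRANT_PERCENT = 26 );

-- Route the analytics login and our label into that group
CREATE WORKLOAD CLASSIFIER ELTClassifier
WITH ( WORKLOAD_GROUP = 'ELT'
     , MEMBERNAME = 'Analyticsprocess'
     , WLM_LABEL = 'highintensiveprocess' );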



Step 3 :

This is very important for utilizing the workload group properly: we need to specify the label on the heavy processing queries so that they are routed and resourced properly.

CREATE TABLE rpt_datablogs_finalreport.smgreport
WITH ( HEAP, DISTRIBUTION = REPLICATE )
AS
SELECT * FROM rpt_datablogs_finalreport.vw_smgreport
OPTION ( LABEL = 'highintensiveprocess' );
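To verify the classification took effect, something like the below should show the query landing in our workload group (the group_name and classifier_name columns may vary by version):

-- Check which workload group recent labelled queries landed in
SELECT request_id, [label], group_name, classifier_name, status
FROM sys.dm_pdw_exec_requests
WHERE [label] = 'highintensiveprocess'
ORDER BY submit_time DESC;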

 


These two things gave a big improvement in performance and reduced the operational cost by close to 60%.

There are still a few more things to learn about optimizing it on the Azure Data Factory side; let's see that in the next blog.

Happy Troubleshooting !!! 


Sunday, August 28, 2022

How to tune long running Queries on Azure Synapse DW (Dedicated Pool) - Part 1

A three-hour long-running pipeline was reduced to 35 minutes when we gave it the right kind of top-to-bottom optimization, just like our body !!!

Yes, sometimes if you do proper core and cross-training workouts, you can run 10 km in 55 minutes !!!

It happened to me, but not now 😆😆😆

Oh oh, we are drifting away from the data ground !!! Let's move into Azure Synapse tuning.

In the starting phase, every piece of code and pipeline process really troubled us; whatever tuning we did, it would run in the same time.

We waited a long time for each process to complete, as it was running for 3 hours !!!

But Reecha's blog helped us start with the basic checks and gave some hope of fine-tuning it.

Remember, the below piece of monitoring code always helps us dig deeper.

--Check Long Running Query in Azure Synapse

SELECT *
FROM sys.dm_pdw_exec_requests
WHERE status not in ('Completed','Failed','Cancelled')
  AND session_id <> session_id()
ORDER BY submit_time DESC;
--Find the distributed query plan steps for long running query

SELECT * FROM sys.dm_pdw_request_steps WHERE request_id = 'xxxxxx'  ORDER BY step_index; 

*** Replace request_id with the value from the first query's result.
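Once the slowest step is identified, we can drill into the data movement workers for that step. A sketch reusing the same request_id placeholder (step_index 2 is just an example):

-- Inspect data movement (DMS) workers for a slow step
SELECT * FROM sys.dm_pdw_dms_workers
WHERE request_id = 'xxxxxx' AND step_index = 2
ORDER BY total_elapsed_time DESC;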
Sample Running Query :


Initially we thought the data volume was huge, so we expected this much delay to complete. But the distributed query plans showed long finishing times for individual queries in Azure Synapse.

We will go through the tuning process step by step.

There are multiple ways to see the execution plan of your Azure Synapse queries:

  1. Using the Azure console --> click your dedicated pool --> click Query activity --> each query has its query plan
  2. Using SSMS, you can run the below query and get your explain plan in XML format:

        EXPLAIN select * from dbo_datablogs.vw_fact_transferdata (nolock)



So, we have figured out the execution plan, and we now need to fix the below things to make it faster.

We need to reduce data movement operations (shuffle moves, trim moves, partition moves) and have proper indexing on our business tables.

Queries are executed on the SQL pool based on how your tables are distributed and indexed, so we need to distribute the tables accordingly. The major part of the tuning is at the table level.
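A quick way to check how evenly a table's rows are spread across the 60 distributions is DBCC PDW_SHOWSPACEUSED; the table name below is the one used later in this post:

-- Row counts per distribution; large differences indicate data skew
DBCC PDW_SHOWSPACEUSED('dbt_datablogs.fact_transferdata');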

Once we have done that properly, we have achieved the milestone.

How do we distribute a table in Azure Synapse?

Two types of distribution are available in Azure Synapse: round-robin and hash distribution (plus replicated tables, covered below).

Round Robin Distribution
  • By default, if you create a table, it is created with round-robin distribution
  • Table rows are distributed evenly across all distributions
  • The main purpose of a round-robin table is loading speed; use it to improve load performance
  • If your table is just scratch data or doesn't have any proper constraints and keys, use round-robin distribution
CREATE TABLE [dbo_datablogs].[ControlTable_blogs](
                [Id] [int] IDENTITY(1,1) NOT NULL PRIMARY KEY NONCLUSTERED NOT ENFORCED,
                [SourceObjectSettings] [nvarchar](max) NULL) 
WITH ( CLUSTERED COLUMNSTORE INDEX,  DISTRIBUTION = ROUND_ROBIN )

Hash Distribution
  • If your table has frequent DML operations (insert, update, delete), use hash distribution
  • It can improve the performance of your queries substantially by reducing data movement
  • It distributes rows across the distributions based on a hash of your chosen distribution column
CREATE TABLE [dbo_datablogs].[ControlTable_blogs](
                [Id] [int] IDENTITY(1,1) NOT NULL PRIMARY KEY NONCLUSTERED NOT ENFORCED,
                [SourceObjectSettings] [nvarchar](max) NULL) 
WITH ( HEAP,  DISTRIBUTION = HASH(Id))

Replicate

This is another way of storing a table in the SQL pool:
  • A full copy of the table is available on every distribution to avoid data movement
  • If the table is small (commonly recommended for tables under about 2 GB), use the replicate option
CREATE TABLE [dbo_datablogs].[ControlTable_blogs](
                [Id] [int] IDENTITY(1,1) NOT NULL PRIMARY KEY NONCLUSTERED NOT ENFORCED,
                [SourceObjectSettings] [nvarchar](max) NULL) 
WITH ( HEAP,  DISTRIBUTION = REPLICATE )

Rules of thumb for creating tables

First Level : Find an absolutely unique column, or else use a partially unique column, to create the table with hash distribution.

Second Level : If your table doesn't have proper unique columns, try to distribute the table on columns used in JOIN, GROUP BY, DISTINCT, OVER, and HAVING clauses in your procedures or SELECT statements.

Let's examine the first level,

We first created the table with round-robin distribution using the below script:

CREATE TABLE dbt_datablogs.fact_transferdata
WITH
  (
    CLUSTERED COLUMNSTORE INDEX,  
    DISTRIBUTION = ROUND_ROBIN   
  )  
AS SELECT * FROM dbt_datablogs.vw_fact_transferdata

It took 28 minutes to complete.


So we figured out a good candidate key and created the table with the below script:

CREATE TABLE dbt_datablogs.fact_transferdata
WITH
  (
    CLUSTERED COLUMNSTORE INDEX,  
    DISTRIBUTION = HASH(tranferaddressid)
  )  
AS SELECT * FROM dbt_datablogs.vw_fact_transferdata

It took 4 minutes to complete.


We can see the huge difference when proper keys are chosen.

Let's examine the second level,

When we don't have proper keys, we can evaluate multiple columns based on the business logic or the join conditions in the SELECT clauses, and create a proper hash distribution to improve performance.

Let's do the smaller table tuning in this blog; you can try the same in your own environment.

The below table is small, but it has a shuffle move, and that delays the process by 11 minutes.







Then, based on the shuffle columns, we added the proper hash distribution and recreated the table:

CREATE TABLE dbt_datablogs.fact_memberinfo
WITH
  (
    HEAP,
    DISTRIBUTION = HASH(infoid)
  )  
AS SELECT * FROM dbt_datablogs.vw_memberinfo

Once the table was recreated, we examined the execution plan for the same statement again, and the process finished within seconds.



This is not the end; we have also tuned a few things in Data Factory and Azure Synapse workload management to get the expected performance. We will see more in the next blog.

To learn more in depth, please use the below references from the Microsoft site as well.

References :