A Data Integration Approach to Maximize Your ROI

Posted by Technovert Solutions on March 23rd, 2021

The data integration approach adopted by many data integration projects relies on a set of premium tools, leading to cash burnout and a below-standard return on investment (ROI). To overcome this and maximize ROI, we lay out a data integration approach that favours open-source tools over premium ones to deliver better results and a more confident return on the investment.

Adopt a two-stage data integration approach:

Part 1: Setting up a REST API source with standard NoSQL JSON (with nested datasets)

Along with the data movement, it is necessary to provide a plug-and-play architecture, notifications, audit data for reporting, unburdened intelligent scheduling, and the setup of all the necessary instances. The landing data warehouse chosen was AWS Redshift, which serves as a one-stop store for the operational data stores (ODS) as well as the facts and dimensions.

As stated, we relied entirely on open-source tools rather than tools from vendors such as Oracle, Microsoft, Informatica, and Talend. The data integration was delivered by leveraging Python, SQL, and Apache Airflow: Python for extraction, SQL to load and transform the data, and Airflow to orchestrate the loads via Python-based scheduler code. Below is the data flow architecture.

[Figure: data flow architecture]

Part 2: Execution

Challenges:

1. A new column will not break the process.

The config table, the heart of the process, holds all the jobs to be executed, their last extract dates, and the parameters for making the REST API call or extracting data from an RDBMS. Any new table planned for extraction, or any new store added as part of business expansion, simply needs its entries in the config table. The DAG Generator DAG run then builds the jobs in a snap; they appear in the Airflow UI on the next refresh within seconds, and the new jobs are executed on the next schedule along with the existing jobs.

2. Dealing with nested data in JSON.

The following approach was adopted to overcome this problem: AWS Redshift Spectrum was configured, with the IAM role and IAM policy needed to access the AWS Glue Catalog, and associated with the AWS Redshift database cluster.

3. Intelligent scheduling

Time-based: batch scheduling, with micro-ELTs run time and again within the day at short intervals. A set of jobs was automated to keep running asynchronously until all the processes were completed, using a Python routine for intelligent scheduling. The code reads the set of jobs being executed as part of the current batch from a job execution/job config table and keeps running those four jobs until all the jobs are in a completed or failed state, as per the logical flow diagram below.

[Figure: logical flow diagram of the intelligent scheduling routine]

For event-based triggering, an application drops a file in S3; the integration process reads this event and starts the load into the data warehouse. The configuration is as follows: a CloudWatch event triggers a Lambda function, which in turn makes an API call to trigger the Airflow DAG (a minimal sketch of this trigger is shown at the end of this section).

For projects that involve programming, a version control mechanism is necessary. To provide it, configure a Git repository to hold the DAG files in Airflow with the Kubernetes executor.
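As an illustration of the event-based trigger described above, here is a minimal sketch of the Lambda handler, assuming Airflow 2.x with the stable REST API and basic authentication enabled. The environment variables, DAG id, and Airflow URL are hypothetical placeholders rather than the project's actual configuration.

```python
# Minimal sketch: Lambda handler fired by the CloudWatch/S3 event that
# triggers an Airflow DAG run over the Airflow 2.x stable REST API.
# AIRFLOW_URL, AIRFLOW_USER, AIRFLOW_PASSWORD and DAG_ID are assumed
# environment variables, not the project's actual configuration.
import base64
import json
import os
import urllib.request


def lambda_handler(event, context):
    """Start the warehouse load DAG, passing the raw event as run config."""
    airflow_url = os.environ["AIRFLOW_URL"]            # e.g. https://airflow.example.com
    dag_id = os.environ.get("DAG_ID", "s3_file_load_dag")

    # Forward the triggering event to the DAG so it can work out
    # which file landed in S3 and what to load.
    payload = json.dumps({"conf": {"trigger_event": event}}).encode("utf-8")

    credentials = f"{os.environ['AIRFLOW_USER']}:{os.environ['AIRFLOW_PASSWORD']}"
    token = base64.b64encode(credentials.encode("utf-8")).decode("utf-8")

    request = urllib.request.Request(
        url=f"{airflow_url}/api/v1/dags/{dag_id}/dagRuns",
        data=payload,
        method="POST",
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Basic {token}",
        },
    )
    with urllib.request.urlopen(request) as response:
        return {"statusCode": response.status, "body": response.read().decode("utf-8")}
```

Keeping the Lambda thin and forwarding the event as DAG run configuration leaves the decision about what to load to the DAG itself, which fits the config-table-driven design described earlier.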
Takeaway: With any ETL tool on the market, the project duration would run beyond six months, since it requires building a job for each operational data store. The best recommended option is to use scripting in conjunction with an ETL tool to build the repetitive jobs, which would more or less overlap with the way the process is executed now.

Talk to our Data Integration experts.