
VINAY KUMAR

SENIOR DATA ENGINEER


VINAYKUMARTHADAKA91@GMAIL.COM
602-845-9694
PROFESSIONAL PROFILE:
 Around 10 years of IT experience in Data Engineering, Analytics, Data Modeling, and Azure Cloud as an MS SQL BI Developer and Azure Data Engineer.
 Experience in the design, development, and implementation of database systems using MS SQL Server for both OLTP and data warehousing applications.
 Experience in building data pipelines using Azure Data Factory and Azure Databricks on Wintel/Linux, and loading data into Azure Data Lake, Azure SQL Database, Azure SQL Data Warehouse (Synapse Analytics), and Snowflake.
 Experience implementing the Databricks Delta Lake architecture (Bronze, Silver, and Gold layers) and Delta Live Tables (DLT), creating DLT pipelines and using Auto Loader to incrementally and efficiently process new data files (see the sketch after this list).
 Good understanding of other AWS services such as S3, EC2, IAM, and RDS; experience with orchestration and data pipeline services such as AWS Step Functions, Data Pipeline, and Glue.
 Good working experience using Sqoop to import data into HDFS from RDBMS and vice versa.
 Experienced in developing scripts using Python, PySpark, Scala, Teradata, and shell scripting to extract, load, and transform data; working knowledge of Azure Databricks, MLOps, GCP, and Snowflake.
 Hands-on experience working with AWS services such as Lambda, Athena, DynamoDB, Step Functions, SNS, and SQS.
 Strong experience in writing Python applications using libraries such as Pandas, NumPy, and SciPy.
 Expertise in using PySpark, Spark SQL, Scala, U-SQL with various data sources like JSON, Parquet and Hive.
 Experience in developing Spark applications using SQL and PySpark in Azure Databricks for data extraction, transformation, and aggregation from multiple file formats, analyzing and transforming the data to uncover insights into customer usage patterns.
 Ability to apply the Spark DataFrame API to perform data manipulation within a Spark session.
 Good experience with Azure services like Storage Account (ADLS Gen2), Key Vault, Data Factory (ADF V2), Logic
App, Azure Databricks (ADB), Active Directory (AAD), Storage Explorer and Data Studio.
 Created pipelines in ADF using linked services, datasets, and data flows to extract, transform, and load data from different sources into Azure SQL, Blob Storage, and Azure SQL Data Warehouse.
 Good understanding of NoSQL databases and hands-on experience writing applications on NoSQL databases like Cassandra and MongoDB.
 Experience using Informatica as an ETL tool to transfer data from source to staging and from staging to target.
 Experienced in migrating SQL databases to Azure Data Lake, GCP, Azure Data Lake Analytics, Azure SQL Database, Databricks, and Azure SQL Data Warehouse; controlling and granting database access; and migrating on-premises databases to Azure Data Lake Store using Azure Data Factory, Wintel/Linux, Snowflake, and Teradata.
 Extensively used regular expressions and core Python features such as lambda, map, and reduce; implemented logging using the Python logging library and profiling using the profile module.
 Working knowledge of Infrastructure as Code (IaC) tools such as Terraform and Pulumi.
 Experience creating and using CI/CD (Continuous Integration and Continuous Delivery) tools such as Jenkins, Octopus, and Azure DevOps pipelines.
 Experience including resource management, offshore software application development, project management, system architecture, data architecture and data modeling, vendor management, and business area analysis.
 Experience working with repositories (Azure DevOps and Git) to create dev branches and merge to master using pull requests (PRs).
 Expert in T-SQL development, writing complex queries joining multiple tables and developing and maintaining stored procedures, triggers, and user-defined functions.
 Experience in developing and implementing ETL processes using SQL Server Integration Services (SSIS) to support a Global Sales and Incentives project.
 Working knowledge of common UNIX/Linux commands.
 Built interactive Power BI dashboards and published Power BI reports utilizing parameters, calculated fields, table calculations, user filters, action filters, and sets to handle views more efficiently.
 A Self-starter with a positive attitude, willingness to learn new concepts and accept challenges.
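A minimal PySpark sketch of the Auto Loader ingestion pattern referenced above, landing incoming files into a Bronze Delta table. It assumes a Databricks runtime where the cloudFiles source is available; the landing, schema, and checkpoint paths are hypothetical.

from pyspark.sql import SparkSession

# Assumes a Databricks cluster where the cloudFiles (Auto Loader) source exists;
# the paths below are illustrative only.
spark = SparkSession.builder.getOrCreate()

raw = (
    spark.readStream.format("cloudFiles")                        # Auto Loader source
    .option("cloudFiles.format", "json")                         # format of incoming files
    .option("cloudFiles.schemaLocation", "/mnt/bronze/_schemas/orders")
    .load("/mnt/landing/orders/")                                 # new files picked up incrementally
)

(
    raw.writeStream.format("delta")                               # append into the Bronze layer
    .option("checkpointLocation", "/mnt/bronze/_checkpoints/orders")
    .outputMode("append")
    .start("/mnt/bronze/orders")
)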

TECHNICAL SKILLS:
Azure Services: Storage Account (ADLS Gen2), Key Vault, Data Factory (ADF V2), Logic App, Databricks, Active Directory (AAD), Storage Explorer, Data Studio, DevOps, Cosmos DB (NoSQL)
Version Control: Azure DevOps Git, Subversion, TFS
Continuous Integration (CI): Jenkins, Azure DevOps Pipelines, Splunk
Continuous Delivery (CD): Octopus, Azure DevOps Pipelines
Data Modeling Tools: ERStudio Data Architect, Visio
Cloud Platforms: AWS, Azure, GCP
Containerization: Docker, ECS, Kubernetes, Artifactory, OpenShift
Operating Systems: Linux (Red Hat 5.x, 6.x, 7.x, SUSE Linux 10), VMware ESX, Windows NT/2000/2003/2012, CentOS, Ubuntu
Database: SQL, Azure SQL, RDS, Oracle 10g/11g, MySQL, MongoDB, Cassandra
Scripting: Python, Bash, Ruby, Groovy, Perl, Shell, HTML, JSON, YAML, XML
Project Management: Jira, Confluence, Azure DevOps Boards
SDLC Methodologies: Agile, Scrum, Waterfall, Kanban
Data File Types: JSON, CSV, Parquet, Avro, TextFile

PROFESSIONAL EXPERIENCE:

James Hardie Industries, Cleburne, TX Aug 2021- Present


Sr. Data Engineer
Responsibilities:
 Met with business/user groups to understand business processes, gather requirements, and carry out analysis, design, development, and implementation according to client requirements.
 Exposure to the overall SDLC, including requirement gathering, data modeling, development, testing, debugging, deployment, documentation, and production support.
 Designed and developed extensive Azure Data Factory (ADF) pipelines for ingesting data from different relational and non-relational source systems to meet business functional requirements.
 Implemented the Databricks Delta Lake architecture (Bronze, Silver, and Gold layers) and Delta Live Tables (DLT), creating DLT pipelines and using Auto Loader to incrementally and efficiently process new data files.
 Created pipelines, data flows and complex data transformations and manipulations using ADF and PySpark with
Databricks.
 Developed Azure Databricks notebooks to apply the business transformations and perform data cleansing
operations.
 Developed Spark applications using Scala and implemented an Apache Spark data processing project to handle data from various RDBMS and streaming sources.
 Ingested data from RDBMS sources, performed data transformations, and then exported the transformed data to Cassandra as per business requirements.
 Extensively worked on SparkContext, Spark SQL, GCP, Scala, RDD transformations, actions, and DataFrames.
 Worked with various file formats such as delimited text files, clickstream log files, Apache log files, Avro, JSON, and XML files on Wintel/Linux and GCP; well versed in columnar file formats such as Parquet.
 Designed and developed batch and real-time processing solutions using ADF, Wintel/Linux, GCP, and Databricks.
 Developed and executed scripts on AWS Lambda to generate AWS CloudFormation templates.
 Designed, built, and deployed a multitude of applications utilizing almost all of the AWS stack (including EC2, Route 53, S3, RDS, HSM, DynamoDB, SQS, IAM, and EMR), focusing on high availability, fault tolerance, and auto-scaling.
 Developed AWS CloudFormation templates and set up Auto Scaling for EC2 instances.
 Developed custom scripts and transformations in Python within AWS Glue to handle complex data scenarios and
support specific business requirements.
 Created and managed data catalogs in AWS Glue to provide a unified metadata repository, improving data
governance and discoverability across the organization.
 Led design, development, and implementation of data processing workflows using AWS Glue for seamless ETL
operations, resulting in improved data accuracy and efficiency.
 Used the AWS Glue Data Catalog with crawlers to register data in S3 and perform SQL query operations on it (see the sketch after this list).
 Collaborated with cross-functional teams to optimize data pipelines and automate data transformations,
reducing manual effort and enhancing data quality.
 Developed ETL jobs using PySpark with both the DataFrame API and the Spark SQL API, along with Scala, MLOps, and Snowflake.
 Used Spark and Scala to perform various transformations and actions, saving the resulting data back to HDFS.
 Used broadcast variables and joined fact and dimension tables efficiently, applying various Spark optimizations to run jobs effectively.
 Ingested data in mini-batches and performed RDD transformations on those mini-batches using Spark Streaming to run streaming analytics in Databricks and Teradata.
 Used Azure Databricks, created Spark clusters and configured high concurrency clusters to speed up the
preparation of high-quality data.
 Responsible for analysis of requirements and designing generic and standard ETL process to load data from
different source systems.
 Designed, developed, implemented, delivered and managed ETL processes or data pipelines to enable data
accessibility to application, reporting and analytics.
 Created several types of data visualizations using Python and Tableau.
 Wrote and executed various MySQL database queries from Python using the Python MySQL connector and the MySQLdb package.
 Worked with relational SQL and NoSQL databases, including PostgreSQL, as well as Hadoop.
 Developed Spark applications using PySpark and Spark SQL for data extraction, transformation, and aggregation from multiple file formats, analyzing and transforming the data to uncover insights into customer usage patterns.
 Developed Collections in Mongo DB and performed aggregations on the collections.
 Created, provisioned different Databricks clusters needed for batch and continuous streaming data processing
and installed the required libraries for the clusters.
 Responsible for designing logical and physical data models for various data sources on Confidential Redshift.
 Designed and Developed ETL jobs to extract data from Salesforce replica and load it in data mart in Redshift.
 Worked on NoSQL databases like MongoDB and DocumentDB and graph databases like Neo4j.
 Created numerous pipelines in Azure using Azure Data Factory v2 to get data from disparate source systems using different ADF activities such as Copy Data, Filter, ForEach, and Get Metadata.
 Automated jobs using different ADF triggers such as event, schedule, and tumbling window triggers.
 Used Azure Logic Apps to develop workflows which can send alerts/notifications on different jobs in Azure.
 Used Azure DevOps to build and release different versions of code in different environments.
 Well-versed with Azure authentication mechanisms such as Service principal, Managed Identity, Key vaults.
 Experience with creating Secrets/Keys/Certificates in Azure Key Vault to store sensitive information.
 Used PolyBase to load tables in Azure Synapse and Snowflake.
 Worked on data analysis and reporting of customer usage metrics using Power BI, and presented this analysis to leadership to support product growth alongside a motivated team of engineers and product managers.
 Worked with complex SQL views, stored procedures, triggers, and packages in large databases across various servers. Experience working with Agile/Scrum methods in a fast-paced environment.
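A brief, hedged sketch of the Glue Data Catalog pattern noted above: reading a crawler-registered S3 table into a Spark DataFrame and running a SQL-style aggregation. It assumes the aws-glue-libs runtime available inside a Glue job; the database, table, and bucket names are hypothetical.

from awsglue.context import GlueContext
from pyspark.context import SparkContext

# Assumes this runs inside an AWS Glue job; names and paths are illustrative.
sc = SparkContext.getOrCreate()
glue_context = GlueContext(sc)
spark = glue_context.spark_session

# Read a table that a Glue crawler registered over S3 data.
orders_dyf = glue_context.create_dynamic_frame.from_catalog(
    database="sales_db", table_name="raw_orders"
)

# Convert to a DataFrame and run a SQL-style aggregation.
orders_dyf.toDF().createOrReplaceTempView("raw_orders")
daily_totals = spark.sql(
    "SELECT order_date, SUM(amount) AS total_amount "
    "FROM raw_orders GROUP BY order_date"
)

# Write the result back to S3 as Parquet for downstream reporting.
daily_totals.write.mode("overwrite").parquet("s3://example-bucket/curated/daily_totals/")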

Environment: Azure SQL Server, Azure Data Factory, GCP, AWS Glue, AWS S3, EC2, Visual Studio Code, Azure Databricks, Apache Spark, Azure Synapse, Teradata, Azure DevOps, Power BI, Azure Logic Apps and Azure Cloud Services, Azure Functions Apps, Azure Monitoring, Azure Search, Key Vault, Snowflake, Python, Data Migration Assistant.

Amalgamated Bank, Chicago, IL Mar 2020 – Jul 2021
Data Engineer
Responsibilities:
 Extracted, transformed, and loaded data from source systems to Azure data storage services using a combination of Azure Data Factory, T-SQL, Spark SQL, Scala, Azure Databricks, and Snowflake.
 Worked on Spark using Python as well as PySpark and Spark SQL for faster testing and processing of data.
 Worked on the Azure Databricks cloud, organizing data processing into notebooks and making it easy to visualize data using dashboards.
 Used Azure Databricks, created Spark clusters, and configured high-concurrency clusters to speed up the preparation of high-quality data, together with Snowflake.
 Ingested data into one or more Azure services (Azure Data Lake, Azure Storage, Azure SQL, Azure DW) and processed the data in Azure Databricks.
 Worked with the SparkSession object, Spark SQL, and DataFrames for faster execution of Hive queries.
 Used broadcast joins in Spark to join smaller datasets to large datasets without shuffling the large dataset across nodes (see the sketch after this list).
 Designed and developed data flows (streaming sources) using Azure Databricks features.
 Built application platforms in the cloud by leveraging Azure Databricks.
 Developed Python scripts and UDFs using both DataFrames/SQL and RDD/MapReduce in Spark for data aggregation and queries, writing data back into RDBMS through Sqoop.
 Created Terraform scripts to automate deployment of EC2 Instance, S3, IAM Roles and Jenkins Server.
 Designed, built, and deployed a multitude of applications utilizing almost all of the AWS stack (including EC2, Route 53, S3, RDS, DynamoDB, SQS, IAM, and EMR), focusing on high availability and auto-scaling.
 Used various AWS services including S3, EC2, Athena, Redshift, EMR, SNS, SQS, DMS, and Kinesis.
 Responsible for creating on-demand tables on S3 files using Lambda functions and AWS Glue with Python and PySpark.
 Implemented best practices for job scheduling, monitoring, and error handling in AWS Glue to ensure reliable
and resilient data processing.
 Worked closely with data engineers and analysts to understand data requirements and design effective data
solutions using AWS Glue.
 Used AWS Data Pipeline for data extraction, transformation, and loading from homogeneous and heterogeneous data sources, and built various graphs for business decision-making using the Python Matplotlib library.
 Responsible for estimating cluster size and for monitoring and troubleshooting the Spark Databricks cluster.
 Created pipelines in ADF using linked services, datasets, and pipelines to extract, transform, and load data from different sources; scheduled triggers and mapping data flows using Azure Data Factory; and used Key Vault to store credentials.
 Wrote various data normalization jobs for new data ingested into Redshift.
 Extracted and updated data in MongoDB using the mongoimport and mongoexport command-line utilities.
 Responsible for analysis of requirements and designing generic and standard ETL process to load data from
different source systems.
 Involved in developing and documenting the ETL (Extract, Transformation and Load) strategy to populate the
Data Warehouse from various source systems.
 Created Data Sets, Linked Services, Control Flows and Azure Logic Apps for sending Emails and Alerts.
 Developed JSON Scripts for deploying the pipelines in Azure Data Factory (ADF) that process the data using SQL
Activity.
 Performed data migration from an RDBMS to a NoSQL database, giving a complete picture of data deployed across various data systems.
 Experienced in developing audit, balance and control framework using SQL DB audit tables to control the
ingestion, transformation, and load process in Azure.
 Created parameterized datasets and pipelines for reusability and to avoid duplicate code.
 Used Informatica as an ETL tool to transfer data from source to staging and from staging to target.
 Worked with customers to deploy, manage, and audit best practices for cloud products.
 Designed, developed, and delivered large-scale data ingestion, data processing, and data transformation
projects on Azure.
 Mentored and shared knowledge with customers, and provided architecture reviews, discussions, and prototypes.
 Worked closely with other data engineers, software engineers, data scientists, data managers and business
partners.
 Designed and developed business intelligence dashboards, analytical reports, and data visualizations using Power BI, creating multiple measures with DAX expressions for user groups such as the sales, operations, and finance teams.
 Responsible for the management of Power BI assets, including reports, dashboards, workspaces, and the
underlying datasets that are used in the reports.
 Used Power BI and Power Pivot to develop data analysis prototype and used Power view and Power map to
visualize reports.
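A minimal PySpark sketch of the broadcast-join pattern noted above, joining a small dimension table to a large fact table; the table paths, join key, and columns are hypothetical.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("broadcast_join_example").getOrCreate()

# Hypothetical tables: a large fact table and a small dimension table.
transactions = spark.read.parquet("/data/fact/transactions")      # large
branches = spark.read.parquet("/data/dim/branches")               # small

# Broadcasting the small table ships it to every executor, so the large
# table is joined locally without shuffling the fact data across nodes.
enriched = transactions.join(F.broadcast(branches), on="branch_id", how="left")

enriched.groupBy("branch_name").agg(F.sum("amount").alias("total_amount")).show()

Broadcasting is only appropriate when the smaller side comfortably fits in executor memory; otherwise Spark's default shuffle join is the safer choice.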

Environment: Azure Data Factory, Azure Data Lake, Wintel/Linux, AWS Glue, AWS S3, EC2, Azure SQL Database, Azure Synapse Analytics, Application Insights, Azure Monitoring, Azure Search, Snowflake, Key Vault, Azure Analysis Services, Spark, Power BI, Python scripting, Data Migration Assistant, Azure Database Migration Services.

Sentara, MA Apr 2019 – Feb 2020


SQL BI Developer
Responsibilities:
 Worked as an MS SQL BI developer generating reports per business requirements.
 Created and updated views, stored procedures, triggers, user-defined functions (UDFs), and scripts using T-SQL code.
 Worked across different environments such as Development, QA, Stage, and Production servers.
 Debugged code developed by other developers on the team.
 Experience converting Oracle code to T-SQL and T-SQL code to Oracle.
 Experience analyzing base requirements and building initial draft reports for demos to the business.
 Involved in extraction, aggregation, and consolidation of Adobe data within AWS Glue using PySpark (see the sketch after this list).
 Created data ingestion modules using AWS Glue for loading data into various layers in S3 and reporting using Athena and QuickSight.
 Worked on ingesting data through cleansing and transformation steps, leveraging AWS Lambda, EC2, S3, AWS Glue, and Step Functions.
 Extract, Transform, Load (ETL) development using SQL Server 2008 and SQL Server 2012 Integration Services (SSIS).
 Worked on various possible performance-related issues including Database Tuning with the sole intention of
achieving faster response times and better throughput.
 Extensively used SSIS Import/Export Wizard, for performing the ETL operations.
 Created and optimized complex T-SQL queries and stored procedures to encapsulate reporting logic.
 Created SQL Server reports using SSRS 2012. Identified data sources and defined them to build data source views.
 Generated parameterized reports, sub reports, tabular reports using SSRS 2008R2.
 Designed, Developed and Deployed reports in MS SQL Server environment using SSRS-2008R2.
 Generated Sub-Reports, Cross-tab, Conditional, Drill down reports, Drill through reports and parameterized
reports using SSRS 2012.
 Created Dashboards and visualizations from HIPAA auditing data by doing analysis with threshold values.
 Created reports from HIPAA database from audit reports and number of times accessed patient data.
 Worked on the complex stored procedures and scripts supporting HIPAA transactions.
 Worked on implementing the automation of PHI/PII analysis, and execution of CA Javelin jobs.
 Coordinated with multiple stakeholders to ensure that test data needs were addressed proactively, appropriately testing critical business systems with production-like data that does not contain any PHI/PII.
 Developed various Informatica jobs to improve the process of identifying PHI/PII across several non-prod environments, which involved bulk data movement and data analysis in database objects.
 Developed Analysis Services Project. Developed, deployed and monitored SSIS Packages including upgrading DTS
to SSIS. Responsible for identifying and defining Data Source and Data Source views.
 Worked with HL7 V2, HL7 V3, CDA, RESTful web services, healthcare patient data, and HIPAA compliance audits.
 Designed different types of star schemas using Erwin, with various dimensions (such as time, services, and customers) and fact tables.
 Used Performance Point Server to create interactive reports with an ability to drill down into details.
 Wrote T-SQL procedures to generate DML scripts that modified database objects dynamically based on user
inputs.
 Performed Documentation for all kinds of reports and DTS and SSIS packages.
 Designed and implemented stored procedures and triggers for automating tasks.
 Tested applications for performance, data integrity and validation issues.
 Worked on gathering requirements from business users and analysts by scheduling meetings on a regular basis.
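A hedged PySpark sketch of the kind of Glue-based aggregation of Adobe clickstream data described above, producing a reporting layer in S3 that Athena can query; the bucket names, file layout, and columns are hypothetical.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("adobe_clickstream_agg").getOrCreate()

# Hypothetical raw Adobe clickstream export landed in S3 as tab-separated files.
hits = (
    spark.read.option("sep", "\t").option("header", True)
    .csv("s3://example-raw-bucket/adobe/hit_data/")
)

# Consolidate hits into daily page metrics for the reporting layer.
daily = (
    hits.withColumn("hit_date", F.to_date("date_time"))
    .groupBy("hit_date", "page_url")
    .agg(
        F.countDistinct("visitor_id").alias("unique_visitors"),
        F.count("*").alias("page_views"),
    )
)

# Write Parquet partitioned by date so Athena can prune partitions efficiently.
(
    daily.write.mode("overwrite")
    .partitionBy("hit_date")
    .parquet("s3://example-curated-bucket/adobe/daily_page_metrics/")
)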

Environment: SQL Server 2014/2012, SSIS, SSRS, MDX, OLAP, XML, MS PowerPoint, AWS Glue, AWS S3, EC2, MS SharePoint, MS Project, MS Access 2007/2003, Agile, Shell Scripting, Oracle, Crystal Reports, SVN Tortoise, Tidal, DART Tool

Volkswagen Credit, FL Nov 2017 - Mar 2019


SQL SSIS Developer
Responsibilities:
 Worked in migration of SQL Server 2008 to 2012/2014.
 Used T-SQL in constructing User Functions, Views, Indexes, User Profiles, Relational Database Models, Data
Dictionaries, and Data Integrity.
 Created a master package to run the other ETL packages from one master package.
 Used DDL and DML for writing triggers, stored procedures, and data manipulation.
 Performance tuning of SQL queries and stored procedures using SQL Profiler and Index Tuning Wizard.
 Troubleshot any kind of data or validation issues.
 Developed complex T-SQL queries, common table expressions (CTE), stored procedures, and functions used for
designing SSIS packages.
 Designed dynamic SSIS Packages by passing project/Package parameters to transfer data crossing different
platforms, validate data during transferring, and archived data files for different DBMS.
 Worked as a SQL Server 2008 report writer and performed complex SQL report writing and design.
 Used SQL Server 2012 tools like SQL Server Management Studio, SQL Server Profiler, and SQL Server Visual
Studio.
 Created different SSRS reports based on user requirements, including ad-hoc reports.
 Used Jasper Reports as a reporting tool.
 Created Tablix Reports, Matrix Reports, Parameterized Reports, Sub-reports, Charts, and Grids using SQL Server
Reporting Services 2012.
 Design the ETL for the data load.
 Designed SSIS Packages to transfer data from various sources of the Company into the database that was
modeled and designed.
 Bug fixes in SSIS, SSRS and stored procedures as necessary.
 Scheduling the jobs to run for ETL using SQL Server Agent.
 Worked on writing the T-SQL code for the historical data to pull per specification requirement.
 Created Informatica mappings with T-SQL procedures to build business rules to load data.
 Provided production support for SSIS and SSRS jobs.
 Performed T-SQL tuning and query optimization for SSIS packages.
 Created SSIS process design architecting the flow of data from various sources to target.
 Created numerous simple to complex queries involving self joins, correlated sub queries, CTE's for diverse
business requirements.
 Worked on incremental loads using concepts like checksum and column-to-column comparison (see the sketch after this list).
 Used SharePoint 2014 and TFS to store all related work and reported on it by creating tasks in TFS.
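A hedged Python sketch (using pyodbc) of the checksum-based incremental-load idea above: rows whose T-SQL CHECKSUM differs between staging and target are updated, and missing rows are inserted. The connection string, tables, and columns are hypothetical; in the original work this logic lived in SSIS/T-SQL rather than Python.

import pyodbc

# Hypothetical connection string and tables; illustrates checksum-based change detection.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=sqlhost;DATABASE=dw;Trusted_Connection=yes;"
)
cursor = conn.cursor()

# Update rows whose column checksum differs between staging and target.
cursor.execute("""
    UPDATE t
    SET    t.amount = s.amount, t.status = s.status
    FROM   dbo.target_orders AS t
    JOIN   dbo.stage_orders  AS s ON s.order_id = t.order_id
    WHERE  CHECKSUM(s.amount, s.status) <> CHECKSUM(t.amount, t.status);
""")

# Insert rows present in staging but missing from the target.
cursor.execute("""
    INSERT INTO dbo.target_orders (order_id, amount, status)
    SELECT s.order_id, s.amount, s.status
    FROM   dbo.stage_orders AS s
    LEFT JOIN dbo.target_orders AS t ON t.order_id = s.order_id
    WHERE  t.order_id IS NULL;
""")

conn.commit()
conn.close()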

Environment: SQL Server 2012/2014 Enterprise Edition, SQL BI Suite (SSAS, SSIS, SSRS), VB Script, ASP.NET, T-SQL,
Enterprise manager, XML, MS PowerPoint, OLAP, OLTP, MDX, Erwin, Informatica, MOSS 2007, MS Project, MS Access
2008 & Windows Server 2008, Oracle.
Baxter Healthcare, Broken Arrow OK Sep 2016-Oct 2017
Data Engineer
Responsibilities:
 Improved customer retention levels by 40% by effectively coordinating between the financial team and customers and establishing rapport with suppliers, thereby lowering overall cost by 30%.
 Calculated the year-over-year (YoY) variance in sales of the current year over the last year using advanced table calculations.
 Forecasted spare parts to plan inventory.
 Initiated process improvements, working with different stakeholders, that decreased delivery time to customers from suppliers. Experience interacting with freight forwarders to negotiate price and time of delivery.
 Proactively convinced management to implement a brand-new ERP system and worked with different teams, including the marketing, commercial, accounts, and IT teams, to ensure a successful implementation.
 Increased sales by 20% by revamping trade show materials and identifying new trade shows that target our niche customers.
 Developed data ingestion modules using AWS Step Functions, AWS Glue, and Python modules (see the sketch after this list).
 Developed the PySpark code for AWS Glue jobs and for EMR.
 Identified KPIs to be included in purchase reports, empowering the purchasing team to reduce last-minute ordering costs by 5%.
 Produced reports on accounts, purchasing, and supply that helped identify issues and arrive at feasible solutions to improve operational efficiency.
 Designed and developed seller and customer profiles focusing on the active and inactive customers of each product category.
 Designed and created various analytical reports and dashboards to help senior management identify key KPIs.
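A hedged boto3 sketch of the ingestion orchestration pattern described above: starting a Glue job that runs the PySpark ingestion code, then starting a Step Functions execution for the wider pipeline. The job name, state machine ARN, bucket, and payload are hypothetical.

import json
import boto3

# Hypothetical names and ARNs; illustrates kicking off Glue and Step Functions from Python.
glue = boto3.client("glue", region_name="us-east-1")
sfn = boto3.client("stepfunctions", region_name="us-east-1")

# Start a Glue job that runs the PySpark ingestion script.
run = glue.start_job_run(
    JobName="ingest-spare-parts",
    Arguments={"--source_path": "s3://example-bucket/raw/spare_parts/"},
)
print("Glue job run id:", run["JobRunId"])

# Start a Step Functions state machine that orchestrates the wider pipeline.
execution = sfn.start_execution(
    stateMachineArn="arn:aws:states:us-east-1:123456789012:stateMachine:spare-parts-pipeline",
    input=json.dumps({"run_date": "2017-01-31"}),
)
print("Started execution:", execution["executionArn"])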

Selinis Software, India May 2012-Jun 2014


Data Engineer/Analyst
Responsibilities:
 Gathered requirements, analyzed them, and wrote the design documents.
 Involved in complete Agile requirement analysis, development, and system and integration testing.
 Built various graphs for business decision-making using packages such as NumPy, Pandas, Matplotlib, SciPy, and ggplot2 for numerical analysis (see the sketch after this list).
 Involved in data mining, transformation, and loading from the source systems to the target system.
 Worked with various Integrated Development Environments (IDEs) such as Visual Studio Code and PyCharm.
 Designed and developed reports from multiple data sources by blending data on a single worksheet in Tableau.
 Performed slicing and dicing of data using SQL and Excel for data cleaning and preparation.
 Performed data analysis and maintenance on information stored in a MySQL database.
 Created data models for AWS Redshift and Hive from dimensional data models.
 Worked on data cleaning, wrangling, analytics, modeling, and integration.
 Provided end-user training and documentation for client reporting services.
 Participated in the analysis, design, and development phases of report development, performance tuning, and production rollout for every report of the Information Technology Department.
 Queried the databases, wrote test validation scripts, and performed system testing.
 Worked with the developers during coding and while remediating the software.
 Documented data mapping and transformation techniques in the functional design documents based on the business requirements. Performed data profiling and data quality checks.
 Worked on exporting reports in multiple formats including MS Word, Excel, CSV, and PDF.
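A small hedged sketch of the kind of decision-support graph described above, using Pandas and Matplotlib; the CSV file and its columns are hypothetical.

import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical extract of monthly sales by region.
sales = pd.read_csv("monthly_sales.csv", parse_dates=["month"])

# Aggregate and pivot so each region becomes one line on the chart.
pivot = (
    sales.groupby(["month", "region"], as_index=False)["revenue"].sum()
    .pivot(index="month", columns="region", values="revenue")
)

ax = pivot.plot(figsize=(10, 5), title="Monthly revenue by region")
ax.set_xlabel("Month")
ax.set_ylabel("Revenue")
plt.tight_layout()
plt.savefig("monthly_revenue_by_region.png")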
