Vidya Resume

Sri Vidya Neelam is a data engineer with expertise in various big data technologies, cloud platforms, and programming languages. She holds an M.S. in Data Science and has professional experience at Pearson, Byjus, and Unacademy, where she developed data engineering solutions, optimized data pipelines, and ensured data integrity. Her technical skills include Apache Spark, AWS, Azure, SQL, and business intelligence tools like Tableau and Power BI.


Sri Vidya Neelam

Chandler, AZ | neelamsrividya20@gmail.com | 763-344-6521 | LinkedIn

TECHNICAL SKILLS

Big Data Ecosystem: Apache Spark, Hive, MapReduce, HDFS, Kafka, Apache NiFi, Azure Data Factory, AWS Glue, AWS Athena, AWS EMR, Snowflake, Google BigQuery, DBT (Data Build Tool), SQL Databases
Cloud Platforms: Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP)
Databases: MS SQL, MySQL, PostgreSQL, MongoDB, Redshift, Snowflake
Programming Languages: Python, SQL, Shell, Bash
Operating Systems: Windows, Linux, Ubuntu
Business Intelligence (BI) Tools: Tableau, Power BI (DAX, Power Query, Report Builder, Paginated Reports), Looker Studio, Apache Superset
Monitoring & Logging: AWS CloudWatch, Grafana, Kibana
Version Control: Git, GitHub, GitLab, Bitbucket, JIRA, ALM
CRM Tools: Salesforce, Lead Squared, Order Management System (OMS), Order Hive
DevOps & CI/CD Tools: Docker, Kubernetes, Jenkins, GitLab, Terraform
Office & Productivity Tools: MS Excel, Google Sheets, MS PowerPoint, MS Word, SAP GUI, RapidMiner, KNIME

EDUCATION

• M.S. in Data Science, Concordia University St. Paul, St. Paul, MN | GPA: 3.75
• B.Tech in Electronics and Communication, JNTU, Hyderabad, India | GPA: 3.73

PROFESSIONAL EXPERIENCE

Pearson, Chandler, AZ | Oct 2023 to Present

Data Engineer

Roles and Responsibilities:
• Developed tailored data engineering solutions to support internal product and analytics teams, aligning with business objectives and ensuring seamless data system integration.
• Ensured data integrity and optimized large datasets for advanced analysis using DBT (primarily SQL), leveraging GCP BigQuery for efficient data storage and processing.
• Performed data quality validation and defect resolution through advanced SQL querying, ensuring integrity and traceability throughout ingestion layers.
• Addressed performance bottlenecks within data pipelines, enhancing throughput, reliability, and scalability by fine-tuning SQL queries and leveraging DBT optimization techniques.
• Utilized Fivetran for seamless data integration, pushing data to Snowflake, BigQuery, Redshift, and Databricks, improving ETL workflows and data pipeline efficiency.
• Integrated data from third-party APIs (REST and SOAP) and internal systems into the GCP ecosystem, enhancing real-time and batch analytics capabilities.
• Automated and optimized GCP virtual machine management using Python scripting, improving infrastructure performance and scaling compute resources to meet AI assistant workloads.
• Implemented Kubernetes-based CI/CD pipelines using Terraform and GitHub Actions, automating deployment of infrastructure and data models to support scalable and reliable data processing environments.
• Translated business requirements and technical architecture specifications into robust data ingestion and transformation pipelines on GCP, aligning with enterprise design standards and scalability needs.
• Provided estimates for coding tasks and development timelines in Agile settings, ensuring alignment with project goals and milestones; collaborated with cross-functional teams to track and update sprint progress and timelines.

Byju's (The Learning App), Bangalore, India | July 2021 to July 2023

Data Engineer/Analyst

Roles and Responsibilities:
• Built scalable data pipelines using AWS Glue, Step Functions, and Data Pipeline to integrate sales performance data from multiple sources (RDS, SQL Server, DynamoDB, Salesforce, Lead Squared, OMS), with AWS Kinesis enabling real-time data ingestion for timely analytics.
• Optimized cloud data infrastructure across Amazon S3, SQL Server, and Snowflake, implementing efficient ETL workflows using SQL and PowerShell on AWS EMR, reducing compute costs by 30% and improving processing speed.
• Led data modeling and warehousing efforts, designing Star and Snowflake schemas and contributing to logical and physical data models to support large-scale reporting through Tableau, Power BI, and SSRS.
• Enhanced data integration and transformation by collaborating with Salesforce teams to refine SOQL/SOSL queries and streamline workflows for improved reporting and analytics.
• Utilized DAX functions in Power BI to build dynamic KPIs and aggregations that enabled sales and operations teams to track performance metrics, conversion rates, and business growth trends.
• Ensured secure, compliant data practices by working with governance teams to define retention, audit, and access policies, implementing AWS Lake Formation controls, and establishing clear data ownership models.
• Contributed to data infrastructure optimization efforts that supported cost-effective scaling, improving system performance while aligning with cloud budgeting constraints and goals.
• Engaged in Agile methodologies to estimate coding tasks and delivery timelines, contributing to sprint planning and ensuring timely project completion.
• Managed cross-functional data initiatives, overseeing project timelines, risk mitigation, and stakeholder engagement while supporting CRM-to-Teams integration to improve collaboration and drive data-informed decision-making.

Unacademy, Hyderabad, India | August 2018 to July 2021

Data Engineer

Roles and Responsibilities:
• Progressed from Data Engineer Intern to full-time Data Engineer, gaining hands-on expertise in cloud-based data engineering and orchestration within the Azure ecosystem.
• Utilized Azure Data Factory to orchestrate data ingestion into Azure Data Lake Storage, creating external tables with SQL scripts via Azure Databricks to facilitate project-wide data reuse and accessibility.
• Collaborated with Azure database and system administrators to optimize ETL workflows in Azure Data Factory, ensuring efficient data integration and performance across Azure SQL Database and Azure Synapse Analytics.
• Designed and implemented data transformation solutions using Azure Data Factory, employing PySpark within Azure Databricks for processing and Azure Stream Analytics for real-time streaming to Azure Data Lake.
• Managed complex data migration projects from MySQL to Azure Data Lake, using Azure Data Factory for both the initial data transfer and subsequent transformations to prepare data for analytics and reporting in Azure environments.
• Ensured secure handling and processing of sensitive data by implementing role-based access control (RBAC) and following compliance standards aligned with GDPR and internal policies.
• Oversaw source code versioning and automated deployment using Azure Repos and Azure Pipelines, while continuously monitoring data processing operations in Azure Data Factory through Azure Monitor to enhance system reliability and efficiency.
