SANDEEP REDDY
AREAS OF EXPERTISE
Data Pipelines - Data Consolidation - On/Offshore Development Models - Legacy Database Integration - Client Relations
End-to-End Project Management - Technology Liaison - Sandbox Environments - Data Parsing - Risk Management
Technical Solutions - Data Warehousing - Project Planning (SDLC, Agile) - Systems Development - Team Coordination
PROFICIENCIES
Python - Spark - SparkSQL - Scala - Amazon S3 - Amazon EC2 - NiFi - Jupyter Notebook - Airflow - AutoSys - Maestro -
Protegrity - Oracle - Teradata - Hadoop - Snowflake - PostgreSQL - OFSA - Java - HTML - XML - JavaScript - PHP - PL/SQL -
GitHub - Anthill - uDeploy - ServiceNow - Ab Initio 3.3.1 GDE - Co>Op 3.3.1.4 - Express>It - Tableau
PROFESSIONAL EXPERIENCE
GOODRX 2022 to Present
Data Engineer
Collaborate with product managers, data scientists, data analysts, and engineers to define requirements and data
specifications for an enterprise data platform that enables effective reporting and analytics. Act as the in-house data
expert, make recommendations regarding standards for code quality and timeliness, and architect cloud-based data
infrastructure solutions to meet stakeholder needs.
Responsibilities:
• Develop, deploy, and maintain data processing pipelines using technologies such as SQL, Python, AWS,
Kubernetes, Airflow, Redshift, and EMR.
• Develop, deploy, and maintain serverless data pipelines using EventBridge, Kinesis, AWS Lambda, S3, and Glue.
• Define and manage overall schedule and availability for a variety of data sets.
• Work closely with other engineers to enhance infrastructure and improve reliability and efficiency.
• Make smart engineering and product decisions based on data analysis and collaboration.
DISCOVER FINANCIAL 2019 to 2022
Delivered actionable insights and strategic initiatives by creating the initial framework for CCPA reporting and maintaining
data models, access, integrity, and quality. Improved processes to allow for real-time data management. Anticipated the
needs and expectations of stakeholders and clients to provide analytical solutions leading to exceptional results.
Constructed data architecture for long-term performance, fault tolerance, and security by applying reusable patterns.
Notable Achievements:
• Developed optimized, high-performance analytics processes using SQL, Python, Airflow, Teradata, Redshift, and
Snowflake to cleanse millions of customer data points and produce insights.
• Led the team in migrating legacy on-premises applications built on SAS to cloud-based solutions.
• Helped optimize data processing through declarative pipeline development, automated data testing, and deep
visibility for monitoring and recovery.
Data Engineer
Contracted to migrate data pipeline solutions created in the previous position. Analyzed complex requirements and
translated them into design and development specifications for analytical systems and data integration applications.
Facilitated goal achievement using technologies and programming languages including SQL, SparkSQL, PySpark,
Databricks, Delta Lake, Airflow, GitHub, and JIRA.
Notable Achievements:
• Spearheaded end-to-end Extract, Transform, and Load (ETL) code development and review by leading
consultations concerning program logic, data preparation, testing, and debugging to verify quality.
• Developed an ACID-compliant graph database solution using data from both on-premises (Oracle,
Teradata) and cloud-hosted (PostgreSQL, MongoDB, Snowflake) sources to create a single data platform
combining data from 40+ lines of business.
Data Engineer
Migrated the company's legacy databases by creating encrypted in-house data warehouse solutions to manage customer
loan information. Gathered client requirements to develop Hive table models with ACID properties, ingest data into Hadoop
environments via NiFi and Scala, and maintain end-to-end lineage. Communicated and consulted directly with business
owners to understand needs and provide technical solutions despite the absence of an in-house client technical team.
Notable Achievements:
• Led seven onshore/offshore developers in coordinating with multiple Discover cross-functional Agile teams.
• Optimized Hadoop environment code performance via multiple partitioning techniques and data quality checks.
• Increased the project workforce to generate solutions and manage risks for ground-up project development,
leading to additional revenue via specialized roles, exceptional task management, and effective job scheduling.
• Consolidated data via Oracle servers to gather data from multiple sources in multiple formats and developed
shell scripts to cleanse data and parse abnormalities.
ADDITIONAL PROFESSIONAL EXPERIENCE
NAISA GLOBAL, WEB DEVELOPER
Collaborated to produce a website while implementing Agile methodologies and capturing user requirements.
Independently created layer components using HTML, JavaScript, JSP, and Struts.
EDUCATION AND CREDENTIALS
Master of Science (MS), Information Technology
University of North Carolina at Charlotte | 2015
Bachelor of Information Technology (B.Tech)
Jawaharlal Nehru Technological University | 2014