
Srilakshmi Mannemala
Data Engineer Consultant
Hyderabad, IN
Contact#: +0091 7981598455
e-mail:

SUMMARY

IT professional with 7 years of experience, including 4.5 years as a result-oriented Data and Spark Engineer with a proven track record in software development using Hadoop, Apache Spark, Hive, SQL, Python, and AWS. Proficient in processing structured and unstructured data and in deploying Apache Spark to analyze large datasets.

KEY SKILLS

SQL
Apache Spark (Python)
Big Data - Hadoop
ETL
Python
AWS Glue, Redshift, S3

TECHNICAL SKILLS

Big Data Ecosystem: Hadoop, Hive, Sqoop, Apache Spark, Oozie
Spark Framework: Spark RDDs, DataFrames, Spark SQL, Performance Optimization and Tuning
Cloud: AWS
RDBMS: SQL Server 2005, 2008
Reporting Tools: Rally, Jira
Utilities: S3, SQL*Plus, SQL*Loader, WinSCP, PuTTY
Operating Systems: UNIX, Windows
Programming Languages: SQL, Python

ACADEMIC QUALIFICATION

Bachelor of Engineering, Jawaharlal Nehru Technological University (JNTU), Ananthapur, India

PROFESSIONAL EXPERIENCE

Data Engineer
Sapient, Hyderabad, IN (Jan 2022-Apr 2023)

- Performing data ingestion from RDBMS to HDFS using Sqoop for further processing with Hive and Spark SQL (see the sketch after this list).
- Designing and building scalable Big Data pipelines using PySpark for data transformations.
- Developing Python scripts as reusable, dynamic, and generic utilities for Spark and ETL jobs.
- Developing ETL scripts using PySpark and Hive.
- Maintaining and scheduling Spark jobs using Oozie workflows.
- Designing solutions and code on the Hadoop framework to generate multiple reports and integrate them with dashboards.
- Developing SQL queries for data analysis and extraction; designing shell scripts for data movement and file processing.
- Consuming data from Microsoft SQL Server and Oracle as the data extraction layer.
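
To make the ingestion-and-transform pattern concrete, here is a minimal PySpark sketch. All connection details, table names, and the job name are hypothetical; the bullets name Sqoop for RDBMS-to-HDFS ingestion, and this Python-only sketch substitutes Spark's built-in JDBC reader to the same effect before landing the result in a Hive table.

```python
# Hypothetical PySpark ingest-and-transform job; connection details and
# table names are placeholders, and the JDBC driver is assumed to be on
# the Spark classpath.
from pyspark.sql import SparkSession, functions as F

spark = (SparkSession.builder
         .appName("orders_ingest")
         .enableHiveSupport()
         .getOrCreate())

# Extract: pull a source table from SQL Server over JDBC
# (the resume does this step with Sqoop).
orders = (spark.read.format("jdbc")
          .option("url", "jdbc:sqlserver://dbhost:1433;databaseName=sales")
          .option("dbtable", "dbo.orders")
          .option("user", "etl_user")
          .option("password", "***")
          .load())

# Transform: the kind of generic cleanup a reusable utility might apply.
cleaned = (orders
           .dropDuplicates(["order_id"])
           .withColumn("order_date", F.to_date("order_date"))
           .filter(F.col("amount") > 0))

# Load: write a partitioned Hive table for downstream Hive/Spark SQL queries.
(cleaned.write
        .mode("overwrite")
        .partitionBy("order_date")
        .saveAsTable("staging.orders_clean"))
```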

Data Engineer
Capgemini, Hyderabad, IN (Feb 2018-Dec 2021)

- Built scalable and configurable applications on the Big Data/Spark framework on AWS EMR clusters.
- Built data marts on the AWS cloud for business applications and reporting; deployed and executed Spark batches on EMR clusters.
- Designed and built Hive external tables with S3 buckets as the data source location (see the sketch after this list).
- Used AWS services extensively for data engineering tasks.
- Developed ETL scripts using PySpark.
- Maintained data pipelines using Apache Airflow.
- Worked on the PySpark framework to build data pipelines for ETL processes.
- Leveraged the AWS cloud platform to accomplish data engineering goals.
- Developed SQL queries for data analysis and extraction.
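
The external-table bullet describes a common EMR pattern: write a data mart to S3 as Parquet, then register a Hive external table over that location so reporting tools can query it. A minimal sketch follows; the bucket, schema, and table names are hypothetical, and the staging table is assumed to exist from an earlier ETL step.

```python
# Hypothetical data-mart build on EMR: aggregate with Spark SQL, land the
# result on S3, and expose it as a Hive external table. All names are
# placeholders.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("sales_datamart")
         .enableHiveSupport()
         .getOrCreate())

# Build the mart from a previously loaded staging table.
mart = spark.sql("""
    SELECT region, order_date, SUM(amount) AS total_sales
    FROM staging.orders_clean
    GROUP BY region, order_date
""")

# Land the data on S3 as Parquet; EMR clusters read and write S3 natively.
mart.write.mode("overwrite").parquet("s3://example-bucket/marts/sales_daily/")

# Register a Hive external table whose data lives in the S3 bucket, so the
# table can be dropped and recreated without touching the underlying files.
spark.sql("""
    CREATE EXTERNAL TABLE IF NOT EXISTS marts.sales_daily (
        region      STRING,
        order_date  DATE,
        total_sales DOUBLE
    )
    STORED AS PARQUET
    LOCATION 's3://example-bucket/marts/sales_daily/'
""")
```
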
SQL Developer
Wipro Technologies, Hyderabad, IN (Jan 2017-Jan 2018)

- Involved in the whole life cycle of the project.
- Wrote SQL statements to report pending verifications and eligibility information from the database (see the sketch after this list).
- Programmed packages, stored procedures, functions, and SQL queries.
- Understood requirements and carried out impact analysis.
- Converted functional requirements into technical documents.
- Used explain plans to investigate performance issues.
- Performed data migration for maintaining reports.
- Developed PL/SQL packages, procedures, and functions to ensure the integrity of loaded data, based on pre-defined, table-driven validations.
- Validated code as per client requirements.
- Responsible for SQL statements using DDL, DML, DCL, and TCL.
- Used cursors and exception handling while writing stored procedures.
- Used advanced PL/SQL concepts such as collections, BULK COLLECT, and REF CURSOR.
- Strong understanding of data dictionary tables.
- Experienced in dynamic SQL, autonomous transactions, and materialized views.
- Basic knowledge of UNIX shell scripting.
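
Since these bullets describe PL/SQL work, the sketch below assumes an Oracle database reached through the python-oracledb driver (the successor to cx_Oracle), keeping Python as the single example language for this document. Every schema, table, column, and procedure name is hypothetical.

```python
# Hypothetical client-side view of the work above: run a "pending
# verifications" query and invoke a PL/SQL stored procedure, with explicit
# exception handling. All object names and credentials are placeholders.
import oracledb

conn = oracledb.connect(user="app_user", password="***",
                        dsn="dbhost/ORCLPDB1")
try:
    with conn.cursor() as cur:
        # SQL statement surfacing pending verifications and eligibility info.
        cur.execute("""
            SELECT member_id, eligibility_status
            FROM verifications
            WHERE status = 'PENDING'
        """)
        for member_id, eligibility_status in cur:
            print(member_id, eligibility_status)

        # Call a PL/SQL procedure that applies table-driven validations
        # to freshly loaded data.
        cur.callproc("pkg_load.validate_staged_rows")
    conn.commit()
except oracledb.DatabaseError:
    # Mirror the exception-handling discipline used in the stored procedures.
    conn.rollback()
    raise
finally:
    conn.close()
```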
