
Lakshmi Sampath Potluri

Sr AI/ML Engineer
+1 (469) 207-1646 | potluri947@gmail.com

SUMMARY:
 Senior AI/ML Engineer with about a decade of hands-on experience in machine learning, deep learning, big
data, natural language processing, and artificial intelligence.
 Around 9 years of experience turning data-driven insights into actionable strategies, playing a pivotal
role in solving complex problems, optimizing processes, and driving innovation across diverse industries.
 Addressed natural language processing (NLP) challenges by employing transformer-based neural networks,
leading to improved sentiment analysis, chatbots, and content recommendation systems.
 Designed and developed advanced AI models for our LLM application, focusing on natural language
understanding (NLU), natural language processing (NLP), and machine learning (ML) algorithms.
 Trained, validated, and optimized machine learning models using frameworks and libraries such as Python,
TensorFlow, and PyTorch; tuned hyperparameters and experimented with different architectures to
improve model performance (a minimal tuning sketch follows this summary).
 Continuously expanded skills and knowledge, exploring generative AI, programming languages (SQL, R, Scala,
Java), and industry backgrounds in retail, telecom, pharma, financial, health care, and insurance domains.
 Evaluated and recommended appropriate Azure, AWS and GCP services and OpenAI technologies based on
project objectives, scalability, performance, and cost considerations.
 Hands-on experience with LangChain, RAG, LLM fine-tuning, and LoRA, with a proactive approach to
staying abreast of emerging technologies and frameworks within the machine learning and AI community.
 Leveraged Large Language Models (LLMs) and open-source LLMs such as ChatGPT, Llama, Falcon, Mistral,
Bard, and others, utilizing LangChain frameworks to develop innovative solutions for natural language
understanding.
 Applied strong knowledge of data pre-processing, feature engineering, and model evaluation techniques to
enhance model accuracy and performance.
 Demonstrated excellent problem-solving skills while working on challenging AI/ML problems and
collaborated with stakeholders, including product managers, data engineers, and others, to understand
business requirements and translate them into technical solutions.
 Successfully deployed machine learning models into production environments, ensuring scalability,
reliability, and performance. Integrated models into applications, APIs, and microservices as needed.
 Experienced in building end-to-end supervised and unsupervised machine learning pipelines, with a focus on
data wrangling, feature engineering, model training, optimization, and creating data-driven visualizations
with Python libraries and Tableau.
 Expertise in the development and deployment of conversational AI chatbots using IBM Watson Assistant,
designing dialog flows, training models, and optimizing responses to improve user engagement and
satisfaction.
 Expertise in Machine Learning model development and deployment, emphasizing Continuous Integration
and Continuous Deployment (CI/CD) processes.
 Strong conceptual knowledge of Azure Data & Analytics PaaS Services: Azure Data Factory, Azure Data Lake,
Azure Synapse Analytics, Azure Databricks, Azure IoT, Azure HDInsight + Spark, Azure Cosmos DB,
Azure Stream Analytics, and Azure SQL DB.
 Implemented security policies and API Management strategies in Azure ML platform, contributing to a
secure and well-integrated environment.
 Experience with Azure Cognitive Services and OpenAI technologies, such as GPT-3 and GPT-4, and with
prompt engineering techniques.
 Strong experience in Software Development Life Cycle (SDLC) including Requirements Analysis, Design
Specification and Testing as per Cycle in both Waterfall and Agile methodologies.
 Proficient in Power Apps, Power Automate, ArcGIS Platform, Docker, Kubernetes, Kibana, Logstash, Node.js,
YARN, Zookeeper, HDFS, Apache Spark, Apache Kafka, Ambari, Git, Unix/Linux, OpenStack, AWS, Azure,
VMWare.
 Expertise in utilizing libraries such as NumPy/SciPy, scikit-learn, pandas, TensorFlow, PyTorch, PyMC3, and
others for efficient data analysis and modelling.
 Proficient in multiple programming languages, including R, Python, Java, Scala, HTML, MATLAB, SQL, C/C++,
and JavaScript.
 Proficient in managing and querying databases, including Dgraph, Elasticsearch, PostgreSQL/PostGIS, MySQL,
MongoDB, Redis, and Accumulo.
 Extensive experience with Microsoft Azure data services, including Azure Data Lake, Azure Data Factory,
Azure Data Lake Gen2 for Data Store, Azure Databricks, Azure SQL, and Azure Data Warehouse.
 Worked closely with NLP research teams to stay up to date with the latest advancements in LLMs, ensuring
that our models incorporated cutting-edge techniques; well versed in Large Language Models such as GPT,
BERT, XLNet, T5, RoBERTa, ELECTRA, and Turing-NLG.
 Extensive experience in Text Analytics, developing statistical machine learning and data mining
solutions to various business problems and generating data visualizations using R and Python.
 Excellent Knowledge of Relational Database Design, Data Warehouse/OLAP concepts, and methodologies.
 Adept at prompt engineering, ensuring accurate model responses, and driving actionable results.
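
The training-and-tuning workflow referenced above can be illustrated with a minimal, hypothetical scikit-learn sketch; the dataset and parameter grid here are placeholders, not actual project data:

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

# Placeholder dataset standing in for real project data.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Candidate hyperparameter settings to compare, per the tuning workflow above.
param_grid = {"n_estimators": [100, 300], "max_depth": [None, 10, 20]}
search = GridSearchCV(RandomForestClassifier(random_state=42), param_grid, cv=5, scoring="f1")
search.fit(X_train, y_train)

print("Best params:", search.best_params_)
print("Held-out F1:", search.score(X_test, y_test))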

TECHNICAL SKILLS:
Machine Learning/Deep Learning:
  R: caret, glmnet, forecast, xgboost, rpart, survival, arules, sqldf, dplyr, nloptr, lpSolve, ggplot
  Python: pandas, NumPy, scikit-learn, SciPy, statsmodels, matplotlib, TensorFlow
  SAS: Forecast Server, SAS procedures and data steps
  Spark: MLlib, GraphX
  SQL: subqueries, joins, DDL/DML statements

IDE Tools: Databricks, Visual Studio, Jupyter Notebook

Databases/ETL/Query: Teradata, SQL Server, Redshift, Postgres, Hadoop (MapReduce); SQL, Hive, Pig, Alteryx

Visualization: Tableau, ggplot2, RShiny

Prototyping: PowerPoint, Shiny, Tableau

PROFESSIONAL EXPERIENCE:
Client: Prime Therapeutics, Minneapolis, MN Apr 2022 – Present
Role: Principal GenAI/NLP Engineer

Project: Claims Denial Reason Analysis and RAG Powered Generative AI ChatBot
Prime Therapeutics is a Pharmacy Benefit Management (PBM) company that manages prescription drug benefits
for health insurance plans; claims processing is a critical aspect of its operations. This project aims to
improve the efficiency and accuracy of claims processing by implementing NLP-driven components. The primary
objectives are to gain insights into the common reasons for claims denial, identify patterns, and detect potential
fraud within the claims data.

Roles and Responsibilities:


 Implemented advanced NLP models such as BERT, GPT, RNNs, and LSTMs to process and analyse the textual data
within claims denial reasons, and categorized the reasons into common themes for analysis.
 Identified prevalent reasons for claims denial through sentiment analysis and topic modelling techniques and
categorized denial reasons into actionable groups for further investigation and resolution.
 Implemented and fine-tuned NLP models to analyse claims data for potential fraud and abuse and used
supervised and unsupervised machine learning techniques to categorize claims as normal or potentially
fraudulent.
 Built NLP models focused on extracting and categorizing reasons for claim denials, utilizing techniques
such as Named Entity Recognition (NER) and topic modelling to identify specific denial reasons and categorize
them into predefined categories (a small topic-modelling sketch follows this list).
 Determined what tasks the computer vision system needed to perform, such as detecting items, recognizing
barcodes, or monitoring for theft.
 Gathered a diverse dataset of images or videos representing various scenarios and conditions that the self-
checkout system may encounter.
 Annotated the collected data with labels indicating the objects present in each image or video frame, along
with any other relevant information (e.g., bounding boxes, barcode numbers).
 Prepared the data by resizing, normalizing, and augmenting images to improve model performance and
generalization.
 Used computer vision models such as Convolutional Neural Networks (CNNs) for image classification or
object detection.
 Trained the chosen model on the annotated dataset using techniques like transfer learning, fine-tuning, or
training from scratch.
 Assessed the performance of the trained model on a separate validation or test dataset to ensure it meets
the desired accuracy and reliability metrics.
 Integrated the trained model into the NCR self-checkout system and deployed the computer vision model to
the self-checkout terminals.
 Implemented cognitive search solutions using IBM Watson Discovery, enabling advanced search capabilities
such as natural language query understanding, relevancy ranking, and document clustering for large-scale
document repositories.
 Reduced claims denial rates by 20% through the identification and mitigation of common denial reasons and
enhanced operational efficiency by addressing 25% of frequent denial issues proactively.
 Integrated the NLP models into Prime Therapeutics' existing claims processing workflow and generated reports
and dashboards to provide insights into the detected fraud patterns and denial reason categories.
 Achieved 30% fraud prevention and mitigation through early detection of potentially fraudulent claims and
realized a 15% improvement in overall claims processing and customer satisfaction.
 Developed and fine-tuned convolutional neural networks (CNNs) using TensorFlow and PyTorch, achieving
high accuracy in image classification tasks.
 Integrated Azure Cognitive Services, including Computer Vision, Natural Language Processing, Speech
Recognition, and Decision APIs, into applications to enable advanced AI capabilities such as image analysis,
language understanding, and speech-to-text conversion.
 Worked on designing, developing, and deploying production-grade machine learning solutions in NLP (NLTK,
Spark NLP, spaCy, Hugging Face, Flair, etc.) for a chatbot powered by Retrieval Augmented
Generation (RAG).
 Implemented rigorous testing frameworks to evaluate the performance of AI models, and continuously
refined and optimized models based on feedback and performance metrics such as groundedness, coherence,
relevance, and GPT similarity.
 Worked on image segmentation projects using U-Net and Mask R-CNN for applications like tumor detection
and crop monitoring.
 Worked with NLP model architectures and algorithms such as BERT (and derivatives like BioBERT, RoBERTa,
ALBERT), BiLSTM, XLNet, T5, ELECTRA, and PaLM.
 Applied generative AI techniques in natural language processing tasks, such as text generation, summarization,
and conversation generation.
 Trained and evaluated custom AI models using IBM Watson Studio, leveraging tools and frameworks for data
preparation, model training, hyperparameter optimization, and performance evaluation.
 Developed “Insurance Insight GPT” and “Contract Negotiator GPT” solutions using LLMs for concise document
summarization and key-value pair extraction.
 Conducted POCs and fine-tuning of LLMs such as GPT-3.5, Llama 2, and Flan-T5 to extract information from
unstructured health insurance and financial documents.
 Leveraged Word2Vec and GloVe embeddings to enhance the representation of textual data, improving
model performance.
 Employed machine learning algorithms (e.g., support vector machines, naïve Bayes) and deep learning
models (e.g., neural networks, LSTMs, Transformers) to develop NLP solutions, and applied those techniques to
train models for text classification, sentiment analysis, and named entity recognition.
 Fine-tuned hyperparameters and optimized models for improved accuracy and efficiency in NLP tasks, and
conducted model evaluation and selection using metrics such as accuracy, F1-score, ROC AUC, BLEU, and
groundedness to ensure the models met project objectives.
 Conducted A/B testing to measure the impact of machine learning-based improvements in NLP applications.
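
As a minimal sketch of the denial-reason topic modelling referenced above: the snippet below groups hypothetical denial notes into latent themes with scikit-learn's LDA; the example texts and theme count are illustrative assumptions, not production data:

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Hypothetical denial-reason notes; real inputs came from the claims feed.
denial_notes = [
    "prior authorization required for this medication",
    "quantity limit exceeded for prescribed drug",
    "member not eligible on date of service",
    "duplicate claim previously submitted and paid",
]

vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(denial_notes)

# Cluster denial reasons into latent themes for downstream categorization.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(doc_term)

terms = vectorizer.get_feature_names_out()
for idx, topic in enumerate(lda.components_):
    print("theme", idx, ":", [terms[i] for i in topic.argsort()[-3:]])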
Environment: Python 3.11, TensorFlow, R, scikit-learn, Apache Spark, SQL Server, ADF, Databricks, Couchbase,
CORNERSTONE, Tableau 19.1.2, NLP, IBM Watson, Generative AI, AWS, Redshift, CI/CD, Oracle, Agile,
Azure, Deep Learning, and Reference Data Management.

Client: The Hanover Insurance Group, Worcester County, MA Aug 2020 – Mar 2022
Role: Machine Learning Engineer/Principal Generative AI consultant

Project: Policy Recommendation System and Chatbot for Customer Support


The Hanover Insurance Group is embarking on an ambitious initiative to enhance customer engagement and
satisfaction through the implementation of a comprehensive Policy Recommendation System and Chatbot for
Customer Support. This project aims to revolutionize the insurance experience by leveraging advanced data science
and machine learning techniques to provide personalized policy recommendations and streamline customer
interactions.

Roles and Responsibilities:


 Developed an intelligent recommendation system utilizing advanced machine learning algorithms
(collaborative filtering, content-based filtering, etc.) to analyse customer profiles, historical data, and
preferences for generating personalized insurance policy suggestions (see the collaborative-filtering sketch after this list).
 Implemented a state-of-the-art chatbot driven by Natural Language Processing (NLP) to handle customer
queries, provide information on policy details, and assist with claims processing.
 Led the development and deployment of advanced machine learning models to power the Policy
Recommendation System.
 Explored and experimented with Generative AI models for image generation tasks, including unconditional
image generation, image-to-image translation, or style transfer.
 Utilized supervised learning techniques to analyse historical data and customer behaviour for accurate policy
recommendations.
 Deployed Generative AI models into production environments, integrating them with existing systems and
workflows, and ensuring scalability, performance, and reliability in real-world applications.
 Spearheaded the design and implementation of the NLP-driven chatbot, ensuring it understands natural
language queries and provides context-aware responses.
 Developed custom AI models using Azure Machine Learning service, utilizing frameworks like TensorFlow,
PyTorch, or AzureML Designer to train and deploy models for tasks such as predictive analytics, anomaly
detection, and recommendation systems.
 Collaborated with IT teams to seamlessly integrate the Policy Recommendation System and Chatbot with
backend systems, including policy databases, CRM tools, and claims processing systems.
 Used Google Dialogflow for entity recognition and machine learning, which involved creating and configuring
the chatbot or virtual agent within the Dialogflow console.
 Collaborated with cross-functional teams including data scientists, software engineers, and domain experts
to understand requirements, gather feedback, and deliver effective Generative AI solutions aligned with
business objectives.
 Used the Dialogflow console to test the chatbot by entering sample user inputs and evaluated how well it
recognized intents and extracted entities.
 Utilized the analytics and insights provided by Dialogflow to understand user interactions, and identified
areas for improvement and refinement of the chatbot based on user feedback and performance data.
 Elevated customer engagement by providing personalized policy recommendations, fostering a more
positive and tailored experience which contributed to a 17% increase in customer satisfaction and
engagement levels.
 Established a continuous improvement process for the machine learning models, involving regular retraining
and optimization to adapt to changing customer preferences and market dynamics.
 Improved the efficiency of customer support operations by automating routine queries through the chatbot,
reducing response times, and increasing customer satisfaction, which led to a 22% reduction in response
times and operational costs.
 Pre-processed and augmented training data for Generative AI models, applying techniques such as
tokenization, normalization, data augmentation, or noise injection to improve model robustness and
generalization.
 Considered ethical and responsible AI principles in Generative AI projects, addressing concerns such as bias,
fairness, safety, and misuse prevention, and adhering to industry guidelines and best practices.
 Worked on creating a conversational and natural interface for the chatbot to enhance user engagement and
satisfaction.
 Oversaw the design of an intuitive and user-friendly interface for customers interacting with the Policy
Recommendation System.
 Conducted research and development projects focused on Generative AI techniques, including VAEs, and
Transformers, to generate realistic and coherent text, images, or other data types.
 Incorporated sentiment analysis capabilities to gauge and respond to customer emotions effectively.
 Leveraged data-driven insights generated from customer interactions with the recommendation system and
chatbot to inform strategic decision-making and enhance business operations which contributed to a 25%
improvement in data-driven decision-making and overall business performance.
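
A minimal item-based collaborative-filtering sketch for the recommendation system described above; the interaction matrix is a toy stand-in (real signals came from CRM and policy systems), and this cosine-similarity recommender is one simple variant, not necessarily the production algorithm:

import numpy as np

# Toy customer x policy interaction matrix (1 = customer holds or clicked the policy).
interactions = np.array([
    [1, 0, 1, 0],   # customer A
    [1, 1, 0, 0],   # customer B
    [0, 1, 1, 1],   # customer C
])

# Item-item cosine similarity between policies.
norms = np.linalg.norm(interactions, axis=0, keepdims=True)
item_sim = (interactions.T @ interactions) / (norms.T @ norms + 1e-9)

# Score unseen policies for customer A by similarity to policies they already hold.
user = interactions[0].astype(float)
scores = item_sim @ user
scores[user > 0] = -np.inf  # mask policies the customer already has
print("recommended policy index:", int(np.argmax(scores)))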
Environment: TensorFlow, scikit-learn, Apache Spark, Data Governance, CI/CD, SQL Server, ER Studio 9.7, Tableau
9.03, AWS, GCP, Teradata 15, ETL, Generative AI Models, NLP Models, MS Office Suite - Excel (Pivot, VLOOKUP), DB2,
R, Python, Visio, HP ALM, Agile, Azure, Data Quality, and Reference Data Management.

Client: Nielsen, NY Dec 2018 – Jul 2020


Role: Machine Learning Engineer

Project: Churn Prediction Model and Forecasting Customer Data Usage


Nielsen embarked on a strategic initiative to enhance customer retention and optimize resource allocation through
the development of a Churn Prediction Model and Forecasting Customer Data Usage. This project aims to leverage
advanced data science and machine learning techniques to identify potential churn patterns among customers and
forecast their data usage, enabling proactive measures for improved customer satisfaction and resource
management.

Roles and Responsibilities:


 Collaborated with other teams across Nielsen and developed key metrics to achieve business outcomes.
 Collaborated with product managers to perform cohort analysis that identified an opportunity to reduce
churn by 17% for a segment of customers.
 Worked closely with Sales and Marketing teams to understand the need for analytics.
 Built customer churn prediction, partial churn prediction, and propensity models using XGBoost with an
accuracy of 87% (a minimal XGBoost sketch follows this list).
 Built the churn prediction model using Jupyter Notebook, Python, and SQL.
 Aggregated daily device usage data for all the devices in a fleet to analyse fleet performance over time using
Python and SQL.
 Utilized TensorFlow on GCP for developing, training, and deploying deep learning models, enhancing model
performance and scalability.
 Developed a forecasting model using time series analysis and machine learning algorithms to predict
customer data usage.
 Explored techniques like ARIMA, LSTM, or Prophet to capture temporal dependencies and fluctuations.
 Utilized classification algorithms, such as logistic regression, decision trees, or ensemble methods, to achieve
accurate churn predictions.
 Used GCP Dataprep for data cleaning and transformation, improving data quality and preparation efficiency
for analysis.
 Implemented various time series forecasting techniques to predict device usage, which helped in predicting
churn.
 Used FBProphet to forecast sales and shipping at country and product level with 92% accuracy to avoid
shipping delays.
 Worked closely with business and engineering teams to encourage statistical best practices with respect to
experimental design, data capture, and data analysis.
 Built predictive models to predict the possibility of data overage for IoT customers.
 Used FBProphet to forecast customers' data usage with a 94% accuracy rate.
 IoT data usage models were built using Jupyter Notebook, PySpark, SQL, Hadoop, and Hive.
 Enhanced the existing billing and payment models, achieving 7% more accuracy and reducing call volume
from 20.2M to 15.4M.
 Created Tableau dashboards for quick reviews to be presented to business and end users.
 Built data pipelines to extract customer usage data from relational databases and data streaming
applications using Apache Pulsar.
 Implemented rigorous model evaluation techniques, including cross-validation and metric analysis, to ensure
model accuracy and generalizability and optimized model hyperparameters and configurations for improved
performance.
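
A minimal sketch of the XGBoost churn model mentioned above; the feature frame is hypothetical (real features came from device usage and billing data), and the tiny sample is for illustration only:

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
from xgboost import XGBClassifier

# Hypothetical churn features; real inputs came from usage and billing pipelines.
df = pd.DataFrame({
    "monthly_usage_gb": [1.2, 8.5, 0.3, 5.1, 9.9, 0.1],
    "support_tickets":  [0, 2, 5, 1, 0, 6],
    "tenure_months":    [24, 6, 3, 18, 36, 2],
    "churned":          [0, 0, 1, 0, 0, 1],
})
X, y = df.drop(columns="churned"), df["churned"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=7)

# Gradient-boosted tree classifier for churn prediction.
model = XGBClassifier(n_estimators=200, max_depth=4, eval_metric="logloss")
model.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))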
Environment: ER Studio 9.7, Tableau 9.03, AWS, Azure, GCP, Teradata 15, MDM, GIT, Unix, Python 3.5.2,
MLlib, SAS, regression, logistic regression, Hadoop, NoSQL, OLTP, random forest, OLAP, HDFS, ODS, NLTK,
SVM, JSON, XML, MapReduce, Google Dialogflow.

Client: ACI Worldwide, Omaha, NE May 2017 – Nov 2018


Role: Data Scientist/Data Engineer

Project: Real-Time Transaction Processing System and Predictive Analytics for Cash Flow Management
ACI Worldwide, a pioneer in global payments technology, is embarking on a transformative project to advance its
transaction processing capabilities and cash flow management strategies. This project, led by a dynamic team of
Data Scientists and Data Engineers, aims to develop a cutting-edge Real-time Transaction Processing System coupled
with Predictive Analytics for Cash Flow Management.

Roles and Responsibilities:


 Designed and implemented an agile and scalable real-time transaction processing system to handle the
dynamic nature of global financial transactions and integrated state-of-the-art stream processing
technologies for low-latency and high-throughput transaction processing.
 Developed robust predictive analytics models to forecast cash flow patterns, providing insights into future
financial requirements, and implemented machine learning algorithms to analyze transaction data, identify
trends, and predict cash flow fluctuations (a small ARIMA sketch follows this list).
 Implemented a robust stream processing architecture to handle real-time data ingestion, processing, and
analysis for transaction processing, and ensured fault tolerance and scalability to accommodate varying
transaction volumes.
 Led efforts in integrating diverse data sources, including transaction logs, customer profiles, and external
financial data, into a cohesive data pipeline, and applied data transformation techniques to ensure data
quality and consistency.
 Developed and integrated data pipelines using GCP Pub/Sub for efficient and reliable message-based
communication between services.
 Collaborated on the development of machine learning models for cash flow prediction, considering factors
such as transaction history, economic indicators, and seasonality.
 Achieved a 19% improvement in global transaction processing efficiency through the implementation of a
real-time processing system, ensuring rapid and accurate financial transactions.
 Designed and implemented an interactive real-time dashboard that provides stakeholders with intuitive
visualizations of transaction processing and cash flow predictions.
 Utilized visualization tools such as Tableau or Power BI for effective communication of insights.
 Achieved a 21% enhancement in financial forecasting accuracy, empowering the organization with
real-time insights and foresight into future cash flow requirements.
 Implemented automated alerting systems to notify relevant stakeholders of critical events, anomalies, or
deviations in projected cash flow and ensured timely communication to enable proactive decision-making.
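
A minimal cash-flow forecasting sketch with statsmodels ARIMA, one of several reasonable model choices for the predictive analytics described above; the daily series is synthetic, and the (1, 1, 1) order is an illustrative assumption:

import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Synthetic daily net cash flow; real data came from the transaction pipeline.
idx = pd.date_range("2018-01-01", periods=60, freq="D")
cash_flow = pd.Series([100 + 5 * (i % 7) + 0.5 * i for i in range(60)], index=idx)

# Fit a simple ARIMA model and project the next week of cash flow.
model = ARIMA(cash_flow, order=(1, 1, 1)).fit()
print(model.forecast(steps=7))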
Environment: Erwin r7.0, SQL Server 2012/2008, Windows XP/NT/2000, Oracle 10g/9i, MS-DTS,
UML, UAT, SQL Loader, OOD, OLTP, PL/SQL, MS Visio, Informatica.

Client: Sterlite Technologies Ltd, Pune, India Jun 2015 – Apr 2017
Role: Data Analyst

Roles and Responsibilities:


 Involved in defining the source-to-target data mappings, business rules, and data definitions.
 Involved in defining the business/transformation rules applied for sales and service data.
 Worked with project team representatives to ensure that logical and physical ER/Studio data models were
developed in line with corporate standards and guidelines.
 Used Python, R, and SQL to create statistical algorithms involving multivariate regression, linear regression,
logistic regression, PCA, random forest models, decision trees, and support vector machines for estimating the
risks of welfare dependency (a brief logistic-regression sketch follows this list).
 Managed and maintained large datasets using Google Cloud Storage, ensuring secure and scalable data
storage solutions.
 Conducted complex SQL queries and data analysis using GCP BigQuery, achieving significant performance
improvements in data processing tasks.
 Implemented a metadata repository, and maintained data quality, data cleanup procedures,
transformations, data standards, the data governance program, scripts, stored procedures, triggers, and
execution of test plans.
 Coordinated meetings with vendors to define requirements and system interaction agreement
documentation between client and vendor system.
 Responsible for defining the functional requirement documents for each source to target interface.
 Remained knowledgeable in all areas of business operations in order to identify systems needs and
requirements.
 Documented the complete process flow to describe program development, logic, testing, implementation,
application integration, and coding.
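
A brief logistic-regression sketch corresponding to the statistical modelling listed above; the synthetic classification data stands in for the actual welfare-dependency risk features:

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the risk features described above.
X, y = make_classification(n_samples=500, n_features=8, random_state=1)

# Logistic regression, one of the statistical models used in this role.
clf = LogisticRegression(max_iter=1000)
print("5-fold CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())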
Environment: Erwin r7.0, SQL Server 2012/2008, Windows XP/NT/2000, Oracle 10g/9i, MS-DTS, UML, UAT, SQL
Loader, OOD, OLTP, PL/SQL, MS Visio, Informatica.

EDUCATION:
 Bachelor of Engineering, Electrical Engineering, JNTU India — April 2015
