
SRIKANTH BELLARY

SR. DATA ENGINEER/ARCHITECT (GCP Certified Professional Data Engineer)

https://www.linkedin.com/in/srikanth-bellary/
rohitb@interaslabs.com
(614) 992-8709
Professional Summary
 15+ years of consulting experience in Business Systems Analysis, Data Engineering, Data Architecture, Solution Architecture, Cloud Computing, Machine Learning, AI, and Functional Programming, backed by a master's degree in Software Engineering
 Expertise in consolidating, integrating, and migrating Enterprise Data to Cloud-based Data Lakes
 Experienced in designing and developing Hadoop ecosystem solutions for Big Data Analytics and Business Intelligence
 Proficient in AI/ML/LLM solutions (ChatGPT, Gemini, Claude), including prompt engineering and production integration for significant business optimization
 Expert in implementing RAG architecture with vector databases to enhance LLM capabilities for custom use cases, with internal guardrails in live production systems
 Demonstrated success in developing Agentic AI workflows that decouple business logic from technical implementation
 Strong experience developing big data pipelines using Spark (Scala/Python APIs) with RDDs, SQL, Datasets, DataFrames, Streaming, and MLlib
 Hands-on with Hadoop ecosystem components (HDFS, MapReduce, Hive, Sqoop, Oozie)
 Experience with SOA, Microservices, and Serverless Lambda Architecture
 Deep understanding of RDBMS, OLTP, OLAP, EDW, ETL/ELT, NoSQL, and Data Governance
 Experience in Strategic Enablement including Capability Maturity Modeling (CMM)
 Skilled in Cloud Migration and Platform Automation across major cloud providers
 Strong communication skills, with the ability to engage C-level executives and technical teams alike

Core Competencies
 Data Architecture & Engineering: Data Modeling, ETL/ELT Pipelines, Data Lakes/Warehouses,
Data Governance
 Artificial Intelligence: LLM Implementation, RAG Architecture, Agentic AI, Prompt
Engineering, Vector Databases
 Machine Learning: Model Development, Feature Engineering, Model Training/Inference,
Ensemble Methods
 Cloud Technologies: Multi-Cloud Architecture (AWS, Azure, GCP), IaaS, PaaS, SaaS Implementation
 Big Data Ecosystem: Hadoop, Spark, Kafka, Streaming Analytics, Batch Processing, Real-time
Data Processing
 Data Analytics & Visualization: Business Intelligence, Predictive Analytics, Dashboard Development
 Database Management: SQL/NoSQL Solutions, Performance Tuning, Data Migration, Schema Design
 Enterprise Integration: API Development, Microservices Architecture, Event-Driven Systems
 DevOps & ML Ops: CI/CD Pipelines, Infrastructure as Code, Containerization, Orchestration
 Project Leadership: Stakeholder Management, Technical Team Leadership, Architecture
Review Boards
Technical Skills

Gen AI/LLM: ChatGPT, Llama, Gemini 2.0 Flash, Claude 3.7 Sonnet, Perplexity & DeepSeek R1
ML/AI: PyTorch, TensorFlow, Keras, Random Forests, XGBoost, Clustering, NER, NLP
Hadoop: Hadoop, HDFS, MapReduce, YARN, Pig, Hive, Spark, Sqoop, Kafka
Cloud Technologies:
  GCP: GCS, BigQuery, Dataproc, Vertex AI, Cloud Composer, Data Fusion
  AWS: S3, EC2, EMR, EKS, Glue, Redshift, Kinesis, Lambda, Athena
  Azure: Blob Storage, ADLS, Data Factory (ADF), Databricks, Delta Lake
DevOps/Infra: Jenkins, Maven, SBT, Terraform, Kubernetes, CloudFormation
Programming: SQL, Java, Scala, Python, Unix, HTML, CSS, JavaScript, XML, JSON, REST
Tools: VS Code, Cursor, Copilot, IntelliJ, PyCharm, Git, WinSCP, PuTTY, PowerShell
Databases: AWS RDS, Redshift, MongoDB, PostgreSQL, Oracle, MySQL, Teradata
ETL & Visualization: Streamlit, Tableau, Spotfire, Cognos, SAP BO/BI, Informatica

Certification
 Google Cloud Certified Professional Data Engineer (2023-2025)

Education
 Master of Science (MS) in Software Engineering (2009)
 Bachelor of Technology (BTech) in Mechanical Engineering (2007)

Professional Experience

Interas Labs | Remote


Sr. Data, ML & AI Engineer – Solution Architect | May 2024 – Present
Working as a technical consultant with multiple retail clients at Interas Labs
 Serve as Solution Architect and Product Owner, implementing ML, NLP, LLM, and AI solutions in production for retail clients to streamline business processes
 Secured Google partnership for Interas Labs in GCP Services and Google Workspace domains.
 Provided Solution Design for AI-enabled Retail attribute mapping engine using Vertex AI
 Transformed labor-intensive attribute mapping processes through AI automation,
reducing operational costs by 60% and increasing throughput by 3x
 Built scalable data pipelines using GCS, BigQuery, Dataproc, Spark, and Airflow for data sourcing
 Built optimized ML Data Pipelines to create Training, Labeled and Feature Engineered datasets
 Built intelligent AI pipelines to map raw attributes to the target dictionary without using SQL or ETL
 Introduced a Confidence Scoring mechanism and reasoning frameworks at the individual attribute level to support and build trust in AI/LLM-based solutions
 Analyzed prompt engineering techniques to enhance confidence scoring accuracy
 Used an OCR engine to extract retail product details from product images with confidence scoring
 Implemented NER and NLP algorithms as preprocessing steps to map attributes to the dictionary
 Created Agentic workflows with Vertex AI RAG Engine for mapping, inference, and validation
 Developed multiple POVs/POCs demonstrating successful production integration of ML, AI, GenAI, LLM, and RAG-based solutions, along with business-specific custom guardrails

Environment: GCP, GCS, BigQuery, Spark, Airflow, Vertex AI, LLM, RAG, MongoDB, Agents, Python, SQL, Golang, Streamlit, Claude 3.7 Sonnet, LangChain

ThermoFisher Scientific | Persistent Systems | Pittsburgh, PA


Sr. ML/AI Consultant – Solution Architect | May 2023 – May 2024
Customer Channels Group (CCG) is ThermoFisher's large-scale initiative to drive real-time data for faster action and insights, and to deliver value from legacy mainframe data migrated to the AWS cloud.
 Proposed ML/AI-based solutions for the Customer Channels Group (CCG) at ThermoFisher Scientific to migrate legacy mainframe data to the AWS cloud
 Designed and implemented robust relational data models to host non-relational mainframe data, which is hierarchical and sequential
 Developed ML/AI pipelines to map complex mainframe data formats, such as VSAM, ISAM, and flat-file formats, to ThermoFisher-specific internal business entities
 Worked closely with leads from multiple channels at CCG to gather business use cases, extract domain-specific data from the mainframe, and deliver clean, structured data
 Performed extensive analysis of COBOL and DARSTRAN logic using AI/LLM solutions and converted it to SQL
 Built a robust data validation framework for reconciling data between the mainframe and the AWS data lake

Environment: Mainframes, VSAM, DARSTRAN, AWS, PostgreSQL, SQL, Erwin, Glue, S3, IAM,
EC2, Databricks, Athena, SageMaker, Kafka, Python

CVS Health | Woonsocket, RI


Sr. ML/AI Advisor – Data Engineering Lead | Aug 2019 – Apr 2023
Retail Pharmacy Artificial Intelligence (RPhAI) is a major enterprise-level business optimization initiative by CVS Health that leverages Machine Learning and Artificial Intelligence techniques to resolve pharmacy claim rejections by PBMs.
 Led data engineering team for Retail Pharmacy Artificial Intelligence (RPhAI) initiative
to optimize pharmacy claim rejections using ML/AI solutions and implementations.
 Led ML Data solutions for RPhAI apps using CDP, EDW, Rx Claims, EPIC EHR and Clinical data
 Secured ARB approvals for solutions which consolidate payer, provider, and patient data
 Migrated data assets from CVS-Aetna's Azure Cloud Platform to Google Cloud Platform (GCP)
 Designed and implemented solutions for cloud migrations with minimal code changes
 Developed and maintained end-to-end ML pipelines and production deployments
 Worked extensively on ML model training, generating labeled and feature-engineered data for training and aggregating real-time lookup data for retrieval and model inference
 Built and maintained lookup data collections in MongoDB used for model inference
 Collaborated with Data Scientists on model selection, hyperparameter tuning and A/B testing
 Rolled out regular ML/AI-enabled features to RPhAI apps such as RxOptimizer and RxResolver
 Built and maintained data and ML pipelines supporting the RPhAI apps over multiple years
 Implemented Optical Character Recognition (OCR), Named Entity Recognition (NER), and Natural Language Processing (NLP) techniques to extract data from prescriber notes
 Built CI/CD pipelines for ML production deployment frameworks using Jenkins, Git, and Airflow
 Sourced real-time data with Kafka and Spark Streaming for model training and inference
 Integrated real-time model feedback into ML training, retrieval, and inference

Environment: Azure, GCP, Spark, Kafka, Python, Scala, Adobe CDP, SaaS, EDW, GitLab, Jenkins, DataRobot, Databricks, Synapse, ADF, Conda, Airflow

Change Healthcare | Chicago, IL


Sr. Cloud Data Engineer/Solution Architect | Aug 2017 – Aug 2019
The IHDP puts CHC in a unique position to be an authoritative source of large datasets for the industry by combining financial, clinical, and operational data with Artificial Intelligence and Machine Learning.
 Key contributor to Intelligent Healthcare Data Platform (IHDP) combining financial, clinical,
and operational data with AI/ML.
 Translated business requirements into technical specifications and application code
 Developed tactical and strategic solutions for the Intelligent Health Data Platform
 Engineered data ingestion from payers, providers, institutions, and clearinghouses
 Processed large volumes of EDI claims data for enterprise data lake storage
 Built Spark/Scala applications to transform EDI transactions (837, 835, 270/271) into
tabular formats
 Created XSLT scripts to convert X12 formatted data to structured XML
 Implemented Data Rights Management and RBAC models for secure claims data access
 Developed AWS Glue ETL jobs utilizing Crawlers, Catalogs and Athena
 Conducted research on Machine Learning, AI and Blockchain technologies

Environment: Spark, Scala, Hadoop, Hive, HDFS, AWS, S3, EMR, Glue, SNS, Talend, SaaS, Confluence, GitLab, Jenkins, Artifactory, Stylus Studio

Cars.com | Chicago, IL
Machine Learning Engineer | Mar 2017 – Aug 2017
The project was to research, experiment with, and productionize Big Data and Machine Learning techniques for Predictive Analytics and Business Optimization.
 Led Big Data Machine Learning initiatives for Predictive Analytics and Business Optimization.
 Led Big Data Machine Learning team of Data Analysts, Scientists and Engineers
 Integrated Customer Data Platform into ML workflow for customer behavior analysis
 Productionized ML models on the production Cloudera HDFS cluster (CDH 5.5.1)
 Implemented Regression and Classification algorithms using Spark and TensorFlow
 Developed ensemble models including Random Forest, XGBoost & Gradient Boosted Regression
 Developed and deployed production grade ML pipelines using Spark Scala and Python APIs
 Implemented CI/CD for ML pipelines across the environments for periodic model training
 Built customized data pipelines for training data, labeled data and feature engineered data
 Experimented with hyperparameter tuning to improve model efficiency and accuracy

Environment: Cloudera, Kerberos, Spark 1.6/2.0, SaaS, Scala, Python, Kafka, Hive, Hue, HDFS, AWS, S3, Sqoop, Couchbase, Confluence, Adobe CDP, Bitbucket, Jenkins, Artifactory

McDonald's Corporation | Sapient Razorfish | Chicago, IL


Lead Big Data Consultant | Nov 2016 – Mar 2017
Strategic Enablement (SE) and Acceleration for McDonald's as part of the Enterprise Cloud Migration and Automation Program. Key aspects included the Capability Maturity Model, Big Data Architecture, Global Data Lake setup, streaming data, IoT data, predictive reporting (Machine Learning), and a Global Market Solution for rapid roll-out to over one hundred countries.
 Provided Strategic Enablement for the Enterprise Cloud Migration and Automation Program, supporting the transition from monolithic to microservice architecture
 Led Big Data Track for Cloud Platform Migration Program
 Captured Capability Maturity Model for Core Customer Platform and Global Data Platform
 Conducted Strategic Enablement workshops with Chief Data Officer at McDonald’s
 Developed POC Spark Applications and replicated them using Talend/Spark, S3 and EMR
 Evaluated technology tools (Waterline Data, Paxata) for Data Science and Global Data Lake
 Worked on solutions for GDPR regulations to be implemented in the EU region by McDonald's
 Designed framework for Data Integration adaptors in Microservices/API Environment
 Created Agile stories for Data Ingestion, Global Data Lake, ETL Migration, and Data
Science activities

Environment: AWS, S3, EMR, Kinesis, Talend, Tableau, Collibra, Waterline, Paxata, Teradata, Epsilon,
Spark, Kafka, RabbitMQ, Java, ELK, Redshift, MapReduce

Northern Trust Corporation | Chicago, IL


Sr. Big Data Consultant | Apr 2015 – Nov 2016
The Derivatives Transformation Program at Northern Trust aimed to enhance the data and technology
infrastructure by incorporating cloud and big data technologies into the IT processes.
 Key consultant for the Derivatives Transformation Program, upgrading the enterprise data exchange format to ISO 20022 (Dodd-Frank)
 Led data integration of Hadoop technologies with Enterprise IT Applications and Data Layer
 Engineered data pipeline jobs from production cluster (CDH5) to Central Data Lake (S3)
 Designed EDW and Data Mart architecture using Sqoop, Spark, Redshift, and Tableau
 Created Enterprise Security Monitoring data pipeline using AWS EMR, Spark, Kafka and S3
 Developed Spark 2.0 applications in Scala using Spark SQL, DataFrame, Dataset, and DStream
 Built large volume data processing Spark data pipelines for data migration and enrichment
 Created POCs for Machine Learning applications involving linear regression, logistic
regression, supervised learning, recommendation engines and predictive engines.
 Worked on DevOps, CI/CD, Role-Based Access Control (RBAC), and Data Rights Management (DRM)

Environment: Windows, Linux, Oracle 11g, Mainframe, JDBC/ODBC, RESTful API, AWS,
Hortonworks, Cloudera, Spark 1.6/2.0, SQL, NoSQL, HDFS, Talend, HBase, EMR, CSV, XML, JSON,
Parquet, Scala IDE, SBT, Oozie

Key Bank | Cleveland, OH
Senior Consultant – Big Data Architect | Sep 2014 – Apr 2015
Enterprise scale initiative by Key Bank to consolidate the Enterprise Data from various sources into the
Shared Foundation Data (SFD) and adopt big data technologies for Business Reporting, Management
Reporting, Executive Reporting and Predictive Analysis.
 Served as a Data Architect on Shared Foundation Data (SFD) program using Big Data Technology
 Identified data sources for batch processing and real-time stream processing
 Configured Cloudera Manager for cluster setup in Staging and Test environments
 Developed data pipeline Spark applications in Python using SparkContext and Spark SQL

Environment: Windows, Linux, Java, Eclipse IDE, Hadoop, HDFS, Spark, Flume, Hive, Pig, Cloudera, AWS,
EC2, EMR, Redshift, Teradata, JSON, Parquet, AVRO, ALM, Cognos, MS SQL Server

Previous Professional Experience Summary

Wells Fargo | Des Moines, IA


Senior Consultant – Web Services Data Migration | 2013-2014
WFF Data Migration Program: The project migrated Wells Fargo Financial Home Loan data from Wells Fargo Supreme to the Mortgage Servicing Big Data Platform.
 Led data migration initiatives for Wells Fargo Financial Home Loan data to Mortgage
Servicing Platform
 Managed Web Services (50+) consumed by various Java, .NET, and mobile applications in
SOA environments

Prudential Financial | Shelton, CT


Business Intelligence Consultant | 2011-2013
Pru Inside - New Policy Administration Platform to manage strategic methods and risk compliance rules.
 Designed and implemented BI reporting solutions and ETL processes for Policy
Administration Platform

Cognizant Technology Solutions | Minneapolis, MN


Business Systems Associate | 2009-2011
 Created data warehousing solutions using Kimball methodology with star schema design

Career Builder Inc. | Reston, VA


ETL Analyst | 2008-2009
 Developed ETL mappings, SQL procedures, and transformation logic for enterprise data systems

UIA Research and Development | Hyderabad, India


ETL Analyst | 2006-2007
 Led cross-functional teams through all SDLC phases and collaborated with business stakeholders

Technologies: Oracle, SQL Server, Informatica, ETL, SSRS, Business Objects, MicroStrategy, Data
Warehousing, SOA, Web Services, UNIX, Windows
