Srikanth - Bellary Architect Resume
https://www.linkedin.com/in/srikanth-bellary/
rohitb@interaslabs.com
(614) 992-8709
Professional Summary
15+ years of consulting experience in Business Systems Analysis, Data Engineering, Data
Architecture, Solution Architecture, Cloud Computing, Machine Learning, AI, and
Functional Programming, backed by a master's degree in Software Engineering
Expertise in consolidating, integrating, and migrating Enterprise Data to Cloud-based Data Lakes
Experienced in designing and developing Hadoop ecosystem solutions for Big Data Analytics
and Business Intelligence.
Proficient in AI/ML/LLM solutions (ChatGPT, Gemini, Claude), including prompt engineering
and production integration for measurable business optimization
Expert in implementing RAG architecture with vector databases to enhance LLM capabilities
with custom use cases and internal guardrails in live production systems (see the sketch
after this list)
Demonstrated success in developing Agentic AI workflows that separate business logic
from technical implementation concerns
Strong experience developing big data pipelines using Spark (Scala/Python API) with RDD,
SQL, Datasets, Data Frames, Streaming, and MLlib
Hands-on with Hadoop ecosystem components (HDFS, MapReduce, Hive, Sqoop, Oozie)
Experience with SOA, Microservices, and Serverless Lambda Architecture
Deep understanding of RDBMS, OLTP, OLAP, EDW, ETL/ELT, NoSQL, and Data Governance
Experience in Strategic Enablement including Capability Maturity Modeling (CMM)
Skilled in Cloud Migration and Platform Automation across major cloud providers
Strong communication skills, with the ability to engage effectively with both C-level
executives and technical teams.
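As referenced above, the following is a minimal sketch of the RAG pattern, assuming an
in-memory vector store; embed() and llm_complete() are hypothetical stand-ins for a real
embedding model and LLM API, and a production system would use a managed vector database
and a fuller guardrail layer.

import numpy as np

def embed(text: str) -> np.ndarray:
    # Hypothetical embedding: a real system would call an embedding model here.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.random(8)

def llm_complete(prompt: str) -> str:
    # Hypothetical stand-in for an LLM API call (e.g., Claude or GPT).
    return "[LLM answer grounded in the retrieved context]"

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

class VectorStore:
    def __init__(self):
        self.docs = []  # list of (text, embedding) pairs

    def add(self, text: str) -> None:
        self.docs.append((text, embed(text)))

    def top_k(self, query: str, k: int = 3) -> list:
        qv = embed(query)
        ranked = sorted(self.docs, key=lambda d: cosine(qv, d[1]), reverse=True)
        return [text for text, _ in ranked[:k]]

def answer(store: VectorStore, question: str) -> str:
    context = "\n".join(store.top_k(question))
    # Guardrail: refuse to answer when no supporting context is retrieved.
    if not context.strip():
        return "No supporting context found."
    prompt = f"Answer using only this context:\n{context}\n\nQ: {question}"
    return llm_complete(prompt)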
Core Competencies
Data Architecture & Engineering: Data Modeling, ETL/ELT Pipelines, Data Lakes/Warehouses,
Data Governance
Artificial Intelligence: LLM Implementation, RAG Architecture, Agentic AI, Prompt
Engineering, Vector Databases
Machine Learning: Model Development, Feature Engineering, Model Training/Inference,
Ensemble Methods
Cloud Technologies: Multi-Cloud Architecture (AWS, Azure, GCP), IaaS, PaaS, SaaS Implementation
Big Data Ecosystem: Hadoop, Spark, Kafka, Streaming Analytics, Batch Processing, Real-time
Data Processing
Data Analytics & Visualization: Business Intelligence, Predictive Analytics, Dashboard Development
Database Management: SQL/NoSQL Solutions, Performance Tuning, Data Migration, Schema Design
Enterprise Integration: API Development, Microservices Architecture, Event-Driven Systems
DevOps & ML Ops: CI/CD Pipelines, Infrastructure as Code, Containerization, Orchestration
Project Leadership: Stakeholder Management, Technical Team Leadership, Architecture
Review Boards
Technical Skills
Gen AI/LLM: ChatGPT, Llama, Gemini 2.0 Flash, Claude 3.7 Sonnet, Perplexity & DeepSeek R1
ML/AI: PyTorch, TensorFlow, Keras, Random Forests, XGBoost, Clustering, NER, NLP
Hadoop: Hadoop, HDFS, MapReduce, YARN, Pig, Hive, Spark, Sqoop, Kafka
Cloud Technologies:
  AWS: S3, EC2, EMR, EKS, Glue, Redshift, Kinesis, Lambda, Athena
  GCP: GCS, BigQuery, Dataproc, Vertex AI, Cloud Composer, Data Fusion
  Azure: Blob Storage, ADLS, Data Factory (ADF), Databricks, Delta Lake
DevOps/Infra: Jenkins, Maven, SBT, Terraform, Kubernetes, CloudFormation
Programming: SQL, Java, Scala, Python, Unix, HTML, CSS, JavaScript, XML, JSON, REST
Tools: VS Code, Cursor, Copilot, IntelliJ, PyCharm, Git, WinSCP, PuTTY, PowerShell
Databases: AWS RDS, Redshift, MongoDB, PostgreSQL, Oracle, MySQL, Teradata
ETL and Viz: Streamlit, Tableau, Spotfire, Cognos, SAP BO/BI, Informatica
Certification
Google Cloud Certified Professional Data Engineer (2023-2025)
Education
Master of Science (MS) in Software Engineering (2009)
Bachelor of Technology (BTech) in Mechanical Engineering (2007)
Professional Experience
Environment: GCP, GCS, BigQuery, Spark, Airflow, Vertex AI, LLM, RAG, MongoDB, Agents, Python, SQL,
Golang, Streamlit, Claude 3.7 Sonnet, LangChain
Environment: Mainframes, VSAM, DARSTRAN, AWS, PostgreSQL, SQL, Erwin, Glue, S3, IAM,
EC2, Databricks, Athena, SageMaker, Kafka, Python
Environment: Azure, GCP, Spark, Kafka, Python, Scala, Adobe CDP, SaaS, EDW, GitLab, Jenkins,
DataRobot, Databricks, Synapse, ADF, Conda, Airflow
Environment: Spark, Scala, Hadoop, Hive, HDFS, AWS, S3, EMR, Glue, SNS, Talend, SaaS, Confluence,
GitLab, Jenkins, Artifactory, Stylus Studio
Cars.com | Chicago, IL
Machine Learning Engineer | Mar 2017 – Aug 2017
The project researched, experimented with, and productionized Big Data and Machine Learning
techniques for Predictive Analytics and Business Optimization.
Led a Big Data Machine Learning team of Data Analysts, Data Scientists, and Engineers on
Predictive Analytics and Business Optimization initiatives
Integrated Customer Data Platform into ML workflow for customer behavior analysis
Deployed ML models to a production Cloudera HDFS cluster (CDH 5.5.1)
Implemented Regression and Classification algorithms using Spark and TensorFlow
Developed ensemble models including Random Forest, XGBoost, and Gradient Boosted Regression
(illustrated in the sketch after this entry)
Developed and deployed production grade ML pipelines using Spark Scala and Python APIs
Implemented CI/CD for ML pipelines across the environments for periodic model training
Built customized data pipelines for training data, labeled data and feature engineered data
Experimented with hyperparameter tuning to improve model efficiency and accuracy
Environment: Cloudera, Kerberos, Spark 1.6/2.0, SaaS, Scala, Python, Kafka, Hive, Hue, HDFS, AWS, S3,
Sqoop, Couchbase, Confluence, Adobe CDP, Bitbucket, Jenkins, Artifactory
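As noted above, a minimal PySpark sketch of ensemble training with cross-validated
hyperparameter tuning; this is illustrative only, not the actual Cars.com pipeline, and the
synthetic data frame stands in for the real feature-engineered training data.

from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import RandomForestClassifier
from pyspark.ml.tuning import CrossValidator, ParamGridBuilder
from pyspark.ml.evaluation import BinaryClassificationEvaluator

spark = SparkSession.builder.appName("ensemble-sketch").getOrCreate()

# Synthetic frame standing in for feature-engineered training data.
rows = [(float(i % 2), float(i), float(i * i % 7)) for i in range(100)]
df = spark.createDataFrame(rows, ["label", "f1", "f2"])

assembler = VectorAssembler(inputCols=["f1", "f2"], outputCol="features")
rf = RandomForestClassifier(labelCol="label", featuresCol="features")
pipeline = Pipeline(stages=[assembler, rf])

# Hyperparameter tuning via 3-fold cross-validation, as described above.
grid = (ParamGridBuilder()
        .addGrid(rf.numTrees, [20, 50])
        .addGrid(rf.maxDepth, [3, 5])
        .build())
cv = CrossValidator(estimator=pipeline,
                    estimatorParamMaps=grid,
                    evaluator=BinaryClassificationEvaluator(labelCol="label"),
                    numFolds=3)
model = cv.fit(df)  # best model selected on area under ROC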
Environment: AWS, S3, EMR, Kinesis, Talend, Tableau, Collibra, Waterline, Paxata, Teradata, Epsilon,
Spark, Kafka, RabbitMQ, Java, ELK, Redshift, MapReduce
Environment: Windows, Linux, Oracle 11g, Mainframe, JDBC/ODBC, RESTful API, AWS,
Hortonworks, Cloudera, Spark 1.6/2.0, SQL, NoSQL, HDFS, Talend, HBase, EMR, CSV, XML, JSON,
Parquet, Scala IDE, SBT, Oozie
Key Bank | Cleveland, OH
Senior Consultant – Big Data Architect | September 2014 – April 2015
An enterprise-scale initiative by Key Bank to consolidate Enterprise Data from various sources
into the Shared Foundation Data (SFD) platform and to adopt big data technologies for Business
Reporting, Management Reporting, Executive Reporting, and Predictive Analysis.
Served as a Data Architect on Shared Foundation Data (SFD) program using Big Data Technology
Identified data sources for batch processing and real-time stream processing
Configured Cloudera Manager for cluster setup in Staging and Test environments
Developed data pipeline Spark applications in Python using SparkContext and Spark SQL (a
sketch follows this entry)
Environment: Windows, Linux, Java, Eclipse IDE, Hadoop, HDFS, Spark, Flume, Hive, Pig, Cloudera, AWS,
EC2, EMR, Redshift, Teradata, JSON, Parquet, AVRO, ALM, Cognos, MS SQL Server
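As referenced above, a minimal sketch of the kind of Spark SQL batch pipeline described in
this entry; the input path, columns, and output location are hypothetical, and it is written
against the modern SparkSession API rather than the raw SparkContext used at the time.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sfd-pipeline-sketch").getOrCreate()

# Ingest a raw batch source; path and columns are hypothetical examples.
accounts = (spark.read
            .option("header", "true")
            .option("inferSchema", "true")
            .csv("/data/raw/accounts.csv"))
accounts.createOrReplaceTempView("accounts")

# Consolidate into a reporting-friendly aggregate with Spark SQL.
summary = spark.sql("""
    SELECT region,
           COUNT(*)     AS n_accounts,
           SUM(balance) AS total_balance
    FROM accounts
    GROUP BY region
""")

# Land the curated output as Parquet for downstream reporting.
summary.write.mode("overwrite").parquet("/data/sfd/account_summary")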
Technologies: Oracle, SQL Server, Informatica, ETL, SSRS, Business Objects, MicroStrategy, Data
Warehousing, SOA, Web Services, UNIX, Windows